
Generative Responses FAQ

This FAQ addresses common inquiries about the large language model (LLM) behind Expert Kernels.

Prepare for AI and content management

How can you prepare to use AI?
  1. Articulate your goals and expectations for generative AI (GenAI).
  2. Determine what problems you have that you think AI can help solve.
  3. Test out public models like ChatGPT to get familiar with LLMs and prompt engineering.
    Examples: summarize video and meeting transcripts, create a table from a paragraph, or create a page summary for existing content.
  4. Familiarize yourself with the terminology of LLMs and generative AI, and with the practices of others in your industry.
  5. Understand retrieval-augmented generation (RAG) and its use in AI.
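To make the last step concrete, here is a minimal conceptual sketch of RAG: retrieve the passages most relevant to a question, then ground the model's answer in them. The corpus, scoring function, and prompt format below are illustrative only and are not how Expert Kernels work internally:

```python
# Minimal conceptual sketch of retrieval-augmented generation (RAG).
# The corpus, scoring, and prompt wording are illustrative assumptions.

def score(query: str, passage: str) -> int:
    """Count query words that also appear in the passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages that best match the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model's answer in the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "Expert Kernels return relevant chunks of site content.",
    "Persona prompts shape the tone of generative responses.",
    "Google Maps provides driving directions.",
]
query = "What do Kernels return?"
prompt = build_prompt(query, retrieve(query, corpus))
```

In a real RAG pipeline the retriever scores passages by semantic similarity rather than word overlap, but the flow (retrieve, then generate from the retrieved context) is the same.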
How can you prepare your content for AI?

Authoring content following Expert best practices, including guided content framework, is a good start.

When a word is used in different ways across your site, it can confuse the LLM. Understanding where GenAI might struggle with your content empowers you to write clearer persona prompts and reduce misunderstandings between your customers and the LLM. When you do not get the answers a customer expects, the content is the first place to look.

Take Google, for example. Google names all of its products Google [Product]: Google Maps, Google Search, Google Drive, and so on. When a consumer asks a question like "What is Google?" and the desired answer is not specifically called out in the content, they are less likely to obtain that answer in a generative response.

Suppose the expected answer is "Google is a large company that powers the largest percentage of search traffic on the web," but that answer appears nowhere on their Expert site. The response could instead cover all Google products and services, for example: "Google is a company that offers Maps, Drive, Search, and various other products." That answer is not wrong, but it is not the desired one in this example.
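Continuing the hypothetical Google example, a persona prompt is one way to steer the model toward the desired framing when the content alone is ambiguous. The wording below is purely illustrative and is not Expert's prompt format:

```python
# Illustrative persona prompt for the hypothetical Google example above.
# The wording is an assumption, not Expert's actual prompt syntax.
persona_prompt = (
    "You are a helpful assistant for Google's support site. "
    "When asked what Google is, describe the company itself "
    "(for example, its role in web search), not just its product lineup."
)
```

The more reliable fix, as noted above, is to add the missing answer to the content itself; the persona prompt only shapes how existing content is presented.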

Performance, optimization, and methodology

What are our response times and how are they impacted?
  • Our average response time for Kernels (text chunking) is 500 ms.
  • Our average response time for Completions (natural language output) is still being tested.
  • Response length is the biggest factor affecting response time.
How can you ensure generative searches are good?
  • Follow our GenSearch content best practices.
  • Use persona prompting to establish a personality for your search.
  • Without relevant kernels, GenSearch will not generate a response. This prevents it from relying on previous training to provide an irrelevant or incorrect answer.

Permissions and privacy

How are IDPs set up and managed?
  • All NICE CXone Mpower Copilot customers MUST deploy CXone as their IDP for Expert. If customers have non-agent users, an IDP such as Azure can be set up to integrate with Expert.
  • Customers who are Expert-only can follow the existing paradigm for deploying IDP integrations.
  • Existing Expert customers that adopt NICE CXone Mpower Copilot after going live on Expert with a non-CXone IDP can migrate their users to CXone as their new IDP. The Expert Support team can facilitate this migration.
How do you protect customers' privacy and ensure customer data is secure and not shared or commingled with other customers' data?

Expert security standards and practices are applied to all aspects of the platform, including generative search.

What do you log, audit, or otherwise retain about usage?

An event log is maintained, which contains:

  • The date and time a request was made
  • Who made the request
  • The request query

The responses are saved and made available via an API for auditing purposes.
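Conceptually, each request could be recorded with exactly the fields listed above. The sketch below uses an illustrative in-memory log, not Expert's actual audit API:

```python
# Illustrative event log with the fields described above
# (date/time, requester, query) plus the saved response.
from datetime import datetime, timezone

event_log: list[dict] = []

def log_request(user: str, query: str, response: str) -> dict:
    """Record who made a request, when, what they asked, and the response."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "query": query,
        "response": response,
    }
    event_log.append(event)
    return event

event = log_request("agent-42", "How do I reset a password?", "Go to Settings...")
```

In Expert, these records are retained server-side and exposed through an API for auditing rather than held in memory.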

Functionality

Are there advanced options for generative search?

While it was considered for Kernels and generative search, the technologies behind semantic and Lucene searches are fundamentally different, so advanced search will not be available at this time.

Will Expert have extractive search and keyword suggestions?

We are considering those and other features following the launch of GenSearch.

Can you make your own LLM using your Expert content?

Yes. Integrating your preferred chat experience with the Kernels API endpoint enables you to serve generative responses to users.
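As a rough illustration of that integration, a chat backend would POST the user's question to a Kernels-style endpoint and feed the returned kernels to its own chat layer. The path, payload, auth header, and response shape below are assumptions for illustration only; consult the Expert API documentation for the actual contract:

```python
# Illustrative client for a Kernels-style endpoint.
# The URL path, JSON payload, and response shape are assumptions.
import io
import json
from urllib import request

def fetch_kernels(base_url: str, query: str, token: str, opener=request.urlopen):
    """POST a search query and return the parsed JSON response."""
    req = request.Request(
        f"{base_url}/kernels",  # hypothetical path
        data=json.dumps({"query": query}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # hypothetical auth scheme
        },
    )
    with opener(req) as resp:
        return json.load(resp)

# Usage with a fake transport, so no network call is made:
class _FakeResp(io.BytesIO):
    def __enter__(self): return self
    def __exit__(self, *exc): return False

kernels = fetch_kernels(
    "https://example.com", "reset password", "token",
    opener=lambda req: _FakeResp(b'{"kernels": ["..."]}'),
)
```

Injecting the transport (`opener`) keeps the sketch testable offline; a real integration would drop the fake and call the live endpoint with its documented parameters.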

What are consumers, customers, and guests?

It is important to be mindful of the distinction between these groups in the context of generative responses and the terminology in your content. Generally, this is how the Expert Product Team describes these constituents:

  • Customers: Expert customers
  • Consumers: The customers of Expert customers
  • Guests: An example of how a customer might refer to their consumers