
Generative Responses FAQ

This FAQ addresses common inquiries about the large language model (LLM) behind Expert Kernels.

Prepare for AI and content management

How can you prepare your content for AI?

Authoring content according to Expert best practices, including the guided content framework, is a good start.

When a word is used across your site in different ways, it can confuse the LLM. Understanding where GenAI might have difficulty with your content empowers you to write clearer persona prompts and reduce misunderstandings between your customers and the LLM. When you do not get the answers a customer expects, the first place to look is the content.

Take Google, for example; Google names all of its products Google [Product]: Google Maps, Google Search, Google Drive, and so on. When a consumer asks a question like "What is Google?" and the answer they are looking for is not specifically called out in the content, they are less likely to obtain that answer in a GenResponse.

Assume they expect the following answer: "Google is a large company that powers the largest percentage of search traffic on the web," but this answer appears nowhere on the Expert site. The result they obtain could instead describe all Google products and services, for example: "Google is a company that has Maps, Drive, Search, and various other products." That answer is not wrong, but it is not the desired one in this example.

LLM model and instance questions

How is the AI model configured / firewalled across different customers? For example, is Customer A's tenant data and the related AI model only available for Customer A, or is it shared?

The tenant data and configuration are only available for Customer A. There is no sharing and the model does not learn based on other customer data. 

Do you use customer data to train / tune the AI model?

No, customer data is not used to train models. In a RAG system like this, the AI only retrieves content from your site. 
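To illustrate the point above, a retrieval-augmented generation (RAG) flow only passes retrieved site content to the model at query time; nothing is ever written back into the model's weights. The sketch below is a minimal, hypothetical illustration of that pattern, not Expert's actual implementation, and all function and variable names are assumptions.

```python
# Minimal RAG sketch: retrieval happens at query time; the model is never
# trained on the site content. All names here are illustrative, not Expert's API.

def retrieve_kernels(query, site_index):
    """Return site content chunks ("kernels") relevant to the query,
    scored here by naive word overlap for illustration."""
    words = set(query.lower().split())
    scored = [(len(words & set(chunk.lower().split())), chunk)
              for chunk in site_index]
    return [chunk for score, chunk in sorted(scored, reverse=True) if score > 0]

def generate_response(query, site_index, llm):
    """The LLM sees only the retrieved chunks; customer data never trains it."""
    kernels = retrieve_kernels(query, site_index)
    if not kernels:
        return None  # no relevant content means no generated answer
    prompt = "Answer using only this content:\n" + "\n".join(kernels)
    return llm(prompt + "\nQuestion: " + query)
```

Note that when retrieval finds nothing relevant, the sketch returns no answer at all rather than letting the model fall back on its prior training, which mirrors the behavior described under generative search below.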

How is the instance and its performance monitored by Expert?

We follow DevOps processes for performance to monitor stability, scalability, security, and other metrics.

What are the settings on the instance?

We use default settings from AWS security best practices.

When does Expert update its AI model?

Periodically, based on the availability of new functionality timed with planned releases. For more information about why, when, and how Expert migrates models, see Foundation Model Evolution.

Performance, optimization, and methodology

What are our response times and what impacts them?

  • Our average response time for Kernels (text chunking) is 500ms.
  • Our average response time for Completions (natural language output) is in the process of being tested.
  • The response length is the biggest factor.

How can you ensure generative searches are good?

  • Follow our GenSearch content best practices.
  • Use persona prompting to establish a personality for your search.
  • Without relevant kernels, GenSearch will not generate a response. This prevents it from relying on previous training to provide an irrelevant or incorrect answer.

Permissions and privacy

How are IDPs set up and managed?

  • All NiCE CXone Mpower Copilot customers MUST deploy CXone as their IDP for Expert. If customers have non-agent users, an IDP such as Azure can be set up to integrate with Expert.
  • Customers who are Expert-only can follow the existing paradigm for deploying IDP integrations.
  • Existing Expert customers that adopt NiCE CXone Mpower Copilot after going live on Expert with a non-CXone IDP can migrate their users from the existing IDP to CXone as their new IDP. The Expert Support team can facilitate this migration.

How do you protect customers' privacy?

Expert security standards and practices are applied to all aspects of the platform, including generative search.

How do you ensure customer data is secure and not shared or commingled with other customer data?

Expert security standards and practices are applied to all aspects of the platform, including generative search.

What do you log, audit, or otherwise retain about usage?

An event log is maintained, which contains:

  • The date / time a request was made
  • Who made the request
  • The request query

The responses are saved and made available via an API for auditing purposes.
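The retained fields listed above can be modeled as a simple audit record. The sketch below is hypothetical; the class, field names, and export shape are assumptions for illustration, not the actual Expert audit API.

```python
# Hypothetical model of the audit data described above: timestamp, requester,
# query, and the saved response. Not the actual Expert API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GenSearchEvent:
    user: str        # who made the request
    query: str       # the request query
    response: str    # saved response, retrievable via an auditing API
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )  # the date / time the request was made

def audit_export(events):
    """Illustrative shape of an audit API payload built from the event log."""
    return [{"time": e.timestamp, "user": e.user,
             "query": e.query, "response": e.response} for e in events]
```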

Security

How is the LLM instance secured?

We use AWS foundation models with IAM security. The connection from our API to the LLM completions endpoint is secured and kept current through regular SDK updates. Communication uses HTTPS over TLS v1.2.
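As a generic illustration of the same TLS floor on the client side, Python's standard library lets a caller refuse anything older than TLS 1.2 when connecting to HTTPS endpoints. This is standard `ssl` module usage, not Expert-specific configuration.

```python
# Enforce a TLS 1.2 minimum for outbound HTTPS connections (generic example,
# not Expert-specific configuration).
import ssl

ctx = ssl.create_default_context()          # sensible defaults: cert + hostname checks
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.1 and older
# Pass ctx to an HTTPS client, e.g. urllib.request.urlopen(url, context=ctx)
```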

Functionality

Are there advanced options for generative search?

No. While advanced options were considered for Kernels and generative search, the technology behind semantic search and Lucene search is fundamentally different.

Will Expert have extractive search and keyword suggestions?

We are considering those and other features following the launch of GenSearch.

Can you make your own LLM using your Expert content?

Yes, integrating your preferred chat experience to the Kernels API endpoint enables you to serve up generative responses to users.
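The integration described above can be sketched as a thin client: fetch relevant kernels for a question, then hand them to whatever LLM backs your chat experience. The endpoint path, request body, and auth header below are placeholders, not the documented Expert Kernels API; consult the actual API reference for the real contract.

```python
# Sketch of wiring a chat front end to a kernels-style retrieval endpoint.
# The URL path, JSON fields, and auth scheme are placeholders, NOT the
# documented Expert Kernels API.
import json
from urllib import request

def fetch_kernels(query, base_url, token):
    """POST the user's question and return retrieved content chunks."""
    body = json.dumps({"q": query}).encode()
    req = request.Request(
        f"{base_url}/kernels",  # placeholder path
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},  # placeholder auth
    )
    with request.urlopen(req) as resp:
        return json.load(resp).get("kernels", [])

def answer_with_your_llm(query, kernels, llm):
    """Feed the retrieved kernels to the LLM behind your chat experience."""
    context = "\n".join(kernels)
    return llm(f"Using only this content:\n{context}\n\nQuestion: {query}")
```

The key design point is that your chat experience owns the conversation and the model choice, while Expert content supplies the grounding kernels.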

What are consumers, customers, and guests?

There is a distinction between these groups in the context of generative responses. The Expert Product Team describes our constituents as follows:

  • Customers: Expert customers
  • Consumers: The customers of Expert customers
  • Guests: An example of how a customer might refer to their consumers

 
