
Understand guardrails in AI / LLM systems

Guardrails in AI systems are safety features that control what an AI can and cannot do. They help ensure your AI assistant provides helpful, accurate, and appropriate responses while avoiding problematic content. You do not need to understand the complex technology behind guardrails; understanding the concepts is enough to appreciate how guardrails keep GenSearch reliable, trustworthy, and aligned with your organization's needs.

What are guardrails?

In AI systems, guardrails prevent the AI from generating harmful, inaccurate, or inappropriate content. You can think of guardrails like the safety features in a car:

  • Seatbelts and airbags protect you during everyday driving.
  • Lane assist keeps you from veering off course.
  • Collision prevention stops accidents before they happen.

Benefits of guardrails

  • Accuracy and safety: Guardrails ensure GenSearch responses are based on reliable information and protect users from harmful, offensive, or inappropriate content.
  • Consistency: Guardrails deliver uniform quality across different user types and groups. This builds confidence in AI tools across your organization and trust in your brand.
  • Compliance: Customizable guardrails help you meet regulatory and ethical standards.
  • Customization: Because you set your own guardrails to decide how the AI behaves, GenSearch reflects your organization's values and needs.

Types of guardrails

AI systems have three layers of guardrails that work together:

  1. Built-in foundational model guardrails
  2. System prompts
  3. Customer prompts

Built-in foundational model guardrails

These safety features come built into your AI model and work automatically in the background; a conceptual sketch follows the list below.

  • Content filtering: Blocks inappropriate or harmful responses.
  • Bias mitigation: Reduces unfair or prejudiced answers.
  • Safety mechanisms: Prevent misuse of the AI system.
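
Built-in guardrails live inside the foundation model and its serving stack, so there is nothing for you to code or configure. Purely to illustrate the content-filtering concept, here is a toy, hypothetical sketch; real model-side filters are far more sophisticated, and `BLOCKED_TERMS` and `filter_response` are invented for this example.

```python
# Toy illustration of the content-filtering idea only. Real
# foundation-model guardrails are trained into the model and its
# serving stack; this keyword filter is invented for this sketch.

BLOCKED_TERMS = {"example-threat", "example-slur"}  # placeholder terms

def filter_response(text: str) -> str:
    """Return the response, or a refusal if it trips the filter."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that request."
    return text

print(filter_response("Here is a safe, helpful answer."))
```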

System prompts

System prompts are the instruction manual for your AI; a hypothetical example follows the list below. System prompts:

  • Tell the AI which documents to use as context when generating a response
  • Apply globally required instructions, such as using the provided kernels, rather than the model's general base knowledge, to generate responses
  • Give structure to the customer prompts so the LLM can understand user input
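
For illustration only, here is a minimal sketch of how a retrieval-grounded system prompt might be assembled. This is not GenSearch's actual prompt; the wording and the `build_system_prompt` helper are invented for this example.

```python
def build_system_prompt(context_documents: list[str]) -> str:
    """Assemble a hypothetical system prompt that grounds the model in
    retrieved documents instead of its general base knowledge."""
    sources = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(context_documents)
    )
    return (
        "You are a helpful assistant. Answer ONLY from the documents below. "
        "If the answer is not there, say you don't know.\n\n" + sources
    )

print(build_system_prompt(["Refunds are available within 30 days of purchase."]))
```

In practice, the retrieval system would fill `context_documents` with the approved content it found for the user's question.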

Customer prompts

Customer prompts are instructions your team members add when using the system, tailoring the AI's behavior to your organization's needs. These instructions tell your AI:

  • What role it should play
  • How it should respond to questions
  • What tone to use

Examples of customer prompts are "Answer in simple language a 10-year-old would understand" and "Provide three options with pros and cons." The table below summarizes what customer prompts control; the sketch after it shows how a customer prompt layers on top of the system prompt.

  What customer prompts control   Examples
  Personality and tone            Friendly, professional, formal
  Response format                 Short answers, detailed explanations, bullet points
  Knowledge boundaries            What topics the AI can or cannot discuss
  Safety rules                    Additional restrictions beyond built-in safeguards
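
As a hedged sketch of how these settings might layer on top of the system prompt, the example below uses the common system/user chat-message convention; every string in it is invented for illustration.

```python
# Hypothetical sketch: a customer prompt layered on top of a system
# prompt, using the common system/user chat-message convention.
# All strings here are invented for illustration.

system_prompt = "Answer only from the provided documents."

customer_prompt = (
    "You are a friendly support assistant. "                      # personality and tone
    "Answer in simple language a 10-year-old would understand. "  # knowledge level
    "Provide three options with pros and cons."                   # response format
)

messages = [
    {"role": "system", "content": system_prompt + "\n\n" + customer_prompt},
    {"role": "user", "content": "How do I return a product?"},
]
print(messages)
```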

How the three types of guardrails work together

When a user asks a question (the sketch after this list walks through the flow):

  1. The system finds relevant information from your documents.
  2. The AI processes this information.
  3. All three layers of guardrails work together to ensure the response is:
    • Safe and appropriate
    • Based only on approved information
    • Formatted according to company guidelines
    • Tailored to the specific request
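
To make the flow concrete, here is a hypothetical end-to-end sketch. The `search_documents` and `call_model` functions are invented stand-ins for the retrieval step and the foundation model; they are not GenSearch APIs.

```python
# Hypothetical end-to-end sketch of the three guardrail layers.
# search_documents and call_model are invented stand-ins for the
# retrieval step and the foundation model (whose built-in
# guardrails, layer 1, apply inside the model itself).

def search_documents(question: str) -> list[str]:
    """Stand-in retrieval step: find relevant approved documents."""
    return ["Refunds are available within 30 days of purchase."]

def call_model(prompt: str, question: str) -> str:
    """Stand-in foundation-model call; built-in guardrails run here."""
    return "You can request a refund within 30 days of purchase."

def answer(question: str) -> str:
    docs = search_documents(question)                  # find relevant info
    system_prompt = (                                  # layer 2: system prompt
        "Answer only from these documents:\n" + "\n".join(docs)
    )
    customer_prompt = "Use a friendly, concise tone."  # layer 3: customer prompt
    return call_model(system_prompt + "\n" + customer_prompt, question)

print(answer("How do I return a product?"))
```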

Examples of guardrails in action

Example 1: Customer service

Without guardrails: A customer asks about a policy, and the AI makes up an answer based on general knowledge rather than your specific company policies.

With guardrails: GenSearch pulls information only from your approved policy documents, providing accurate information while following your brand voice guidelines.

Example 2: Internal research

Without guardrails: An employee asks a sensitive question, and the AI provides detailed information beyond what they should access.

With guardrails: GenSearch recognizes the sensitive nature of the request and responds according to the user's permission level to protect confidential information.
