AgenticFlow AI: ChatGPT in the Flow of Work
Validators

We help protect your Workflow against LLM unreliability & hallucinations


One of the biggest roadblocks when using LLMs is invalid output. For example, you may ask for JSON in a specific format, but the LLM prepends an explanation that breaks your JSON.parse() call.

To protect against this, AgenticFlow provides a quality control feature called Validators. You specify a schema to apply to your prompt, and the validator checks the LLM's response against it. If the LLM returns invalid data, a second step asks the LLM to fix its response. If that also fails, an error message is returned to your application. This way, your application always receives valid data or fails gracefully.
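That validate, retry, and fail-gracefully flow can be sketched as follows. This is an illustration only: runWithValidator, callLLM, and validate are hypothetical names, not AgenticFlow's actual API.

```javascript
// Hypothetical sketch of the Validator flow described above.
function runWithValidator(callLLM, prompt, validate) {
  // First attempt: run the prompt as-is.
  let output = callLLM(prompt);
  if (validate(output)) return output;

  // Second step: ask the LLM to repair its own response.
  output = callLLM(`Fix this response so it is valid:\n${output}`);
  if (validate(output)) return output;

  // Give up gracefully with an error instead of passing on bad data.
  throw new Error("LLM returned invalid data after one repair attempt");
}
```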

Three validators are currently available, with more on the way:

  • regex (regular expression): checks whether the output matches the provided regular expression

  • is_json: checks whether the output is valid JSON

  • jsonschema: checks whether the response matches the specified JSON Schema. A sample schema is provided below

is_json

This validates whether the LLM has returned only JSON, and attempts to fix the response if not. It requires no additional properties.

The prompt_completion transformation will still return a string, but you will be safe to JSON.parse() it.
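As an illustration (not AgenticFlow's internals), the guarantee amounts to something like this, where the hypothetical isJson mirrors the check the validator performs:

```javascript
// Mirrors the is_json check: the validated string can be parsed
// downstream without a defensive try/catch.
function isJson(output) {
  try {
    JSON.parse(output);
    return true;
  } catch {
    return false;
  }
}

const response = '{"summary": "All good"}'; // a validated LLM response
if (isJson(response)) {
  const data = JSON.parse(response); // safe: the validator guaranteed JSON
  console.log(data.summary);
}
```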

jsonschema

If you would like to validate a specific JSON format, you can take advantage of jsonschema! Provide the schema as the schema property.

{
    type: 'object',
    properties: {
        title: { type: 'string' },
        description: { type: 'string' }
    }
}

The prompt_completion transformation will still return a string, but you will be safe to JSON.parse() it.
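To illustrate what this validator checks, here is a minimal hand-rolled sketch of matching a parsed response against the schema above. It covers only the type and properties keywords, far less than a full JSON Schema validator, and is not AgenticFlow's implementation.

```javascript
// Minimal JSON Schema check: objects must contain every declared
// property, and leaf values must match their declared type.
function matchesSchema(value, schema) {
  if (schema.type === 'object') {
    if (typeof value !== 'object' || value === null) return false;
    return Object.entries(schema.properties || {})
      .every(([key, sub]) => key in value && matchesSchema(value[key], sub));
  }
  return typeof value === schema.type; // 'string', 'number', 'boolean'
}

const schema = {
  type: 'object',
  properties: { title: { type: 'string' }, description: { type: 'string' } }
};
matchesSchema(JSON.parse('{"title":"T","description":"D"}'), schema); // true
```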

regex

This validates whether the LLM’s response matches a regular expression, and attempts to fix the response if not. It requires a pattern property, in this format:

{
    pattern: "[a-z0-9]",
    flags: 'i'
} // equates to /[a-z0-9]/i

AgenticFlow automatically adds the regex pattern to your prompt, so you do not need to define the regular expression twice, once in the prompt and once in the validator.
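As a sketch (not AgenticFlow's implementation), the pattern and flags properties map directly onto JavaScript's RegExp constructor; matchesPattern is a hypothetical name for illustration:

```javascript
// Build a RegExp from { pattern, flags } and test the LLM's response.
function matchesPattern(output, { pattern, flags }) {
  return new RegExp(pattern, flags).test(output);
}

matchesPattern('ABC123', { pattern: '[a-z0-9]', flags: 'i' }); // true
matchesPattern('!!!', { pattern: '[a-z0-9]', flags: 'i' });    // false
```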
