Large Language Model (LLM) Action in AgenticFlow AI
Direct Access to a Variety of Large Language Models and Many Supporting Functionalities
We believe that Large Language Models (LLMs) like GPT will transform how software is used and the way we work. With AgenticFlow AI, using LLMs is extremely easy, as all the requirements (e.g., access, settings, output handling) are taken care of.
Communication with LLMs happens in written natural language. The piece of text used to provide information and instructions to an LLM is called a “Prompt”. Each time you use an LLM, you will need to:
Write a good prompt
Choose a model
How to Use an LLM Action
To use an LLM, you need to add an “LLM” action to your workflow (check how to get started with creating a workflow).
Adding an LLM Action
Navigate to the Workflow page.
Click on + Create Workflow or select an existing workflow.
In the empty state or within your workflow, click on + Add Action.
Select LLM from the list of action components.
You can then choose the model you want to use and write your prompt in the base window.
Prompt
A prompt is a written text that includes the information you want to provide to a language model, as well as your instructions and expectations. It is important to be clear and explicit. Notes on prompt engineering, with real samples, are provided at How to Write a Good Prompt.
Access to Input Variables and Other Action Outputs
The prompt input accepts both regular text and variable templating using {{}} syntax. For instance, if there is an input variable called “my_text”, you can include it in the prompt using {{my_text}}.
Start entering a variable name, and you will see a list of available variables to choose from.
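To illustrate how this kind of templating behaves, here is a minimal Python sketch of {{}}-style substitution. The `render_prompt` function and its error handling are our own illustration of the concept, not AgenticFlow's actual implementation.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values from `variables`.

    Illustrative sketch only; AgenticFlow resolves variables for you
    when the workflow runs.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1).strip()
        if name not in variables:
            raise KeyError(f"Unknown variable: {name}")
        return str(variables[name])

    # Non-greedy match so multiple placeholders on one line work.
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

prompt = render_prompt(
    "Summarize the following text:\n{{my_text}}",
    {"my_text": "LLMs transform how software is used."},
)
```

In the real editor you simply type `{{` and pick from the variable list; the sketch only shows the substitution that happens behind the scenes.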
Model
To use a model you have subscribed to, make sure to add your API key from the provider. Otherwise, you will be using AgenticFlow's keys, and the usage will be deducted from your credits.
We support not only GPT but also models from other vendors such as Cohere and Anthropic, and we are always adding to this list. Implement once, knowing that as new models come out, your product can take advantage of them!
Model Name | Model ID | Provider | Model Specifics |
---|---|---|---|
GPT 4o | openai-gpt4o | OpenAI | GPT 4o SOTA model |
GPT 4 | openai-gpt4 | OpenAI | English data, larger context (compared to GPT 3.5), Strong reasoning, Coding, Layout, 200+ output languages |
GPT 4 NEW | openai-gpt4-0613 | OpenAI | GPT 4 new version (improved accuracy) |
GPT 3.5 | openai-gpt35 | OpenAI | English data, Medium context, Simple reasoning, Coding, Layout |
GPT 3.5 NEW | openai-gpt35-0613 | OpenAI | GPT 3.5 new version (improved accuracy) |
GPT 3.5 16k | openai-gpt35-16k | OpenAI | GPT 3.5 with increased context window |
Claude | anthropic-claude-v1 | Anthropic | Large context, Strong in parsing large texts and documents |
Claude (100k) | anthropic-claude-v1-100k | Anthropic | Claude with increased context window |
Claude Instant | anthropic-claude-instant-v1 | Anthropic | Anthropic’s fastest model |
Claude Instant (100k) | anthropic-claude-instant-v1-100k | Anthropic | Anthropic’s fastest model with increased context window |
Command | cohere-command | Cohere | Cohere model, supports over 100 languages |
Command Light | cohere-command-light | Cohere | Cohere model, easy to retrain |
The next pages explain more advanced settings for your LLM component, such as:
Conversation history
System prompt
Temperature
Validators
Handling large amounts of text/context
Common Errors
Prompt is Too Long
The error message below indicates that the provided prompt includes more tokens than the chosen model allows. To resolve the issue, use a model that supports a higher number of tokens. For large text inputs, AgenticFlow provides techniques to automatically keep the tokens within the accepted range; more information is available at How to Handle Too Much Text.
Example Error:
Token Limit for Each Model
Make sure to check the token limits for each model to avoid exceeding them.
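As a rough pre-check before sending a prompt, you can estimate its token count. The sketch below uses the common "about 4 characters per token" rule of thumb for English text; for exact counts you would use the model's own tokenizer (e.g. tiktoken for GPT models), and the 16,384-token limit shown is a placeholder, not a specific model's actual limit.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Use the model's own tokenizer for exact counts.
    return max(1, len(text) // 4)

def fits_in_context(text: str, limit: int = 16_384) -> bool:
    # `limit` is a hypothetical context window; check your model's
    # actual token limit in the table above or the provider's docs.
    return estimate_tokens(text) <= limit
```

A check like this only flags obvious overruns; borderline prompts should still be verified against the real tokenizer.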
Validation
When output validations are set for an LLM, AgenticFlow automatically checks the output to confirm its validity. If the output does not match the required setup, the error below will be raised. The best solution is to improve your prompt with more explanation or examples.
Example Error:
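To show what such a validation check does, here is a minimal Python sketch that verifies a model's raw output parses as a JSON object. This is our own simplified example; AgenticFlow's validators are configured in the workflow UI and may check other formats as well.

```python
import json

def validate_json_output(raw: str) -> dict:
    """Check that the model's raw output parses as a JSON object.

    A simplified stand-in for the kind of check a configured
    validator performs on an LLM's output.
    """
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Output is not valid JSON: {exc}") from exc
    if not isinstance(parsed, dict):
        raise ValueError("Expected a JSON object at the top level")
    return parsed
```

When a check like this fails, the fix is usually on the prompt side: state the required format explicitly and include an example of a valid output.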
Too Large Data
AgenticFlow handles large inputs by selecting the most relevant entries, but if the input data is too large, you need to upload it as a dataset and use it as knowledge in your workflow. The maximum size for non-knowledge data is 131,072 tokens (~90kb).
Example Error:
Rate Limit
This error happens when the API key in use has a different rate limit from what AgenticFlow uses by default. Retrying with pauses of varying length helps with this issue.
Example Error:
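The retry-with-pauses approach can be sketched as exponential backoff with jitter. The `RuntimeError` stand-in and the delay values below are illustrative assumptions, not AgenticFlow's actual exception types or timings.

```python
import random
import time

def call_with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry a rate-limited call with exponential backoff and jitter.

    `call` is any zero-argument function; RuntimeError here is a
    simplified stand-in for a provider's rate-limit error.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # Give up after the final attempt.
            # Pause base_delay, 2x, 4x, ... plus a little random jitter
            # so parallel runs do not all retry at the same instant.
            time.sleep(base_delay * 2 ** attempt + random.random() * 0.1)
```

Increasing the pause between attempts, rather than retrying immediately, is what lets the provider's rate window reset.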
Negative Credits
The following error indicates that the credits are below zero, and you need to top up to continue using the platform.
Example Error:
LLM Run Rate
This error happens when the API key in use has a different run-rate limit from what AgenticFlow uses by default. Retrying with a longer pause between runs helps with this issue.
Example Error:
Temperature
There is a temperature parameter under the LLM advanced settings. The error below occurs if the entered value is outside the accepted range of 0 to 1.
Example Error:
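A minimal sketch of the range check, assuming the 0 and 1 bounds are inclusive (the exact boundary behavior is AgenticFlow's; this function is only an illustration):

```python
def validate_temperature(value: float) -> float:
    # Assumes an inclusive 0-1 range; verify the exact bounds
    # against the LLM advanced settings in your workflow.
    if not 0 <= value <= 1:
        raise ValueError(
            f"Temperature {value} is outside the accepted 0-1 range"
        )
    return value
```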
History
This error occurs if History is set to an empty array. Either enter values or use the X button on the right side of each row to remove the empty rows.
Example Error:
Plan Limitations
The error below occurs when GPT-4 is used under an AgenticFlow account whose plan does not support the GPT-4 model.
Example Error: