Large Language Model (LLM) Action in AgenticFlow AI

Direct Access to a Variety of Large Language Models and Many Supporting Functionalities

We believe that Large Language Models (LLMs) like GPT will transform how software is used and the way we work. AgenticFlow AI makes using LLMs straightforward: access, settings, and output handling are all taken care of for you.

You communicate with an LLM in written natural language. The piece of text that provides information and instructions to an LLM is called a “Prompt”. Each time you use an LLM, you need to:

  1. Write a good prompt

  2. Choose a model

How to Use an LLM Action

To use an LLM, you need to add an “LLM” action to your workflow (check how to get started with creating a workflow).

Adding an LLM Action

  1. Navigate to the Workflow page.

  2. Click on + Create Workflow or select an existing workflow.

  3. In the empty state or within your workflow, click on + Add Action.

  4. Select LLM from the list of action components.

You can then choose the model you want to use and write your prompt in the base window.

Prompt

A prompt is a written text containing the information you want to provide to a language model, along with your instructions and the expected output. It is important to be clear and explicit. Notes on prompt engineering, with real samples, are provided in How to Write a Good Prompt.

Access to Input Variables and Other Action Outputs

The prompt input accepts both regular text and variable templating using {{}} syntax. For instance, if there is an input variable called “my_text”, you can include it in the prompt using {{my_text}}.

Start entering a variable name, and you will see a list of available variables to choose from.
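Conceptually, the `{{}}` templating is a string substitution step. The sketch below is a plain-Python illustration of the idea, not AgenticFlow's actual implementation; the variable name `my_text` is taken from the example above:

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from `variables`."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"No input variable named '{name}'")
        return str(variables[name])
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", substitute, template)

prompt = render_prompt(
    "Summarize the following text in one sentence:\n{{my_text}}",
    {"my_text": "AgenticFlow lets you chain LLM actions into workflows."},
)
```

An unknown variable name raises an error instead of silently passing `{{...}}` through to the model, which is usually the safer default.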

Model

To use a model you have subscribed to, make sure to add your API key from the provider. Otherwise, you will be using AgenticFlow's keys, and the usage will be counted toward your credit costs.

We support not just OpenAI's GPT models but also other vendors such as Cohere and Anthropic, and we are always adding to this list. Implement once, knowing that as new models come out, your product can take advantage of them.

In the next pages, we will explain more advanced settings for your LLM component, such as:

  • Conversation history

  • System prompt

  • Temperature

  • Validators

  • Handling large amounts of text/context

Common Errors

Prompt is Too Long

The error message below indicates that the provided prompt contains more tokens than the chosen model allows. To resolve the issue, use a model that supports a higher token limit. For large text inputs, AgenticFlow provides techniques to automatically keep the token count within the accepted range; more information is available in How to Handle Too Much Text.

Example Error:

400: {"message":"aviary.backend.llm.error_handling.PromptTooLongError: Input too long. Received 5002 tokens, but the maximum input length is 4090 tokens.","internal_message":"aviary.backend.server.openai_compat.openai_exception.OpenAIHTTPException","code":400,"type":"PromptTooLongError","param":{}}
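You can guard against this error before sending the prompt. Exact counts depend on each model's tokenizer, but a common rule of thumb for English text is roughly 4 characters per token. The sketch below uses that heuristic (an assumption, not AgenticFlow's tokenizer); the 4090-token limit comes from the error above:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate; real counts depend on the model's tokenizer."""
    return int(len(text) / chars_per_token) + 1

def truncate_to_budget(text: str, max_tokens: int, chars_per_token: float = 4.0) -> str:
    """Trim text so its estimated token count fits within max_tokens."""
    budget_chars = int(max_tokens * chars_per_token)
    return text if len(text) <= budget_chars else text[:budget_chars]

long_input = "word " * 10_000            # ~50,000 characters, well over the limit
safe_input = truncate_to_budget(long_input, max_tokens=4090)
```

For production use, prefer a real tokenizer for the model in question; a character-based estimate can be off for code, non-English text, or unusual formatting.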

Token Limit for Each Model

Make sure to check the token limits for each model to avoid exceeding them.

Validation

When output validations are set for an LLM, AgenticFlow automatically checks the output to confirm its validity. If the output does not match the required format, the error below is raised. The best solution is to improve your prompt with more explanation or examples.

Example Error:

Prompt completion did not pass validation

Data Too Large

This error occurs when you use AgenticFlow to handle large inputs by selecting the most relevant entries and the input data exceeds the limit. In that case, upload the data as a dataset and use it as knowledge in your workflow. The maximum size for non-knowledge data is 131,072 tokens (~90kb).

Example Error:

Data is too large for the 'Most relevant data'. Consider adding the data to knowledge.

Rate Limit

This error happens when the API key in use has a different rate limit from what AgenticFlow uses by default. Retrying with pauses of increasing length between attempts usually resolves this issue.

Example Error:

429: {"message":"Rate limit reached for default-gpt-4 in organization org-... on tokens per min. Limit: 40000 / min. Please try again in 1ms. Contact us through our help center at help.openai.com if you continue to have issues.","type":"tokens","param":null,"code":"rate_limit_exceeded"}
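"Pauses of increasing length" is essentially exponential backoff. A minimal sketch in plain Python, where `call_llm` is a hypothetical stand-in for your actual request (a real client would catch its specific 429 error type rather than `RuntimeError`):

```python
import random
import time

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `call` on rate-limit errors, doubling the pause each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:                 # stand-in for a 429 rate-limit error
            if attempt == max_retries - 1:
                raise                        # out of retries: surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Simulated call that fails twice with a rate limit, then succeeds.
attempts = {"n": 0}
def call_llm():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429: rate_limit_exceeded")
    return "completion"

result = with_backoff(call_llm, base_delay=0.01)
```

The small random jitter added to each delay helps avoid many clients retrying in lockstep against the same limit.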

Negative Credits

The following error indicates that the credits are below zero, and you need to top up to continue using the platform.

Example Error:

Organization Entitlement setting negative_credits : {"limit":0} does not allow this action.

LLM Run Rate

This error happens when the API key in use has a different run limit from what AgenticFlow uses by default. Retrying with a longer pause between runs helps with this issue.

Example Error:

The maximum number of LLM runs per minute for your OpenAI Plan has been reached. If you are using your own OpenAI Key, please either delete your key to use AgenticFlow's account, or upgrade your OpenAI Plan.
429: {"message":"You exceeded your current quota, please check your plan and billing details.","type":"insufficient_quota","param":null,"code":"insufficient_quota"}

Temperature

There is a temperature parameter under the LLM advanced settings. The error below occurs if the entered value is outside the accepted 0–1 range.

Example Error:

400: {"message":"-0.5 is less than the minimum of 0 - 'temperature'","type":"invalid_request_error","param":null,"code":null}
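If you set the temperature programmatically, validating it before the request saves a round trip to the API. A minimal sketch; `validate_temperature` is a hypothetical helper, and the 0–1 range is taken from the error above:

```python
def validate_temperature(value: float) -> float:
    """Raise if temperature falls outside the accepted 0-1 range."""
    if not 0 <= value <= 1:
        raise ValueError(f"temperature must be between 0 and 1, got {value}")
    return value
```

For example, `validate_temperature(-0.5)` raises a `ValueError` locally instead of producing the 400 response shown above.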

History

This error occurs if History is set to an empty array. Either enter values or use the X button on the right side of each row to remove the empty rows.

Example Error:

Studio transformation prompt_completion input validation error: must be array {"type":"array"} /history
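If you build the history programmatically, dropping empty rows before the request mirrors what the X button does in the UI. A minimal sketch; the role/message row structure here is an assumption for illustration, not AgenticFlow's exact schema:

```python
def clean_history(history: list) -> list:
    """Drop rows whose message is empty, mirroring the UI's X button."""
    return [row for row in history if row.get("message", "").strip()]

history = [
    {"role": "user", "message": "What is AgenticFlow?"},
    {"role": "assistant", "message": ""},   # empty row: would fail validation
]
cleaned = clean_history(history)
```

After cleaning, only the non-empty user row remains in the history that is sent with the request.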

Plan Limitations

The below error occurs when GPT-4 is used under an AgenticFlow account with a plan that does not support the GPT-4 model.

Example Error:

Organization Entitlement setting premium_llm_generation : {"limit":false} does not allow this action.
