LLM
A comprehensive guide to the fundamental concepts of working with Large Language Models (LLMs).
The LLM (Large Language Model) Action is the core of most AI-powered workflows in AgenticFlow. It's a general-purpose tool that allows you to send instructions (a "prompt") to a powerful language model to perform a vast range of tasks, from writing emails and summarizing text to analyzing data and making decisions.
This guide covers the fundamental concepts that apply to all LLM actions, regardless of the specific provider (like OpenAI or Anthropic).
Core Concepts
1. Prompting: Giving Instructions
The most critical part of using an LLM is the prompt. This is the set of instructions you give the model. A well-crafted prompt is clear, specific, and provides enough context for the model to understand the desired output.
There are two parts to a prompt:
System Prompt: This sets the stage and defines the AI's persona, role, or overall goal. It's the "You are..." part of the instruction.
Good Example:
You are a helpful and cheerful customer support assistant. Your goal is to answer user questions clearly and provide a single, actionable next step.
User Prompt: This is the specific, immediate task you want the model to perform.
Good Example:
A customer is asking for a refund for order #12345. Please draft a polite email explaining that we can offer store credit.
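The two prompt parts above are typically combined into a single chat-style request. A minimal sketch, assuming an OpenAI-style message format (the exact field names vary by provider and are not AgenticFlow-specific):

```python
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Combine a system prompt and a user prompt into one chat request payload."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a helpful and cheerful customer support assistant.",
    "A customer is asking for a refund for order #12345.",
)
```

The system message shapes every reply in the conversation, while the user message carries the immediate task.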
2. Temperature: Controlling Creativity
Temperature is a number (usually between 0 and 1) that controls the randomness and "creativity" of the model's output.
Low Temperature (e.g., 0.2): The model will be more focused, deterministic, and predictable. It will choose the most likely next word, which is great for factual tasks like data extraction, classification, or writing code.
High Temperature (e.g., 0.8): The model will be more creative and surprising. It might choose less common words, leading to more diverse and imaginative outputs. This is ideal for brainstorming, writing marketing copy, or other creative tasks.
3. Max Tokens: Limiting Output Length
Max Tokens sets a limit on the length of the model's response. A "token" is a piece of a word (roughly 4 characters for English). Setting this value helps control costs and ensures the output remains concise. If you expect a short answer (like "yes" or "no"), set a low value like 5. If you need a long article, you might set it to 2000 or higher.
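The ~4-characters-per-token rule of thumb is enough for picking a sensible Max Tokens value. A quick estimator based on it (real tokenizers give exact counts; this is only an approximation):

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate using the ~4 characters per English token heuristic."""
    return max(1, round(len(text) / 4))

estimate_tokens("yes")       # a one-word answer: 1 token
estimate_tokens("word " * 400)  # ~400 short words: roughly 500 tokens
```

If your estimated response length is near the Max Tokens limit, raise the limit, or the output will be cut off mid-sentence.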
Advanced Feature: JSON Mode
Many LLM actions offer a JSON Mode. When you enable this, you are instructing the model to only output valid JSON that conforms to a structure you define in the prompt. This is incredibly powerful for structured data extraction.
How to Use JSON Mode
Enable JSON Mode: Check the "JSON Mode" box in the action's configuration.
Define the Structure in Your Prompt: Your prompt must clearly state that you want JSON output and provide an example of the desired structure.
Example: Extracting Contact Information
User Prompt:
Guaranteed Output:
Using JSON mode eliminates the need for messy text parsing in later steps. The output is a clean, structured object ready to be used by other actions like a Google Sheet Action or an API Action.
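To make this concrete, here is an illustrative prompt-and-output pair for the contact-extraction case. The schema and field names (`name`, `email`, `phone`) are assumptions made up for this example, not part of AgenticFlow:

```python
import json

# The prompt states the requirement and pins down the exact JSON structure.
prompt = """Extract the contact information from the text below.
Respond ONLY with valid JSON in this exact structure:
{"name": "...", "email": "...", "phone": "..."}

Text: Reach Jane Doe at jane@example.com or 555-0100."""

# In JSON Mode, the response parses directly -- no text scraping needed.
model_output = '{"name": "Jane Doe", "email": "jane@example.com", "phone": "555-0100"}'
contact = json.loads(model_output)
```

Because `contact` is a plain structured object, a later step can reference fields like the email address directly instead of pattern-matching the raw text.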
Configuration
To use the LLM Action, you need to configure its input parameters.
Input Parameters
Model (Dropdown): The specific language model you want to use (e.g., Google Gemini 2.0 Flash, gpt-4o-mini). The available models will depend on the connections you have configured.
System Message (Text): An optional instruction that sets the context and persona for the AI. It tells the model how to behave (e.g., "You are a helpful assistant specializing in marketing copy.").
Human Message (Text): This is your prompt — the main instruction or question you are giving to the model. You can use variables from previous actions (e.g., {{previous_action.output}}).
Chat History ID (Text): An optional ID used to maintain a memory of the conversation across multiple runs. This is crucial for building conversational chatbots.
Temperature (Number): A value from 0 to 1 that controls the randomness of the output. Lower values (e.g., 0.2) make the output more deterministic, while higher values (e.g., 0.8) make it more creative.
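The Chat History ID parameter above can be pictured as a key into stored conversation memory. A minimal sketch of the assumed behavior (AgenticFlow manages this internally; the storage shown here is illustrative):

```python
# Conversation memory keyed by Chat History ID, persisted across runs.
histories: dict[str, list[dict]] = {}

def run_llm_action(chat_history_id: str, human_message: str) -> list[dict]:
    """Append the new user turn to the history stored under this ID,
    so the model sees all prior turns on every run."""
    history = histories.setdefault(chat_history_id, [])
    history.append({"role": "user", "content": human_message})
    # ... the model call would go here; its reply is stored as an assistant turn
    history.append({"role": "assistant", "content": "<model reply>"})
    return history

run_llm_action("support-chat-42", "Hi, I need help with my order.")
turns = run_llm_action("support-chat-42", "It's order #12345.")
```

Two runs that share the same ID share one conversation; runs with different IDs are independent, which is how one workflow can serve many chatbot users at once.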
Example Configuration
Output
The LLM Action produces a single output containing the generated text from the model.
Output Parameter
content (Text): The text content generated by the model in response to your prompt.
You can reference this output in subsequent actions using a variable, like {{llm_action_name.content}}.
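Conceptually, that variable reference is a placeholder that the workflow engine fills in with the action's output at run time. A rough sketch of the idea (the real templating is handled by AgenticFlow; this function is purely illustrative):

```python
import re

def fill_variables(template: str, outputs: dict[str, dict]) -> str:
    """Replace {{action_name.field}} placeholders with stored action outputs."""
    def repl(match: re.Match) -> str:
        action, field = match.group(1), match.group(2)
        return str(outputs[action][field])
    return re.sub(r"\{\{(\w+)\.(\w+)\}\}", repl, template)

outputs = {"llm_action_name": {"content": "Here is your summary."}}
body = fill_variables("Email body: {{llm_action_name.content}}", outputs)
```

Any later action that accepts text — an email step, a Google Sheet Action, an API Action — can consume the filled-in value the same way.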