LLM
Action ID: llm
Description
Use a large language model such as GPT
Input Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| human_message | string | ✓ | - | The prompt that is fed to the model (max 640,000 characters) |
| system_message | string | - | null | Instructions for the AI assistant on how to behave and respond (max 640,000 characters) |
| chat_history_id | string | - | null | The chat history ID used to retrieve conversation history. If not provided, a new chat history will be created |
| model | dropdown | - | DeepSeek V3 | The AI model to use for generating the response |
| temperature | number | - | 0.5 | Controls randomness in responses (0.0-1.0). Higher values produce more creative responses |
Output Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| content | string | The AI-generated response text |
| chat_history_id | string | The chat history ID used to maintain conversation context |
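To make the input and output shapes concrete, here is a minimal sketch of a node invocation using the parameter names from the tables above. The payload layout is an illustrative assumption, not the platform's actual wire format.

```python
# Hypothetical input and output payloads for the LLM node, built from
# the documented parameter names. Exact structure is an assumption.

node_input = {
    "human_message": "Summarize this ticket in one sentence.",  # required
    "system_message": "You are a concise support assistant.",   # optional, default null
    "chat_history_id": None,        # optional; None starts a new history
    "model": "DeepSeek V3",         # default model
    "temperature": 0.5,             # default; must stay within 0.0-1.0
}

node_output = {
    "content": "The customer reports a login failure after the latest update.",
    "chat_history_id": "chat_abc123def456",
}

# Only human_message is required; everything else falls back to its default.
assert "human_message" in node_input
assert 0.0 <= node_input["temperature"] <= 1.0
```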
How It Works
This node processes text prompts through large language models to generate responses. When you provide a human message (your question or request), the node sends it to the selected AI model along with any optional system instructions that define the model's behavior. The model analyzes the input, considers any chat history for conversational context, and generates a relevant response. The temperature parameter controls output randomness: lower values produce more focused, deterministic responses, while higher values produce more creative and varied outputs.
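The assembly step described above can be sketched as follows. The message-list format and the `build_messages` helper are illustrative assumptions about how the node combines its inputs, not the platform's actual internals.

```python
# Minimal sketch: an optional system message, prior turns retrieved via
# chat_history_id, and the new human message are combined into one
# ordered message list for the selected model.

def build_messages(human_message, system_message=None, history=None):
    """Return the message list sent to the model (hypothetical helper)."""
    if not human_message:
        raise ValueError("human_message must not be empty")
    messages = []
    if system_message:
        messages.append({"role": "system", "content": system_message})
    messages.extend(history or [])  # prior turns keep conversational context
    messages.append({"role": "user", "content": human_message})
    return messages

msgs = build_messages(
    "Can you elaborate on the mixing ratios?",
    system_message="You are an art teacher.",
    history=[
        {"role": "user", "content": "What are the primary colors?"},
        {"role": "assistant", "content": "Red, blue, and yellow."},
    ],
)
assert [m["role"] for m in msgs] == ["system", "user", "assistant", "user"]
```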
Usage Examples
Example 1: Simple Question Answering
Input:
human_message: "What are the three primary colors and why are they important in art?"
model: "DeepSeek V3"
temperature: 0.3
Output:
content: "The three primary colors are red, blue, and yellow. They are important in art because they cannot be created by mixing other colors, and all other colors can be created by combining these primaries in various proportions. This fundamental principle forms the basis of color theory and allows artists to create a full spectrum of colors from just these three base colors."
chat_history_id: "chat_abc123def456"
Example 2: Content Generation with System Instructions
Input:
human_message: "Write a professional email apologizing for a delayed shipment"
system_message: "You are a professional customer service representative. Write concise, empathetic emails that acknowledge issues and provide solutions."
model: "OpenAI GPT-4o Mini"
temperature: 0.7
Output:
content: "Subject: Sincere Apologies for Your Delayed Shipment\n\nDear Valued Customer,\n\nI sincerely apologize for the delay in your recent order. We understand how frustrating this must be, and we take full responsibility for not meeting our delivery commitment.\n\nYour package is now en route and should arrive within 2-3 business days. To make up for this inconvenience, we've applied a 15% discount to your next purchase.\n\nThank you for your patience and understanding.\n\nBest regards,\nCustomer Service Team"
chat_history_id: "chat_789ghi012jkl"
Example 3: Continuing a Conversation
Input:
human_message: "Can you elaborate on the mixing ratios?"
chat_history_id: "chat_abc123def456"
model: "Google Gemini 2.0 Flash"
temperature: 0.5
Output:
content: "Certainly! When mixing primary colors, the ratios determine which secondary or tertiary color you'll create. For example:\n\n- Equal parts red + blue = purple\n- Equal parts blue + yellow = green\n- Equal parts red + yellow = orange\n\nBy adjusting the ratios (like 2 parts red to 1 part blue), you can create variations like violet or burgundy. The possibilities are virtually endless, which is why understanding primary colors is so fundamental to color mixing."
chat_history_id: "chat_abc123def456"
Common Use Cases
Customer Support Automation: Generate intelligent responses to customer inquiries, support tickets, and frequently asked questions
Content Creation: Write blog posts, product descriptions, marketing copy, social media content, and advertising text
Data Analysis: Analyze text data, extract insights, summarize reports, and identify patterns in large datasets
Code Assistance: Generate code snippets, explain technical concepts, debug issues, and provide programming guidance
Translation and Localization: Translate content between languages while maintaining context and cultural nuances
Educational Content: Create explanations, tutorials, study guides, and answer student questions on various topics
Conversational AI: Build chatbots, virtual assistants, and interactive conversation systems with maintained context
Error Handling

| Error | Cause | Resolution |
| --- | --- | --- |
| Empty Message Error | human_message parameter is empty or missing | Ensure human_message contains text before sending to the model |
| Token Limit Exceeded | Input message exceeds the 640,000-character limit | Reduce the length of your prompt or split it into multiple requests |
| Invalid Chat History | chat_history_id doesn't exist or has expired | Start a new conversation by omitting chat_history_id, or verify the ID is correct |
| Model Unavailable | Selected model is temporarily down or unavailable | Switch to an alternative model or retry after a few minutes |
| Temperature Out of Range | Temperature value is not between 0.0 and 1.0 | Set temperature to a value within the valid range (0.0-1.0) |
| Rate Limit Error | Too many requests sent in a short time period | Implement request throttling or wait before sending additional requests |
| Invalid System Message | System message contains unsupported content or formatting | Simplify the system message and ensure it contains valid instructions only |
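Several of these errors can be caught locally before a request is ever sent. Below is a minimal pre-flight validator mirroring the documented conditions; the limits come from this page, but the function itself is an illustrative sketch, not part of the platform.

```python
# Hypothetical pre-flight checks matching the documented error cases:
# empty message, character limits, and temperature range.

MAX_CHARS = 640_000  # documented limit for human_message and system_message

def validate_inputs(human_message, system_message=None, temperature=0.5):
    """Return a list of error strings; empty list means the inputs pass."""
    errors = []
    if not human_message or not human_message.strip():
        errors.append("Empty Message Error: human_message is empty or missing")
    elif len(human_message) > MAX_CHARS:
        errors.append("Token Limit Exceeded: human_message over 640,000 characters")
    if system_message and len(system_message) > MAX_CHARS:
        errors.append("Token Limit Exceeded: system_message over 640,000 characters")
    if not 0.0 <= temperature <= 1.0:
        errors.append("Temperature Out of Range: must be between 0.0 and 1.0")
    return errors

assert validate_inputs("Hello") == []
assert any("Empty Message" in e for e in validate_inputs("   "))
assert any("Temperature" in e for e in validate_inputs("Hi", temperature=1.5))
```

Rate limiting and model availability can only be detected at request time, so those still need retry handling around the actual call.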
Notes
Model Selection: DeepSeek V3 offers excellent performance for most tasks. Use GPT-4o Mini for cost efficiency, Gemini for multimodal capabilities, and Claude for detailed analysis
Temperature Control: Use 0.0-0.3 for factual, deterministic responses (data analysis, extraction). Use 0.7-1.0 for creative tasks (content generation, brainstorming)
System Messages: Define the model's role, tone, and behavior. Examples: "You are a technical expert", "Respond concisely", "Use a friendly tone"
Chat History: Reuse chat_history_id to maintain conversation context across multiple turns. Essential for building conversational experiences
Prompt Engineering: Clear, specific prompts produce better results. Include context, desired format, and any constraints in your prompt
Character Limits: Both human_message and system_message support up to 640,000 characters, allowing for extensive context and detailed instructions
Response Quality: More advanced models (Gemini Pro, Claude) provide better reasoning but cost more. Balance quality needs with budget constraints
Workflow Integration: Chain multiple LLM nodes together for complex tasks like draft-review-refine workflows or multi-step analysis
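The draft-review-refine pattern mentioned above can be sketched as three chained node calls, each feeding its content output into the next node's human_message. `run_llm_node` is a hypothetical stub standing in for the real node; the system messages and temperatures are illustrative choices.

```python
# Sketch of a draft-review-refine chain of three LLM-node calls.

def run_llm_node(human_message, system_message, temperature):
    # Stand-in for the real node; echoes a canned "response".
    return {"content": f"response to: {human_message[:40]}"}

draft = run_llm_node(
    "Write a product description for a solar lantern.",
    system_message="You are a copywriter drafting marketing text.",
    temperature=0.8,  # higher temperature for creative drafting
)
review = run_llm_node(
    f"List weaknesses in this draft:\n{draft['content']}",
    system_message="You are a critical editor reviewing copy.",
    temperature=0.3,  # lower temperature for focused critique
)
final = run_llm_node(
    f"Rewrite the draft addressing this feedback:\n{review['content']}",
    system_message="You are a copywriter refining marketing text.",
    temperature=0.5,
)
assert all("content" in step for step in (draft, review, final))
```

Each stage reuses the same node type with a different system message and temperature, which is why the temperature guidance in the notes above matters per step rather than per workflow.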