Ask LLM Advance

Action ID: pml_llm

Description

Access advanced large language models with chat history support. This node provides sophisticated AI responses with conversation memory and customizable model selection.

Connection

| Name | Description | Required | Category |
| --- | --- | --- | --- |
| PixelML Connection | The PixelML connection to use for the assistant. | True | pixelml |

Input Parameters

| Name | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| human_message | string | True | - | The prompt that is fed to the model. Max length: 640,000 characters |
| model | string | True | - | The AI model to use to generate the response |
| system_message | string | False | - | System message that tells the model how to behave. Max length: 640,000 characters |
| chat_history_id | string | False | - | Chat history ID used to retrieve conversation context. If not provided, a new chat history will be created |
| temperature | number | False | 0.5 | Temperature for response generation. Higher values result in more creative responses. Range: 0 to 1 |


Input Schema

```json
{
  "description": "LLM advance node input.",
  "properties": {
    "human_message": {
      "description": "The prompt that is fed to the model.",
      "maxLength": 640000,
      "title": "Prompt",
      "type": "string"
    },
    "system_message": {
      "anyOf": [
        {
          "maxLength": 640000,
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "This represents a system message, which tells the model how to behave.",
      "title": "System Message"
    },
    "chat_history_id": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "The chat history id used to retrieve the chat history. If not provided, a new chat history will be created.",
      "title": "Chat History ID"
    },
    "model": {
      "description": "The AI model to use to generate the response.",
      "title": "Model",
      "type": "string"
    },
    "temperature": {
      "default": 0.5,
      "description": "The temperature to use when generating the response. Higher temperatures will result in more creative responses.",
      "maximum": 1,
      "minimum": 0,
      "title": "Temperature",
      "type": "number"
    }
  },
  "required": [
    "human_message",
    "model"
  ],
  "title": "LLMAdvanceInput",
  "type": "object"
}
```
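The schema's constraints can also be checked client-side before invoking the node. This standard-library sketch mirrors the required fields, length limits, and temperature range from the schema above; the helper itself is illustrative, not part of the node.

```python
MAX_LEN = 640_000  # maxLength from the schema above

def validate_input(payload):
    """Return a list of schema violations; an empty list means valid."""
    errors = []
    for field in ("human_message", "model"):  # the schema's required list
        if not payload.get(field):
            errors.append(f"{field} is required")
    if len(payload.get("human_message") or "") > MAX_LEN:
        errors.append("human_message exceeds 640,000 characters")
    system_message = payload.get("system_message")
    if system_message is not None and len(system_message) > MAX_LEN:
        errors.append("system_message exceeds 640,000 characters")
    temperature = payload.get("temperature", 0.5)
    if not 0 <= temperature <= 1:
        errors.append("temperature must be between 0 and 1")
    return errors
```

Running this check before the node call turns schema errors into actionable messages instead of round-trip failures.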

How It Works

This node provides access to various large language models through PixelML's unified API, with conversation memory. When you send a prompt, the node checks whether a chat history ID is provided; if so, it retrieves the previous conversation context and includes it with your new message. The selected model then processes your prompt together with the system message (which defines the AI's behavior) and the chat history, generating a contextually relevant response. The node returns both the response content and a chat history ID for maintaining conversation continuity across multiple interactions.
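This flow can be sketched as a thin client wrapper. Note that `client.run_action` is an assumed helper used for illustration here, not a documented PixelML API.

```python
# Hypothetical wrapper around the pml_llm action. `client.run_action`
# is an assumed helper for illustration, not a documented PixelML API.

def ask_llm(client, human_message, model, system_message=None,
            chat_history_id=None, temperature=0.5):
    payload = {
        "human_message": human_message,
        "model": model,
        "temperature": temperature,
    }
    if system_message is not None:
        payload["system_message"] = system_message
    if chat_history_id is not None:
        # Reuse an existing conversation; otherwise the node creates one.
        payload["chat_history_id"] = chat_history_id
    result = client.run_action("pml_llm", payload)
    # The output schema guarantees both keys are present.
    return result["content"], result["chat_history_id"]
```

Passing the returned chat history ID back into the next call is what gives the node its conversation memory.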

Usage Examples

Example 1: Simple One-Time Query

Input:

human_message: "Explain the benefits of serverless architecture in 3 bullet points"
model: "gpt-4"
system_message: "You are a technical consultant who explains concepts clearly and concisely"
temperature: 0.3

Output:

content: "• **Cost Efficiency**: Pay only for actual compute time used, eliminating costs for idle server capacity\n• **Auto-Scaling**: Automatically handles traffic spikes without manual intervention or capacity planning\n• **Reduced Operations**: No server management, patching, or infrastructure maintenance required"
chat_history_id: "ch_abc123xyz789"

Example 2: Multi-Turn Conversation

Input (Turn 1):

human_message: "I'm building an e-commerce platform. What database should I use?"
model: "claude-3-sonnet"
system_message: "You are an experienced software architect who asks clarifying questions"
temperature: 0.5

Output (Turn 1):

content: "To recommend the right database, I need to understand your requirements better:\n\n1. What's your expected scale (users, transactions per day)?\n2. Do you need complex queries or mainly simple lookups?\n3. Are you prioritizing consistency or availability?\n4. What's your team's expertise?"
chat_history_id: "ch_def456uvw012"

Input (Turn 2):

human_message: "We expect 10,000 daily users, need complex product search, prioritize consistency, and our team knows SQL well"
model: "claude-3-sonnet"
chat_history_id: "ch_def456uvw012"
temperature: 0.5

Output (Turn 2):

content: "Based on your requirements, I recommend **PostgreSQL**:\n\n• Handles 10K users easily with proper indexing\n• Excellent full-text search for product queries\n• ACID compliance ensures consistency\n• Your team's SQL expertise directly applies\n• Rich ecosystem of e-commerce extensions"
chat_history_id: "ch_def456uvw012"
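The turn-by-turn pattern above can be expressed as a loop that threads the chat history ID between calls. Here `call_pml_llm` is a stand-in for however your workflow invokes the node.

```python
# Multi-turn conversation sketch: the chat_history_id returned by the
# first call is sent with every later turn to keep the same context.
# `call_pml_llm` is an assumed callable taking the node's input payload.

def converse(call_pml_llm, prompts, model="claude-3-sonnet"):
    history_id = None
    replies = []
    for prompt in prompts:
        payload = {"human_message": prompt, "model": model}
        if history_id is not None:
            payload["chat_history_id"] = history_id
        out = call_pml_llm(payload)
        history_id = out["chat_history_id"]  # same ID is returned each turn
        replies.append(out["content"])
    return replies, history_id
```

The first turn omits `chat_history_id` (so a new history is created), and every later turn reuses the ID from the previous response.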

Example 3: Creative Content Generation

Input:

human_message: "Write a product description for a smart water bottle that tracks hydration"
model: "gpt-4"
system_message: "You are a creative copywriter who writes compelling, benefit-focused product descriptions"
temperature: 0.8

Output:

content: "Stay perfectly hydrated with the HydroSmart™ Bottle—your personal hydration coach in the palm of your hand. This intelligent water bottle doesn't just hold your water; it actively tracks your intake, reminds you when it's time to drink, and syncs with your fitness apps to ensure you're meeting your daily goals. With a sleek, leak-proof design and up to 30 days of battery life, staying healthy has never been this effortless."
chat_history_id: "ch_ghi789mno345"

Common Use Cases

  • Conversational AI Applications: Build chatbots and virtual assistants with conversation memory across multiple exchanges

  • Content Generation: Create marketing copy, blog posts, product descriptions, and creative content with customizable tone

  • Code Assistance: Get programming help, code reviews, debugging suggestions, and technical explanations

  • Customer Support: Provide intelligent, context-aware responses to customer inquiries with conversation history

  • Data Analysis: Ask questions about data, get insights, and request follow-up analysis in a conversational manner

  • Document Summarization: Summarize long documents, articles, or reports with follow-up questions for clarification

  • Language Translation: Translate text with context preservation and ability to refine translations through conversation

Error Handling

| Error Type | Cause | Solution |
| --- | --- | --- |
| Empty Prompt | human_message field is empty or null | Provide a valid prompt in the human_message field |
| Invalid Model | Model identifier doesn't exist or isn't available | Check the PixelML documentation for available model names |
| Invalid Chat History ID | Chat history ID doesn't exist or has expired | Start a new conversation without chat_history_id, or verify the ID is correct |
| Message Too Long | human_message or system_message exceeds 640,000 characters | Shorten your message or split it into multiple requests |
| Invalid Temperature | Temperature outside the 0 to 1 range | Set temperature between 0 (deterministic) and 1 (creative) |
| Rate Limit Exceeded | Too many requests sent in a short period | Add delays between requests or upgrade your PixelML plan |
| PixelML Connection Error | Invalid or missing PixelML API credentials | Verify your PixelML connection is configured with a valid API key |
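For the rate-limit case in particular, a simple exponential backoff keeps a workflow resilient. This sketch assumes the node surfaces rate limiting as a distinct exception type; the actual error class is platform-specific.

```python
import time

class RateLimitError(Exception):
    """Stand-in for the platform's rate-limit error (assumed name)."""

def call_with_backoff(call, payload, retries=3, base_delay=1.0):
    """Retry a node call with exponential backoff on rate-limit errors."""
    for attempt in range(retries):
        try:
            return call(payload)
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of retries; let the caller handle it
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Doubling the delay on each retry spaces requests out quickly without hard-coding a fixed wait.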

Notes

  • Chat History Management: Save the chat_history_id returned in each response to maintain conversation context in subsequent requests

  • System Message Impact: The system message shapes the AI's behavior and tone, so be specific about the role and style you want

  • Temperature Control: Use a low temperature (0.0-0.3) for factual, consistent responses and a high temperature (0.7-1.0) for creative, varied outputs

  • Model Selection: Different models have different strengths: GPT-4 for reasoning, Claude for long contexts, smaller models for speed

  • Context Windows: Be aware of model context limits when using long chat histories or system messages

  • Cost Optimization: Smaller, faster models cost less; use them for simple tasks and reserve larger models for complex reasoning

  • Conversation Expiry: Chat histories may expire after a period of inactivity, so handle expired IDs gracefully

  • Prompt Engineering: Well-crafted prompts with clear instructions and examples produce better results than vague requests
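When a prompt risks exceeding the 640,000-character limit, it can be split into multiple requests, as the error table suggests. This naive character-based chunker is a sketch only; a real splitter would break on sentence or paragraph boundaries to preserve meaning.

```python
MAX_PROMPT_LEN = 640_000  # the human_message / system_message limit

def split_message(text, limit=MAX_PROMPT_LEN):
    """Split an over-long prompt into chunks that fit the node's limit."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]
```

Each chunk can then be sent as a separate request, reusing the same chat_history_id so the model retains the earlier parts as context.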

Output Parameters

| Name | Type | Description |
| --- | --- | --- |
| content | string | The response content from the LLM |
| chat_history_id | string | The chat history ID for conversation continuity |


Output Schema

```json
{
  "description": "LLM advance node output.",
  "properties": {
    "content": {
      "title": "Content",
      "type": "string"
    },
    "chat_history_id": {
      "title": "The chat history id used to retrieve the chat history.",
      "type": "string"
    }
  },
  "required": [
    "content",
    "chat_history_id"
  ],
  "title": "LLMAdvanceNodeOutput",
  "type": "object"
}
```
