Ask ChatGPT

Action ID: openai_ask_chat_gpt

Description

Use OpenAI's ChatGPT models to ask questions and get AI-generated responses. This node supports multiple GPT models including GPT-4.1, GPT-4o, O3, and O1 series.

Provider

OpenAI

Connection

| Name | Description | Required | Category |
| ---- | ----------- | -------- | -------- |
| OpenAI Connection | The OpenAI connection to use for the chat. | Yes | openai |

Input Parameters

| Name | Type | Required | Default | Description |
| ---- | ---- | -------- | ------- | ----------- |
| model | dropdown | No | gpt-4o-mini | The model to use for the chat. Available options: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, o3, o3-mini, o1, o1-mini |
| prompt | string | Yes | - | The question to ask the model |
| temperature | number | No | 0.9 | Controls randomness: lower values produce less random completions; as the temperature approaches zero, the model becomes deterministic and repetitive. Range: 0.0 to 1.0 |
| max_tokens | integer | No | 2048 | The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, depending on the model |
| top_p | number | No | 1.0 | An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass |
| frequency_penalty | number | No | 0.0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim |
| presence_penalty | number | No | 0.6 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics |
| system_message | string | No | "You are a helpful assistant." | Instructions for the AI assistant on how to behave and respond |

Input JSON Schema
{
  "description": "Ask ChatGPT node input.",
  "properties": {
    "model": {
      "default": "gpt-4o-mini",
      "description": "The model to use for the chat.",
      "title": "Model",
      "type": "string"
    },
    "prompt": {
      "description": "The question to ask the model.",
      "title": "Question",
      "type": "string"
    },
    "temperature": {
      "default": 0.9,
      "description": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
      "maximum": 1.0,
      "minimum": 0.0,
      "title": "Temperature",
      "type": "number"
    },
    "max_tokens": {
      "default": 2048,
      "description": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model.",
      "title": "Maximum Tokens",
      "type": "integer"
    },
    "top_p": {
      "default": 1.0,
      "description": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.",
      "title": "Top P",
      "type": "number"
    },
    "frequency_penalty": {
      "default": 0.0,
      "description": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
      "title": "Frequency penalty",
      "type": "number"
    },
    "presence_penalty": {
      "default": 0.6,
      "description": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
      "title": "Presence penalty",
      "type": "number"
    },
    "system_message": {
      "default": "You are a helpful assistant.",
      "description": "Instructions for the AI assistant on how to behave and respond.",
      "title": "System Message",
      "type": "string"
    }
  },
  "required": [
    "prompt"
  ],
  "title": "AskChatGPTInput",
  "type": "object"
}
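
As a rough illustration (not the node's actual validation code), the schema above can be enforced in plain Python before a request is dispatched. The defaults and the ranges for temperature and the penalty fields mirror the schema and the parameter table:

```python
def validate_input(params: dict) -> dict:
    """Validate Ask ChatGPT input against the schema above and apply defaults.

    Illustrative sketch only; the node performs its own validation.
    """
    defaults = {
        "model": "gpt-4o-mini",
        "temperature": 0.9,
        "max_tokens": 2048,
        "top_p": 1.0,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.6,
        "system_message": "You are a helpful assistant.",
    }
    # prompt is the only required field per the schema's "required" list.
    if "prompt" not in params or not isinstance(params["prompt"], str):
        raise ValueError("'prompt' is required and must be a string")
    merged = {**defaults, **params}
    if not 0.0 <= merged["temperature"] <= 1.0:
        raise ValueError("'temperature' must be between 0.0 and 1.0")
    for key in ("frequency_penalty", "presence_penalty"):
        if not -2.0 <= merged[key] <= 2.0:
            raise ValueError(f"'{key}' must be between -2.0 and 2.0")
    return merged

merged = validate_input({"prompt": "What is the capital of France?"})
```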

Output Parameters

| Name | Type | Description |
| ---- | ---- | ----------- |
| content | string | The AI-generated response from ChatGPT |

Output JSON Schema
{
  "description": "Ask ChatGPT node output.",
  "properties": {
    "content": {
      "title": "Content",
      "type": "string"
    }
  },
  "required": [
    "content"
  ],
  "title": "AskChatGPTOutput",
  "type": "object"
}

How It Works

This node sends your prompt to OpenAI's ChatGPT API along with your configuration parameters. The AI model processes your question using the specified system message as context, then generates a response based on the temperature and other settings you've configured. The response is returned as text that can be used by subsequent nodes in your workflow.
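
Concretely, the mapping just described can be sketched as a plain dictionary in the shape of OpenAI's Chat Completions request body, with system_message and prompt becoming the two chat messages (an illustration of the parameter mapping, not the node's source):

```python
def build_payload(params: dict) -> dict:
    """Map Ask ChatGPT node inputs onto a Chat Completions request body."""
    return {
        "model": params.get("model", "gpt-4o-mini"),
        "messages": [
            # system_message sets the assistant's behavior...
            {"role": "system",
             "content": params.get("system_message", "You are a helpful assistant.")},
            # ...and prompt is the user's question.
            {"role": "user", "content": params["prompt"]},
        ],
        "temperature": params.get("temperature", 0.9),
        "max_tokens": params.get("max_tokens", 2048),
        "top_p": params.get("top_p", 1.0),
        "frequency_penalty": params.get("frequency_penalty", 0.0),
        "presence_penalty": params.get("presence_penalty", 0.6),
    }

payload = build_payload({"prompt": "What is the capital of France?"})
```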

Usage Examples

Example 1: Simple Question

Input:

prompt: "What is the capital of France?"
model: "gpt-4o-mini"
temperature: 0.1

Output:

content: "The capital of France is Paris."

Example 2: Creative Writing

Input:

prompt: "Write a short poem about the ocean"
model: "gpt-4o"
temperature: 0.9
system_message: "You are a creative poet who writes in a romantic style."

Output:

content: "Beneath the azure sky so wide,
The ocean whispers with the tide..."

Example 3: Data Analysis

Input:

prompt: "Analyze this sales data and provide insights: [data here]"
model: "gpt-4.1"
temperature: 0.3
max_tokens: 1000
system_message: "You are a data analyst providing clear, actionable insights."
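
In each example the node's single output field, content, is taken from the first choice of the API response. A minimal sketch of that extraction, assuming the standard Chat Completions response shape:

```python
def extract_content(response: dict) -> dict:
    """Build the node's output object from a Chat Completions response dict."""
    return {"content": response["choices"][0]["message"]["content"]}

# Hypothetical response for Example 1:
response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "The capital of France is Paris."}}
    ]
}
output = extract_content(response)
```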

Common Use Cases

  • Content Generation: Create blog posts, product descriptions, or marketing copy

  • Customer Support: Generate automated responses to common customer inquiries

  • Data Analysis: Get insights and summaries from structured or unstructured data

  • Translation: Translate text between languages with context awareness

  • Code Generation: Generate code snippets or explain technical concepts

  • Brainstorming: Generate ideas, suggestions, or creative solutions

  • Summarization: Condense long documents into key points

Error Handling

| Error Type | Cause | Solution |
| ---------- | ----- | -------- |
| Authentication Error | Invalid or missing OpenAI API key | Verify your OpenAI connection is properly configured with a valid API key |
| Rate Limit Error | Too many requests in a short period | Implement delays between requests or upgrade your OpenAI plan |
| Token Limit Exceeded | Prompt + response exceeds model's context window | Reduce prompt length or decrease the max_tokens parameter |
| Invalid Model | Model name doesn't exist or access not granted | Check model availability and ensure your API key has access to the selected model |
| Timeout Error | Request took too long to process | Reduce max_tokens or try a faster model like gpt-4o-mini |
| Content Policy Violation | Prompt contains prohibited content | Review and modify your prompt to comply with OpenAI's usage policies |
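
Rate-limit and timeout errors are usually transient, so a common pattern around any ChatGPT call is retry with exponential backoff. A generic sketch (your workflow platform may already retry for you; the error type here is a stand-in for whatever transient error your client raises):

```python
import time

def with_backoff(call, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry `call` on transient errors, doubling the delay each attempt."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except RuntimeError:  # stand-in for a rate-limit or timeout error
            if attempt == max_retries:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Simulated flaky call: fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_backoff(flaky, sleep=lambda s: None)  # skip real sleeping in this demo
```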

Notes

  • Model Selection: Choose a model based on your needs. gpt-4o-mini is cost-effective for most tasks, while gpt-4.1 offers stronger reasoning for complex queries. The o1 and o3 series are reasoning models designed for multi-step problem solving.

  • Temperature Control: Lower temperature (0.0-0.3) for factual, deterministic responses. Higher temperature (0.7-1.0) for creative content.

  • Token Limits: Be mindful of max_tokens setting. Different models have different context windows. Adjust based on your expected response length.

  • System Messages: Craft clear system messages to guide the AI's behavior, tone, and response format.

  • Cost Optimization: Use gpt-4o-mini for simple tasks to minimize costs. Reserve advanced models for complex reasoning tasks.
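
One way to act on the cost-optimization note is a simple routing heuristic that picks the cheap model unless the task looks complex. The length threshold and keyword list below are arbitrary illustrations, not recommended values:

```python
def pick_model(prompt: str) -> str:
    """Illustrative routing heuristic: cheap model unless the task looks complex."""
    complex_markers = ("analyze", "prove", "step by step", "reason")
    if len(prompt) > 500 or any(m in prompt.lower() for m in complex_markers):
        return "gpt-4.1"   # reserve the advanced model for complex reasoning
    return "gpt-4o-mini"   # cost-effective default for simple tasks

model = pick_model("What is the capital of France?")
```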
