# Ask ChatGPT

**Action ID:** `openai_ask_chat_gpt`

## Description

Use OpenAI's ChatGPT models to ask questions and receive AI-generated responses. This node supports multiple model families, including GPT-4.1, GPT-4o, and the o1 and o3 series.

## Provider

**OpenAI**

## Connection

| Name              | Description                                | Required | Category |
| ----------------- | ------------------------------------------ | :------: | -------- |
| OpenAI Connection | The OpenAI connection to use for the chat. |     ✓    | openai   |

## Input Parameters

| Name               | Type     | Required | Default                        | Description                                                                                                                                                                               |
| ------------------ | -------- | :------: | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| model              | dropdown |     -    | gpt-4o-mini                    | The model to use for the chat. Available options: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, o3, o3-mini, o1, o1-mini                                                      |
| prompt             | string   |     ✓    | -                              | The question to ask the model                                                                                                                                                             |
| temperature        | number   |     -    | 0.9                            | Controls randomness: lower values produce less random completions; as the temperature approaches zero, the output becomes deterministic and repetitive. Range: 0.0 to 1.0                 |
| max\_tokens        | integer  |     -    | 2048                           | The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model                                        |
| top\_p             | number   |     -    | 1.0                            | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top\_p probability mass                                    |
| frequency\_penalty | number   |     -    | 0.0                            | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim |
| presence\_penalty  | number   |     -    | 0.6                            | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics              |
| system\_message    | string   |     -    | "You are a helpful assistant." | Instructions for the AI assistant on how to behave and respond                                                                                                                            |

<details>

<summary>View JSON Schema</summary>

```json
{
  "description": "Ask ChatGPT node input.",
  "properties": {
    "model": {
      "default": "gpt-4o-mini",
      "description": "The model to use for the chat.",
      "title": "Model",
      "type": "string"
    },
    "prompt": {
      "description": "The question to ask the model.",
      "title": "Question",
      "type": "string"
    },
    "temperature": {
      "default": 0.9,
      "description": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
      "maximum": 1.0,
      "minimum": 0.0,
      "title": "Temperature",
      "type": "number"
    },
    "max_tokens": {
      "default": 2048,
      "description": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model.",
      "title": "Maximum Tokens",
      "type": "integer"
    },
    "top_p": {
      "default": 1.0,
      "description": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.",
      "title": "Top P",
      "type": "number"
    },
    "frequency_penalty": {
      "default": 0.0,
      "description": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
      "title": "Frequency penalty",
      "type": "number"
    },
    "presence_penalty": {
      "default": 0.6,
      "description": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
      "title": "Presence penalty",
      "type": "number"
    },
    "system_message": {
      "default": "You are a helpful assistant.",
      "description": "Instructions for the AI assistant on how to behave and respond.",
      "title": "System Message",
      "type": "string"
    }
  },
  "required": [
    "prompt"
  ],
  "title": "AskChatGPTInput",
  "type": "object"
}
```

</details>

## Output Parameters

| Name    | Type   | Description                            |
| ------- | ------ | -------------------------------------- |
| content | string | The AI-generated response from ChatGPT |

<details>

<summary>View JSON Schema</summary>

```json
{
  "description": "Ask ChatGPT node output.",
  "properties": {
    "content": {
      "title": "Content",
      "type": "string"
    }
  },
  "required": [
    "content"
  ],
  "title": "AskChatGPTOutput",
  "type": "object"
}
```

</details>

## How It Works

This node sends your prompt to OpenAI's ChatGPT API along with your configuration parameters. The AI model processes your question using the specified system message as context, then generates a response based on the temperature and other settings you've configured. The response is returned as text that can be used by subsequent nodes in your workflow.
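The node's request can be pictured as an ordinary Chat Completions payload. The sketch below is illustrative only (the helper name `build_chat_request` is hypothetical, not part of the node); the defaults mirror the input parameter table above.

```python
def build_chat_request(prompt, model="gpt-4o-mini", temperature=0.9,
                       max_tokens=2048, top_p=1.0, frequency_penalty=0.0,
                       presence_penalty=0.6,
                       system_message="You are a helpful assistant."):
    """Assemble a request body like the one the node sends to OpenAI.

    Illustrative sketch: defaults mirror the node's input parameters, but
    this is not the node's actual implementation.
    """
    return {
        "model": model,
        "messages": [
            # The system message sets behavior; the prompt is the user turn.
            {"role": "system", "content": system_message},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
    }

body = build_chat_request("What is the capital of France?", temperature=0.1)
```

The `content` output parameter then corresponds to the assistant message text in the API response.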

## Usage Examples

### Example 1: Simple Question

**Input:**

```
prompt: "What is the capital of France?"
model: "gpt-4o-mini"
temperature: 0.1
```

**Output:**

```
content: "The capital of France is Paris."
```

### Example 2: Creative Writing

**Input:**

```
prompt: "Write a short poem about the ocean"
model: "gpt-4o"
temperature: 0.9
system_message: "You are a creative poet who writes in a romantic style."
```

**Output:**

```
content: "Beneath the azure sky so wide,
The ocean whispers with the tide..."
```

### Example 3: Data Analysis

**Input:**

```
prompt: "Analyze this sales data and provide insights: [data here]"
model: "gpt-4.1"
temperature: 0.3
max_tokens: 1000
system_message: "You are a data analyst providing clear, actionable insights."
```

## Common Use Cases

* **Content Generation**: Create blog posts, product descriptions, or marketing copy
* **Customer Support**: Generate automated responses to common customer inquiries
* **Data Analysis**: Get insights and summaries from structured or unstructured data
* **Translation**: Translate text between languages with context awareness
* **Code Generation**: Generate code snippets or explain technical concepts
* **Brainstorming**: Generate ideas, suggestions, or creative solutions
* **Summarization**: Condense long documents into key points

## Error Handling

| Error Type               | Cause                                            | Solution                                                                          |
| ------------------------ | ------------------------------------------------ | --------------------------------------------------------------------------------- |
| Authentication Error     | Invalid or missing OpenAI API key                | Verify your OpenAI connection is properly configured with a valid API key         |
| Rate Limit Error         | Too many requests in a short period              | Implement delays between requests or upgrade your OpenAI plan                     |
| Token Limit Exceeded     | Prompt + response exceeds model's context window | Reduce prompt length or decrease max\_tokens parameter                            |
| Invalid Model            | Model name doesn't exist or access not granted   | Check model availability and ensure your API key has access to the selected model |
| Timeout Error            | Request took too long to process                 | Reduce max\_tokens or try a faster model like gpt-4o-mini                         |
| Content Policy Violation | Prompt contains prohibited content               | Review and modify your prompt to comply with OpenAI's usage policies              |
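For transient failures such as rate limits, the standard remedy is retrying with exponential backoff. A minimal self-contained sketch (here a generic `RuntimeError` stands in for the client library's rate-limit exception):

```python
import time

def call_with_backoff(fn, max_attempts=5, base_delay=1.0,
                      retryable=(RuntimeError,)):
    """Retry `fn` with exponential backoff on retryable errors.

    Illustrative sketch: in a real workflow, `retryable` would be the
    OpenAI client's rate-limit/timeout exception types.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Simulated flaky call: fails twice with a "rate limit", then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)
```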

## Notes

* **Model Selection**: Choose a model based on your needs. GPT-4o-mini is cost-effective for most tasks, while GPT-4.1 offers superior reasoning for complex queries. The o1 and o3 series are reasoning models suited to complex, multi-step problem solving.
* **Temperature Control**: Lower temperature (0.0-0.3) for factual, deterministic responses. Higher temperature (0.7-1.0) for creative content.
* **Token Limits**: Be mindful of max\_tokens setting. Different models have different context windows. Adjust based on your expected response length.
* **System Messages**: Craft clear system messages to guide the AI's behavior, tone, and response format.
* **Cost Optimization**: Use gpt-4o-mini for simple tasks to minimize costs. Reserve advanced models for complex reasoning tasks.
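The guidance above can be captured as simple per-task presets. The values below are illustrative starting points drawn from the examples earlier on this page, not prescribed settings; tune them for your workload.

```python
# Hypothetical presets reflecting the notes above: a cheap model and low
# temperature for factual work, stronger models and higher temperature
# for creative or analytical tasks.
PRESETS = {
    "factual_qa":    {"model": "gpt-4o-mini", "temperature": 0.1},
    "creative":      {"model": "gpt-4o",      "temperature": 0.9},
    "data_analysis": {"model": "gpt-4.1",     "temperature": 0.3},
}

def params_for(task, overrides=None):
    """Look up a preset and apply any per-call overrides."""
    params = dict(PRESETS[task])
    params.update(overrides or {})
    return params

creative = params_for("creative", {"max_tokens": 512})
```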


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.agenticflow.ai/reference/nodes/openai_ask_chat_gpt.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
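Since the question is passed as a query parameter, it must be URL-encoded. A minimal sketch of building the request URL (the question text here is just an example):

```python
from urllib.parse import urlencode

BASE = "https://docs.agenticflow.ai/reference/nodes/openai_ask_chat_gpt.md"

def ask_url(question):
    """Build the documentation-query URL; urlencode handles spaces and
    punctuation in the natural-language question."""
    return f"{BASE}?{urlencode({'ask': question})}"

url = ask_url("What models does this node support?")
```

The resulting URL can then be fetched with any HTTP client via a plain GET request.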
