# Ask ChatGPT

**Action ID:** `openai_ask_chat_gpt`

## Description

Ask ChatGPT a question, with customizable parameters for controlling the model's behavior.

## Category

Popular

## Provider

OpenAI

## Connection

| Name              | Description                                | Required | Category |
| ----------------- | ------------------------------------------ | -------- | -------- |
| OpenAI Connection | The OpenAI connection to use for the chat. | True     | openai   |

## Input Parameters

| Name               | Type     | Required | Default                        | Description                                                                                                                                                                               |
| ------------------ | -------- | :------: | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| model              | dropdown |     -    | gpt-4o-mini                    | The model to use for the chat. Available options: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, o3, o3-mini, o1, o1-mini                                                      |
| prompt             | string   |     ✓    | -                              | The question to ask the model                                                                                                                                                             |
| temperature        | number   |     -    | 0.9                            | Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Range: 0.0 to 1.0               |
| max\_tokens        | integer  |     -    | 2048                           | The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model                                        |
| top\_p             | number   |     -    | 1.0                            | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top\_p probability mass                                    |
| frequency\_penalty | number   |     -    | 0.0                            | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim |
| presence\_penalty  | number   |     -    | 0.6                            | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics              |
| system\_message    | string   |     -    | "You are a helpful assistant." | Instructions for the AI assistant on how to behave and respond                                                                                                                            |

<details>

<summary>View JSON Schema</summary>

```json
{
  "type": "object",
  "properties": {
    "model": {
      "type": "string",
      "default": "gpt-4o-mini",
      "title": "Model",
      "description": "The model to use for the chat.",
      "enum": [
        "gpt-4.1",
        "gpt-4.1-mini",
        "gpt-4.1-nano",
        "gpt-4o",
        "gpt-4o-mini",
        "o3",
        "o3-mini",
        "o1",
        "o1-mini"
      ]
    },
    "prompt": {
      "type": "string",
      "title": "Question",
      "description": "The question to ask the model."
    },
    "temperature": {
      "type": "number",
      "default": 0.9,
      "title": "Temperature",
      "minimum": 0.0,
      "maximum": 1.0,
      "description": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive."
    },
    "max_tokens": {
      "type": "integer",
      "default": 2048,
      "title": "Maximum Tokens",
      "description": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model."
    },
    "top_p": {
      "type": "number",
      "default": 1.0,
      "title": "Top P",
      "description": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass."
    },
    "frequency_penalty": {
      "type": "number",
      "default": 0.0,
      "title": "Frequency penalty",
      "description": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."
    },
    "presence_penalty": {
      "type": "number",
      "default": 0.6,
      "title": "Presence penalty",
      "description": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."
    },
    "system_message": {
      "type": "string",
      "default": "You are a helpful assistant.",
      "title": "System Message",
      "description": "Instructions for the AI assistant on how to behave and respond."
    }
  },
  "required": ["prompt"]
}
```

</details>
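As an illustration of the constraints the schema above encodes, here is a hand-rolled validation sketch. `validate_inputs` is a hypothetical helper, not part of the node; a real JSON Schema validator would serve the same purpose.

```python
# Illustrative check of this node's inputs against the schema above
# (required prompt, model enum, temperature range). Pure Python sketch.

ALLOWED_MODELS = {"gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano", "gpt-4o",
                  "gpt-4o-mini", "o3", "o3-mini", "o1", "o1-mini"}

def validate_inputs(params):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    if not params.get("prompt"):
        errors.append("prompt is required")
    model = params.get("model", "gpt-4o-mini")
    if model not in ALLOWED_MODELS:
        errors.append(f"unknown model: {model}")
    temperature = params.get("temperature", 0.9)
    if not 0.0 <= temperature <= 1.0:
        errors.append("temperature must be between 0.0 and 1.0")
    return errors

print(validate_inputs({"prompt": "Hello", "temperature": 1.5}))
# → ['temperature must be between 0.0 and 1.0']
```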

## Output Parameters

| Name    | Type   | Description                       |
| ------- | ------ | --------------------------------- |
| content | string | The response content from ChatGPT |

<details>

<summary>View JSON Schema</summary>

```json
{
  "type": "object",
  "properties": {
    "content": {
      "type": "string",
      "title": "Content",
      "description": "The response content from ChatGPT."
    }
  },
  "required": ["content"]
}
```

</details>

## How It Works

This node sends your prompt to OpenAI's ChatGPT API along with your configuration parameters. The API uses the specified model to process your question, considering the system message as context for how it should behave. Temperature, top\_p, and penalty parameters control the randomness and diversity of the response. The model generates text based on these parameters and returns the content, which can then be used by subsequent nodes in your workflow.
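The flow above corresponds roughly to a single Chat Completions request. A minimal sketch, assuming the official `openai` Python SDK: the helper only assembles the payload from the node's inputs and defaults, and the commented-out lines show how it would be sent.

```python
# Sketch of the request this node sends, using the node's default values.

def build_chat_request(prompt,
                       model="gpt-4o-mini",
                       temperature=0.9,
                       max_tokens=2048,
                       top_p=1.0,
                       frequency_penalty=0.0,
                       presence_penalty=0.6,
                       system_message="You are a helpful assistant."):
    """Assemble a Chat Completions payload from the node's inputs."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
    }

payload = build_chat_request("What are the main benefits of workflow automation?")

# Sending it with the official SDK would look like:
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# response = client.chat.completions.create(**payload)
# content = response.choices[0].message.content  # the node's `content` output
```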

## Usage Examples

### Example 1: Simple Question

**Input:**

```
prompt: "What are the main benefits of using workflow automation?"
model: "gpt-4o-mini"
temperature: 0.3
system_message: "You are a helpful assistant."
```

**Output:**

```
content: "Workflow automation offers several key benefits: 1) Increased efficiency by reducing manual tasks, 2) Fewer errors through consistent processes, 3) Cost savings from reduced labor needs, 4) Better scalability as processes can handle more volume, and 5) Improved employee satisfaction by eliminating repetitive work."
```

### Example 2: Creative Writing

**Input:**

```
prompt: "Write a short product description for a smart home device"
model: "gpt-4o"
temperature: 0.8
presence_penalty: 0.6
system_message: "You are a creative marketing copywriter."
```

**Output:**

```
content: "Transform your living space with the SmartHub Pro - where cutting-edge technology meets effortless convenience. This sleek device seamlessly orchestrates your entire home ecosystem, learning your preferences and adapting to your lifestyle. From lighting to climate control, security to entertainment, everything responds to your voice or a simple tap."
```

### Example 3: Data Analysis Request

**Input:**

```
prompt: "Analyze this sales data and provide insights: Q1: $50K, Q2: $65K, Q3: $58K, Q4: $72K"
model: "gpt-4.1"
temperature: 0.2
frequency_penalty: 0.3
system_message: "You are a data analyst. Provide clear, concise insights."
```

**Output:**

```
content: "Sales Analysis: Total annual revenue of $245K shows positive growth trend with 44% increase from Q1 to Q4. Key insights: 1) Strong Q2 growth (+30%) indicates successful spring initiatives, 2) Q3 dip (-11%) suggests seasonal weakness, 3) Q4 peak (+24% from Q3) demonstrates year-end momentum. Recommendation: Focus on sustaining Q4 strategies and addressing Q3 seasonal challenges."
```

## Common Use Cases

* **Customer Support Automation**: Generate intelligent responses to customer inquiries and support tickets
* **Content Creation**: Write blog posts, product descriptions, email copy, and marketing materials
* **Code Assistance**: Get help with coding tasks, debugging, and code explanations
* **Data Analysis**: Analyze data sets and generate insights, summaries, and recommendations
* **Language Translation**: Translate content between languages while preserving context and tone
* **Research Summarization**: Condense long documents or articles into concise summaries
* **Conversational AI**: Build chatbots and virtual assistants for websites and applications

## Error Handling

| Error Type               | Cause                                            | Solution                                                                  |
| ------------------------ | ------------------------------------------------ | ------------------------------------------------------------------------- |
| Authentication Error     | Invalid or missing OpenAI API key                | Verify your OpenAI connection is properly configured with a valid API key |
| Rate Limit Error         | Too many requests in a short period              | Implement delays between requests or upgrade your OpenAI plan             |
| Token Limit Exceeded     | Prompt + response exceeds model's context window | Reduce prompt length or decrease max\_tokens parameter                    |
| Invalid Model            | Model name doesn't exist or access not granted   | Verify the model name and check if it's available in your OpenAI account  |
| Timeout Error            | Request took too long to process                 | Reduce max\_tokens or try a faster model like gpt-4o-mini                 |
| Content Policy Violation | Prompt violates OpenAI's usage policies          | Revise prompt to comply with OpenAI's content policy guidelines           |
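For rate-limit errors in particular, a common mitigation is retrying with exponential backoff. A minimal sketch, where `RuntimeError` stands in for the SDK's rate-limit exception:

```python
import time

def call_with_backoff(send, max_retries=4, base_delay=1.0):
    """Retry a callable on rate-limit errors, doubling the delay each time."""
    for attempt in range(max_retries + 1):
        try:
            return send()
        except RuntimeError:  # stand-in for the SDK's RateLimitError
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demo: a call that fails twice before succeeding.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)
# → "ok" after two retries
```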

## Notes

* **Model Selection**: The model selector offers models ranging from gpt-4.1 to o1-mini. Choose based on your accuracy and speed requirements: gpt-4.1 offers the strongest reasoning, while gpt-4o-mini is faster and more cost-effective.
* **Temperature Control**: Temperature controls creativity: lower values (0.1-0.5) produce more focused, deterministic outputs, while higher values (0.7-1.0) produce more creative, diverse responses.
* **Token Limits**: Token limits vary by model. Default is 2048, but some models support different maximums. Monitor your token usage to optimize costs.
* **System Messages**: The system message sets the AI's role and behavior throughout the conversation. Craft clear, specific system messages for best results.
* **Penalties**: Frequency and presence penalties help control repetition and topic variety in responses. Use frequency\_penalty to reduce repetitive phrases, and presence\_penalty to encourage diverse topics.
* **Cost Optimization**: Use mini models for simple tasks to minimize costs. Reserve full GPT-4 models for complex reasoning or analysis tasks.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.agenticflow.ai/reference/nodes/ask-chatgpt.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
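The query URL can be assembled as follows; a small sketch using only the Python standard library (the question must be URL-encoded):

```python
from urllib.parse import urlencode

BASE = "https://docs.agenticflow.ai/reference/nodes/ask-chatgpt.md"

def ask_url(question):
    """Build the documentation-query URL with a URL-encoded question."""
    return f"{BASE}?{urlencode({'ask': question})}"

url = ask_url("What is the default temperature?")
# An agent would then issue an HTTP GET request on `url`.
```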
