# Ask Claude

**Action ID:** `claude_ask`

## Description

Ask Claude anything you want! This node supports multiple Claude models for generating AI-powered responses to your questions.

## Provider

**Anthropic Claude**

## Connection

| Name              | Description                                | Required | Category |
| ----------------- | ------------------------------------------ | :------: | -------- |
| Claude Connection | The Claude connection to use for the chat. |     ✓    | claude   |

## Input Parameters

| Name            | Type     | Required | Default                        | Description                                                                                                                                                                                                                                                |
| --------------- | -------- | :------: | ------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| model           | dropdown |     -    | claude-3-haiku-20240307        | The model to use for the chat. Available options: claude-3-haiku-20240307, claude-3-sonnet-20240229, claude-3-opus-20240229, claude-3-5-sonnet-latest, claude-3-5-haiku-latest, claude-3-7-sonnet-latest, claude-opus-4-20250514, claude-sonnet-4-20250514 |
| prompt          | string   |     ✓    | -                              | The question to ask the model                                                                                                                                                                                                                              |
| images          | array    |     -    | -                              | The images to use for the chat. Supported formats: JPG, PNG, JPEG                                                                                                                                                                                          |
| temperature     | number   |     -    | 0.5                            | Controls randomness: lower values produce less random completions. As the temperature approaches zero, the model becomes deterministic and repetitive. Range: 0.0 to 1.0                                                                                   |
| max\_tokens     | integer  |     -    | 1000                           | The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model                                                                                                         |
| system\_message | string   |     -    | "You are a helpful assistant." | Instructions for the AI assistant on how to behave and respond.                                                                                                                                                                                            |

<details>

<summary>View JSON Schema</summary>

```json
{
  "description": "Ask Claude node input.",
  "properties": {
    "model": {
      "default": "claude-3-haiku-20240307",
      "description": "The model to use for the chat.",
      "enum": [
        "claude-3-haiku-20240307",
        "claude-3-sonnet-20240229",
        "claude-3-opus-20240229",
        "claude-3-5-sonnet-latest",
        "claude-3-5-haiku-latest",
        "claude-3-7-sonnet-latest",
        "claude-opus-4-20250514",
        "claude-sonnet-4-20250514"
      ],
      "title": "Model",
      "type": "string"
    },
    "prompt": {
      "description": "The question to ask the model.",
      "title": "Question",
      "type": "string"
    },
    "images": {
      "default": null,
      "description": "The images to use for the chat.",
      "items": {
        "type": "string"
      },
      "title": "Images",
      "type": "array"
    },
    "temperature": {
      "default": 0.5,
      "description": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
      "maximum": 1.0,
      "minimum": 0.0,
      "title": "Temperature",
      "type": "number"
    },
    "max_tokens": {
      "default": 1000,
      "description": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model.",
      "title": "Maximum Tokens",
      "type": "integer"
    },
    "system_message": {
      "default": "You are a helpful assistant.",
      "description": "Instructions for the AI assistant on how to behave and respond.",
      "title": "System Message",
      "type": "string"
    }
  },
  "required": [
    "prompt"
  ],
  "title": "AskClaudeInput",
  "type": "object"
}
```

</details>
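The schema above can be enforced before a request is sent. As a minimal illustration (not the platform's actual validator, which may use a full JSON Schema library), a hand-rolled check of the required field and the documented ranges might look like this:

```python
def validate_input(params):
    """Minimal check of claude_ask inputs against the published schema."""
    errors = []
    # "prompt" is the only required property in the schema.
    if "prompt" not in params:
        errors.append("prompt is required")
    # temperature is constrained to the 0.0-1.0 range.
    t = params.get("temperature", 0.5)
    if not (0.0 <= t <= 1.0):
        errors.append("temperature must be between 0.0 and 1.0")
    # max_tokens must be an integer per the schema.
    if not isinstance(params.get("max_tokens", 1000), int):
        errors.append("max_tokens must be an integer")
    return errors
```

An empty list means the input satisfies these checks; otherwise each entry names a violated constraint.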

## Output Parameters

| Name    | Type   | Description              |
| ------- | ------ | ------------------------ |
| content | string | The response from Claude |

<details>

<summary>View JSON Schema</summary>

```json
{
  "description": "Ask Claude node output.",
  "properties": {
    "content": {
      "title": "Response",
      "type": "string",
      "description": "The response from Claude."
    }
  },
  "required": [
    "content"
  ],
  "title": "AskClaudeOutput",
  "type": "object"
}
```

</details>

## How It Works

This node sends your prompt to Claude's API along with your configuration parameters. Claude processes the question using the specified system message as context and generates a response shaped by the temperature and other settings you've configured. If you provide images, they are analyzed alongside the text prompt. The response is returned as text that subsequent nodes in your workflow can use.
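Conceptually, the node maps its input parameters onto a Claude Messages API request. The sketch below shows one plausible shape of that mapping; it is an illustration, not the node's actual implementation, and the URL-based image source is an assumption about how image inputs are forwarded:

```python
def build_ask_claude_request(prompt, model="claude-3-haiku-20240307",
                             temperature=0.5, max_tokens=1000,
                             system_message="You are a helpful assistant.",
                             images=None):
    """Assemble a Messages API payload from the node's input parameters."""
    # The text prompt is always the first content block.
    content = [{"type": "text", "text": prompt}]
    for url in images or []:
        # Images (if any) are passed alongside the text for analysis.
        content.append({"type": "image",
                        "source": {"type": "url", "url": url}})
    return {
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "system": system_message,
        "messages": [{"role": "user", "content": content}],
    }

payload = build_ask_claude_request("What is the capital of France?",
                                   model="claude-3-5-haiku-latest",
                                   temperature=0.1)
```

The resulting dictionary mirrors the input-parameter table: each node field becomes one request field, with the system message carried separately from the user message.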

## Usage Examples

### Example 1: Simple Question

**Input:**

```
prompt: "What is the capital of France?"
model: "claude-3-5-haiku-latest"
temperature: 0.1
```

**Output:**

```
content: "The capital of France is Paris."
```

### Example 2: Image Analysis

**Input:**

```
prompt: "Describe what you see in this image"
model: "claude-3-5-sonnet-latest"
temperature: 0.5
images: ["https://example.com/photo.jpg"]
```

**Output:**

```
content: "This image shows a beautiful sunset over the ocean with golden orange hues reflecting on the water..."
```

### Example 3: Creative Task

**Input:**

```
prompt: "Write a short haiku about technology"
model: "claude-3-5-sonnet-latest"
temperature: 0.8
system_message: "You are a creative poet. Write in a traditional haiku format."
```

**Output:**

```
content: "Circuits hum with life,
Silicon dreams take their flight,
Future, coded now."
```

## Common Use Cases

* **Content Generation**: Create blog posts, articles, product descriptions, and marketing copy
* **Customer Support**: Generate thoughtful, personalized responses to customer inquiries
* **Data Analysis**: Analyze and summarize data, identify patterns and insights
* **Image Analysis**: Describe, analyze, and extract information from images
* **Translation and Localization**: Translate content between languages while maintaining tone and meaning
* **Code Assistance**: Generate, review, or explain code snippets across multiple programming languages
* **Research and Summarization**: Process long documents and create concise summaries

## Error Handling

| Error Type           | Cause                                            | Solution                                                                  |
| -------------------- | ------------------------------------------------ | ------------------------------------------------------------------------- |
| Authentication Error | Invalid or missing Claude API key                | Verify your Claude connection is properly configured with a valid API key |
| Rate Limit Error     | Too many requests in a short period              | Implement delays between requests or upgrade your Claude plan             |
| Token Limit Exceeded | Prompt + response exceeds model's context window | Reduce prompt length or decrease max\_tokens parameter                    |
| Invalid Model        | Model name doesn't exist or access not granted   | Verify the model name and check if it's available in your Claude account  |
| Timeout Error        | Request took too long to process                 | Reduce max\_tokens or try a faster model like claude-3-haiku              |
| Invalid Image Format | Unsupported image format provided                | Ensure images are in a supported format (JPG/JPEG or PNG)                 |
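For transient failures such as rate limits, a common pattern is to retry with exponential backoff and jitter. The sketch below is generic, not part of this node; `RuntimeError` stands in for whatever rate-limit exception your client raises:

```python
import random
import time

def with_retries(call, max_attempts=4, base_delay=1.0):
    """Retry a callable on transient errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:  # stand-in for a rate-limit error
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))
```

Wrapping the node call this way spaces out requests automatically instead of hammering the API after a 429-style response.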

## Notes

* **Model Selection**: Choose a model based on your needs. Claude 3 Haiku is cost-effective for simple tasks, while Claude Opus and Sonnet models offer superior reasoning for complex queries.
* **Temperature Control**: Lower temperature (0.0-0.3) for factual, deterministic responses. Higher temperature (0.7-1.0) for creative content.
* **Image Support**: You can provide images (JPG or PNG) alongside your prompt for analysis and discussion.
* **Token Limits**: Be mindful of max\_tokens setting. Adjust based on your expected response length and model context window.
* **System Messages**: Craft clear system messages to guide Claude's behavior, tone, and response format.
* **Cost Optimization**: Use Haiku models for simple tasks to minimize costs. Reserve Sonnet and Opus models for complex reasoning or analysis.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.agenticflow.ai/reference/nodes/claude_ask.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
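Questions passed in the `ask` parameter must be URL-encoded. A minimal helper for building the query URL (the example question is illustrative):

```python
from urllib.parse import urlencode

BASE = "https://docs.agenticflow.ai/reference/nodes/claude_ask.md"

def ask_url(question):
    """Build the documentation query URL with a URL-encoded question."""
    return f"{BASE}?{urlencode({'ask': question})}"

url = ask_url("Which image formats does claude_ask support?")
```

`urlencode` handles spaces and punctuation, so questions can be written in plain natural language.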
