# Insert Data

**Action ID:** `insert_dataset_rows`

## Description

Insert new rows into a dataset.

## Input Parameters

| Name    | Type     | Required | Default | Description                                                                |
| ------- | -------- | :------: | ------- | -------------------------------------------------------------------------- |
| dataset | dropdown |     ✓    | -       | The dataset to insert rows into.                                           |
| rows    | array    |     ✓    | -       | List of rows to insert. Each row is a dict mapping column names to values. |

<details>

<summary>View JSON Schema</summary>

```json
{
  "description": "Insert Dataset Rows node input.",
  "properties": {
    "dataset": {
      "description": "The dataset to insert rows into.",
      "title": "Dataset",
      "type": "string"
    },
    "rows": {
      "description": "List of rows to insert. Each row is a dict mapping column names to values (e.g., [{'name': 'Alice', 'age': 25}, {'name': 'Bob', 'age': 30}]).",
      "items": {
        "type": "object"
      },
      "title": "Rows",
      "type": "array"
    }
  },
  "required": ["dataset", "rows"],
  "title": "InsertDatasetRowsNodeInput",
  "type": "object"
}
```

</details>

## Output Parameters

| Name            | Type    | Description                             |
| --------------- | ------- | --------------------------------------- |
| inserted\_count | integer | Number of rows that were inserted.      |
| success         | boolean | Whether the insert operation succeeded. |
| row\_ids        | array   | List of IDs of the inserted rows.       |

<details>

<summary>View JSON Schema</summary>

```json
{
  "description": "Insert Dataset Rows node output.",
  "properties": {
    "inserted_count": {
      "description": "Number of rows that were inserted.",
      "title": "Inserted Count",
      "type": "integer"
    },
    "success": {
      "description": "Whether the insert operation succeeded.",
      "title": "Success",
      "type": "boolean"
    },
    "row_ids": {
      "default": [],
      "description": "List of IDs of the inserted rows.",
      "items": {
        "type": "string"
      },
      "title": "Row IDs",
      "type": "array"
    }
  },
  "required": ["inserted_count", "success"],
  "title": "InsertDatasetRowsNodeOutput",
  "type": "object"
}
```

</details>
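
If it helps to see this output handled downstream, here is a minimal sketch of reading the three fields from a result dict. The values are copied from the usage examples further down, and pairing returned IDs with input rows by position is an assumption rather than something the schema guarantees:

```python
# Hypothetical output from this node, shaped like the schema above.
output = {
    "inserted_count": 2,
    "success": True,
    "row_ids": ["01K9ABC1234567890DEFGHIJKL", "01K9ABC1234567890DEFGHIJKM"],
}
rows = [{"name": "Alice Johnson"}, {"name": "Bob Smith"}]

if output["success"] and output["inserted_count"] == len(rows):
    # Pair each returned ID with the row that was sent. This assumes IDs
    # come back in insertion order, which the examples suggest but the
    # schema does not explicitly state.
    for row_id, row in zip(output["row_ids"], rows):
        print(row_id, "->", row["name"])
```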

## How It Works

This node inserts new rows into a specified dataset. Each row is provided as a dictionary object mapping column names to values. The node validates the dataset ID format (26-character ULID) and ensures the rows array is not empty before proceeding. After successful insertion, it returns the count of inserted rows, a success status, and the unique IDs assigned to each new row for tracking and reference purposes.
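
A rough client-side sketch of those same pre-insert checks, assuming the standard 26-character Crockford Base32 ULID alphabet (the helper name is invented for illustration; the node performs the authoritative validation itself):

```python
import re

# Crockford Base32 alphabet used by ULIDs (26 characters, no I, L, O, U).
ULID_PATTERN = re.compile(r"^[0-9A-HJKMNP-TV-Z]{26}$")

def validate_insert_input(dataset: str, rows: list[dict]) -> None:
    """Mirror the documented pre-insert checks (hypothetical helper)."""
    if not dataset:
        raise ValueError("Dataset ID is required")
    if not ULID_PATTERN.match(dataset):
        raise ValueError("Dataset ID must be a 26-character ULID")
    if not rows:
        raise ValueError("Rows array must contain at least one row")
    for i, row in enumerate(rows):
        if not isinstance(row, dict):
            raise ValueError(f"Row {i} must be a dict mapping column names to values")
```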

## Usage Examples

### Example 1: Insert Single User

**Input:**

```
dataset: "01K8ZM9T72FNBZAGA629KZFXR5"
rows: [
  {
    "name": "Alice Johnson",
    "email": "alice@example.com",
    "age": 28,
    "department": "Engineering"
  }
]
```

**Output:**

```
inserted_count: 1
success: true
row_ids: ["01K9ABC1234567890DEFGHIJKL"]
```

### Example 2: Bulk Insert Multiple Records

**Input:**

```
dataset: "01K8ZM9T72FNBZAGA629KZFXR5"
rows: [
  {
    "name": "Bob Smith",
    "email": "bob@example.com",
    "age": 35,
    "department": "Sales"
  },
  {
    "name": "Carol White",
    "email": "carol@example.com",
    "age": 29,
    "department": "Marketing"
  },
  {
    "name": "David Brown",
    "email": "david@example.com",
    "age": 42,
    "department": "Operations"
  }
]
```

**Output:**

```
inserted_count: 3
success: true
row_ids: [
  "01K9ABC1234567890DEFGHIJKL",
  "01K9ABC1234567890DEFGHIJKM",
  "01K9ABC1234567890DEFGHIJKN"
]
```

### Example 3: Insert Product Data

**Input:**

```
dataset: "01K8ZM9T72FNBZAGA629KZFXR5"
rows: [
  {
    "product_id": "SKU-1001",
    "name": "Wireless Mouse",
    "price": 29.99,
    "stock": 150,
    "category": "Electronics"
  },
  {
    "product_id": "SKU-1002",
    "name": "USB Keyboard",
    "price": 49.99,
    "stock": 85,
    "category": "Electronics"
  }
]
```

**Output:**

```
inserted_count: 2
success: true
row_ids: [
  "01K9XYZ7890123456ABCDEFGHI",
  "01K9XYZ7890123456ABCDEFGHJ"
]
```

### Example 4: Insert Event Log

**Input:**

```
dataset: "01K8ZM9T72FNBZAGA629KZFXR5"
rows: [
  {
    "event_type": "user_login",
    "user_id": "user_123",
    "timestamp": "2024-01-15T10:30:00Z",
    "ip_address": "192.168.1.100",
    "status": "success"
  }
]
```

**Output:**

```
inserted_count: 1
success: true
row_ids: ["01K9LOG4567890123ABCDEFGHI"]
```

### Example 5: Insert with Nested Data

**Input:**

```
dataset: "01K8ZM9T72FNBZAGA629KZFXR5"
rows: [
  {
    "customer_name": "Acme Corp",
    "contact_email": "contact@acme.com",
    "metadata": {
      "industry": "Manufacturing",
      "size": "Enterprise",
      "region": "North America"
    },
    "tags": ["premium", "partner"]
  }
]
```

**Output:**

```
inserted_count: 1
success: true
row_ids: ["01K9CUS7890123456ABCDEFGHI"]
```

## Common Use Cases

* **Bulk Data Import**: Add multiple records to a dataset in a single operation (see the CSV sketch after this list)
* **User Registration**: Insert new user accounts and profiles into user datasets
* **Transaction Recording**: Log transactions, orders, or events as they occur
* **Data Migration**: Transfer data from external sources into AgenticFlow datasets
* **Form Submissions**: Store form data submitted through web applications or APIs
* **Workflow Results**: Save output from workflow executions for future reference
* **Batch Processing**: Insert processed records from ETL pipelines or data transformations
* **Inventory Management**: Add new products or stock entries to inventory datasets
* **Audit Logging**: Record system events, user actions, or changes for compliance
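
Because each row is just a dictionary of column names to values, the `rows` input for a bulk import or migration can be assembled from any tabular source before the node runs. A small illustrative sketch using Python's standard `csv` module; the file name and column names are invented and should match your dataset's schema:

```python
import csv

# Hypothetical source file; headers should match the target dataset's columns.
with open("users.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)
    # csv.DictReader yields one dict per line, keyed by the header names,
    # which is exactly the shape the `rows` parameter expects.
    rows = [
        {
            "name": record["name"],
            "email": record["email"],
            "age": int(record["age"]),  # cast numeric columns explicitly
        }
        for record in reader
    ]

# `rows` can now be supplied to the Insert Data node's `rows` input.
print(f"Prepared {len(rows)} rows for insertion")
```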

## Error Handling

| Error Type              | Cause                                   | Solution                                                    |
| ----------------------- | --------------------------------------- | ----------------------------------------------------------- |
| Dataset Not Found       | Dataset ID doesn't exist                | Verify the dataset ID is correct and the dataset exists     |
| Invalid Dataset ID      | Dataset ID format is incorrect          | Ensure dataset ID is a 26-character ULID                    |
| Dataset ID Required     | Dataset parameter is empty              | Provide a valid dataset ID                                  |
| Empty Rows              | Rows array is empty                     | Provide at least one row to insert                          |
| Rows Required           | Rows parameter is missing               | Include the rows parameter with data to insert              |
| Schema Mismatch         | Column names don't match dataset schema | Verify column names match the dataset's expected fields     |
| Invalid Data Type       | Value type doesn't match column type    | Ensure values match the expected data types for each column |
| Missing Required Fields | Required columns are not provided       | Include all required columns in each row                    |
| Duplicate Key           | Unique constraint violation             | Check for duplicate values in unique columns                |
| Insert Failed           | Server error during insertion           | Retry the operation (see the retry sketch below) or check server logs |
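
For the transient "Insert Failed" case, a simple retry loop around the node call is usually enough. A minimal sketch, where `run_insert_node` stands in for however your workflow invokes this node and returns its output dict (it is not a real AgenticFlow API):

```python
import time

def insert_with_retry(run_insert_node, dataset, rows, attempts=3, backoff=2.0):
    """Retry a hypothetical insert call with exponential backoff."""
    for attempt in range(1, attempts + 1):
        result = run_insert_node(dataset=dataset, rows=rows)
        if result.get("success"):
            return result  # contains inserted_count and row_ids
        if attempt < attempts:
            # Wait longer after each failed attempt before retrying.
            time.sleep(backoff ** attempt)
    raise RuntimeError(f"Insert failed after {attempts} attempts")
```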

## Notes

* **Row Format**: Each row must be a dictionary object with column names as keys and values as the data to insert.
* **Validation**: The node validates that the dataset ID is a valid 26-character ULID format before insertion.
* **Non-Empty Requirement**: The rows array must contain at least one row; empty arrays will cause an error.
* **Schema Matching**: Column names in each row should match the dataset's schema. Unknown columns may be ignored or cause errors depending on configuration.
* **Batch Efficiency**: Inserting multiple rows in a single operation is more efficient than multiple single-row inserts.
* **Row IDs**: The returned row\_ids array contains the unique identifiers assigned to each inserted row, useful for tracking and updates.
* **Success Flag**: Check the success field to verify the operation completed without errors.
* **Data Types**: Ensure values match expected data types (strings, numbers, booleans, objects, arrays) for each column.
* **Atomic Operation**: The insert operation is typically atomic - either all rows succeed or the operation fails.
* **Performance**: For very large bulk inserts (thousands of rows), consider breaking into smaller batches to avoid timeouts (see the batching sketch after this list).
* **JSON Support**: The rows parameter uses JSON format, supporting nested objects and arrays for complex data structures.
* **Dynamic Dropdown**: The dataset field dynamically lists available datasets in your workspace.
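
As a sketch of the batching advice in the Performance note above, the following splits a large rows list into smaller inserts and collects every returned row ID. The batch size and the `run_insert_node` helper are assumptions, not part of the node itself:

```python
def insert_in_batches(run_insert_node, dataset, rows, batch_size=500):
    """Split a large rows list into smaller inserts and collect all row IDs."""
    all_row_ids = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        result = run_insert_node(dataset=dataset, rows=batch)
        if not result.get("success"):
            raise RuntimeError(f"Batch starting at row {start} failed")
        all_row_ids.extend(result.get("row_ids", []))
    return all_row_ids
```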
