Insert Data
Action ID: insert_dataset_rows
Description
Insert new rows into a dataset.
Input Parameters
dataset (dropdown, required): The dataset to insert rows into.
rows (array, required): List of rows to insert. Each row is a dict mapping column names to values.
JSON Schema
{
"description": "Insert Dataset Rows node input.",
"properties": {
"dataset": {
"description": "The dataset to insert rows into.",
"title": "Dataset",
"type": "string"
},
"rows": {
"description": "List of rows to insert. Each row is a dict mapping column names to values (e.g., [{'name': 'Alice', 'age': 25}, {'name': 'Bob', 'age': 30}]).",
"items": {
"type": "object"
},
"title": "Rows",
"type": "array"
}
},
"required": ["dataset", "rows"],
"title": "InsertDatasetRowsNodeInput",
"type": "object"
}
Output Parameters
inserted_count (integer): Number of rows that were inserted.
success (boolean): Whether the insert operation succeeded.
row_ids (array, default []): List of IDs of the inserted rows.
JSON Schema
{
"description": "Insert Dataset Rows node output.",
"properties": {
"inserted_count": {
"description": "Number of rows that were inserted.",
"title": "Inserted Count",
"type": "integer"
},
"success": {
"description": "Whether the insert operation succeeded.",
"title": "Success",
"type": "boolean"
},
"row_ids": {
"default": [],
"description": "List of IDs of the inserted rows.",
"items": {
"type": "string"
},
"title": "Row IDs",
"type": "array"
}
},
"required": ["inserted_count", "success"],
"title": "InsertDatasetRowsNodeOutput",
"type": "object"
}
How It Works
This node inserts new rows into a specified dataset. Each row is provided as a dictionary object mapping column names to values. The node validates the dataset ID format (26-character ULID) and ensures the rows array is not empty before proceeding. After successful insertion, it returns the count of inserted rows, a success status, and the unique IDs assigned to each new row for tracking and reference purposes.
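The pre-insert validation described above can be sketched as follows. This is an illustrative approximation, not the node's actual implementation: the document only states that the dataset ID must be a 26-character ULID, so the Crockford base32 alphabet check and the function name `validate_insert_input` are assumptions.

```python
import re

# ULIDs use Crockford base32 (no I, L, O, U) and are 26 characters long.
# The exact alphabet check is an assumption; the doc only requires 26-char ULIDs.
ULID_RE = re.compile(r"^[0-9A-HJKMNP-TV-Z]{26}$")

def validate_insert_input(dataset: str, rows: list) -> None:
    """Mirror the node's pre-insert checks (illustrative sketch)."""
    if not dataset:
        raise ValueError("Dataset ID Required: provide a valid dataset ID")
    if not ULID_RE.fullmatch(dataset):
        raise ValueError("Invalid Dataset ID: must be a 26-character ULID")
    if not rows:
        raise ValueError("Empty Rows: provide at least one row to insert")
    for i, row in enumerate(rows):
        if not isinstance(row, dict):
            raise TypeError(f"Row {i} must be a dict mapping column names to values")

# Passes validation: well-formed ULID plus a non-empty rows list.
validate_insert_input("01ARZ3NDEKTSV4RRFFQ69G5FAV", [{"name": "Alice", "age": 25}])
```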
Usage Examples
Example 1: Insert Single User
Example 2: Bulk Insert Multiple Records
Example 3: Insert Product Data
Example 4: Insert Event Log
Example 5: Insert with Nested Data
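As an illustration of the input and output shapes for the single-user case, using the sample row from the input schema's description: the dataset ID and the returned row ID below are placeholder values, not real identifiers.

```python
import json

# Hypothetical request payload (dataset ID is a placeholder ULID)
request = {
    "dataset": "01ARZ3NDEKTSV4RRFFQ69G5FAV",
    "rows": [{"name": "Alice", "age": 25}],
}

# Shape of a successful response; the row ID value is illustrative only
response = {
    "inserted_count": 1,
    "success": True,
    "row_ids": ["01ARZ3NDEKTSV4RRFFQ69G5FBT"],
}

# One ID is returned per inserted row
assert response["inserted_count"] == len(request["rows"])
print(json.dumps(request, indent=2))
```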
Common Use Cases
Bulk Data Import: Add multiple records to a dataset in a single operation
User Registration: Insert new user accounts and profiles into user datasets
Transaction Recording: Log transactions, orders, or events as they occur
Data Migration: Transfer data from external sources into AgenticFlow datasets
Form Submissions: Store form data submitted through web applications or APIs
Workflow Results: Save output from workflow executions for future reference
Batch Processing: Insert processed records from ETL pipelines or data transformations
Inventory Management: Add new products or stock entries to inventory datasets
Audit Logging: Record system events, user actions, or changes for compliance
Error Handling
Dataset Not Found: The dataset ID doesn't exist. Verify the dataset ID is correct and the dataset exists.
Invalid Dataset ID: The dataset ID format is incorrect. Ensure the dataset ID is a 26-character ULID.
Dataset ID Required: The dataset parameter is empty. Provide a valid dataset ID.
Empty Rows: The rows array is empty. Provide at least one row to insert.
Rows Required: The rows parameter is missing. Include the rows parameter with data to insert.
Schema Mismatch: Column names don't match the dataset schema. Verify column names match the dataset's expected fields.
Invalid Data Type: A value's type doesn't match the column type. Ensure values match the expected data types for each column.
Missing Required Fields: Required columns are not provided. Include all required columns in each row.
Duplicate Key: A unique constraint was violated. Check for duplicate values in unique columns.
Insert Failed: A server error occurred during insertion. Retry the operation or check server logs.
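Only the "Insert Failed" server error is worth retrying; validation errors (bad ID, empty rows, schema mismatch) will fail again unchanged. A retry wrapper under those assumptions might look like this, where `insert_fn` is a stand-in for whatever client call performs the insert and the exception types are illustrative:

```python
import time

def insert_with_retry(insert_fn, dataset, rows, attempts=3, backoff=1.0):
    """Retry transient server errors; never retry validation errors.

    insert_fn is a placeholder for the actual insert call; here we assume it
    raises ValueError for validation failures and RuntimeError for server errors.
    """
    for attempt in range(1, attempts + 1):
        try:
            return insert_fn(dataset, rows)
        except ValueError:
            raise  # bad input will not succeed on retry
        except RuntimeError:
            if attempt == attempts:
                raise  # exhausted retries, surface the server error
            time.sleep(backoff * attempt)  # simple linear backoff between tries
```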
Notes
Row Format: Each row must be a dictionary object with column names as keys and values as the data to insert.
Validation: The node validates that the dataset ID is a valid 26-character ULID format before insertion.
Non-Empty Requirement: The rows array must contain at least one row; empty arrays will cause an error.
Schema Matching: Column names in each row should match the dataset's schema. Unknown columns may be ignored or cause errors depending on configuration.
Batch Efficiency: Inserting multiple rows in a single operation is more efficient than multiple single-row inserts.
Row IDs: The returned row_ids array contains the unique identifiers assigned to each inserted row, useful for tracking and updates.
Success Flag: Check the success field to verify the operation completed without errors.
Data Types: Ensure values match expected data types (strings, numbers, booleans, objects, arrays) for each column.
Atomic Operation: The insert operation is typically atomic; either all rows succeed or the operation fails.
Performance: For very large bulk inserts (thousands of rows), consider breaking into smaller batches to avoid timeouts.
JSON Support: The rows parameter uses JSON format, supporting nested objects and arrays for complex data structures.
Dynamic Dropdown: The dataset field dynamically lists available datasets in your workspace.
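The batching advice in the Performance note can be sketched as follows. The batch size, the `insert_fn` placeholder, and the merged-output shape are assumptions; the node itself does not expose a batching option.

```python
def chunked(rows, batch_size=500):
    """Split a large rows list into smaller batches to avoid insert timeouts."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

def bulk_insert(insert_fn, dataset, rows, batch_size=500):
    """Insert rows batch by batch, merging each batch's result.

    insert_fn is a stand-in for the actual insert call; the return value
    mirrors the node's output fields (inserted_count, success, row_ids).
    """
    total, row_ids = 0, []
    for batch in chunked(rows, batch_size):
        result = insert_fn(dataset, batch)
        total += result["inserted_count"]
        row_ids.extend(result["row_ids"])
    return {"inserted_count": total, "success": True, "row_ids": row_ids}
```

For example, 1,050 rows with the default batch size would be sent as three inserts of 500, 500, and 50 rows.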