Code Execution enables your AI agent to write and run Python code in a secure, isolated sandbox environment. This powerful capability allows agents to perform complex data analysis, mathematical computations, file processing, and system operations that go beyond simple text generation.
Prerequisites
Before enabling code execution:
PixelML Connection Required: You must add a PixelML connection in your workspace connections
Credit-Based Usage: Code execution consumes credits based on session runtime
Usage Tracking: Charges are calculated per second of active code session time
To set up:
Navigate to Workspace Settings → Connections
Add a new PixelML connection with your API key
Credits are automatically deducted during code execution sessions
What Is Code Execution?
Code Execution provides your AI agent with four specialized tools:
Execute Python Code: Run Python scripts in a persistent sandbox environment
Upload Files to Sandbox: Transfer files from Drive storage to the code execution environment
Download Files from Sandbox: Save generated files back to Drive storage
Execute Shell Commands: Run system commands for package installation and file operations
Combining Code Execution with Skills
When you enable both Code Execution and Skills for your agent, you unlock a powerful capability: your agent can read code examples and templates from Skills, then execute them in the sandbox.
This combination is inspired by Anthropic's Claude Skills with Code Execution, where skills provide procedural knowledge and code templates, while code execution enables the agent to run and adapt those templates for your specific tasks.
How It Works:
Agent discovers relevant skill using the skill browser tool
Agent reads code templates or examples from the skill using skill reader
Agent adapts the code to your specific requirements
Agent executes the adapted code in the sandbox using code execution
Agent delivers results or saves outputs to Drive
Example Workflow:
Benefits of This Combination:
✅ Standardized implementations: Agent follows proven patterns from Skills
✅ Repeatable workflows: Same skill templates work across different datasets
✅ Best practices built-in: Skills encode expert knowledge and error handling
✅ Faster execution: Agent doesn't need to write code from scratch
✅ Organization consistency: All agents use the same approved code patterns
To Enable This Capability:
Navigate to Tab 5: Skills & Capabilities and add relevant skills (see Skills documentation)
Navigate to Tab 6: Code Execution and toggle ON
Ensure PixelML connection is configured in workspace settings
Your agent can now read Skills and execute code from them
Key Capabilities
Data Analysis & Processing
Mathematical Computations
Image Processing
File Format Conversions
Web Data Fetching
How Code Execution Works
Session Lifecycle Architecture
Each agent turn (response to a user message) gets its own isolated sandbox session:
Key Characteristics:
Turn-Based Sessions: Each agent response gets a fresh sandbox session
Intra-Turn Persistence: Variables, imports, and files persist during a single agent turn
Session Isolation: Each turn gets its own independent sandbox environment
Automatic Cleanup: Sessions are terminated when the agent completes its response
State Reset: The next agent turn starts with a completely fresh environment
On-Demand Creation: Sessions are only created when the agent needs to execute code
Important: Unlike a persistent conversation-wide session, you cannot reference variables or files from previous agent responses. Each turn is completely isolated.
Example: Multi-Step Analysis Within a Single Turn
Important Note: All three steps must happen in the same agent response. If the agent completes its response after Step 1, the next turn will have a fresh session where df is no longer available.
File Transfer System
Upload Files to Sandbox
Transfer files from Drive storage to the sandbox for processing:
Supported File Types:
Text Files: .txt, .md, .py, .js, .json, .csv, .xml, .html, .css, .yaml, .sql, etc.
Files in the sandbox are temporary and deleted when the agent turn ends
Always download important results to Drive for persistence across turns
Set "replace": True to overwrite existing files in Drive
Default behavior prevents accidental overwrites
Files uploaded in one turn will NOT be available in the next turn's session
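As a rough sketch of what the transfer requests look like: only the `_code_upload_to_session` tool name and the `"replace"` flag appear in this guide; the remaining field names below are illustrative assumptions, not the exact API.

```python
# Hypothetical transfer payloads. Only `_code_upload_to_session` and "replace"
# come from this guide; the other field names are assumptions for illustration.
upload_request = {
    "tool": "_code_upload_to_session",
    "drive_path": "/data/files/q4_sales.csv",  # source file in Drive
    "sandbox_path": "q4_sales.csv",            # destination inside the sandbox
}

download_request = {
    "drive_path": "/data/reports/chart.png",   # destination in Drive
    "sandbox_path": "chart.png",               # file generated in the sandbox
    "replace": True,                           # overwrite instead of raising a conflict
}
print(download_request["replace"])  # True
```

Omitting `"replace": True` keeps the default behavior described above, where an existing Drive file is never silently overwritten.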
Shell Command Execution
Package Installation
Install Python packages as needed:
Important: Packages are lost when the session ends. Since each agent turn starts a fresh session, packages must be reinstalled if needed in subsequent turns.
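Per the note above, a turn that needs third-party libraries typically begins with an install command via the shell tool; for example (package names illustrative):

```shell
# Install packages needed for this turn; they will not persist into the next turn
pip install --quiet pandas numpy

# Confirm the install inside the same session
python -c "import pandas; print(pandas.__version__)"
```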
System Operations
Perform file and system operations:
Security & Isolation
Sandbox Isolation Features
File Type Restrictions
Only safe file types are allowed for upload/download:
Resource Quotas
Cost & Credit Management
Requirement: A valid PixelML connection must be configured in Workspace Settings → Connections before code execution can be used.
Common Use Cases
1. Data Analysis Agent
Example Conversation:
2. Image Processing Agent
3. Report Generation Agent
4. Data Transformation Agent
5. Scientific Computing Agent
6. Skills + Code Execution Agent
Scenario: Agent with both Skills and Code Execution enabled
Example Conversation:
Key Advantages:
Consistency: Same proven code pattern used every time
Governance: Organization controls which code patterns are available
Learning: New team members see best practices in action
Best Practices
Session Management
File Management
Error Handling
Performance Optimization
Skills + Code Execution Best Practices
When using Skills with Code Execution enabled:
Skill Template Design Tips:
Common Issues & Solutions
Package Not Found
File Not Found
Session State Lost
Timeout Errors
File Download Conflicts
Advanced Patterns
Iterative Data Processing
Multi-Format Output
Combining Internet Data with Drive Files
Learning Resources
Essential Python Libraries
Code Execution Examples
Code Execution Checklist
Before deploying your agent with code execution:
Core Setup
Execution Design
Security & Compliance
Skills Integration (if using Skills + Code Execution)
Code execution transforms your agent from a conversational assistant into a powerful computational engine capable of data analysis, file processing, and complex problem-solving.
User: "Analyze the sales data in /data/q4_sales.csv using the data analysis patterns"
Agent Process:
1. Lists available skills → Finds "data-analysis-toolkit" skill
2. Reads skill file: "data-analysis-toolkit/examples/sales-analysis.py"
3. Adapts the template code to use "q4_sales.csv" and requested metrics
4. Uploads CSV file to sandbox
5. Executes adapted code in sandbox
6. Downloads generated charts and reports to Drive
7. Presents analysis to user
# Analyze CSV data
import pandas as pd
import matplotlib.pyplot as plt
# Load and analyze data
df = pd.read_csv('sales_data.csv')
summary = df.describe()
df.groupby('region')['revenue'].sum().plot(kind='bar')
plt.savefig('revenue_by_region.png')
# Complex calculations
import numpy as np
from scipy import stats
# Statistical analysis
data = [23, 45, 67, 89, 12, 34, 56, 78]
mean = np.mean(data)
std_dev = np.std(data)
confidence_interval = stats.t.interval(0.95, len(data) - 1,
                                       loc=mean,
                                       scale=stats.sem(data))
# Convert between formats
import json
import csv

# JSON to CSV conversion
with open('data.json', 'r') as f:
    data = json.load(f)

with open('output.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=data[0].keys())
    writer.writeheader()
    writer.writerows(data)
# Download data from the internet
import urllib.request
# Fetch data directly in sandbox
url = 'https://api.example.com/data.json'
urllib.request.urlretrieve(url, 'downloaded_data.json')
User sends message
        ↓
Agent starts processing
        ↓
Code Session Created (when first code tool is needed)
        ↓
Code Execution 1 (variables stored)
        ↓
Code Execution 2 (can access previous variables)
        ↓
Code Execution 3 (state persists within this turn)
        ↓
Agent completes response → Session Terminated
        ↓
Next user message → New Session Created
# All within ONE agent response (same session):
# Step 1: Load data
import pandas as pd
df = pd.read_csv('sales_data.csv')
# Step 2: Process data (df is still available in same turn)
monthly_sales = df.groupby('month')['revenue'].sum()
# Step 3: Generate visualization (both variables still accessible)
import matplotlib.pyplot as plt
monthly_sales.plot(kind='line')
plt.savefig('sales_trend.png')
# File operations
ls -la # List files
mkdir output # Create directory
cat data.txt # View file contents
find . -name "*.csv" # Find files
# Download from internet
curl -o dataset.csv https://example.com/data.csv
wget https://example.com/large_file.zip
# Archive operations
zip -r archive.zip data/
tar -czf backup.tar.gz files/
Security Boundaries:
✅ Isolated execution environment per agent turn
✅ No access to host system
✅ Separate file system per session
✅ Network restrictions on outbound connections
✅ Resource limits (CPU, memory, execution time)
✅ Automatic session cleanup after each turn
✅ Zero state carry-over between sessions
Allowed File Extensions:
- Text & Code: 30+ extensions
- Images: 7 formats
- Documents: PDF only
- Media: Video and audio formats
Blocked for Security:
- Executables (.exe, .dll, .so)
- Scripts (.sh, .bat) - except within sandbox
- Archives (.zip, .tar) - except within sandbox
- System files (.sys, .ini, .conf) - except within sandbox
Execution Limits:
- Timeout: 30 seconds per code execution
- Memory: Limited per session
- Storage: Temporary file storage only (deleted after turn)
- Network: Controlled internet access
- Billing: Credit consumption based on session runtime
Billing Model:
- Credits charged per second of active code session time
- Session starts when first code tool is called in a turn
- Session ends when agent completes its response
- Only active execution time is billed (not idle time)
- Usage tracked in real-time in workspace dashboard
Cost Optimization Tips:
✅ Complete operations efficiently within single turns
✅ Minimize unnecessary package installations
✅ Use efficient algorithms and vectorized operations
✅ Cache results in Drive to avoid recomputation
✅ Monitor usage patterns and optimize workflows
Workflow:
1. User uploads CSV file to Drive
2. Agent uploads file to sandbox
3. Agent analyzes data with pandas
4. Agent generates visualizations with matplotlib
5. Agent downloads charts back to Drive
6. Agent presents findings to user
User: "Analyze the sales data in /data/files/q4_sales.csv"
Agent:
1. Uploads file to sandbox
2. Executes: pd.read_csv('q4_sales.csv').describe()
3. Identifies trends and insights
4. Creates visualization
5. Downloads chart to Drive
6. Presents analysis with visual
Workflow:
1. Fetch data from multiple sources
2. Process and analyze in sandbox
3. Generate charts and visualizations
4. Create PDF report with reportlab
5. Download final report to Drive
Transformations:
- CSV to JSON conversion
- Excel to database format
- Log file parsing
- Text extraction from PDFs
- Format standardization
- Data cleaning and validation
Capabilities:
- Statistical analysis with scipy
- Machine learning with scikit-learn
- Numerical computations with numpy
- Symbolic math with sympy
- Optimization problems
- Simulation modeling
Workflow:
1. User requests a task that matches a skill pattern
2. Agent browses available skills using skill_browser
3. Agent reads relevant code template from skill using skill_reader
4. Agent adapts template code to user's specific data/requirements
5. Agent executes adapted code in sandbox
6. Agent processes results and presents findings
User: "Process the customer feedback CSV and generate a sentiment analysis report"
Agent:
1. Finds "sentiment-analysis" skill via skill_browser
2. Reads "sentiment-analysis/examples/csv-analysis.py" via skill_reader
3. Adapts template to user's specific CSV structure
4. Uploads customer_feedback.csv to sandbox
5. Installs required packages (pip install textblob pandas)
6. Executes adapted sentiment analysis code
7. Generates visualization and summary report
8. Downloads results to Drive: /data/reports/sentiment_report.pdf
9. Presents findings: "Analysis complete! 73% positive, 18% neutral, 9% negative"
DO:
✅ Complete all related operations within a single agent response
✅ Install required packages at the start of the turn
✅ Download important results to Drive within the same turn
✅ Use meaningful variable names for complex analyses
✅ Chain multiple code executions efficiently in one turn
DON'T:
❌ Assume variables from previous turns are available
❌ Expect packages installed in previous turns to persist
❌ Store critical data only in sandbox
❌ Run extremely long computations (>30 seconds)
❌ Rely on session state across different agent turns
❌ Expect files to persist between turns without downloading to Drive
Best Practices:
1. Upload only necessary files to sandbox
2. Use clear, descriptive filenames
3. Organize files in directories when needed
4. Download results promptly after generation
5. Clean up large temporary files to save space
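A small sketch of point 5, removing a temporary artifact once its results are no longer needed (file name is illustrative):

```python
import os

# Stand-in for a large intermediate artifact produced mid-analysis
with open("intermediate.bin", "wb") as f:
    f.write(b"\x00" * 1024)

# Once the useful results are downloaded to Drive, delete the temp file
# to keep the sandbox's temporary storage lean for the rest of the turn.
os.remove("intermediate.bin")
print(os.path.exists("intermediate.bin"))  # False
```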
# Robust code with error handling
try:
    import pandas as pd
    df = pd.read_csv('data.csv')
    result = df.groupby('category')['value'].sum()
    print(result)
except FileNotFoundError:
    print("Error: data.csv not found. Please upload the file first.")
except Exception as e:
    print(f"Error during analysis: {e}")
Optimization Tips:
1. Use vectorized operations (numpy/pandas) instead of loops
2. Filter data early to reduce processing time
3. Use appropriate data types (int32 vs int64)
4. Cache intermediate results in variables
5. Limit plot complexity for faster rendering
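A minimal illustration of tip 1: doubling a million values with a numpy vectorized operation instead of a Python-level loop, which matters under the 30-second timeout.

```python
import numpy as np

# One million values; a Python loop over these churns through the interpreter,
# while the vectorized form runs in compiled code.
values = np.arange(1_000_000, dtype=np.int64)

# Slow alternative (pure-Python loop):
#   total = sum(int(v) * 2 for v in values)

# Fast: vectorized arithmetic
total = int((values * 2).sum())
print(total)  # 999999000000
```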
DO:
✅ Create skill templates with clear parameter placeholders
✅ Include comprehensive error handling in skill code
✅ Document expected file structures in SKILL.md
✅ Provide both basic and advanced examples in skills
✅ Test skill code templates before deploying to agents
✅ Version skill code as requirements evolve
✅ Include package requirements in skill documentation
✅ Design skills for single-turn execution when possible
DON'T:
❌ Assume skill code will work without adaptation
❌ Create overly complex skills requiring multi-turn execution
❌ Hard-code file paths in skill templates
❌ Forget to document required Python packages in skills
❌ Mix multiple unrelated code patterns in one skill
❌ Ignore session lifecycle when designing skill workflows
❌ Create skills that depend on persistent state across turns
# GOOD: Parameterized skill template with clear placeholders
"""
SKILL TEMPLATE: CSV Analysis
Replace YOUR_FILE_NAME with your actual CSV file path
Replace COLUMN_NAME with the column you want to analyze
"""
import pandas as pd
import matplotlib.pyplot as plt
# Load data (adapt file path)
df = pd.read_csv('YOUR_FILE_NAME')
# Analyze (adapt column name)
summary = df['COLUMN_NAME'].describe()
print(summary)
# Visualize
df['COLUMN_NAME'].hist(bins=20)
plt.savefig('analysis_result.png')
Problem: ModuleNotFoundError: No module named 'pandas'
Solution: Run shell command: pip install pandas
Note: Reinstall packages at the start of each new agent turn, since each turn begins with a fresh session
Problem: FileNotFoundError: [Errno 2] No such file or directory: 'data.csv'
Solution: Use _code_upload_to_session to transfer file from Drive first
Problem: Variable 'df' not found in subsequent execution
Cause: Previous agent turn completed (session was terminated)
Solution: Each agent turn is isolated - reload data and reinstall packages
Tip: Save intermediate results to Drive if multi-turn processing is needed
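A minimal sketch of that tip using only the standard library (file and column names are illustrative):

```python
import csv

# Turn N: persist an intermediate result before the session is destroyed,
# then download checkpoint.csv to Drive so it survives the turn boundary.
rows = [{"month": "Jan", "revenue": 100}, {"month": "Feb", "revenue": 150}]
with open("checkpoint.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["month", "revenue"])
    writer.writeheader()
    writer.writerows(rows)

# Turn N+1 (fresh session): re-upload checkpoint.csv from Drive, then resume
with open("checkpoint.csv") as f:
    resumed = list(csv.DictReader(f))
print(len(resumed))  # 2
```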
Problem: Code execution exceeded 30-second timeout
Solution:
- Break computation into smaller chunks
- Optimize algorithm efficiency
- Reduce data size being processed
- Consider pre-processing data in workflow nodes
Problem: Error: File already exists in Drive
Solution: Set "replace": True in download request, or use different filename
# Process data in chunks for large datasets
import pandas as pd

chunk_size = 10000
chunks = []
for chunk in pd.read_csv('large_file.csv', chunksize=chunk_size):
    processed = chunk.groupby('category')['value'].sum()
    chunks.append(processed)
final_result = pd.concat(chunks).groupby(level=0).sum()
# Generate results in multiple formats
import json

# Create results dictionary
results = {
    'total_sales': 150000,
    'top_products': ['Product A', 'Product B'],
    'trend': 'increasing'
}

# Save as JSON
with open('results.json', 'w') as f:
    json.dump(results, f, indent=2)

# Save as text report
with open('results.txt', 'w') as f:
    f.write(f"Total Sales: ${results['total_sales']:,}\n")
    f.write(f"Top Products: {', '.join(results['top_products'])}\n")
    f.write(f"Trend: {results['trend']}\n")

# Save as CSV
import csv
with open('results.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Metric', 'Value'])
    for key, value in results.items():
        writer.writerow([key, str(value)])
# Download data from web, combine with Drive data
import urllib.request
import pandas as pd
# Fetch external data
urllib.request.urlretrieve('https://api.example.com/data.csv', 'external_data.csv')
# Load external and Drive data
external_df = pd.read_csv('external_data.csv')
local_df = pd.read_csv('local_data.csv') # Uploaded from Drive
# Combine and analyze
combined = pd.concat([external_df, local_df])
analysis = combined.groupby('region')['sales'].sum()
Data Analysis:
- pandas: Data manipulation and analysis
- numpy: Numerical computing
- scipy: Scientific computing
Visualization:
- matplotlib: Static plots and charts
- seaborn: Statistical data visualization
- plotly: Interactive visualizations
File Processing:
- PIL/Pillow: Image processing
- PyPDF2: PDF manipulation
- openpyxl: Excel file handling
Machine Learning:
- scikit-learn: ML algorithms
- statsmodels: Statistical modeling
- xgboost: Gradient boosting
# Statistical Analysis Example
import pandas as pd
import numpy as np
from scipy import stats
data = pd.read_csv('experiment_results.csv')
group_a = data[data['group'] == 'A']['score']
group_b = data[data['group'] == 'B']['score']
# Perform t-test
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"T-statistic: {t_stat:.3f}")
print(f"P-value: {p_value:.3f}")
print(f"Significant: {'Yes' if p_value < 0.05 else 'No'}")