# CLI Walkthrough — Build a Demo Stack

You're about to build a live AgenticFlow stack — **2 workflows, 2 agents, and a multi-agent workforce** — driven entirely by an AI coding agent on your desktop (Claude Code, Cursor, OpenAI Codex, Gemini CLI, or **Ishi** once it's available).

You don't touch the AgenticFlow UI for any of the building. You paste prompts, your AI does the work, and at the end every resource it built is **live in your AgenticFlow workspace** — ready to show anyone, including customers.

This is what the whole AgenticFlow ecosystem feels like when it works. Follow along.

***

## What you'll need (2 minutes)

1. **An AgenticFlow account.** Sign up at [app.agenticflow.ai](https://app.agenticflow.ai) — workspace and project auto-create.
2. **An API key.** Go to **Settings → API Keys** in the web UI, create one, copy it.
3. **A desktop AI coding agent.** Any of:
   * Claude Code (recommended for this walkthrough)
   * Cursor
   * OpenAI Codex CLI
   * Gemini CLI
   * Ishi (AgenticFlow's first-party desktop agent — see [ecosystem overview](https://docs.agenticflow.ai/welcome-to-agenticflow/ecosystem))
4. **Node.js 18+** on your laptop.

## Step 1 — Install the CLI (1 minute)

Open a terminal and run:

```bash
npm install -g @pixelml/agenticflow-cli
agenticflow login        # paste your API key when prompted
agenticflow bootstrap --json | jq '.auth'
```

If the last command prints `{"authenticated": true, ...}`, you're good.

> **No global install?** Every prompt below works without it — your AI will fall back to `npx --yes @pixelml/agenticflow-cli`. `af bootstrap --json` returns an `invocation` block that tells AI operators exactly how to call the CLI; the AI Toolkit skill (Step 2) carries the same guidance. You shouldn't need to think about invocation — just paste the prompts and let your AI figure it out.
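If you want that same fallback behavior in your own terminal, a tiny wrapper does it — `afx` is a made-up name here precisely so it can't collide with anything:

```bash
# Prefer the installed binary; otherwise fall back to the full npx form.
# (Never `npx af` — that fetches an unrelated npm package of the same name.)
afx() {
  if command -v agenticflow >/dev/null 2>&1; then
    agenticflow "$@"
  else
    npx --yes @pixelml/agenticflow-cli "$@"
  fi
}
```

Then `afx bootstrap --json` works whether or not the global install succeeded.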

## Step 2 — Install the AI Toolkit in your AI host (1 minute, optional but recommended)

The AI Toolkit teaches your AI agent how to drive AgenticFlow correctly. Without it, your AI will still figure things out — but it'll over-engineer or pick the wrong primitive more often. **With it, your AI picks the right thing first try.**

**In Claude Code:**

```
/plugin marketplace add PixelML/agenticflow-skill
/plugin install agenticflow-plugin@agenticflow-ai-toolkit
```

**In Gemini CLI:**

```bash
gemini extensions install https://github.com/PixelML/agenticflow-skill
```

**In Cursor:** open the Cursor Marketplace, search "AgenticFlow", install.

**In OpenAI Codex CLI:** `/plugins` → search AgenticFlow → Add to Codex.

**In VS Code / other hosts:** Command Palette → "Chat: Install Plugin From Source" → paste `https://github.com/PixelML/agenticflow-skill`.

When the install finishes, restart the AI host so the new skill loads.

***

{% hint style="info" %}
**If your AI stumbles** ("not found", `ModuleNotFoundError`, or tries `npx af` and gets a wrong package), paste this one-liner:

> Use `agenticflow <subcommand>` if installed, else `npx --yes @pixelml/agenticflow-cli <subcommand>`. Don't use `af` (2-letter name, often collides) and never `npx af` (wrong package). Continue.
{% endhint %}

## Step 3 — Build your first workflow (\~2 minutes)

**Open your AI host (Claude Code / Cursor / etc.) and paste this message:**

> Using the AgenticFlow CLI, build the simplest thing that fetches a URL and summarizes it. Pick an interesting Wikipedia article as the demo URL, deploy, run once, show me the summary.
>
> Leave the workflow deployed. At the end, print the Web UI link for it. One-line note: why this rung of the composition ladder?

### What you'll see the AI do

The AI will run these commands in order (roughly):

1. `af bootstrap --json` — orient itself
2. `af blueprints list --kind workflow --json` — find the workflow blueprints
3. `af workflow init --blueprint summarize-url --name "Demo · URL Summarizer" --json` — deploy
4. `af workflow run --workflow-id <id> --input '{"url":"https://en.wikipedia.org/wiki/..."}' --json` — run
5. `af workflow run-status --run-id <id> --json` — poll until done
6. Print the 3-bullet summary
7. Print the Web UI link
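Step 5 is the only loop in that sequence. A minimal sketch of the polling, assuming `run-status` prints JSON with a top-level `status` field that eventually becomes `completed` or `failed` (the field name and values are assumptions for illustration):

```bash
# Poll a status command every 2 s until the run reaches a terminal state.
poll_status() {
  while :; do
    status=$("$@" | jq -r '.status')
    case "$status" in
      completed|failed) printf '%s\n' "$status"; return ;;
    esac
    sleep 2
  done
}

# Hypothetical usage:
#   poll_status af workflow run-status --run-id "$RUN_ID" --json
```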

**Expected wall time:** 30-60 seconds.

### What to do next

**Click the Web UI link your AI printed.** You'll see the workflow canvas: a trigger node → a `web_retrieval` node → an `llm` node → output. This is rung 2 of the composition ladder — the simplest useful workflow shape.

You can click **Run** in the UI with a different URL to prove it's a real, persistent resource.

***

## Step 4 — Build a second workflow with an HTTP API call (\~2 minutes)

**Paste this into your AI:**

> Using the AgenticFlow CLI, deploy a workflow that calls a public JSON API and explains the response in plain English. Pick an interesting endpoint yourself (GitHub, a weather API, jsonplaceholder, etc.). Run it once. Show me the explanation.
>
> Leave the workflow deployed. Print the Web UI link at the end.
>
> Get the `workspace_id` from `af bootstrap --json`.

### What you'll see

Same pattern as Step 3, but with the `api-summary` blueprint. The AI picks a real public API (likely GitHub or jsonplaceholder), calls it, parses the JSON response, and has an LLM explain what each field means in plain English.

### What to do next

Click the Web UI link. Compare this workflow's canvas to Step 3's. Notice the shape is almost identical — just different nodes. **That's the point**: workflows are composable at the node level. Swap `web_retrieval` for `api_call` and you have a different deterministic tool.
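To make the swap concrete, here's the shape of the idea in illustrative JSON — this is not AgenticFlow's actual workflow schema, just the node-level composition it implies:

```json
{
  "nodes": [
    { "id": "trigger", "type": "trigger" },
    { "id": "fetch",   "type": "api_call" },
    { "id": "explain", "type": "llm" },
    { "id": "out",     "type": "output" }
  ],
  "edges": [["trigger", "fetch"], ["fetch", "explain"], ["explain", "out"]]
}
```

Swap the `fetch` node's type back to `web_retrieval` and you have Step 3's workflow.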

***

## Step 5 — Build a flexible agent (\~2 minutes)

Now climb the ladder. Workflows are deterministic. **Agents are flexible** — the same plugins are available, but an LLM decides which to call on each turn.

**Paste this into your AI:**

> Using the AgenticFlow CLI, deploy a research agent that answers current-events questions with cited web sources. Test it with a real question about a technology release from the last 60 days — confirm the response cites real URLs (not training-data paraphrase).
>
> Leave the agent deployed. At the end, print the Web UI links for the agent AND the specific conversation thread.
>
> Get the `workspace_id` from `af bootstrap --json`.

### What you'll see

The AI deploys the `research-assistant` blueprint, runs it with a recent-news question, and reports back with real URLs in the answer. If you paste the URLs into your browser, they'll lead to real pages (OpenAI docs, Anthropic blog posts, news articles — whatever the agent found).

**Wall time:** 1-2 minutes (`web_search` + `web_retrieval` take longer than a deterministic workflow).

### What to do next

Click the **thread URL**. You'll see the full conversation — your message, the agent's tool calls (you can expand each `web_search` call to see the exact query it ran and the URLs it retrieved), and the final answer with citations.

**This is the "glass box" philosophy**: every tool call the agent made is visible and auditable. Compare that to a chatbot that gives you an answer and hopes you trust it.

***

## Step 6 — Build an agent that writes AND generates images (\~3 minutes)

Same rung as Step 5, different plugin loadout — this agent has `agenticflow_generate_image` instead of `api_call`.

**Paste this into your AI:**

> Using the AgenticFlow CLI, deploy an agent that drafts written content AND generates a matching image. Pick a topic yourself (e.g. "LinkedIn post on AI agents in 2026"). Run it once — show me both the written draft AND evidence the image plugin fired.
>
> Leave the agent deployed. Print the Web UI links for the agent AND the specific thread.
>
> Get the `workspace_id` from `af bootstrap --json`.

### What you'll see

A written draft (150-300 words, whatever topic you gave it) plus either an image URL or an image descriptor in the response. The agent autonomously decided to use `web_search` first (for current context), then `agenticflow_generate_image` (for the visual).

### What to do next

Open the thread URL. You'll see the agent's tool-call sequence: search → compose → generate image. **The order isn't hardcoded** — the agent decided. If you re-run with a different prompt, the order might change. That's the difference between a workflow (deterministic) and an agent (flexible).

***

## Step 7 — Build a multi-agent workforce (\~3 minutes)

Top of the ladder. Rung 6. **When genuine multi-agent coordination helps**, you use a workforce.

**Paste this into your AI:**

> Using the AgenticFlow CLI, deploy a multi-agent team that investigates a "compare X vs Y" question. Two researchers work in parallel, a synthesizer merges their findings. Pick a realistic X vs Y (something a developer or business audience would find interesting).
>
> Deploy, publish it so it has a public URL, kick off a run, wait for completion, show me the synthesizer's final unified answer.
>
> Leave the workforce and all its agents deployed. Print the Web UI canvas link and the public run URL at the end.

### What you'll see

The AI deploys the `parallel-research` blueprint. Behind the scenes, that creates **four real agents** (Coordinator, Researcher A, Researcher B, Synthesizer) and wires them into a graph: trigger → coordinator → \[A || B in parallel] → synthesizer → output.
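That wiring, sketched as an illustrative edge list — not the actual blueprint format, just the fan-out/fan-in structure:

```json
{
  "edges": [
    ["trigger", "coordinator"],
    ["coordinator", "researcher_a"],
    ["coordinator", "researcher_b"],
    ["researcher_a", "synthesizer"],
    ["researcher_b", "synthesizer"],
    ["synthesizer", "output"]
  ]
}
```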

The AI publishes it, kicks off a run, waits for all four agents to finish, then shows you the synthesizer's unified answer with attribution like *"(Researcher A)"* and *"(Researcher B)"* tags.

**Wall time:** 2-3 minutes (four LLM agents plus synthesis).

### What to do next

1. **Click the workforce canvas link.** You'll see the full DAG visually — four agent nodes, edges showing the fan-out + fan-in, a trigger, an output.
2. **Click the public run URL.** Send this to anyone — no AgenticFlow account needed. They can invoke your workforce right from the browser.

**This is the moment the ladder pays off.** Your first workflow (Step 3) and this workforce (Step 7) use the same underlying platform — but the workforce solves a problem a single workflow or agent can't: independent parallel investigation by multiple agents followed by structured synthesis.

***

## Step 8 — Ask your AI to pick the rung for a custom task (\~2 minutes)

This is the most important step. Up to now you've been telling the AI which rung to build on. Now you hand it an open-ended task and let it decide.

**Paste this into your AI — replace the bracketed line with whatever you want:**

> Using the AgenticFlow CLI, do this task for me:
>
> ```
> <REPLACE THIS LINE WITH YOUR OWN TASK — ANYTHING>
> ```
>
> Pick the lowest rung of the composition ladder that solves it (workflow < agent < workforce). Deploy, run once with realistic input, show me the output.
>
> Leave what you deployed. Print the Web UI link at the end. One-line note: why that rung?

### Example tasks to try

| Task you paste                                    | Expected rung | Why                          |
| ------------------------------------------------- | ------------- | ---------------------------- |
| "Summarize this Reuters article URL: https\://…"  | 2 (workflow)  | Deterministic transform      |
| "Explain what this API endpoint returns"          | 2 (workflow)  | Same — deterministic         |
| "Answer questions about the latest OpenAI models" | 3 (agent)     | Open-ended tool use          |
| "Write a blog post with a hero image"             | 3 (agent)     | Agent routes between plugins |
| "Compare AWS vs GCP for AI startups"              | 6 (workforce) | Parallel investigation helps |
| "Set up an Amazon Singapore seller team"          | 6 (workforce) | Vertical-team blueprint fits |

If your AI picks wrong, the AI Toolkit skill (Step 2) isn't installed properly. The skill has the rung-picking rule baked in — without it, your AI is guessing from `af --help`.
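As a toy illustration of "lowest rung that fits" — this is not the Toolkit's actual rule, just a keyword sketch with made-up triggers:

```bash
# Toy heuristic only: scan the task text for signals and answer with the
# lowest rung that fits. The real routing rules live in the Toolkit skill.
pick_rung() {
  task=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$task" in
    *compare*|*" vs "*)        echo "6 (workforce)" ;;  # parallel investigation
    *latest*|*image*|*search*) echo "3 (agent)"     ;;  # open-ended tool use
    *)                         echo "2 (workflow)"  ;;  # deterministic transform
  esac
}
```

The real skill reasons over the task rather than pattern-matching keywords, but the ordering principle is the same: never reach for a workforce when a workflow would do.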

***

## Your workspace, right now

Run this in your terminal to see everything you built:

```bash
af bootstrap --json | jq '{
  agents: [.agents[] | select(.name | startswith("Demo")) | .name],
  workforces: [.workforces[] | select(.name | startswith("Demo")) | .name]
}'
af workflow list --name-contains "Demo" --fields id,name --json
```

You should see (at minimum):

* **Workflows**: Demo · URL Summarizer, Demo · API Explainer
* **Agents**: Demo · Research Agent, Demo · Content Creator, plus 4 agents inside the workforce
* **Workforces**: Demo · Parallel Research Team

Every one of these is live. Click any Web UI link you collected and it's there — the canvas, the history, the trace log.

## Showing this to someone

The two most dramatic things to show a visitor:

1. **The workforce's public URL** from Step 7. No auth required. Send it over email, put it on a slide, they can run your workforce themselves.
2. **An agent thread** from Step 5 or 6. Expand the tool calls so they can see the agent chose `web_search`, saw the results, chose `web_retrieval` on the most promising URL, then composed the answer. The decision-making is visible, not hidden.

## When you want to reset

The stack persists until you tear it down. When you're ready:

```bash
# Delete everything named "Demo · …" — scoped to this walkthrough only
af workforce list --name-contains "Demo" --fields id --json \
  | jq -r '.[].id' | xargs -I{} af workforce delete --workforce-id {} --json

af agent list --name-contains "Demo" --fields id --json \
  | jq -r '.[].id' | xargs -I{} af agent delete --agent-id {} --json

af workflow list --name-contains "Demo" --fields id --json \
  | jq -r '.[].id' | xargs -I{} af workflow delete --workflow-id {} --json
```

Or leave them. The whole point of a demo stack is it stays runnable.

***

## Troubleshooting

**"My AI host reports `af: command not found` or `af not found`."**

You skipped the global install in Step 1 (or it failed silently). Easiest fix: paste this follow-up:

> Use `agenticflow <subcommand>` if installed, or `npx --yes @pixelml/agenticflow-cli <subcommand>` otherwise. Continue the task.

Or install globally: `npm install -g @pixelml/agenticflow-cli`, then re-paste the original prompt.

**"My AI reports `ModuleNotFoundError`, a Python traceback, or some unrelated tool when calling `af`."**

**Name collision.** `af` is a generic two-letter command that other tools also claim — Python packages with broken venvs, Homebrew formulas, shell aliases. The one on your system isn't ours.

Diagnose:

```bash
command -v af && type af
# Then try the canonical name:
command -v agenticflow && agenticflow --version
```

**Fix:** use `agenticflow` (the full canonical binary name — the CLI installs BOTH `agenticflow` and `af`) instead of `af`. `agenticflow` is 11 characters long and unlikely to collide with anything. Paste:

> Skip `af` entirely — name collision with another tool on my system. Use `agenticflow <subcommand>` instead. If `agenticflow` isn't on PATH either, fall back to `npx --yes @pixelml/agenticflow-cli <subcommand>`. Continue the task.

**"My AI tried `npx af` and got a different tool / weird errors."**

`npx af` treats `af` as a package name and fetches whatever package is named `af` on npm — **not our CLI**. The only reliable npx invocation is `npx --yes @pixelml/agenticflow-cli <subcommand>` (full package name, with `--yes` to auto-accept the install prompt). Paste the one-liner from the previous bullet to correct the AI.

**"I don't have the AI Toolkit installed but my AI keeps making wrong picks."**

Go back to Step 2 and install it. Then restart your AI host. The Toolkit is \~150 lines of routing rules that make a meaningful difference.

**"Step 5 or 6 returned `status: "completed_empty"`."**

The agent exhausted its recursion limit in a tool loop. This was common on older CLI versions; v1.10.1+ ships with `recursion_limit: 100` by default, so it's rare now. If you hit it, bump the limit:

```bash
af agent update --agent-id <id> --patch --body '{"recursion_limit":100}' --json
```

Then re-run the `af agent run` call.

**"Step 7's public URL 404s when I open it in a browser."**

You're opening the *run* endpoint, which is POST-only. Construct the correct public URL — it's `https://agenticflow.ai/workforce/public/<public_key>` (no `/run`).
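If you need to rebuild the link by hand, it's just the base path plus the key — a one-line helper (the key value below is illustrative):

```bash
# Browser-facing public URL: base path + public key, no /run suffix.
public_url() {
  printf 'https://agenticflow.ai/workforce/public/%s\n' "$1"
}
```

`public_url <public_key>` prints the shareable page; the `/run` variant is the POST-only API endpoint, which is why opening it in a browser fails.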

**"`af workforce run` returns 400 — `Failed to retrieve user info`."**

This is a known backend limitation on API-key auth for that specific command. The walkthrough uses `af workforce publish` + a direct `curl` to the public endpoint to work around it. If your AI used `af workforce run` instead, point it at the public-URL path.

**"My workspace shows different agent/workflow counts than expected."**

Some steps create resources inside other resources (the workforce in Step 7 creates 4 agents). Run the `af bootstrap --json | jq` command in the "Your workspace, right now" section for the real inventory filtered to `Demo · …` prefixed resources.

***

## Where to go from here

* [Ecosystem overview](https://docs.agenticflow.ai/welcome-to-agenticflow/ecosystem) — the composition ladder explained, plus why we built it this way
* [CLI Reference](https://docs.agenticflow.ai/developers/cli) — every command, every flag
* [AI Toolkit](https://github.com/PixelML/agenticflow-skill) — the routing layer that makes your AI pick the right rung on the first try
* [Agents concepts](https://docs.agenticflow.ai/ai-agents/03-agents) — the 11-tab agent configuration model in the Web UI
* [Workforce concepts](https://docs.agenticflow.ai/workforce/05-workforce) — how the multi-agent DAG engine works
