# Ecosystem: Platform · CLI · Ishi · AI Toolkit

AgenticFlow is **one platform** surfaced through **multiple layers**. Each layer has a single responsibility. Together they let a human (or their AI agent) go from intent to a deployed, running agent in a single conversation.

<figure><img src="https://487764224-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FZ3ppnJjAH1qBNXEYnDPA%2Fuploads%2F34Ll3Xa79mEk9Xf1IHgm%2Fimage.png?alt=media&#x26;token=cffc0979-4763-4e31-9b79-ed3eaaff1690" alt=""><figcaption></figcaption></figure>

## Why we built it this way

AgenticFlow's engine is powerful — multi-agent systems, MCP servers, code execution, native web search, agent task management, Anthropic-compatible skills, webhooks. But power without accessibility goes unused: community feedback consistently pointed at configuration barriers, steep learning curves, and "2,000 pages of docs + 250 videos" failing to close the gap for non-technical users.

The fix is not more documentation. It is a different interface.

So we split the ecosystem into **two aligned parts**:

1. **The engine** — [app.agenticflow.ai](https://app.agenticflow.ai) keeps evolving as the production backend where agents, workforces, workflows, MCP clients, knowledge bases, and runs actually live. The core team is 100% focused on stability, traceability, versioning, and UX.
2. **The accessible interface layer** — a CLI (`af`) that exposes every platform capability in a shape AI agents can drive, plus **desktop AI agents** (Ishi first-party, plus third-party integrations with Claude Code, OpenAI Codex, Cursor, Gemini CLI) that consume the CLI on behalf of humans.

This mirrors an industry-wide shift. [Shopify AI Toolkit](https://x.com/shopify/status/2042335627862032754) takes the same approach — expose the platform as an AI-readable contract, let the assistant do the configuration, let the human focus on intent. AgenticFlow's bet is the same: **you brief your AI, your AI talks to our CLI, the CLI configures AgenticFlow, and AgenticFlow serves your customers.**

## Ishi — AgenticFlow's first-party desktop AI agent

**Ishi** (意志 — Japanese for *will* or *intention*, and a homophone of 石, *stone*) is AgenticFlow's first-party desktop app, **available now**. It launched as **Claw** in [Office Hours #34](https://docs.agenticflow.ai/changelog) (06 Jan 2026) and was rebranded to **Ishi** in Office Hours #35. A dedicated tiger team iterates on it continuously. Ishi is the "cornerstone of your intention" — the bridge between what you want and what AgenticFlow can deliver.

Ishi:

* **Runs locally on your desktop** (macOS, Windows, Linux — privacy-first, glass-box philosophy)
* **Uses the AgenticFlow CLI** as its configuration backbone
* **Reads your local context** (files, current project, task at hand) — something a web UI cannot
* **Talks to the AgenticFlow backend** via the same CLI + API everyone else uses — no special path
* **Can extend AgenticFlow itself**: if you need an integration that doesn't exist yet, Ishi can write a new workflow node (Python/JS) and deploy it via the code-execution capability
* **Can debug for you**: reads the trace log AgenticFlow emits, identifies the failing node, proposes a fix, applies it

Ishi is not a replacement for AgenticFlow. AgenticFlow is still the flagship engine. Ishi is the **accessible interface** that lets a non-technical operator drive that engine conversationally — without learning JSON, APIs, or node-type schemas.

The synergy works both ways: **AgenticFlow provides the deep trace log and platform capabilities; Ishi provides the local context and intent. User talks to Ishi. Ishi builds on AgenticFlow.** Before Ishi, a user did 100% of the configuration manually. With Ishi, the user spends \~10% briefing intent, Ishi does \~80% of the build on AgenticFlow, and the user does the final 10% to validate.

## The six layers

| Layer                 | Lives at                                                                                    | Audience               | Owns                                                |
| --------------------- | ------------------------------------------------------------------------------------------- | ---------------------- | --------------------------------------------------- |
| **Core platform**     | [app.agenticflow.ai](https://app.agenticflow.ai)                                            | Both (UI or API)       | Resources, state, billing, auth, runtime            |
| **Visual UI**         | Same host, browser                                                                          | Human                  | Drag-and-drop building, dashboards, trace viewer    |
| **CLI** (`af`)        | [`@pixelml/agenticflow-cli`](https://www.npmjs.com/package/@pixelml/agenticflow-cli) on npm | Developer + AI agent   | Programmatic access, payload shapes, error envelope |
| **AI Toolkit**        | Plugin marketplaces (Claude Code, Codex, Cursor, Gemini CLI)                                | AI agent in IDE        | Routing — tells your AI which CLI command to run    |
| **Desktop AI agents** | User's laptop                                                                               | Human (conversational) | Local context, chat UX, trace-aware debugging       |
| **Docs** (this site)  | Browser                                                                                     | Human                  | Concepts, integrations, node reference              |

No layer duplicates another's data. Skills in the AI Toolkit point at `af bootstrap` for the live model list. Docs link to CLI help for command reference. CLI links back here for concepts. Desktop agents use the CLI and the platform — they don't hold their own copy of your resources. **The core platform is the single source of truth.**

## Layer diagram

```
                  ┌─────────────────────────────────────────────────┐
                  │           Core Platform                         │
                  │           app.agenticflow.ai                    │   Source of truth:
                  │           (UI + REST API + runtime)             │   agents, workforces,
                  └─────────────────────┬───────────────────────────┘   MCP clients, runs,
                                        │                               trace log.
               ┌────────────────────────┼─────────────────────────┐
               ▼                        ▼                         ▼
     ┌─────────────────┐       ┌──────────────┐         ┌─────────────────┐
     │  af CLI (npm)   │       │  Visual UI   │         │   Docs (this)   │
     │  payload shapes │       │  (browser)   │         │   concepts,     │
     │  error envelope │       │  human-first │         │   node library  │
     └────────┬────────┘       └──────────────┘         └─────────────────┘
              │
              │  consumed by:
              ▼
  ┌───────────────────────────────────────────────────────────────────┐
  │                    Desktop AI agents                              │
  │                                                                   │
  │  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌────────────────────┐ │
  │  │  Ishi    │  │ Claude   │  │ OpenAI   │  │  Cursor            │ │
  │  │ (first-  │  │ Code     │  │ Codex    │  │  Gemini CLI        │ │
  │  │  party)  │  │          │  │ CLI      │  │  (third-party)     │ │
  │  └──────────┘  └──────────┘  └──────────┘  └────────────────────┘ │
  │       ▲             ▲             ▲                 ▲             │
  │       └─────────────┴─────────────┴─────────────────┘             │
  │          each loads the AI Toolkit skill pack, which routes       │
  │          user intent to the right `af` CLI command                │
  └───────────────────────────────────────────────────────────────────┘
                              ▲
                              │
                          Human user
                          (briefs intent in natural language)
```

The **CLI is the contract**. Whether the human is using Ishi on their laptop, Claude Code in their IDE, Cursor while editing a file, or the AI Toolkit skill pack running in any compatible host — the path to AgenticFlow is the same set of `af` commands with the same error envelope, the same `--dry-run` safety, and the same `af bootstrap` discovery. That shared contract is how multiple desktop agents can coexist without fragmenting the platform.

## What each surface does

### Core Platform

The authoritative home for every resource. When you create an agent via the UI, the CLI, or the API, it lands in the same place. The `_links` block in `af bootstrap --json` returns URLs back into the UI so AI agents can hand off to their human at any point.
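A minimal sketch of how an AI operator might use that handoff, assuming a `_links` block and a `data_fresh` flag in the bootstrap output. The exact URLs and key names below are invented for illustration, not the real response shape:

```python
import json

# Hypothetical excerpt of `af bootstrap --json` output. The _links block and
# the data_fresh flag are described in these docs; the concrete URLs and key
# names here are invented for illustration.
bootstrap = json.loads("""
{
  "data_fresh": true,
  "_links": {
    "dashboard": "https://app.agenticflow.ai/dashboard",
    "agents": "https://app.agenticflow.ai/agents"
  }
}
""")

def handoff_url(payload: dict, view: str):
    """Pick a UI URL to hand back to the human, but only when data is fresh."""
    if not payload.get("data_fresh"):
        return None  # backend unreachable: do not point the user at stale state
    return payload.get("_links", {}).get(view)
```

Guarding on `data_fresh` before trusting any URL mirrors the rule in the AI-agent journey: a stale bootstrap means the backend is unreachable and nothing should be mutated or handed off.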

### Visual UI

The human path. Drag-and-drop workflow builder, the 11-tab agent configuration, workforce graph canvas, connections management, knowledge library, dashboards. Best for design-first workflows, understanding the mental model, and reviewing what an AI agent built.

### `af` CLI

The **API contract for AI operators**. Every platform capability is exposed here with four affordances that make it safe for autonomous use:

* **Local validation** — `--dry-run` on create/deploy commands catches shape errors before the network round-trip
* **Structured errors** — every failure returns `{schema: "agenticflow.error.v1", code, message, hint, details.payload}` with an actionable `hint` pointing at the next command
* **Partial updates** — `af agent update --patch` fetches → merges → PUTs, preserving attached MCP clients and tools
* **Self-description** — `af bootstrap/schema/context/playbook/changelog` returns everything an AI needs in one call

Developers use it directly for scripting. AI agents use it under the direction of the AI Toolkit.
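The structured-error contract above can be consumed with a few lines of parsing. A minimal sketch, assuming the `agenticflow.error.v1` shape described in the bullets; the sample values and the helper name are invented:

```python
import json

# Hypothetical sample of the agenticflow.error.v1 envelope described above.
# The field names follow these docs; the concrete values are invented.
sample_stderr = json.dumps({
    "schema": "agenticflow.error.v1",
    "code": "VALIDATION_FAILED",
    "message": "body.name is required",
    "hint": "Run `af schema agent --json` to see the expected payload shape",
    "details": {"payload": {"name": None}},
})

def recovery_hint(raw: str):
    """Return the actionable hint if raw is a v1 error envelope, else None."""
    try:
        err = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(err, dict) or err.get("schema") != "agenticflow.error.v1":
        return None
    return err.get("hint")
```

Because every failure carries a `hint`, an autonomous caller can recover by running the named command instead of guessing.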

### AI Toolkit

The **routing layer for AI agents** operating inside IDEs (Claude Code, OpenAI Codex, Cursor, Gemini CLI) and our first-party desktop agent (Ishi). Installed once via each host's plugin marketplace. Three narrow flagship skills:

* **`agenticflow-workforce`** — multi-agent DAGs (coordinator → worker agents)
* **`agenticflow-agent`** — single-agent create/run/update
* **`agenticflow-mcp`** — attach external tool providers safely

Each skill has a tight `description` and `triggers[]` that route user prompts. A `⚠️ When NOT to use` block on every skill points at its sibling to prevent over-engineering (e.g. don't spin up a workforce for a single-bot task).

The toolkit doesn't duplicate CLI knowledge — it tells the AI which CLI command to run and passes live output through.
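The `triggers[]` routing can be sketched as a plain keyword match. This is a deliberately simplified stand-in for the real resolution logic: the skill names are real, but the trigger phrases below are invented examples, not the shipped ones:

```python
# Simplified stand-in for triggers[]-based routing. The skill names are real;
# the trigger phrases below are invented examples, not the shipped ones.
SKILLS = {
    "agenticflow-workforce": ["multi-agent", "team of agents", "hand off"],
    "agenticflow-agent": ["chatbot", "single agent", "assistant"],
    "agenticflow-mcp": ["attach", "google docs", "slack", "notion"],
}

def route(prompt: str):
    """Return the first skill whose trigger phrase appears in the prompt."""
    lowered = prompt.lower()
    for skill, triggers in SKILLS.items():
        if any(phrase in lowered for phrase in triggers):
            return skill
    return None
```

Keeping the routing table this small is what lets the skills stay under ~150 LOC: they decide *which* `af` command applies and let the CLI carry the live truth.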

### Desktop AI agents

The **human-facing conversational layer**. A desktop AI agent reads your local context (files, current task), takes your natural-language intent, and drives the AgenticFlow CLI on your behalf — then reports back with what it built.

**First-party: Ishi.** AgenticFlow's own desktop agent. Privacy-first (glass-box philosophy — local execution, your file access, bring-your-own-key for Claude/GPT/Gemini). Trace-aware — reads the AgenticFlow run log and debugs failures automatically. Can extend AgenticFlow itself by writing new workflow nodes via code-execution. Ships as a macOS/Windows/Linux app.

**Third-party: Claude Code, OpenAI Codex, Cursor, Gemini CLI.** All these hosts can load the AI Toolkit skill pack and get the same CLI-driven integration. This is the same shape [Shopify AI Toolkit](https://x.com/shopify/status/2042335627862032754) uses to expose its platform to AI agents — the CLI is the contract, any compatible host can drive it.

Both paths converge at the same `af` commands with the same `bootstrap`/`schema`/`playbook` discovery surface. Users pick the host they already trust; AgenticFlow works with all of them.

### Docs (you are here)

The human-facing long-form library. Concepts, UI walkthroughs, integration tutorials, the 193+ node library in [Reference](https://docs.agenticflow.ai/reference/nodes), industry use cases, enterprise guidance.

## Design principles

1. **CLI is the API contract for AI.** Anything an AI needs — auth, resources, schemas, shapes, changelog, playbooks, blueprints, marketplace — is in one of `af bootstrap/schema/context/playbook/changelog/blueprints/marketplace`. Every desktop AI agent (Ishi, Claude Code, Codex, Cursor) consumes the same surface.
2. **Toolkit routes, CLI answers.** Skills stay small (\~150 LOC) and point at CLI for live truth.
3. **Docs for humans, skills for AIs.** Overlap is a smell. If content drifts between them, delete one.
4. **Fail loud, hint clearly.** Every 4xx/5xx carries a recovery command in `hint`.
5. **One SoT per concern.** Concepts → docs. Commands → CLI. Routing → AI toolkit. Resources → platform. Local context → desktop agent.
6. **Glass-box desktop agents.** Anything running on the user's machine (Ishi) is privacy-first, local-execution, bring-your-own-key. Users retain trust.
7. **Stability first on the engine.** The core team keeps 100% focus on AgenticFlow platform stability, traceability, versioning, and UX. Desktop-agent work is done by a dedicated tiger team that doesn't compete for engine resources.

## First-touch: human journey

A new user visits the platform:

1. **Sign up** at [app.agenticflow.ai](https://app.agenticflow.ai) → workspace + project auto-created.
2. **Generate an API key** at **Settings → API Keys**.
3. Decide on a path:

   **Path A — Click to build.** Open the UI, follow [Quickstart](https://docs.agenticflow.ai/get-started/01-quickstart). Best for understanding the product.

   **Path B — Script with the CLI.** `npm install -g @pixelml/agenticflow-cli`, then:

   ```bash
   af login                # or set AGENTICFLOW_API_KEY / _WORKSPACE_ID / _PROJECT_ID
   af doctor --json --strict
   af bootstrap --json     # always start here
   ```

   See [CLI Reference](https://docs.agenticflow.ai/developers/cli) for the full surface.

   **Path C — Talk to an AI agent.** Install Ishi (first-party), or load the AgenticFlow AI Toolkit into Claude Code / OpenAI Codex / Cursor / Gemini CLI (third-party). Describe what you want in plain language (e.g. *"Build me a customer support bot for my SaaS"*) and the agent handles the CLI calls. No JSON, no payload shapes, no command memorization. Best for non-technical operators who want to drive AgenticFlow conversationally.
4. Each path converges on the same workspace — switch freely between UI, CLI, and AI-driven work.

**Target time-to-value:** under five minutes from signup to a deployed, runnable agent.

## First-touch: AI-agent journey (under the hood)

When a user in an IDE prompts their AI to build something on AgenticFlow, the AI Toolkit plugin activates. The journey the CLI is designed to support:

```
1. Orient:    af bootstrap --json
              ↳ returns auth, agents, workforces, blueprints, playbooks, whats_new, _links
              ↳ check data_fresh — false means backend unreachable, don't mutate

2. Learn:     af playbook <topic> --json
              ↳ e.g. `first-touch`, `migrate-from-paperclip`, `mcp-client-quirks`

3. Shape:     af schema <resource> [--field <name>] --json
              ↳ payload shape for what you're about to create
              ↳ --field drills into nested shapes (mcp_clients, response_format, etc.)

4. Preview:   af <resource> create --body @file --dry-run --json
              ↳ local validation catches shape errors before network

5. Build:     af <resource> create --body @file --json
              ↳ single agent
          OR: af workforce init --blueprint <id> --name "<name>" --json
              ↳ multi-agent team (auto-creates agents + wires the DAG)

6. Test:      af agent run --agent-id <id> --message "..." --json
          OR: af workforce run --workforce-id <id> --trigger-data '{...}'

7. Iterate:   af agent update --agent-id <id> --patch --body '{"field":"new"}' --json
              ↳ preserves MCP clients, tools, code-execution config

8. Ship:      af workforce publish --workforce-id <id> --json
              ↳ mints a public_key + public_url for the user's teammates

9. Cleanup:   af <resource> delete --<resource>-id <id> --json
              ↳ returns {schema:"agenticflow.delete.v1", deleted:true, id, resource}
```

At every step, a 4xx or 5xx response includes a `hint` that names the recovery command — no guessing.
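Step 7's `--patch` flow (fetch, merge, PUT) can be sketched as a plain merge over the fetched resource. This assumes a shallow top-level merge, which is enough to show why attached MCP clients and tools survive; the CLI's actual merge depth may differ, and the agent payload below is invented:

```python
def merge_patch(current: dict, patch: dict) -> dict:
    """Shallow merge: patched keys win, every other key is preserved as-is."""
    merged = dict(current)
    merged.update(patch)
    return merged

# Invented agent resource for illustration; only system_prompt is patched.
current = {
    "name": "support-bot",
    "system_prompt": "You help customers.",
    "mcp_clients": [{"id": "gdocs-1"}],  # attached tools survive the update
}
updated = merge_patch(current, {"system_prompt": "You help enterprise customers."})
```

A bare PUT with only the patched field would silently drop `mcp_clients`; fetching and merging first is what makes `--patch` safe for iterative edits.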

### The composition ladder

AgenticFlow's three deploy verbs (`workflow`, `agent`, `workforce`) are rungs on a complexity ladder. **Start at the lowest rung that solves the user's problem.** Every rung composes from the rungs below.

```
  Rung 6: WORKFORCE (DAG)        Explicit multi-agent coordination
          ─ trigger → coord → [A || B] → synthesizer → output
          ─ References: nodes · workflows · agents · sub-DAGs

  Rung 5: AGENT + SUB-AGENTS     Lite multi-agent, agent-driven
          ─ triage.sub_agents = [specialist_a, specialist_b]

  Rung 4: AGENT + WORKFLOW TOOL  Flexible control + deterministic body
          ─ agent.tools = [{workflow_template_id: "deep_research"}]

  Rung 3: AGENT + NODE PLUGINS   Single flexible agent picks tools dynamically
          ─ agent.plugins = [web_search, web_retrieval, api_call]

  Rung 2: WORKFLOW ENRICHED      Deterministic + real-world data
          ─ trigger → web_retrieval → llm_summarize → output

  Rung 1: WORKFLOW CHAINED       Deterministic sequential reasoning
          ─ trigger → llm_plan → llm_execute → llm_format → output

  Rung 0: WORKFLOW MINIMAL       "Hello world"
          ─ trigger → llm → output
```

**Deploy verb maps 1:1 to kind:**

| Kind        | CLI                                  | Rungs covered         |
| ----------- | ------------------------------------ | --------------------- |
| `workflow`  | `af workflow init --blueprint <id>`  | 0, 1, 2               |
| `agent`     | `af agent init --blueprint <id>`     | 3 (+ 4, 5 on roadmap) |
| `workforce` | `af workforce init --blueprint <id>` | 6                     |

`af blueprints list [--kind <k>] [--complexity <n>] --json` surfaces every shipped blueprint with its rung so AI operators can filter.

### Choosing: agent vs. workforce vs. workflow

The AI Toolkit skills resolve this automatically from the user's prompt, but the underlying rule is simple:

| User intent                                                                            | Choose                                           | Why                                                                      |
| -------------------------------------------------------------------------------------- | ------------------------------------------------ | ------------------------------------------------------------------------ |
| Deterministic multi-step pipeline (summarize URL, fetch API, chain LLMs)               | **`af workflow`**                                | Rungs 0-2. Reproducible; no agent needed                                 |
| A single chat endpoint, a customer-facing bot, one assistant                           | **`af agent`**                                   | Rung 3. One prompt handles routing. Iterate with `--patch`               |
| Multiple agents that hand off (research → write, triage → specialist, pre-built teams) | **`af workforce`**                               | Rung 6. One command creates the workforce, all agents, and the wired DAG |
| Attach Google Docs/Sheets/Slack/Notion/etc. to an existing agent                       | **`af mcp-clients` + `af agent update --patch`** | Inspect before attach to avoid tool-schema quirks                        |

Don't reach for a workforce when one agent suffices. Don't reach for an agent when a workflow suffices.
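The rule in the table reduces to two questions, asked lowest-rung first. A minimal sketch; the two booleans are a simplification of the real prompt-resolution logic in the skills:

```python
# Illustrative decision helper mirroring the table above. The two boolean
# questions are a simplification of the real prompt-resolution logic.
def choose_verb(multiple_agents: bool, conversational: bool) -> str:
    if multiple_agents:
        return "af workforce"  # rung 6: explicit hand-offs between agents
    if conversational:
        return "af agent"      # rung 3: one flexible agent picks its tools
    return "af workflow"       # rungs 0-2: a deterministic pipeline suffices
```

The ordering encodes the ladder's rule: only climb to a higher rung when the lower one cannot express the user's intent.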

### Starter catalogs: blueprints (offline) and marketplace (live)

Two complementary ways to start from a template:

|           | **Blueprint** (ships with CLI)                                                             | **Marketplace** (live backend)                          |
| --------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------- |
| Discovery | `af blueprints list --json`                                                                | `af marketplace list --type <kind> --json`              |
| Storage   | Version-locked to the CLI release                                                          | Hosted, curated, user-contributable                     |
| Network   | None needed to list                                                                        | Backend call per list/get/clone                         |
| Types     | `workflow` · `agent` · `workforce` kinds (rungs 0-6 of the ladder)                         | `agent_template` · `workflow_template` · `mas_template` |
| Deploy    | `af <kind> init --blueprint <id>` — the CLI picks the right verb from the blueprint's kind | `af marketplace try --id <id>` (auto-detects type)      |

**Workflow blueprints (rungs 0-2, 4 total)** — deterministic multi-node flows. Need one LLM-provider connection in the workspace (auto-discovered):

| ID              | Rung | Nodes                    | Best for                          |
| --------------- | ---- | ------------------------ | --------------------------------- |
| `llm-hello`     | 0    | llm                      | Learning the model; one-off Q\&A  |
| `llm-chain`     | 1    | llm\_plan → llm\_execute | Plan-then-execute reasoning       |
| `summarize-url` | 2    | web\_retrieval → llm     | Digesting an article URL          |
| `api-summary`   | 2    | api\_call → llm          | Explaining an unfamiliar JSON API |

**Agent blueprints (rung 3, 3 total)** — single agent with built-in plugins. No connection setup needed:

| ID                   | Plugins                                                   | Best for                               |
| -------------------- | --------------------------------------------------------- | -------------------------------------- |
| `research-assistant` | web\_search, web\_retrieval, api\_call, string\_to\_json  | Current-events research with citations |
| `content-creator`    | web\_search, web\_retrieval, agenticflow\_generate\_image | Blog drafts + hero images              |
| `api-helper`         | api\_call, string\_to\_json, web\_search                  | HTTP API wrappers with analysis        |

**Workforce blueprints (rung 6, 13 total)** — multi-agent DAGs (trigger → coordinator → workers → output, optionally → synthesizer). Require the Workforce feature:

| ID                                                                                   | Agents | Shape                                                  |
| ------------------------------------------------------------------------------------ | ------ | ------------------------------------------------------ |
| `research-pair`                                                                      | 2      | planner → researcher (web\_search + web\_retrieval)    |
| `content-duo`                                                                        | 2      | writer (web) → illustrator (generate\_image)           |
| `api-pipeline`                                                                       | 2      | fetcher (api\_call) → analyst                          |
| `fact-check-loop`                                                                    | 2      | writer → fact\_checker (verify claims via web\_search) |
| `parallel-research`                                                                  | 4      | coordinator → 2 researchers (parallel) → synthesizer   |
| `dev-shop` · `marketing-agency` · `sales-team` · `content-studio` · `support-center` | 2-4    | Vertical teams (generic agents, attach your own tools) |
| `amazon-seller` · `tutor` · `freelancer`                                             | 5      | Domain-specific vertical teams                         |

The first 5 workforce blueprints (`research-pair` through `parallel-research`) have AgenticFlow-native plugins pre-attached to every slot, so they work end-to-end with zero follow-up setup.

**Roadmap:** Rungs 4 (agent + workflow tool) and 5 (agent + sub-agents) are supported by the backend but not yet exposed as CLI blueprints — planned in a follow-up release.

### Copy-paste prompts

Short, minimal-context prompts a user can paste to any AI assistant with `af` access, which then discovers + deploys via the CLI:

```
Set up a research agent that cites sources from the web. Use `af`, test with a real
current-events question, clean up after.
```

```
Deploy a parallel-research workforce via `af`, test with a "compare X vs Y" question.
Confirm both researchers ran in parallel and the synthesizer produced a unified answer.
Clean up after.
```

Full catalog: `af playbook ready-prompts` or [agenticflow-skill/reference/ready-prompts.md](https://github.com/PixelML/agenticflow-skill/blob/main/reference/ready-prompts.md).

## How the surfaces stay in sync

Avoiding drift between four moving parts requires discipline:

* **CLI `playbooks.ts` is the source of truth** for playbook content. AI Toolkit skills regenerate from it (via a forthcoming `sync-from-cli.mjs` mechanism).
* **CLI `changelog.ts` is the source of truth** for version history. Surfaced via `af changelog --json` and consumed by docs + marketing.
* **`af bootstrap`-backed live data** (models, blueprints, workforces, agents) is queried at runtime — never hardcoded in docs or skills.
* **Platform is the SoT for resources.** Docs describe what a workforce IS; CLI is how you create one; platform is where it LIVES.

## Version alignment

* **Platform**: continuous — [app.agenticflow.ai](https://app.agenticflow.ai)
* **CLI**: [`@pixelml/agenticflow-cli@1.10.0`](https://www.npmjs.com/package/@pixelml/agenticflow-cli) (npm, tag-triggered auto-publish)
* **SDK**: [`@pixelml/agenticflow-sdk@1.6.0`](https://www.npmjs.com/package/@pixelml/agenticflow-sdk) (shipped alongside CLI)
* **AI Toolkit**: `v4.3.0` — distributed to Claude, Codex, Cursor, Gemini plugin marketplaces
* **Ishi**: desktop AI agent (first-party) — **available now** (launched OH#34 as Claw, rebranded OH#35)
* **Docs**: this GitBook — continuously updated

When a new CLI version ships, `af changelog --json` surfaces the changes. The AI Toolkit's `scripts/sync-from-cli.mjs` will pull them into the skill content automatically (planned — manual sync in the interim).

## Next steps

* [Quickstart](https://docs.agenticflow.ai/get-started/01-quickstart) — five-minute human onboarding
* [AgenticFlow CLI](https://docs.agenticflow.ai/developers/cli) — developer-facing CLI overview
* [CLI Command Reference](https://docs.agenticflow.ai/developers/agenticflow-cli-capabilities) — every command, every flag
* [API Overview](https://docs.agenticflow.ai/developers/api) — REST contract below the CLI
* [Agents](https://docs.agenticflow.ai/ai-agents/03-agents) — single-agent concepts
* [Workforce](https://docs.agenticflow.ai/workforce/05-workforce) — multi-agent orchestration concepts
* [Integrations](https://docs.agenticflow.ai/integrations/07-integrations) — MCP providers and 300+ tools

### Install the AI Toolkit

* **Claude Code**: `/plugin marketplace add PixelML/agenticflow-skill` then `/plugin install agenticflow-plugin@agenticflow-ai-toolkit`
* **Gemini CLI**: `gemini extensions install https://github.com/PixelML/agenticflow-skill`
* **Cursor**: Install from Cursor Marketplace
* **OpenAI Codex CLI**: `/plugins` → search AgenticFlow → Add to Codex
* **Other / VS Code**: paste `https://github.com/PixelML/agenticflow-skill` into *Chat: Install Plugin From Source*
