For agents

Start here

The fastest path to adding Blueprint doctrine to your coding agent or AI runtime.

For engineers setting up a new agent runtime or adding doctrine to an existing one.

What you get

A public MCP endpoint for live doctrine retrieval, deterministic JSON and Markdown exports for local bundles, and installable rule files for Claude Code, Cursor, Windsurf, GitHub Copilot, Gemini CLI, and Codex. All artifacts are generated from the same structured source.

First call to make

Run list_clusters() to confirm the MCP endpoint is live and the session is valid. Then run search_examples(query="orchestration visibility steering", limit=3) as a second proof call. These two calls together validate that retrieval, search, and session handling work end to end.
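If you want to check the wire format before pointing a client at the endpoint, the two proof calls can be sketched as JSON-RPC payloads. This is a minimal sketch assuming MCP's standard tools/call method over the HTTP transport; the tool names and arguments (query, limit) come from this page, while the envelope shape follows the MCP spec rather than anything endpoint-specific. A real client sends these over HTTP instead of printing them.

```python
import json

def mcp_tool_call(request_id, tool, arguments):
    # JSON-RPC 2.0 envelope used by MCP's tools/call method.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# The two proof calls from this page, as wire payloads.
first = mcp_tool_call(1, "list_clusters", {})
second = mcp_tool_call(2, "search_examples",
                       {"query": "orchestration visibility steering", "limit": 3})

for payload in (first, second):
    print(json.dumps(payload))
```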

Choose your install path

Pick the tool you use most: MCP for live retrieval, rule files for always-on guidance inside your IDE, or JSON exports for local offline use. Each path works independently — you do not need MCP to use the rule files, and you do not need the rule files to use MCP.

Installable now
Pick your tool and install the doctrine layer
The public surface is no longer just MCP and a few exports. Use the integrations page to download the shared instruction files, provider-specific rules, and prompt packs that are ready to use today.
Claude Code, Codex, Cursor, Windsurf, GitHub Copilot, Gemini CLI, DeepSeek, Qwen
Shared files like AGENTS.md and llms.txt for cross-tool setups
Provider-specific artifacts for Cursor, Windsurf, GitHub Copilot, Gemini, and open-weight prompts
Only the CLI and the public OpenAPI surface remain deferred until there is real package and route parity
Open integrations downloads

Runtime architecture

From agent demos to runtime discipline

A capable model is not a runtime architecture. If agents are going to trigger workflows, load files, use tools, delegate work, and act across channels, the runtime needs clear patterns for control, visibility, and recovery. This cluster helps teams design those patterns deliberately.

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows

Configure the blueprint in your client

This page should be enough to get started: the live MCP endpoint, the right downloads, a copy-paste config block, kickoff prompts, and compatibility notes for the main clients.

Claude Code
Skill pack + live MCP
Install the blueprint skill first for always-on doctrine context, then add the MCP endpoint for live search, clusters, and examples.

MCP config

{
  "aidesignblueprint": {
    "type": "http",
    "url": "https://aidesignblueprint.com/mcp"
  }
}
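Depending on the client, this fragment usually nests under a top-level servers key in the config file. The sketch below assumes a `.mcp.json` layout with an `mcpServers` wrapper; the wrapper key is an assumption based on common MCP-aware clients, not something stated on this page, so check your client's documentation before copying it.

```python
import json

# Hypothetical full .mcp.json, assuming the fragment above nests
# under an "mcpServers" key as in several MCP-aware clients.
config = {
    "mcpServers": {
        "aidesignblueprint": {
            "type": "http",
            "url": "https://aidesignblueprint.com/mcp",
        }
    }
}

# Round-trip through JSON to confirm the file would parse cleanly.
text = json.dumps(config, indent=2)
assert json.loads(text)["mcpServers"]["aidesignblueprint"]["type"] == "http"
print(text)
```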

Kickoff prompt

Use the blueprint as a doctrine layer. Read the relevant principles first, then query the live MCP for clusters, examples, and assets. Start with list_clusters and propose the next useful lookup.

Codex
MCP + local exports
Register the MCP endpoint with the same `.mcp.json` block used by the other clients. Keep the JSON or Markdown export as a local fallback when you want a readable doctrine bundle without live calls.

MCP config

{
  "aidesignblueprint": {
    "type": "http",
    "url": "https://aidesignblueprint.com/mcp"
  }
}

Kickoff prompt

Configure the blueprint as an HTTP MCP server and keep the local JSON as fallback. Start with list_principles, then search examples for visibility, orchestration, or steering based on the task.

Cursor
Persistent rules + live MCP
Use the `.mdc` file for always-on editor guidance and the MCP endpoint for live principle, cluster, and example lookups.

Rule path

.cursor/rules/blueprint-doctrine.mdc
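A `.mdc` rule file pairs YAML frontmatter with the rule body. The content below is an illustrative sketch, not the shipped rule; only the file path comes from this page, and the frontmatter fields follow Cursor's rules convention.

```
---
description: Blueprint doctrine layer for AI-native code review
alwaysApply: true
---
Before implementing agent behavior, clarify the relevant principle or
cluster, then verify the pattern against the live MCP examples.
```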

Kickoff prompt

Always apply the blueprint doctrine. Clarify the relevant principle or cluster first, then use the live MCP to search examples and verify runtime, steering, or approval-boundary patterns.

Windsurf
Workspace rule + shared files
Use the workspace rule for persistent guidance, then add AGENTS.md at repo root if you want the same doctrine available across other tools too.

Rule path

.windsurf/rules/blueprint-core.md

Kickoff prompt

Apply the blueprint as a workspace rule. Make boundaries, approvals, and fallback paths explicit first, then use principles and examples to verify the pattern before implementing.

GitHub Copilot
Repo instructions + shared doctrine
Place the instructions file under `.github`, then use AGENTS.md if you want a doctrine layer that is also readable by other repo-aware clients.

Repo file

.github/copilot-instructions.md

Kickoff prompt

Use the blueprint when generating or reviewing AI-native code. Always check execution boundaries, approval boundaries, runtime visibility, fallback, and reversibility.

Gemini CLI
Project context + llms.txt
Use GEMINI.md as persistent project context, then keep llms.txt or the open-weight prompt packs as quick support for discovery and setup.

Project file

GEMINI.md

Kickoff prompt

Treat the blueprint as project context. Clarify the relevant principle and boundary first, then proceed with implementation, fallback, and runtime review.

DeepSeek
Prompt pack + llms.txt
Use the prompt pack as the starting instruction file for DeepSeek workflows or local open-weight runtimes. Keep llms.txt as the quick discovery reference when you want broader context without MCP.

Prompt file

system-prompt-deepseek.md

Kickoff prompt

Load the prompt pack, make boundaries, approvals, and fallback explicit, then use llms.txt as the lightweight support document when you need a quick doctrine summary.

Qwen
Prompt pack + llms.txt
Use the prompt pack as the initial context file for Qwen workflows or local runtimes without MCP. llms.txt remains the lighter fallback for discovery and pattern recall.

Prompt file

system-prompt-qwen.md

Kickoff prompt

Load the Qwen prompt pack, clarify the execution boundary and fallback first, then use llms.txt when you need a quick doctrine view without live retrieval.

Kickoff prompts
Use these prompts to start from a real task instead of spending the first turn explaining setup and context.

Architecture audit

Use the blueprint as an audit framework. List the clusters first, then propose which principles to use to assess this agent architecture and which examples to read next.

Example lookup

Search examples for orchestration, visibility, and steering. Group them by principle and tell me which ones are worth reading first.

Principle explainer

Explain the most relevant principle for this workflow with its definition, rationale, risk, and one linked example.

Download prompt pack

Recommended paths for current stacks

The example library is still expanding its Claude, TypeScript, and Next.js coverage. In the meantime, these are the strongest paths that already fit the product we have today.

Curated path
Claude workflows
Use the skill pack as the always-on doctrine layer, then move into public MCP for search, filtering, and drill-down. It is the highest-confidence path until native HTTP tool registration is uniform.
Curated path
TypeScript builders
Start from principles, the runtime branch, and inspectable examples to review steering, hand-offs, and visibility. Dedicated TypeScript curation is still expanding, but the working path is already usable.
Curated path
Next.js product teams
Use the blueprint to judge agent surfaces, progressive disclosure, tool visibility, and review loops. The runtime branch remains the strongest entry point for orchestration and safety decisions.
Curated path
Multi-agent runtimes
Start with orchestration, visibility, and trust, then use `search_examples` to isolate patterns for hand-offs, bounded authority, and recovery. It is the closest path to live audit work today.
Install guidance
Claude

Claude: import the skill pack zip into a project or workspace skill library. Use /audit-doctrine to run a full doctrine review against the live MCP endpoint.

Codex

Codex: register the same HTTP server in your MCP config, then keep the JSON or Markdown export as a local fallback when you want the doctrine without a live round-trip.

Cursor

Cursor: add the `.mdc` export to your rules setup for always-on doctrine guidance.

Windsurf

Windsurf: copy the workspace rule into `.windsurf/rules/blueprint-core.md`, then add AGENTS.md if you want the same doctrine readable by multiple tools.

GitHub Copilot

GitHub Copilot: place the file in `.github/copilot-instructions.md`, then use AGENTS.md as the shared repo-level doctrine layer.

Gemini CLI

Gemini CLI: keep `GEMINI.md` at project root and use llms.txt or the open-weight prompt packs as local support.

DeepSeek

DeepSeek: use `system-prompt-deepseek.md` as the initial prompt file and keep llms.txt as the local support document for discovery and quick recall.

Qwen

Qwen: use `system-prompt-qwen.md` as the initial prompt file and llms.txt as the lightweight fallback when you want the doctrine without live retrieval.

JSON / Markdown

Tooling: ingest the JSON or Markdown export if you need a local, read-only doctrine bundle.
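As a sketch of that local ingestion: the export schema shown here is hypothetical (the field names are assumptions, not the published format); the point is only that a read-only bundle can be loaded and filtered with no live MCP calls.

```python
import json

# Hypothetical doctrine bundle; the real export's field names may differ.
bundle_text = json.dumps({
    "principles": [
        {"id": "runtime-visibility", "cluster": "runtime"},
        {"id": "approval-boundaries", "cluster": "control"},
    ]
})

bundle = json.loads(bundle_text)

# Read-only lookup: group principle ids by cluster, no network required.
by_cluster = {}
for p in bundle["principles"]:
    by_cluster.setdefault(p["cluster"], []).append(p["id"])

print(by_cluster)
```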

Continue through the rest of the blueprint

The agent surface is a distribution branch, not a replacement for the handbook. Use it together with the core content paths below.

Expand into runtime architecture
Use the runtime branch when the question shifts from coding-agent doctrine to triggers, schedules, context hubs, and runtime safety for broader agent systems.
Read the doctrine first
Principles remain the canonical design language for delegation, visibility, orchestration, and approval boundaries.
Inspect implementation examples
Examples give agents and humans the same inspectable source material for concrete runtime patterns.
Use courses for human skill-building
Learning pages stay focused on people building judgment and evidence before deeper validation loops arrive.
See the plan-to-agent mapping
The tiered structure — from free access to team packages — will be available at launch.
Keep certification as the human review path
Certification remains the evidence and review layer while the agent surface distributes the doctrine into tooling.
Move from public doctrine into a private practitioner loop
The public MCP endpoint and downloadable skill packs are intentionally strong. Pro starts when you need to apply that standard privately to your own workflow and keep the evidence over time.
Run protected reviews against a real workflow instead of only querying the public doctrine
Save findings, evidence notes, and report history across repeated runs
Use authenticated MCP tools when the work moves from exploration to private product validation
Let your agent trigger a human escalation — support, a partnership conversation, or an agency call — from within the session, without breaking the loop
See how Pro works
Is MCP the whole runtime model for these agents?

No. MCP is the transport and discovery layer for tools, resources, and prompts. Agent identity, task instructions, context hubs, tiered loading, budgets, turn caps, and session history belong to the broader runtime architecture layer, which is covered in the runtime branch.