
Level 2: Prompt Chains & Routing — Deterministic DAGs

Multiple LLM calls in a fixed sequence. Code controls the flow, not the model.

Key Facts

Level
advanced
Runtime
Python • Pydantic + Python Dotenv
Pattern
Deterministic routing with explicit stage-by-stage visibility
Interaction
Live sandbox • Script
Updated
14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Level 2: Prompt Chains &… -> Route with explicit logic -> Run the agent task -> Classification -> Code routing -> Resolution

Entry

Level 2: Prompt Chains &…

Process

Route with explicit logic

Outcome

Classification

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Deterministic routing with explicit stage-by-stage visibility
Multiple model calls, but fixed orchestration
Code chooses the route, not the model
Escalation remains a visible branch
Source references
Library entry
agents-agent-complexity-2-prompt-chains
Source path
content/example-library/sources/agents/agent-complexity/2-prompt-chains.py
Libraries
pydantic, pydantic-ai, python-dotenv, nest-asyncio
Runtime requirements
Local repo environment
Related principles
Design for delegation rather than direct manipulation, Replace implied magic with clear mental models, Represent delegated work as a system, not merely as a conversation, Optimise for steering, not only initiating

Model context

Model-agnostic • Local-viable • No tool calling required • Low reasoning requirement

Sequential prompt chaining is structurally enforced. Each step is deterministic; model quality affects output quality but not pattern correctness.
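Because the orchestration lives in code, pattern correctness can be checked with no model at all. A minimal sketch of the same classify → route → resolve shape, with the model calls injected as plain callables (the stub names here are hypothetical, not from the source):

```python
from typing import Callable, Dict, Tuple

def process_ticket_dag(
    ticket: str,
    classify: Callable[[str], str],
    handlers: Dict[str, Callable[[str], str]],
) -> Tuple[str, str]:
    """Run the fixed DAG: classify, then route, then resolve."""
    category = classify(ticket)              # first "model call" (stubbed here)
    resolution = handlers[category](ticket)  # code, not the model, picks the route
    return category, resolution

# Deterministic stubs standing in for LLM calls:
def fake_classify(ticket: str) -> str:
    return "billing"

fake_handlers = {"billing": lambda ticket: "Refund the duplicate charge."}

category, resolution = process_ticket_dag(
    "I was charged twice.", fake_classify, fake_handlers
)
```

Swapping in a better model changes the quality of `category` and `resolution`, but never the shape of the flow.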

2-prompt-chains.py

python
"""
Level 2: Prompt Chains & Routing — Deterministic DAGs
Multiple LLM calls in a fixed sequence. Code controls the flow, not the model.
"""

from enum import Enum

from pydantic import BaseModel
from pydantic_ai import Agent
from dotenv import load_dotenv
import nest_asyncio

load_dotenv()

nest_asyncio.apply()


# --- Models ---


class Category(str, Enum):
    BILLING = "billing"
    TECHNICAL = "technical"
    GENERAL = "general"


class TicketClassification(BaseModel):
    category: Category
    confidence: float


class Resolution(BaseModel):
    response: str
    escalate: bool


# --- Agents (each is a single focused LLM call) ---


classifier = Agent(
    "anthropic:claude-sonnet-4-6",
    output_type=TicketClassification,
    system_prompt="Classify the customer ticket into a category. Be precise.",
)

billing_handler = Agent(
    "anthropic:claude-sonnet-4-6",
    output_type=Resolution,
    system_prompt=(
        "You handle billing issues. Generate a resolution. "
        "Set escalate=true if a refund over $100 is needed."
    ),
)

technical_handler = Agent(
    "anthropic:claude-sonnet-4-6",
    output_type=Resolution,
    system_prompt=(
        "You handle technical issues. Generate a resolution. "
        "Set escalate=true if the issue requires engineering intervention."
    ),
)

general_handler = Agent(
    "anthropic:claude-sonnet-4-6",
    output_type=Resolution,
    system_prompt="You handle general inquiries. Be helpful and concise.",
)


# --- DAG: classify → route → handle → validate ---


HANDLERS = {
    Category.BILLING: billing_handler,
    Category.TECHNICAL: technical_handler,
    Category.GENERAL: general_handler,
}


def process_ticket(ticket: str) -> Resolution:
    classification = classifier.run_sync(ticket)
    print(
        f"Classified as: {classification.output.category.value} "
        f"({classification.output.confidence:.0%})"
    )

    handler = HANDLERS[classification.output.category]
    result = handler.run_sync(ticket)

    if result.output.escalate:
        print("Escalating to human agent")

    return result.output


if __name__ == "__main__":
    ticket = (
        "I was charged twice for my subscription last month. "
        "Order ID: #12345. The duplicate charge was $49.99."
    )
    resolution = process_ticket(ticket)
    print(f"\nResponse: {resolution.response}")
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action.

classifier = Agent(
HANDLERS = {
classification = classifier.run_sync(ticket)
handler = HANDLERS[classification.output.category]
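A stripped-down sketch of that boundary (stdlib only, hypothetical handler names): the route map enumerates every reachable path, and an unknown label fails loudly instead of silently inventing a new route.

```python
from enum import Enum

class Category(str, Enum):
    BILLING = "billing"
    TECHNICAL = "technical"
    GENERAL = "general"

def handle_billing(ticket: str) -> str:
    return "billing handler: " + ticket

def handle_technical(ticket: str) -> str:
    return "technical handler: " + ticket

def handle_general(ticket: str) -> str:
    return "general handler: " + ticket

# The dict is the entire routing surface: every reachable path is listed here.
HANDLERS = {
    Category.BILLING: handle_billing,
    Category.TECHNICAL: handle_technical,
    Category.GENERAL: handle_general,
}

def route(label: str):
    # Category(label) raises ValueError for any label outside the enum,
    # so a malformed classification cannot reach an unlisted handler.
    return HANDLERS[Category(label)]
```

The schema (`Category`), the route map (`HANDLERS`), and the lookup (`route`) together bound what the system can do, regardless of what the model emits.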
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Submit a support issue to the pipeline.
Watch the system classify the issue and choose the handler deterministically.
Inspect the final resolution and whether escalation was triggered.
Sandbox: Deterministic routing with explicit stage-by-stage visibility
Classify, route, then resolve

This simulation shows the product pattern behind deterministic orchestration: code owns the sequence, while each model call stays narrow and inspectable.

UX explanation

The user still delegates once, but the experience now has visible internal stages: classify the ticket, route to the right handler, and then produce a resolution. The system should reveal that sequence instead of pretending the result arrived magically.
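One way to make those stages legible in product code is an explicit stage callback the UI can subscribe to. A minimal stdlib sketch (hypothetical names, stubbed model calls):

```python
def run_with_stages(ticket, classify, handlers, on_stage):
    """Run the pipeline, reporting each stage so the UI can render it."""
    on_stage("classify", ticket)
    category = classify(ticket)
    on_stage("route", category)
    resolution = handlers[category](ticket)
    on_stage("resolve", resolution)
    return resolution

# Stub callables in place of model calls; events drive a stage-by-stage UI.
events = []
run_with_stages(
    "App crashes on login",
    classify=lambda t: "technical",
    handlers={"technical": lambda t: "Clear the cache and reinstall."},
    on_stage=lambda stage, payload: events.append(stage),
)
```

The sequence of emitted events is exactly the sequence the user should see, which is what stops the result from feeling like magic.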

AI design explanation

Each model call is scoped to one job. Classification does not resolve, and handlers do not decide the route. Deterministic product code chooses the next step, keeping the workflow inspectable and easy to debug.
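That scoping is enforceable rather than merely conventional: each call's output schema can reject anything outside its one job. A stdlib sketch (a frozen dataclass standing in for the Pydantic models; the allowed set and bounds mirror the example's schema):

```python
from dataclasses import dataclass

ALLOWED_CATEGORIES = {"billing", "technical", "general"}

@dataclass(frozen=True)
class TicketClassification:
    """Classifier output: a category and a confidence, nothing else."""
    category: str
    confidence: float

    def __post_init__(self):
        # Fail fast on anything outside the classifier's one job.
        if self.category not in ALLOWED_CATEGORIES:
            raise ValueError(f"unknown category: {self.category!r}")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError(f"confidence out of range: {self.confidence}")
```

A classification that tries to smuggle in a resolution, or a handler that tries to re-route, simply has no field to put it in.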

Interaction walkthrough

  1. Submit a support issue to the pipeline.
  2. Watch the system classify the issue and choose the handler deterministically.
  3. Inspect the final resolution and whether escalation was triggered.

Pipeline input

Classifier -> Deterministic route map -> Resolution handler

1. Classification

Run the classifier to expose the first system decision.

2. Code routing

The route becomes visible after classification.

3. Resolution

The chosen handler produces the final structured resolution.

What the learner should notice

  • Multiple model calls, but fixed orchestration
  • Code chooses the route, not the model
  • Escalation remains a visible branch
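The escalation branch can also absorb classifier uncertainty. A hedged sketch (the 0.6 threshold is a hypothetical tuning choice, not from the source): below the threshold, no handler runs and the ticket goes straight to a human queue.

```python
CONFIDENCE_THRESHOLD = 0.6  # hypothetical cutoff; tune per product

def choose_route(category: str, confidence: float) -> str:
    """Deterministic code decides: low confidence never reaches a handler."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return category
```

Because the threshold lives in code rather than in a prompt, the escalation branch stays visible, testable, and adjustable without retraining anything.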
Used in courses and paths

This example currently stands on its own in the library, but it still connects to the principle system and the broader example family.

Related principles

Runtime architecture

Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows