
Level 1: Augmented LLM — Single API Call

One model call with structured output, system prompt, and context. No loops, no tools.

Key Facts

Level
advanced
Runtime
Python • Pydantic + Python Dotenv
Pattern
Single-step delegation with explicit structured output
Interaction
Live sandbox • Script
Updated
14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Level 1: Augmented LLM — Single API Call -> Run the agent task -> User intent -> Model judgment -> Structured output -> One model call only

Entry

Level 1: Augmented LLM — Single API Call

Process

Run the agent task

Outcome

Model judgment

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

Visual flow • Real source • Sandbox or walkthrough • MCP access
How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Single-step delegation with explicit structured output
One model call only
Structured output constrains the result shape
No tool calls, loops, or hidden orchestration
Source references
Library entry
agents-agent-complexity-1-augmented-llm
Source path
content/example-library/sources/agents/agent-complexity/1-augmented-llm.py
Libraries
pydantic, python-dotenv
Runtime requirements
Local repo environment
Related principles
Design for delegation rather than direct manipulation, Replace implied magic with clear mental models, Represent delegated work as a system, not merely as a conversation, Optimise for steering, not only initiating

Model context

Model-agnostic • Local-viable • No tool calling required • Low reasoning requirement

The Augmented LLM pattern adds context retrieval and memory on top of a base model call. The pattern works regardless of which model handles inference.
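The model-agnostic claim can be illustrated with plain Pydantic: whichever model produces the JSON, the same schema validates it. A minimal sketch (the JSON payload below is hypothetical, not output from a real model call):

```python
from pydantic import BaseModel


class TicketClassification(BaseModel):
    category: str
    priority: str
    summary: str
    can_auto_resolve: bool


# Hypothetical raw JSON, as any model provider might return it.
raw = (
    '{"category": "billing", "priority": "high", '
    '"summary": "Duplicate charge on order #12345", "can_auto_resolve": true}'
)

# The schema, not the model, enforces the output contract.
ticket = TicketClassification.model_validate_json(raw)
print(ticket.category, ticket.can_auto_resolve)  # billing True
```

Because validation lives in the schema, swapping the inference provider changes only the model string passed to the agent, not the contract.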

1-augmented-llm.py

python
"""
Level 1: Augmented LLM — Single API Call
One model call with structured output, system prompt, and context. No loops, no tools.
"""

from pydantic import BaseModel
from pydantic_ai import Agent
from dotenv import load_dotenv
import nest_asyncio

# Load provider credentials (e.g. ANTHROPIC_API_KEY) from a local .env file.
load_dotenv()

# Allow run_sync inside an already-running event loop (e.g. in a notebook).
nest_asyncio.apply()


class TicketClassification(BaseModel):
    category: str
    priority: str
    summary: str
    can_auto_resolve: bool


agent = Agent(
    "anthropic:claude-sonnet-4-6",
    output_type=TicketClassification,
    system_prompt=(
        "You are a customer support classifier. "
        "Classify incoming tickets by category (billing, technical, general), "
        "priority (low, medium, high), and whether they can be auto-resolved."
    ),
)

result = agent.run_sync(
    "I was charged twice for my subscription last month. "
    "Order ID: #12345. Please refund the duplicate charge."
)

print(result.output)
# category='billing' priority='high' summary='Duplicate subscription charge, requesting refund for order #12345' can_auto_resolve=True
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action.

class TicketClassification(BaseModel):
agent = Agent(
output_type=TicketClassification
result = agent.run_sync(
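Because `result.output` is a `TicketClassification` instance rather than free text, the product layer can branch on typed fields. A hedged sketch of that inspection step (the instance below stands in for a real `agent.run_sync` result, and the routing rule is illustrative, not from the source file):

```python
from pydantic import BaseModel


class TicketClassification(BaseModel):
    category: str
    priority: str
    summary: str
    can_auto_resolve: bool


# Stand-in for result.output from agent.run_sync (hypothetical values).
output = TicketClassification(
    category="billing",
    priority="high",
    summary="Duplicate subscription charge, refund requested",
    can_auto_resolve=True,
)

# The product decides the next action from typed fields, not string parsing.
if output.can_auto_resolve and output.priority != "high":
    action = "auto_resolve"
else:
    action = "route_to_human"

print(action, output.model_dump())
```

Here a high-priority ticket is routed to a human even though the model marked it auto-resolvable, which is exactly the kind of product boundary the page asks learners to locate.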
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Choose or edit a support request.
Run a single classification pass.
Inspect the structured output before deciding the next product action.
Sandbox • Single-step delegation with explicit structured output
Single-call classification surface

A learner can simulate the exact UX pattern behind a Level 1 augmented LLM: provide one request, apply one bounded model call, and inspect one structured result.

UX explanation

The user expresses intent once, the system applies one bounded classification task, and the result comes back in a format that is easy to inspect. There is no hidden looping, no background orchestration, and no tool routing.

AI design explanation

This pattern keeps model scope narrow. The system prompt defines the job, the schema defines the output contract, and the product can show exactly what the model is allowed to decide.
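The output contract can be tightened further so that out-of-vocabulary values fail validation instead of leaking into the product. A sketch using `Literal` fields (this stricter variant is illustrative and does not appear in the source file):

```python
from typing import Literal

from pydantic import BaseModel, ValidationError


class StrictTicketClassification(BaseModel):
    # Only the values named in the system prompt are accepted.
    category: Literal["billing", "technical", "general"]
    priority: Literal["low", "medium", "high"]
    summary: str
    can_auto_resolve: bool


try:
    StrictTicketClassification(
        category="sales",  # outside the contract
        priority="high",
        summary="Pricing question",
        can_auto_resolve=False,
    )
    rejected = False
except ValidationError as exc:
    rejected = True
    print("rejected:", exc.error_count(), "error(s)")
```

With plain `str` fields the model could invent a fourth category; with `Literal` fields the schema makes the allowed decision space explicit and machine-checkable.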

Interaction walkthrough

  1. Choose or edit a support request.
  2. Run a single classification pass.
  3. Inspect the structured output before deciding the next product action.

User request

script • Single model call • Inspectable output


System output

Run the single-call flow to inspect what the product could show after one bounded model decision.

Why this is an agentic UX lesson

  • One model call only
  • Structured output constrains the result shape
  • No tool calls, loops, or hidden orchestration
Used in courses and paths

This example currently stands on its own in the library, but it still connects to the principle system and the broader example family.

Related principles

Runtime architecture

Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows