Example • script • intermediate • runnable • human-approval

Human-in-the-Loop: Structured Output with Router

Key Facts

Level
intermediate
Runtime
Python • OpenAI API
Pattern
Inspectable flow with visible system boundaries
Interaction
Live sandbox • Script
Updated
14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Human-in-the-Loop: Structured Output with Router -> Validate structured output -> User request -> System execution -> Reviewable output -> Establish trust through inspectability

Start: Human-in-the-Loop: Structured Output with Router
Checkpoint: Validate structured output
Outcome: User request

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

Formats: Visual flow • Real source • Sandbox or walkthrough • MCP access
How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Inspectable flow with visible system boundaries
Establish trust through inspectability
Make hand-offs, approvals, and blockers explicit
Optimise for steering, not only initiating
Source references
Library entry
models-openai-10-human-in-the-loop-1-structured-output
Source path
content/example-library/sources/models/openai/10-human-in-the-loop/1-structured-output.py
Libraries
openai, pydantic, python-dotenv
Runtime requirements
OPENAI_API_KEY
Related principles
Establish trust through inspectability, Make hand-offs, approvals, and blockers explicit, Optimise for steering, not only initiating
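
The runtime requirement above is a single environment variable, loaded via `python-dotenv`. A minimal setup sketch (the key value is a placeholder; the three listed libraries must be installed first):

```shell
pip install openai pydantic python-dotenv
echo 'OPENAI_API_KEY=sk-your-key-here' > .env   # read by load_dotenv() at startup
python 1-structured-output.py
```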

1-structured-output.py

python
"""
Human-in-the-Loop: Structured Output with Router

This pattern uses Pydantic models and a router to:
1. Analyze the user request and create action plan(s)
2. Pause for approval on sensitive actions
3. Execute each action

No tool calls - only chained LLM calls with structured output.

Run: python 1-structured-output.py
"""

from typing import Literal, Self
from openai import OpenAI
from pydantic import BaseModel, model_validator
from dotenv import load_dotenv

load_dotenv()

client = OpenAI()

# --------------------------------------------------------------
# Account State
# --------------------------------------------------------------

BALANCE = 1000.0


# --------------------------------------------------------------
# Action Functions
# --------------------------------------------------------------


def get_balance() -> str:
    return f"Current balance: ${BALANCE:.2f}"


def transfer_money(to_account: str, amount: float) -> str:
    global BALANCE
    if amount > BALANCE:
        return f"Error: Insufficient funds. Current balance: ${BALANCE:.2f}"
    BALANCE -= amount
    return f"Transferred ${amount:.2f} to {to_account}. New balance: ${BALANCE:.2f}"


def deposit_money(amount: float) -> str:
    global BALANCE
    BALANCE += amount
    return f"Deposited ${amount:.2f}. New balance: ${BALANCE:.2f}"


# --------------------------------------------------------------
# Structured Output Models
# --------------------------------------------------------------


class Action(BaseModel):
    action_type: Literal["check_balance", "transfer", "deposit"]
    to_account: str | None = None
    amount: float | None = None
    requires_confirmation: bool = False

    @model_validator(mode="after")
    def enforce_confirmation_rule(self) -> Self:
        if self.action_type == "transfer" and self.amount and self.amount > 100:
            self.requires_confirmation = True
        return self


class ActionPlan(BaseModel):
    actions: list[Action]


# --------------------------------------------------------------
# Example confirmation enforcement
# --------------------------------------------------------------

confirmation_example = Action(
    action_type="transfer",
    to_account="Alice",
    amount=500,  # over $100
    requires_confirmation=False,  # set to False on purpose: the validator overrides it
)

print(confirmation_example.model_dump_json(indent=2))


# --------------------------------------------------------------
# Human-in-the-Loop with Prompt Chaining
# --------------------------------------------------------------


def run_with_confirmation(prompt: str) -> str:
    # Step 1: Analyze request and create action plan
    response = client.responses.parse(
        model="gpt-4o",
        instructions=(
            "You are a banking assistant. Analyze the user request and create a list of actions. "
            "Use 'check_balance' for balance inquiries, 'transfer' for money transfers, "
            "'deposit' for adding money to the account. "
            "Set requires_confirmation=True for transfers over $100. "
            "Extract to_account and amount for transfers, amount for deposits."
        ),
        temperature=0,
        input=prompt,
        text_format=ActionPlan,
    )

    plan = response.output_parsed
    assert plan is not None, "model returned no parsable ActionPlan"
    print(plan.model_dump_json(indent=2))
    results = []

    # Step 2: Execute each action (router)
    for action in plan.actions:
        if action.action_type == "check_balance":
            results.append(get_balance())

        elif action.action_type == "transfer":
            # Human-in-the-loop: Check if confirmation is needed
            if action.requires_confirmation:
                print("\n⚠️  Approval Required")
                print(f"Transfer ${action.amount:.2f} to {action.to_account}")

                if input("\nApprove? (y/n): ").strip().lower() != "y":
                    results.append(
                        f"Transfer of ${action.amount:.2f} to {action.to_account} cancelled by user."
                    )
                    continue

            result = transfer_money(action.to_account, action.amount)
            results.append(result)

        elif action.action_type == "deposit":
            result = deposit_money(action.amount)
            results.append(result)

    return "\n".join(results)


# --------------------------------------------------------------
# Demo
# --------------------------------------------------------------

if __name__ == "__main__":
    print("=" * 60)
    print("Human-in-the-Loop: Structured Output with Prompt Chaining")
    print("=" * 60)

    # Multiple actions - check balance and small transfer
    print("\n--- Check Balance + Small Transfer ($50) ---")
    result = run_with_confirmation("Check my balance and transfer $50 to Alice")
    print(f"\nResult:\n{result}")

    # Deposit money
    print("\n--- Deposit ---")
    result = run_with_confirmation("Deposit $200 into my account")
    print(f"\nResult:\n{result}")

    # Large transfer - requires approval
    print("\n--- Large Transfer ($500) ---")
    result = run_with_confirmation("Transfer $500 to Bob for rent")
    print(f"\nResult:\n{result}")
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action.

Look for output contracts and validation
Look for the exact execution call
Look for what the product could expose to the user
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Read the implementation summary.
Step through the user and system states.
Inspect the source code with the highlighted doctrine decisions in mind.
Sandbox: Inspectable flow with visible system boundaries
Interaction walkthrough

Use the sandbox to step through the user-visible experience, the system work behind it, and the doctrine choice the example is making.

UX explanation

The sandbox explains what the user should see, what the system is doing, and where control or inspectability must remain explicit.

AI design explanation

The page turns raw source into a product-facing pattern: what the model is allowed to decide, what the product should expose, and where deterministic code or review should take over.
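
The split the page describes, where the model proposes and deterministic code decides, can be sketched as follows (the function name here is illustrative, not from the source):

```python
# The model's only job is to produce a structured proposal; the policy
# that gates execution is plain Python the model cannot talk its way past.
def route(action_type: str, amount: float) -> str:
    if action_type == "transfer" and amount > 100:
        return "needs_approval"   # human checkpoint
    return "auto_execute"         # deterministic fast path

print(route("transfer", 500))  # needs_approval
print(route("deposit", 500))   # auto_execute
```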

Interaction walkthrough

  1. Read the implementation summary.
  2. Step through the user and system states.
  3. Inspect the source code with the highlighted doctrine decisions in mind.

Prompt

Draft first → Human checkpoint

Draft output

The draft appears here before any final action is taken.

Approval checkpoint

Approval only becomes available after the system exposes a draft.

Why approval is a product pattern

  • Establish trust through inspectability
  • Make hand-offs, approvals, and blockers explicit
  • Optimise for steering, not only initiating
Used in courses and paths

This example currently stands on its own in the library, but it still connects to the principle system and the broader example family.

Related principles

Runtime architecture

Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows