Example • script • intermediate • Runnable • schema-validation

Validation: Ensures LLM outputs match predefined data schemas.

This component provides schema validation and structured data parsing to guarantee consistent data formats for downstream code.

Key Facts

Level
intermediate • Agent Building Blocks
Runtime
Python • OpenAI API
Pattern
Structured extraction with explicit acceptance criteria
Interaction
Live sandbox • Script
Updated
14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Validation (ensures LLM outputs match predefined data schemas) → Validate structured output → Free-form input → Schema parse → Safe handoff → Natural-language input, structured output

Trigger

Validation: ensures LLM outputs match predefined data schemas

Runtime

Validate structured output

Outcome

Free-form input

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

Visual flow • Real source • Sandbox or walkthrough • MCP access
How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Structured extraction with explicit acceptance criteria
Natural-language input, structured output
Validation creates a reviewable contract
Downstream code can act on the result safely
Source references
Library entry
agents-building-blocks-4-validation
Source path
content/example-library/sources/agents/building-blocks/4-validation.py
Libraries
openai, pydantic, requests
Runtime requirements
OPENAI_API_KEY
Related principles
Design for delegation rather than direct manipulation, Replace implied magic with clear mental models, Establish trust through inspectability, Make hand-offs, approvals, and blockers explicit, Optimise for steering, not only initiating

4-validation.py

python
"""
Validation: Ensures LLM outputs match predefined data schemas.
This component provides schema validation and structured data parsing to guarantee consistent data formats for downstream code.

More info: https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses
"""

from openai import OpenAI
from pydantic import BaseModel


class TaskResult(BaseModel):
    """
    More info: https://docs.pydantic.dev
    """

    task: str
    completed: bool
    priority: int


def structured_intelligence(prompt: str) -> TaskResult:
    client = OpenAI()
    response = client.responses.parse(
        model="gpt-4o",
        input=[
            {
                "role": "system",
                "content": "Extract task information from the user input.",
            },
            {"role": "user", "content": prompt},
        ],
        text_format=TaskResult,
    )
    return response.output_parsed


if __name__ == "__main__":
    result = structured_intelligence(
        "I need to complete the project presentation by Friday, it's high priority"
    )
    print("Structured Output:")
    print(result.model_dump_json(indent=2))
    print(f"Extracted task: {result.task}")
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action.

class TaskResult(BaseModel):
response = client.responses.parse(
text_format=TaskResult
return response.output_parsed
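
To see what the schema boundary actually enforces, here is a minimal stdlib-only sketch of the same contract check. It deliberately avoids pydantic and the OpenAI client; the `validate_task_result` helper is hypothetical and not part of the source file:

```python
import json

# Hypothetical stdlib-only sketch of the contract that TaskResult
# enforces in the real source (which uses pydantic instead).
EXPECTED_FIELDS = {"task": str, "completed": bool, "priority": int}


def validate_task_result(raw: str) -> dict:
    """Parse a JSON string and check it against the TaskResult shape."""
    data = json.loads(raw)
    for name, expected_type in EXPECTED_FIELDS.items():
        if name not in data:
            raise ValueError(f"missing field: {name}")
        if not isinstance(data[name], expected_type):
            raise ValueError(f"wrong type for {name}")
    return data


# A well-formed model response passes; a malformed one is rejected
# before any downstream code ever sees it.
ok = validate_task_result(
    '{"task": "finish slides", "completed": false, "priority": 1}'
)
```

The point of the sketch is the boundary itself: everything downstream of `validate_task_result` can assume the three fields exist with the right types.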
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Write a natural-language task description.
Parse it into a schema-bound object.
Inspect the structured result and whether the output is valid enough for downstream use.
Sandbox • Structured extraction with explicit acceptance criteria
Schema-bound extraction and validation

This sandbox shows how validation turns a vague language model response into a structured contract that downstream code can safely depend on.

UX explanation

The user should not have to guess whether the system extracted the right fields. A validation layer lets the product expose the exact shape of the result and whether it is safe to use.

AI design explanation

The model is still doing interpretation, but schema parsing constrains how the result is expressed. That turns a fuzzy completion into something application code can route, store, and verify.
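
As a sketch of that constraint (the `TaskResult` schema from the source is redefined here so the block is self-contained; pydantic v2 API assumed), validating raw model text against the schema either yields a typed object or fails loudly:

```python
from pydantic import BaseModel, ValidationError


class TaskResult(BaseModel):
    task: str
    completed: bool
    priority: int


# A completion that matches the contract becomes a typed object...
good = TaskResult.model_validate_json(
    '{"task": "finish slides", "completed": false, "priority": 1}'
)

# ...while one that drifts from the contract raises instead of
# silently flowing into application code.
try:
    TaskResult.model_validate_json('{"task": "finish slides", "priority": "high"}')
    drifted = False
except ValidationError:
    drifted = True
```

Either outcome is explicit, which is what makes the result routable, storable, and verifiable.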

Interaction walkthrough

  1. Write a natural-language task description.
  2. Parse it into a schema-bound object.
  3. Inspect the structured result and whether the output is valid enough for downstream use.

Natural-language input

Pydantic schema • Structured parse

Schema contract

  • `task: str`
  • `completed: bool`
  • `priority: int`
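
The contract above can also be exported as machine-readable JSON Schema, which is the format structured-output APIs generally use to constrain a model's response. In pydantic v2 this is `model_json_schema()` (the schema class is redefined here for a self-contained sketch):

```python
from pydantic import BaseModel


class TaskResult(BaseModel):
    task: str
    completed: bool
    priority: int


# Export the contract as JSON Schema: each field maps to a JSON type,
# and all three fields are required.
schema = TaskResult.model_json_schema()
```

Inspecting `schema["properties"]` shows the same three-field contract listed above, expressed in a form that tooling outside Python can consume.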

Parsed result

The parsed object appears here once the schema-bound extraction runs.

What validation changes

  • Natural-language input, structured output
  • Validation creates a reviewable contract
  • Downstream code can act on the result safely
Used in courses and paths

Related principles

Runtime architecture

Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows