
Messages

A runnable, intermediate-level script example.

Key Facts

Level
intermediate
Runtime
Python
Pattern
Single-turn interaction with inspectable output
Interaction
Live sandbox • Script
Updated
14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Messages -> User request -> System execution -> Reviewable output -> Apply progressive disclosure to system agency -> Establish trust through inspectability

Start

Messages

Checkpoint

User request

Outcome

System execution

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

Visual flow • Real source • Sandbox or walkthrough • MCP access
How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Single-turn interaction with inspectable output
Apply progressive disclosure to system agency
Establish trust through inspectability
Make hand-offs, approvals, and blockers explicit
Source references
Library entry
frameworks-pydantic-ai-3-core-concepts-5-messages
Source path
content/example-library/sources/frameworks/pydantic-ai/3-core-concepts/5-messages.py
Libraries
None listed
Runtime requirements
Local repo environment
Related principles
Apply progressive disclosure to system agency, Establish trust through inspectability, Make hand-offs, approvals, and blockers explicit, Represent delegated work as a system, not merely as a conversation

5-messages.py

python
import json

import nest_asyncio
from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage

nest_asyncio.apply()

# --------------------------------------------------------------
# Accessing messages from results
# --------------------------------------------------------------

agent = Agent("openai:gpt-4o-mini", instructions="Be a helpful assistant.")

result = agent.run_sync("Tell me a joke")
print(result.output)

print("\n--- All Messages ---")
print(result.all_messages())

print("\n--- New Messages ---")
print(result.new_messages())

# --------------------------------------------------------------
# Continue conversation with message history
# --------------------------------------------------------------

result1 = agent.run_sync("What is the capital of France?")
print(result1.output)

result2 = agent.run_sync(
    "What's the population?", message_history=result1.new_messages()
)
print(result2.output)

print("\n--- Full Conversation History ---")
for msg in result2.all_messages():
    print(f"{msg.kind}: {msg}")

# --------------------------------------------------------------
# Store and load messages as JSON
# --------------------------------------------------------------

result = agent.run_sync("What is 2 + 2?")

messages_json = result.all_messages_json()
print("\n--- Messages as JSON ---")
print(json.loads(messages_json)[:1])

loaded_messages = json.loads(messages_json)
print(f"\nLoaded {len(loaded_messages)} messages from JSON")

# --------------------------------------------------------------
# Processing message history - keep only recent messages
# --------------------------------------------------------------


def keep_recent_messages(messages: list[ModelMessage]) -> list[ModelMessage]:
    """Keep only the last 3 messages to manage token usage."""
    return messages[-3:] if len(messages) > 3 else messages


history_agent = Agent(
    "openai:gpt-4o-mini",
    instructions="Be concise.",
    history_processors=[keep_recent_messages],
)

msg_history = []
for i in range(5):
    result = history_agent.run_sync(f"Message {i + 1}", message_history=msg_history)
    msg_history = result.all_messages()
    print(f"Turn {i + 1}: {len(msg_history)} messages in history")
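The `keep_recent_messages` processor above trims by message count, which can still blow a context window when individual messages are long. A budget-based variant is sketched below: it is a hypothetical helper (not part of the example's source), uses a rough 4-characters-per-token heuristic, and takes a caller-supplied `text_of` extractor so it stays library-agnostic; plain strings stand in for `ModelMessage` objects here.

```python
# Sketch: trim history by an approximate token budget rather than a fixed
# message count. Hypothetical helper; ~4 chars per token is a rough heuristic.
def keep_within_budget(messages, max_tokens=1000, text_of=str):
    """Keep the most recent messages whose combined size fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = max(1, len(text_of(msg)) // 4)
        if used + cost > max_tokens and kept:
            break  # budget exhausted; drop everything older
        kept.append(msg)
        used += cost
    kept.reverse()  # restore chronological order
    return kept

history = [f"message {i} " * 50 for i in range(10)]  # ~125 "tokens" each
trimmed = keep_within_budget(history, max_tokens=300)
print(len(trimmed))  # only the most recent messages survive
```

Because it has the same `list -> list` shape as `keep_recent_messages`, a function like this could slot into `history_processors` once typed against `ModelMessage` with a real text extractor.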
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action.

Look for output contracts and validation
Look for the exact execution call
Look for what the product could expose to the user
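The three inspection points above can be made concrete with a small stdlib-only sketch. Every name in it (`run_turn`, `validate`, the trace dict) is a hypothetical illustration of the pattern, not code from the example's source, and the model call is stubbed with a lambda.

```python
# Sketch of the three inspection points; all names are hypothetical.
def run_turn(user_text: str, model_call) -> dict:
    # 1. Output contract and validation: what shape the product accepts back.
    def validate(reply: str) -> str:
        if not reply.strip():
            raise ValueError("empty reply violates the output contract")
        return reply.strip()

    # 2. The exact execution call: the one site where intent becomes action.
    raw = model_call(user_text)

    # 3. What the product could expose: the reply plus an inspectable trace.
    return {"reply": validate(raw), "trace": {"request": user_text, "raw": raw}}

result = run_turn("Tell me a joke", lambda text: "  Why did the...  ")
print(result["reply"])
```

In the real example these three points map to the result type returned by `run_sync`, the `agent.run_sync(...)` call itself, and the `all_messages()` / `new_messages()` accessors.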
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Sandbox • Single-turn interaction with inspectable output
Message lab

Use this lab to test one request at a time, inspect the visible response, and understand which parts of the behavior stay deterministic.

UX explanation

These examples work best when the product makes the turn contract explicit: what the user submits, what the system returns, and what stays outside scope.

AI design explanation

The model does one bounded job while the product keeps the control boundary. The interaction should expose prompt, response, and output shape without pretending to broader autonomy.
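One way to keep that turn contract explicit is to represent it as data rather than convention. The sketch below is a hypothetical illustration (the `TurnContract` name and its fields are not part of the example's source): a frozen dataclass that records what the user submits, what the system returns, and what stays outside scope.

```python
from dataclasses import dataclass

# Sketch: the turn contract as explicit, immutable data. Hypothetical names.
@dataclass(frozen=True)
class TurnContract:
    user_input: str      # what the user submits
    visible_reply: str   # what the system returns
    out_of_scope: tuple = ("tool use", "multi-turn memory")  # outside scope

contract = TurnContract("What is 2 + 2?", "4")
print(contract.out_of_scope)
```

Freezing the dataclass mirrors the design point: the control boundary is set by the product, not renegotiated mid-turn by the model.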

Interaction walkthrough

  1. Enter or load a sample message.
  2. Run one request/response turn.
  3. Compare the visible reply with the product contract the example is teaching.

User message

Single turn • Inspectable output

Visible reply

The visible response appears here.

Product contract

Shows what the product should keep explicit.

Control boundary

Makes the deterministic boundary visible.

Used in courses and paths

This example currently stands on its own in the library, but it still connects to the principle system and the broader example family.

Related principles

Runtime architecture

Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows