
Memory: Stores and retrieves relevant information across interactions.

This component maintains conversation history and context to enable coherent multi-turn interactions.

Key Facts

Level
intermediate • Agent Building Blocks
Runtime
Python • OpenAI API
Pattern
Memory-aware assistance with legible context
Interaction
Live sandbox • Script
Updated
14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Memory component -> Store reusable memory -> User request -> System execution -> Reviewable output -> Design for delegation rather than direct manipulation

Trigger: Memory stores and retrieves relevant information across interactions.

Runtime: Store reusable memory

Outcome: User request

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Memory-aware assistance with legible context
Design for delegation rather than direct manipulation
Replace implied magic with clear mental models
Establish trust through inspectability
Source references
Library entry
agents-building-blocks-2-memory
Source path
content/example-library/sources/agents/building-blocks/2-memory.py
Libraries
openai, requests
Runtime requirements
OPENAI_API_KEY
Related principles
Design for delegation rather than direct manipulation, Replace implied magic with clear mental models, Establish trust through inspectability, Make hand-offs, approvals, and blockers explicit, Optimise for steering, not only initiating

2-memory.py
"""
Memory: Stores and retrieves relevant information across interactions.
This component maintains conversation history and context to enable coherent multi-turn interactions.

More info: https://platform.openai.com/docs/guides/conversation-state?api-mode=responses
"""

from openai import OpenAI

client = OpenAI()


def ask_joke_without_memory():
    response = client.responses.create(
        model="gpt-4o-mini",
        input=[
            {"role": "user", "content": "Tell me a joke about programming"},
        ],
    )
    return response.output_text


def ask_followup_without_memory():
    response = client.responses.create(
        model="gpt-4o-mini",
        input=[
            {"role": "user", "content": "What was my previous question?"},
        ],
    )
    return response.output_text


def ask_followup_with_memory(joke_response: str):
    response = client.responses.create(
        model="gpt-4o-mini",
        input=[
            {"role": "user", "content": "Tell me a joke about programming"},
            {"role": "assistant", "content": joke_response},
            {"role": "user", "content": "What was my previous question?"},
        ],
    )
    return response.output_text


if __name__ == "__main__":
    # First: Ask for a joke
    joke_response = ask_joke_without_memory()
    print(joke_response, "\n")

    # Second: Ask follow-up without memory (AI will be confused)
    confused_response = ask_followup_without_memory()
    print(confused_response, "\n")

    # Third: Ask follow-up with memory (AI will remember)
    memory_response = ask_followup_with_memory(joke_response)
    print(memory_response)
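The pattern in 2-memory.py can be factored into a reusable wrapper. A minimal sketch, assuming the same Responses API shape; the `ConversationMemory` name is illustrative (not part of the source file), and the client is injected so the history mechanics are testable without a live API key:

```python
from typing import Any


class ConversationMemory:
    """Replays accumulated turns so each call sees the full conversation."""

    def __init__(self, client: Any, model: str = "gpt-4o-mini"):
        self.client = client
        self.model = model
        self.history: list[dict] = []

    def ask(self, user_message: str) -> str:
        # Append the new user turn and send the whole history as input.
        self.history.append({"role": "user", "content": user_message})
        response = self.client.responses.create(model=self.model, input=self.history)
        # Store the assistant reply so the next turn can reference it.
        self.history.append({"role": "assistant", "content": response.output_text})
        return response.output_text
```

With a real client (`ConversationMemory(OpenAI())`), a follow-up like "What was my previous question?" becomes answerable, because both earlier turns travel with the request.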
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action.

Look for output contracts and validation
Look for the exact execution call
Look for what the product could expose to the user
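As a concrete instance of an output contract (not present in 2-memory.py, which returns raw text), a product could guard the reply before surfacing it. This is a hypothetical sketch, with `checked_reply` and its bounds chosen for illustration:

```python
def checked_reply(raw: str, max_chars: int = 2000) -> str:
    """Minimal output contract: the reply must be non-empty and bounded."""
    text = raw.strip()
    if not text:
        # Fail loudly instead of showing the user a blank response.
        raise ValueError("model returned an empty reply")
    # Truncate rather than overflow the product surface.
    return text[:max_chars]
```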
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Sandbox: Memory-aware assistance with legible context
Memory lab

This experience makes it clear what the system remembers, what it retrieves, and how that context changes the reply.

UX explanation

Memory examples are only trustworthy when the user can understand which information was recalled and why it changes the response.

AI design explanation

Memory is not just persistence. It is a product choice about what to store, what to recall, and how to make the effect of context inspectable.
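That product framing can be made concrete. A toy sketch, where the names and the tag-overlap scoring are illustrative stand-ins for real retrieval, separating the store decision from the recall decision and keeping both inspectable:

```python
from dataclasses import dataclass, field


@dataclass
class MemoryItem:
    text: str
    tags: frozenset[str]


@dataclass
class MemoryStore:
    items: list[MemoryItem] = field(default_factory=list)

    def store(self, text: str, tags: set[str]) -> None:
        # What to store is an explicit product choice, made at the call site.
        self.items.append(MemoryItem(text, frozenset(tags)))

    def recall(self, query: str, limit: int = 3) -> list[MemoryItem]:
        # Toy relevance: count tag words that appear in the query.
        words = set(query.lower().split())
        scored = [(len(item.tags & words), item) for item in self.items]
        matched = [(s, i) for s, i in scored if s > 0]
        matched.sort(key=lambda pair: pair[0], reverse=True)
        # Returning the items themselves keeps the recall inspectable:
        # the UI can show exactly which memories shaped the reply.
        return [item for _, item in matched[:limit]]
```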

Interaction walkthrough

  1. Enter a message or load a memory scenario.
  2. Run the context recall step.
  3. Inspect both the recalled memory and the personalized reply.
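The walkthrough can be sketched as one function. Purely illustrative: the keyword match stands in for the lab's real recall step, and the returned fields mirror the panels the sandbox shows, so nothing about the recall is hidden:

```python
def memory_lab_step(message: str, stored_memories: list[str]) -> dict[str, str]:
    """Run recall for one message and surface every artefact for inspection."""
    words = set(message.lower().split())
    # Recall: keep any stored memory sharing a word with the message.
    recalled = [m for m in stored_memories if words & set(m.lower().split())]
    # Surface the recalled context and the suggested write alongside
    # the message, matching the sandbox's reviewable panels.
    return {
        "message": message,
        "recalled_context": "\n".join(recalled) or "(nothing recalled)",
        "memory_to_store": f"User said: {message}",
    }
```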

Message

Recalled memory · New memory

Recalled context

The recalled context appears here.

Memory to store

The suggested memory write appears here.

Visible reply

The personalized reply appears here.


Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows