
Retrieval

A runnable beginner example, written as a Python script using openai and pydantic.

Key Facts

Level: beginner
Runtime: Python • OpenAI API
Pattern: Context-backed research with explicit evidence
Interaction: Live sandbox • Script
Updated: 14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output:
Retrieval -> Retrieve relevant context -> User request -> System execution -> Reviewable output -> Design for delegation rather…

Start: Retrieval
Checkpoint: Retrieve relevant context
Outcome: User request

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

Visual flow · Real source · Sandbox or walkthrough · MCP access
How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Context-backed research with explicit evidence
Design for delegation rather than direct manipulation
Make hand-offs, approvals, and blockers explicit
Represent delegated work as a system, not merely as a conversation
Source references

Library entry: workflows-1-introduction-4-retrieval
Source path: content/example-library/sources/workflows/1-introduction/4-retrieval.py
Libraries: openai, pydantic
Runtime requirements: OPENAI_API_KEY
Related principles: Design for delegation rather than direct manipulation; Make hand-offs, approvals, and blockers explicit; Represent delegated work as a system, not merely as a conversation; Optimise for steering, not only initiating

4-retrieval.py

python
import json
import os

from openai import OpenAI
from pydantic import BaseModel, Field
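
# Fail fast if the API key is missing; otherwise the client only errors on
# the first request. (This guard is an addition for clarity, not part of the
# original script.)
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running.")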

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

"""
docs: https://platform.openai.com/docs/guides/function-calling
"""

# --------------------------------------------------------------
# Define the knowledge base retrieval tool
# --------------------------------------------------------------


def search_kb(question: str):
    """
    Load the whole knowledge base from the JSON file.
    (Mock for demonstration purposes; no real search is performed.)
    """
    with open("kb.json", "r") as f:
        return json.load(f)
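

# kb.json is not shipped with this page. This is a HYPOTHETICAL bootstrap so
# the script runs end to end; a real knowledge base would differ. The "id"
# field is what KBResponse.source refers to later.
if not os.path.exists("kb.json"):
    sample_records = [
        {"id": 1, "question": "What is the return policy?",
         "answer": "Items can be returned within 30 days of delivery."},
        {"id": 2, "question": "Do you ship internationally?",
         "answer": "Yes, to most countries; options are shown at checkout."},
    ]
    with open("kb.json", "w") as f:
        json.dump(sample_records, f, indent=2)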


# --------------------------------------------------------------
# Step 1: Call model with search_kb tool defined
# --------------------------------------------------------------

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_kb",
            "description": "Get the answer to the user's question from the knowledge base.",
            "parameters": {
                "type": "object",
                "properties": {
                    "question": {"type": "string"},
                },
                "required": ["question"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    }
]

system_prompt = "You are a helpful assistant that answers questions from the knowledge base about our e-commerce store."

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is the return policy?"},
]

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)

# --------------------------------------------------------------
# Step 2: Model decides to call function(s)
# --------------------------------------------------------------

print(completion.model_dump())  # inspect the raw response, including tool_calls

# --------------------------------------------------------------
# Step 3: Execute search_kb function
# --------------------------------------------------------------


def call_function(name, args):
    if name == "search_kb":
        return search_kb(**args)
    raise ValueError(f"Unknown tool requested by the model: {name}")


# Append the assistant message (with its tool calls) once, then append one
# tool result message per call. Appending it inside the loop would duplicate
# it whenever the model makes multiple tool calls.
messages.append(completion.choices[0].message)

for tool_call in completion.choices[0].message.tool_calls:
    name = tool_call.function.name
    args = json.loads(tool_call.function.arguments)

    result = call_function(name, args)
    messages.append(
        {"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)}
    )

# --------------------------------------------------------------
# Step 4: Supply result and call model again
# --------------------------------------------------------------


class KBResponse(BaseModel):
    answer: str = Field(description="The answer to the user's question.")
    source: int = Field(description="The record id of the answer.")


completion_2 = client.beta.chat.completions.parse(
    model="gpt-4o",
    messages=messages,
    tools=tools,
    response_format=KBResponse,
)

# --------------------------------------------------------------
# Step 5: Check model response
# --------------------------------------------------------------

final_response = completion_2.choices[0].message.parsed
print(final_response.answer)
print(final_response.source)
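
# HYPOTHETICAL evidence check, assuming each KB record is a dict with an "id"
# field (as KBResponse.source implies). It verifies the cited record was
# actually retrieved, rather than trusting the citation.
retrieved_ids = {record.get("id") for record in result}
if final_response.source not in retrieved_ids:
    print(f"Warning: cited source {final_response.source} was not in the retrieved records.")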

# --------------------------------------------------------------
# Question that doesn't trigger the tool
# --------------------------------------------------------------

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is the weather in Tokyo?"},
]

completion_3 = client.beta.chat.completions.parse(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)

print(completion_3.choices[0].message.content)  # no tool call here; the model replies directly

What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action.

Look for output contracts and validation (see the sketch after this list)
Look for the exact execution call
Look for what the product could expose to the user
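
For the first item, here is a minimal sketch of an output-contract check, assuming the KBResponse model defined in the source above (model_validate_json is the pydantic v2 spelling); the failure handling is illustrative, not part of the example:

python
from pydantic import ValidationError

raw = '{"answer": "Returns are accepted within 30 days.", "source": 1}'
try:
    brief = KBResponse.model_validate_json(raw)  # enforce the output contract
except ValidationError as err:
    print(err)  # a malformed model response fails here instead of reaching the user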
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Sandbox: Context-backed research with explicit evidence
Research brief lab

This sandbox shows how a search or retrieval request should expose query planning, retrieved context, and the final answer.

UX explanation

The user should not only see a final answer. The product should reveal what was searched, what context shaped the response, and where the system boundary stops.

AI design explanation

These examples combine retrieval, web search, or file context with a synthesis step. The best surface exposes the search plan, the evidence that shaped the answer, and a reviewable output.
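
One way to make that surface concrete, sketched here as an assumption rather than the library's actual schema, is a single payload the product can render panel by panel:

python
from pydantic import BaseModel, Field

class ResearchBrief(BaseModel):
    """Hypothetical payload; field names are illustrative, not the library's schema."""

    search_plan: list[str] = Field(description="Queries the system decided to run.")
    evidence: list[str] = Field(description="Retrieved passages that shaped the answer.")
    answer: str = Field(description="The final, reviewable brief.")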

Interaction walkthrough

  1. Enter a question or load a sample query.
  2. Run the search or retrieval step.
  3. Review the final brief with sources and retrieved context.

Sandbox panels: Research question · Search plan · Retrieved context · Final brief. The plan and brief areas are placeholders until a query is run; the final brief appears alongside the context used.

Used in courses and paths

This example currently stands on its own in the library, but it still connects to the principle system and the broader example family.


Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Runtime architecture

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows