
Search Handbook

A runnable intermediate example: a Python script using docling and openai.

Key Facts

Level
intermediate
Runtime
Python • OpenAI API
Pattern
Context-backed research with explicit evidence
Interaction
Live sandbox • Script
Updated
14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Search Handbook -> Retrieve relevant context -> User request -> System execution -> Reviewable output -> Apply progressive disclosure to system agency

Trigger

Search Handbook

Runtime

Retrieve relevant context

Outcome

User request

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Context-backed research with explicit evidence
Apply progressive disclosure to system agency
Expose meaningful operational state, not internal complexity
Establish trust through inspectability
Source references
Library entry
context-web-3-search-handbook
Source path
content/example-library/sources/context/web/3-search-handbook.py
Libraries
docling, openai, pydantic, python-dotenv
Runtime requirements
OPENAI_API_KEY
Related principles
Apply progressive disclosure to system agency, Expose meaningful operational state, not internal complexity, Establish trust through inspectability, Represent delegated work as a system, not merely as a conversation

3-search-handbook.py

python
# --------------------------------------------------------------
# Search Handbook with Dynamic Tool Calls
# --------------------------------------------------------------

import json
from pathlib import Path
from typing import List

from dotenv import load_dotenv
from openai import OpenAI
from pydantic import BaseModel

load_dotenv()  # read OPENAI_API_KEY from a local .env file

client = OpenAI()

MODEL = "gpt-4.1-nano"
HANDBOOK_PATH = Path(__file__).parent / "data" / "handbook.md"


# --------------------------------------------------------------
# Define the output models
# --------------------------------------------------------------


class Citation(BaseModel):
    text: str
    section: str


class HandbookAnswer(BaseModel):
    answer: str
    citations: List[Citation]


# --------------------------------------------------------------
# Handbook search function (called as a tool)
# --------------------------------------------------------------


def search_handbook(query: str) -> str:
    """Retrieve the handbook content for the agent to interpret.

    Note: The query parameter is accepted but not used - we return the full handbook.
    This simulates Retrieval Augmented Generation (RAG). In a real application with
    large handbooks or contexts, you would implement semantic search, filtering, or
    chunking to retrieve only relevant sections based on the query.

    Returns: The full handbook content as a string
    """
    if not HANDBOOK_PATH.exists():
        return "Handbook not found."

    handbook_content = HANDBOOK_PATH.read_text(encoding="utf-8")
    return handbook_content


# --------------------------------------------------------------
# Define the tool
# --------------------------------------------------------------

tools = [
    {
        "type": "function",
        "name": "search_handbook",
        "description": "Retrieve the AI implementation handbook content. Use this when the user asks questions about AI implementation requirements, regulations, or procedures. The handbook contains policies, regulations, and guidelines for Dutch government organizations.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The question or query - used for context, but the full handbook will be returned",
                },
            },
            "required": ["query"],
            "additionalProperties": False,
        },
        "strict": True,
    }
]


# --------------------------------------------------------------
# Agent function that uses tools dynamically
# --------------------------------------------------------------


def call_function(name: str, args: dict) -> str:
    if name == "search_handbook":
        return search_handbook(**args)
    raise ValueError(f"Unknown function: {name}")


def ask_agent(query: str) -> HandbookAnswer:
    """Ask the agent a question. It will decide whether to search the handbook."""
    input_messages = [{"role": "user", "content": query}]

    response = client.responses.create(
        model=MODEL,
        input=input_messages,
        tools=tools,
        instructions="You are a helpful assistant for Dutch government organizations. You can help answer questions about AI implementation policies and regulations by searching the handbook. If asked what you can do, simply explain your capabilities without searching the handbook.",
    )

    tool_calls_made = False
    # Append all output items in order to preserve reasoning relationships
    for output_item in response.output:
        # Append the output item first (includes reasoning if present)
        input_messages.append(output_item)

        if output_item.type == "function_call":
            tool_calls_made = True
            name = output_item.name
            args = json.loads(output_item.arguments)
            print(f"Tool called: {name}")
            result = call_function(name, args)
            print(f"Handbook retrieved ({len(result)} chars)")

            # Append function call output after the function call
            input_messages.append(
                {
                    "type": "function_call_output",
                    "call_id": output_item.call_id,
                    "output": result,
                }
            )

    if not tool_calls_made:
        print("No tool call needed, responding directly\n")
        # For direct responses, return structured output without citations
        direct_response = client.responses.parse(
            model=MODEL,
            input=input_messages,
            instructions="You are a helpful assistant for Dutch government organizations.",
            text_format=HandbookAnswer,
        )
        return direct_response.output[-1].content[-1].parsed

    final_response = client.responses.parse(
        model=MODEL,
        input=input_messages,
        tools=tools,
        instructions="You are a helpful assistant for Dutch government organizations. Use the handbook content that was retrieved to answer the user's question. Provide a clear, comprehensive answer. Include only the most important citations (2-4 maximum) that reference the primary sections where the key information comes from. Each citation should include a brief text excerpt and the section number (e.g., '2.1', '3.2'). Do not cite every detail - only cite the main sources.",
        text_format=HandbookAnswer,
    )

    return final_response.output[-1].content[-1].parsed


# --------------------------------------------------------------
# Example queries
# --------------------------------------------------------------

example_queries = [
    "What can you do?",
    "What are the requirements for registering an AI system in the Algorithm Register?",
    "Do I need to perform an IAMA for a chatbot that answers citizen questions?",
]

# Test with example queries
if __name__ == "__main__":
    for query in example_queries:
        print(f"\n{'=' * 60}")
        print(f"Query: {query}")
        print(f"{'=' * 60}\n")
        result = ask_agent(query)
        print(f"Answer: {result.answer}\n")
        if result.citations:
            print("Citations:")
            for citation in result.citations:
                print(f"  Section {citation.section}: {citation.text[:100]}...")
        print()
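The `search_handbook` docstring notes that a real application would retrieve only the relevant sections rather than returning the full handbook. As a minimal sketch of that idea (not part of the source above, and far simpler than a real semantic search), a keyword-overlap retriever could look like this; the function name and parameters are illustrative:

```python
def retrieve_chunks(query: str, text: str, chunk_size: int = 800, top_k: int = 3) -> str:
    """Naive retrieval sketch: split the text into fixed-size chunks and
    return the top_k chunks that share the most words with the query."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    query_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda chunk: len(query_words & set(chunk.lower().split())),
        reverse=True,
    )
    return "\n---\n".join(scored[:top_k])
```

Swapping this in for the full-document return inside `search_handbook` would keep the tool interface identical while shrinking the context the model has to read; a production system would replace the word-overlap score with embeddings or a proper search index.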
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action.

Look for output contracts and validation
Look for the exact execution call
Look for what the product could expose to the user
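The output contract in this example is the pair of Pydantic models passed to `responses.parse` as `text_format`. A small sketch of what that contract guarantees downstream (reusing the models from the source; the example payloads are invented):

```python
from typing import List

from pydantic import BaseModel, ValidationError


class Citation(BaseModel):
    text: str
    section: str


class HandbookAnswer(BaseModel):
    answer: str
    citations: List[Citation]


# A well-formed payload parses into a typed object the product can render.
answer = HandbookAnswer.model_validate({
    "answer": "Register the system before deployment.",
    "citations": [{"text": "All AI systems must be registered.", "section": "2.1"}],
})

# A payload that breaks the contract is rejected before it reaches the UI.
try:
    HandbookAnswer.model_validate({"answer": "No sources"})
    rejected = False
except ValidationError:
    rejected = True
```

Because the product only ever receives a validated `HandbookAnswer`, the rendering layer can rely on `citations` existing and being a list of `Citation` objects, never a free-form string it has to parse.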
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Sandbox: Context-backed research with explicit evidence
Research brief lab

This sandbox shows how a search or retrieval request should expose query planning, retrieved context, and the final answer.

UX explanation

The user should not only see a final answer. The product should reveal what was searched, what context shaped the response, and where the system boundary stops.

AI design explanation

These examples combine retrieval, web search, or file context with a synthesis step. The best surface exposes search plan, useful evidence, and a reviewable output.
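One way to make that surface concrete is to model the reviewable output as a record that carries the plan and evidence alongside the answer. This is a hypothetical sketch, not code from the example above; the `ResearchBrief` name and its fields are assumptions:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ResearchBrief:
    # Hypothetical surface model: everything the user can inspect.
    question: str
    search_plan: List[str]   # what the system decided to look up
    evidence: List[str]      # retrieved excerpts that shaped the answer
    answer: str

    def as_review_text(self) -> str:
        """Render the brief so plan, evidence, and answer are all visible."""
        lines = [f"Question: {self.question}", "Plan:"]
        lines += [f"  - {step}" for step in self.search_plan]
        lines += ["Evidence:"]
        lines += [f"  - {excerpt}" for excerpt in self.evidence]
        lines += [f"Answer: {self.answer}"]
        return "\n".join(lines)
```

The point of the structure is that the answer is never shown without the plan and evidence that produced it, which is exactly what the sandbox panels below demonstrate.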

Interaction walkthrough

  1. Enter a question or load a sample query.
  2. Run the search or retrieval step.
  3. Review the final brief with sources and retrieved context.

Research question

Search plan · Retrieved context

Plan

The search plan appears here.

Final brief

The final brief appears here alongside the context used.

Used in courses and paths

This example currently stands on its own in the library, but it still connects to the principle system and the broader example family.

Related principles

Runtime architecture

Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows