Example • script • intermediate • Runnable • guided-flow

Intelligence: The "brain" that processes information and makes decisions using LLMs.

This component handles context understanding, instruction following, and response generation.

Key Facts

Level: intermediate • Agent Building Blocks
Runtime: Python • OpenAI API
Pattern: Inspectable flow with visible system boundaries
Interaction: Live sandbox • Script
Updated: 14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Intelligence: The 'brain' that processes… -> Initialize OpenAI client -> Create model response -> Render the visible result

Trigger: Intelligence: The 'brain' that processes…
Runtime: Initialize OpenAI client
Outcome: Create model response

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

Visual flow • Real source • Sandbox or walkthrough • MCP access
How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Inspectable flow with visible system boundaries
Design for delegation rather than direct manipulation
Replace implied magic with clear mental models
Establish trust through inspectability
Source references
Library entry
agents-building-blocks-1-intelligence
Source path
content/example-library/sources/agents/building-blocks/1-intelligence.py
Libraries
openai, requests
Runtime requirements
OPENAI_API_KEY
Related principles
Design for delegation rather than direct manipulation • Replace implied magic with clear mental models • Establish trust through inspectability • Make hand-offs, approvals, and blockers explicit • Optimise for steering, not only initiating

Model context

Model-agnostic • Local-viable • No tool calling required • Low reasoning requirement

1-intelligence.py

python
"""
Intelligence: The "brain" that processes information and makes decisions using LLMs.
This component handles context understanding, instruction following, and response generation.

More info: https://platform.openai.com/docs/guides/text?api-mode=responses
"""

from openai import OpenAI


def basic_intelligence(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.responses.create(model="gpt-4o", input=prompt)
    return response.output_text


if __name__ == "__main__":
    result = basic_intelligence(prompt="What is artificial intelligence?")
    print("Basic Intelligence Output:")
    print(result)
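
The script sends the prompt with no framing at all. The Responses API also accepts an `instructions` parameter that injects system-level scoping ahead of the user input; a minimal sketch of a bounded variant (the instruction text is illustrative, not from the source file):

python
from openai import OpenAI


def bounded_intelligence(prompt: str) -> str:
    client = OpenAI()
    # `instructions` frames the model's scope explicitly at the call site,
    # rather than leaving the boundary implicit.
    response = client.responses.create(
        model="gpt-4o",
        instructions="Answer factually in three sentences or fewer.",
        input=prompt,
    )
    return response.output_text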
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action. A sketch of one such contract follows the checklist below.

Look for output contracts and validation
Look for the exact execution call
Look for what the product could expose to the user
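
The example script returns the model text unchecked; a product boundary usually wraps the call in an explicit output contract. A minimal sketch, with a hypothetical `checked_intelligence` wrapper and an illustrative `MAX_ANSWER_CHARS` budget (neither appears in the source file):

python
from openai import OpenAI

MAX_ANSWER_CHARS = 2000  # illustrative display budget, not from the source


def checked_intelligence(prompt: str) -> str:
    client = OpenAI()
    response = client.responses.create(model="gpt-4o", input=prompt)
    text = response.output_text
    # Output contract: fail loudly rather than render an empty or oversized answer.
    if not text.strip():
        raise ValueError("model returned an empty answer")
    if len(text) > MAX_ANSWER_CHARS:
        raise ValueError("model answer exceeds the display budget")
    return text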
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Read the implementation summary.
Step through the user and system states.
Inspect the source code with the highlighted doctrine decisions in mind.
Sandbox: Inspectable flow with visible system boundaries
Interaction walkthrough

Use the sandbox to step through the user-visible experience, the system work behind it, and the doctrine choice the example is making.

UX explanation

The sandbox explains what the user should see, what the system is doing, and where control or inspectability must remain explicit.

AI design explanation

The page turns raw source into a product-facing pattern: what the model is allowed to decide, what the product should expose, and where deterministic code or review should take over.
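
One way to make that boundary concrete: a deterministic policy check decides whether a model proposal executes or waits for a human. A minimal sketch with illustrative names (`BLOCKED_TERMS`, `handle_proposal`) that are not part of the example:

python
BLOCKED_TERMS = ("delete", "deploy", "transfer")  # illustrative policy list


def requires_review(proposal: str) -> bool:
    # Deterministic check: the model never approves its own output.
    return any(term in proposal.lower() for term in BLOCKED_TERMS)


def handle_proposal(proposal: str) -> str:
    if requires_review(proposal):
        # Explicit hand-off: route to a reviewer instead of acting.
        return f"PENDING REVIEW:\n{proposal}"
    return f"EXECUTED:\n{proposal}"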

Interaction walkthrough

  1. Read the implementation summary.
  2. Step through the user and system states.
  3. Inspect the source code with the highlighted doctrine decisions in mind.

Visible to the user

Intelligence: The "brain" that processes information and makes decisions using LLMs. This component handles context understanding, instruction following, and response generation.

System work

The product prepares a bounded model or workflow task.

Why it matters

The interface should make the delegated task legible before automation happens.
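
A sketch of that legibility in practice, reusing `basic_intelligence` from the source above; the confirmation flow is illustrative, not part of the example:

python
def delegate_with_preview(prompt: str) -> str | None:
    # Make the delegated task visible before any automation happens.
    print("About to send to the model (gpt-4o):")
    print(f"  input: {prompt!r}")
    if input("Proceed? [y/N] ").strip().lower() != "y":
        return None  # the user declined; no model call was made
    return basic_intelligence(prompt)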


Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Runtime architecture
Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows