
Function Calling

An advanced, runnable script example using openai and python-dotenv.

Key Facts

Level: advanced
Runtime: Python • OpenAI API
Pattern: Inspectable flow with visible system boundaries
Interaction: Live sandbox • Script
Updated: 14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Function Calling -> User request -> System execution -> Reviewable output -> Expose meaningful operational state, not internal complexity -> Establish trust through inspectability

Start: Function Calling
Checkpoint: User request
Outcome: System execution

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

Visual flow • Real source • Sandbox or walkthrough • MCP access
How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Inspectable flow with visible system boundaries

  • Expose meaningful operational state, not internal complexity (sketched below)
  • Establish trust through inspectability
  • Make hand-offs, approvals, and blockers explicit
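As a concrete illustration of the first principle, here is a minimal sketch of exposing operational state: the product renders each tool call as one human-readable line instead of streaming raw API payloads. The render_tool_event helper is hypothetical, not part of this example's source; the field access mirrors the OpenAI tool-call objects used in the listing below.

import json

# Hypothetical helper: turn a raw tool call into user-facing operational
# state, hiding internal complexity such as raw JSON argument strings.
def render_tool_event(tool_call) -> str:
    """Summarize a tool call for the user without exposing raw payloads."""
    args = json.loads(tool_call.function.arguments)
    pretty_args = ", ".join(f"{k}={v!r}" for k, v in args.items())
    return f"Calling tool {tool_call.function.name}({pretty_args})"

# Shown in a trace panel this reads, e.g., "Calling tool add(a=25, b=17)".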
Source references

Library entry: mcp-crash-course-5-mcp-vs-function-calling-function-calling
Source path: content/example-library/sources/mcp/crash-course/5-mcp-vs-function-calling/function-calling.py
Libraries: openai, python-dotenv
Runtime requirements: OPENAI_API_KEY (a preflight check is sketched below)
Related principles: Expose meaningful operational state, not internal complexity; Establish trust through inspectability; Make hand-offs, approvals, and blockers explicit; Represent delegated work as a system, not merely as a conversation
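The runtime requirement maps to one environment variable, which the source loads via load_dotenv("../.env"). A minimal preflight check, sketched here as an assumption rather than something the source actually does:

import os

from dotenv import load_dotenv

# Fail fast if the required key is missing, instead of erroring mid-call.
load_dotenv("../.env")
if not os.environ.get("OPENAI_API_KEY"):
    raise SystemExit("OPENAI_API_KEY is missing; add it to ../.env")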

function-calling.py

import json

import openai
from dotenv import load_dotenv
from tools import add

load_dotenv("../.env")

"""
This is a simple example to demonstrate that MCP simply enables a new way to call functions.
"""

# Define tools for the model
tools = [
    {
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers together",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "integer", "description": "First number"},
                    "b": {"type": "integer", "description": "Second number"},
                },
                "required": ["a", "b"],
            },
        },
    }
]


# Call LLM
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Calculate 25 + 17"}],
    tools=tools,
)

# Handle tool calls
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    tool_name = tool_call.function.name
    tool_args = json.loads(tool_call.function.arguments)

    # Execute directly
    result = add(**tool_args)

    # Send result back to model
    final_response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": "Calculate 25 + 17"},
            response.choices[0].message,
            {"role": "tool", "tool_call_id": tool_call.id, "content": str(result)},
        ],
    )
    print(final_response.choices[0].message.content)
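The tools module imported at the top of the listing is not shown on this page. A minimal version consistent with the "add" schema declared in the source would look like this; the real module in the repository may differ:

# tools.py — minimal sketch matching the declared "add" schema.
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b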
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action. In particular:

  • Look for output contracts and validation (a validation sketch follows this list)
  • Look for the exact execution call
  • Look for what the product could expose to the user
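The source executes the model's arguments directly; a stricter output contract would validate them against the declared parameter schema first. A sketch, assuming the third-party jsonschema package and a hypothetical checked_call helper, neither of which appears in the source:

import json

from jsonschema import ValidationError, validate

def checked_call(tool_call, registry: dict, schemas: dict):
    """Validate model-supplied arguments before executing the tool."""
    args = json.loads(tool_call.function.arguments)
    schema = schemas[tool_call.function.name]  # the "parameters" object
    try:
        validate(instance=args, schema=schema)
    except ValidationError as err:
        # A blocked call is operational state worth surfacing to the user.
        raise RuntimeError(f"Rejected {tool_call.function.name}: {err.message}")
    return registry[tool_call.function.name](**args)

# Usage against the schema defined in the source:
# schemas = {"add": tools[0]["function"]["parameters"]}
# registry = {"add": add}
# result = checked_call(tool_call, registry, schemas)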
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

1. Read the implementation summary.
2. Step through the user and system states.
3. Inspect the source code with the highlighted doctrine decisions in mind.
Sandbox: Inspectable flow with visible system boundaries
Interaction walkthrough

Use the sandbox to step through the user-visible experience, the system work behind it, and the doctrine choice the example is making.

UX explanation

The sandbox explains what the user should see, what the system is doing, and where control or inspectability must remain explicit.

AI design explanation

The page turns raw source into a product-facing pattern: what the model is allowed to decide, what the product should expose, and where deterministic code or review should take over.


User request

I was charged twice on Feb 1st for my subscription. Please fix this.

Allowed tools only • Agent chooses order • Structured resolution

Tool trace

The trace appears as the agent decides which tool to call next.

Resolution

The final output should summarize what the agent did, not leave the action implicit.
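Here is a sketch of the traced loop a sandbox like this could sit on: the model picks the next tool from an allowed set, every call is appended to a user-visible trace, and the run ends with an explicit summary rather than an implicit side effect. The loop shape and the billing scenario are illustrative, not taken from the source:

import json

import openai

def run_agent(user_request: str, tools: list, registry: dict) -> str:
    messages = [{"role": "user", "content": user_request}]
    trace = []  # user-visible record of what the agent actually did

    while True:
        response = openai.chat.completions.create(
            model="gpt-4o", messages=messages, tools=tools
        )
        message = response.choices[0].message
        if not message.tool_calls:
            break  # the model is done choosing tools
        messages.append(message)
        for tool_call in message.tool_calls:
            args = json.loads(tool_call.function.arguments)
            result = registry[tool_call.function.name](**args)
            trace.append(f"{tool_call.function.name}({args}) -> {result}")
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": str(result),
            })

    # Explicit resolution: summarize the actions, don't leave them implicit.
    return (message.content or "") + "\n\nActions taken:\n" + "\n".join(trace)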

Autonomy boundary

  • Expose meaningful operational state, not internal complexity
  • Establish trust through inspectability
  • Make hand-offs, approvals, and blockers explicit (an approval-gate sketch follows this list)
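The third boundary can be as small as a confirmation gate in front of irreversible tools. A sketch assuming a console surface; the IRREVERSIBLE set and the refund_charge tool name are hypothetical, and a real product would render this in its UI:

IRREVERSIBLE = {"refund_charge"}  # tools that must never run unattended

def execute_with_approval(tool_name: str, args: dict, registry: dict):
    """Run a tool, pausing for explicit human approval when required."""
    if tool_name in IRREVERSIBLE:
        answer = input(f"Approve {tool_name} with {args}? [y/N] ")
        if answer.strip().lower() != "y":
            return "Blocked: awaiting human approval"  # explicit blocker state
    return registry[tool_name](**args)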
Used in courses and paths

This example currently stands on its own in the library, but it still connects to the principle system and the broader example family.

Related principles (Runtime architecture)

  • Define triggers, context, and boundaries before increasing autonomy
  • Make control, observability, and recovery explicit in the runtime
  • Choose the right operational patterns before delegating to workflows

Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.
