Example · script · intermediate · Runnable · human-approval

Feedback: Provides strategic points where human judgement is required.

This component implements approval workflows and human-in-the-loop processes for high-risk decisions or complex judgments.

Key Facts

Level
intermediate • Agent Building Blocks
Runtime
Python • OpenAI API
Pattern
Human-in-the-loop approval before final action
Interaction
Live sandbox • Script
Updated
14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Draft generation -> Pause for approval -> Approval checkpoint -> Final state

  • Start: Feedback, strategic points where human judgement is required
  • Checkpoint: Pause for approval
  • Outcome: Draft generation (draft first, decision second)

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

Visual flow · Real source · Sandbox or walkthrough · MCP access
How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Human-in-the-loop approval before final action
Draft first, decision second
Approval is a visible workflow state
Human judgment remains explicit where risk warrants it
Source references
Library entry
agents-building-blocks-7-feedback
Source path
content/example-library/sources/agents/building-blocks/7-feedback.py
Libraries
openai, requests
Runtime requirements
OPENAI_API_KEY
Related principles
  • Design for delegation rather than direct manipulation
  • Replace implied magic with clear mental models
  • Establish trust through inspectability
  • Make hand-offs, approvals, and blockers explicit
  • Optimise for steering, not only initiating

7-feedback.py

python
"""
Feedback: Provides strategic points where human judgement is required.
This component implements approval workflows and human-in-the-loop processes for high-risk decisions or complex judgments.
"""

from openai import OpenAI


def get_human_approval(content: str) -> bool:
    """Show the draft and block until a human makes an explicit decision."""
    print(f"Generated content:\n{content}\n")
    response = input("Approve this? (y/n): ")
    return response.lower().startswith("y")


def intelligence_with_human_feedback(prompt: str) -> None:
    """Generate a draft, then pause for approval before any final action."""
    client = OpenAI()

    # Draft first: the model produces content, but nothing is committed yet.
    response = client.responses.create(model="gpt-4o", input=prompt)
    draft_response = response.output_text

    # Decision second: a human gates the transition to the final state.
    if get_human_approval(draft_response):
        print("Final answer approved")
    else:
        print("Answer not approved")


if __name__ == "__main__":
    intelligence_with_human_feedback("Write a short poem about technology")
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action.

def get_human_approval(content: str) -> bool:
draft_response = response.output_text
if get_human_approval(draft_response):
print("Answer not approved")
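The highlighted lines mark the boundary where a draft becomes a committed outcome. One way to study that boundary without a live model is to inject the generation and approval steps as callables; this is a minimal, dependency-free sketch (the `intelligence_with_feedback` signature and the stand-in callables are illustrative, not part of the library):

```python
from typing import Callable, Dict


def intelligence_with_feedback(
    generate: Callable[[str], str],
    approve: Callable[[str], bool],
    prompt: str,
) -> Dict[str, str]:
    # Draft first: produce content, but commit nothing yet.
    draft = generate(prompt)
    # Decision second: an explicit judgement gates the final state.
    status = "approved" if approve(draft) else "rejected"
    return {"draft": draft, "status": status}
```

With stubs in place of the model and the human, both branches of the checkpoint can be exercised deterministically in a test.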
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Generate a draft response.
Inspect the content before it is finalized.
Approve or reject the draft and observe the resulting workflow state.
Sandbox: Human-in-the-loop approval before final action
Human approval as a first-class interaction

This sandbox makes feedback visible as a product step: the system drafts work, a person reviews it, and the outcome remains inspectable instead of buried in hidden policy.

UX explanation

When the cost of being wrong is non-trivial, the experience should make approval an intentional part of the flow. The user needs to see the draft, the decision point, and the resulting state.

AI design explanation

Feedback is not merely a fallback. It is a designed checkpoint where model output pauses before commitment, allowing judgment, correction, and accountability to remain explicit.
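One way to keep that checkpoint explicit in code is to model approval as a named workflow state rather than a boolean buried in control flow. A sketch under that assumption (the `DraftState` names and `decide` helper are illustrative, not from the source file):

```python
from enum import Enum


class DraftState(Enum):
    DRAFTED = "drafted"      # model output exists, nothing committed
    APPROVED = "approved"    # a human explicitly accepted the draft
    REJECTED = "rejected"    # a human explicitly declined the draft


def decide(state: DraftState, approved: bool) -> DraftState:
    # A decision is only meaningful while a draft is awaiting review.
    if state is not DraftState.DRAFTED:
        raise ValueError("decision requires a draft awaiting review")
    return DraftState.APPROVED if approved else DraftState.REJECTED
```

Because every state is named, logs and UI can show exactly where a piece of work sits, which is what makes the checkpoint inspectable rather than implicit.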

Interaction walkthrough

  1. Generate a draft response.
  2. Inspect the content before it is finalized.
  3. Approve or reject the draft and observe the resulting workflow state.

Prompt

Draft first · Human checkpoint

Draft output

The draft appears here before any final action is taken.

Approval checkpoint

Approval only becomes available after the system exposes a draft.
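That ordering can be enforced in logic as well as in the interface. A minimal sketch, assuming a hypothetical `approval_enabled` guard that the UI layer would consult:

```python
from typing import Optional


def approval_enabled(draft: Optional[str]) -> bool:
    # The approval control stays locked until a non-empty draft exists to review.
    return draft is not None and draft.strip() != ""
```

Gating on the draft's presence guarantees the reviewer always sees concrete content before being asked to decide.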

Why approval is a product pattern

  • Draft first, decision second
  • Approval is a visible workflow state
  • Human judgment remains explicit where risk warrants it
Used in courses and paths

Related principles

Runtime architecture

Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows