Handbook and doctrine

Learn the principles behind trustworthy agentic AI

Each principle page explains what the principle means, why it matters, which failures it prevents, and where to go next if you are fixing trust, visibility, orchestration, or approval design.

Key Facts

Principles: 10
Use this for: shared vocabulary, design review, failure diagnosis, and certification preparation
Clusters: delegation, visibility, trust, orchestration
Linked examples: 96 repo examples available for cross-linking
Each page includes: definition, rationale, implications, risks, and implementation evidence

In this section

From agent demos to runtime discipline

A capable model is not a runtime architecture. If agents are going to trigger workflows, load files, use tools, delegate work, and act across channels, the runtime needs clear patterns for control, visibility, and recovery. This cluster helps teams design those patterns deliberately.

Browse by cluster

Each cluster is a canonical entity home — a machine-readable hub that groups related principles and links to all implementation examples.

Principle 1 (delegation)
Design for delegation rather than direct manipulation

Design experiences around the assignment of work, the expression of intent, the setting of constraints, and the review of results, rather than requiring users to execute each step manually.

In agentic systems, value is created when users can define the desired outcome and rely on the system to carry out appropriate actions within agreed limits. The interface should therefore support delegation as a first-class interaction model.
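Delegation as a first-class interaction model can be sketched in code: the user supplies an outcome and agreed limits, and the runtime checks proposed actions against those limits rather than asking the user to execute each step. All names here are illustrative assumptions, not a prescribed API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a delegation carries intent plus constraints,
# not a sequence of manual steps. Names are illustrative only.
@dataclass
class Delegation:
    intent: str                                       # the outcome the user wants
    constraints: dict = field(default_factory=dict)   # agreed limits, e.g. spend
    requires_review: bool = True                      # results are reviewed, not auto-accepted

def within_constraints(delegation: Delegation, proposed_action: dict) -> bool:
    """Reject any proposed action that exceeds an agreed limit."""
    limit = delegation.constraints.get("max_spend")
    cost = proposed_action.get("cost", 0)
    return limit is None or cost <= limit

task = Delegation(intent="Summarise Q3 incident reports",
                  constraints={"max_spend": 10})
```

The point of the sketch is the shape of the contract: intent and limits travel together, and review of results is the default rather than an afterthought.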

Principle 2 (visibility)
Ensure that background work remains perceptible

When the system is operating asynchronously or outside the user’s immediate focus, it should provide persistent and proportionate signals that work is continuing.

Users lose confidence when delegated activity becomes invisible. Trust in autonomous systems depends in part on the user’s ability to understand that progress is being made, even when they are not actively observing the process.

Principle 3 (visibility)
Align feedback with the user’s level of attention

The system should calibrate the depth and frequency of feedback according to whether the user is actively engaged, passively monitoring, or temporarily absent.

Not all moments require the same degree of visibility. Some activities require detailed understanding, while others require only reassurance and exception handling. Effective systems distinguish between these modes.
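One way to make this distinction concrete is a feedback policy keyed by attention mode. This is a minimal sketch under assumed mode names ("engaged", "monitoring", "absent"); the values are illustrative, not recommended defaults.

```python
# Illustrative policy: choose feedback depth and cadence from the user's
# current attention mode rather than emitting one fixed level of detail.
FEEDBACK_POLICY = {
    "engaged":    {"detail": "full",            "interval_s": 5},    # step-by-step updates
    "monitoring": {"detail": "summary",         "interval_s": 60},   # periodic reassurance
    "absent":     {"detail": "exceptions_only", "interval_s": None}, # interrupt only on failure
}

def feedback_for(attention_mode: str) -> dict:
    # Unknown states fall back to the middle mode as a safe default.
    return FEEDBACK_POLICY.get(attention_mode, FEEDBACK_POLICY["monitoring"])
```

The fallback choice is itself a design decision: a system that cannot classify attention should degrade to reassurance, not to silence or to full verbosity.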

Principle 4 (trust)
Apply progressive disclosure to system agency

Provide the minimum information necessary by default, while enabling users to inspect additional detail when confidence, understanding, or intervention is required.

Different users, and the same user in different contexts, require different levels of transparency. The default experience should remain clear and efficient, while deeper inspection should remain available when justified.
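Progressive disclosure can be modelled as layered detail on a result: the default render shows only the summary, and deeper layers appear on explicit request. The result structure and invoice names below are invented for illustration.

```python
# Hedged sketch: a result carries layered detail; depth 0 is the default view.
RESULT = {
    "summary": "3 invoices reconciled, 1 flagged",
    "actions": ["matched INV-101", "matched INV-102",
                "matched INV-103", "flagged INV-104"],
    "evidence": {"INV-104": "amount mismatch: 120.00 vs 210.00"},
}

def render(result: dict, depth: int = 0) -> dict:
    """depth 0 = summary only; 1 adds actions; 2 adds supporting evidence."""
    layers = ["summary", "actions", "evidence"]
    return {k: result[k] for k in layers[: depth + 1]}
```

The layering keeps the default experience clear while leaving inspection one deliberate step away, which is the trade the principle describes.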

Principle 5 (delegation)
Replace implied magic with clear mental models

The product should help users understand what the system can do, what it is currently doing, what it cannot do, and what conditions govern its behaviour.

Trust is strengthened when users can form accurate expectations. Systems that appear intelligent but remain poorly bounded create confusion, misuse, and misplaced reliance.

Principle 6 (visibility)
Expose meaningful operational state, not internal complexity

Present the state of the system in language and structures that are relevant to the user, rather than exposing low-level internals that do not support action or understanding.

Users need to understand operational truth, but not necessarily implementation detail. Good design translates machine activity into meaningful human-facing status.
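A small translation layer illustrates the idea: internal events keep their machine vocabulary, and a mapping turns them into human-facing status lines. The event types and wording are assumptions for the sketch.

```python
# Assumed internal event shapes; the mapping translates low-level machine
# activity into status language that supports user understanding.
def user_facing_status(event: dict) -> str:
    translations = {
        "http_retry": "Re-trying a slow connection",
        "tool_call":  "Looking up {target}",
        "queue_wait": "Waiting for an available worker",
    }
    template = translations.get(event["type"], "Working")  # never leak raw internals
    return template.format(**event.get("args", {}))

status = user_facing_status({"type": "tool_call",
                             "args": {"target": "the billing records"}})
```

Note the fallback: an unrecognised internal event degrades to a generic but truthful status rather than surfacing implementation detail the user cannot act on.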

Principle 7 (trust)
Establish trust through inspectability

Users should be able to examine how a result was produced when confidence, accountability, or decision quality is important.

Trust is not established through assertion alone. It is established by enabling proportionate verification. Particularly in high-impact contexts, users must be able to inspect evidence, actions, and changes.

Principle 8 (trust)
Make hand-offs, approvals, and blockers explicit

When the system cannot proceed, the reason should be immediately visible, along with any action required from the user or another dependency.

In agentic systems, failure frequently arises not from incorrect reasoning but from unclear responsibility. Users must know when a task is paused, why it is paused, and what will resume progress.
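The principle implies a data shape: a paused task should carry its reason, its owner, and the concrete action that resumes progress. This is a sketch under assumed names, not a prescribed schema.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch: when execution pauses, the blocker records who is responsible
# and what action will resume progress. All field names are illustrative.
@dataclass
class Blocker:
    reason: str           # why the task is paused
    owner: str            # "user", "approver", or an external dependency
    resume_action: str    # the concrete step that unblocks the task

@dataclass
class Task:
    name: str
    blocker: Optional[Blocker] = None

    def status_line(self) -> str:
        if self.blocker is None:
            return f"{self.name}: running"
        b = self.blocker
        return (f"{self.name}: paused ({b.reason}); "
                f"waiting on {b.owner} to {b.resume_action}")

task = Task("Migrate mailbox",
            Blocker("missing admin consent", "user",
                    "grant access in the admin console"))
```

Because the blocker is structured data rather than a log line, the interface can surface it persistently and route it to whoever owns the next action.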

Principle 9 (orchestration)
Represent delegated work as a system, not merely as a conversation

Where work involves multiple steps, agents, dependencies, or concurrent activities, it should be represented as a structured system rather than solely as a message stream.

A conversational log is not always an appropriate representation for operational complexity. Users need mechanisms to understand relationships, concurrency, and progression across delegated tasks.
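Representing work as a structured system rather than a message stream can be as simple as a dependency graph that makes concurrency and progression queryable. The class and task names below are illustrative assumptions.

```python
from collections import defaultdict

# Minimal sketch: delegated work as a dependency graph, so relationships,
# concurrency, and progression are first-class queries, not chat history.
class WorkGraph:
    def __init__(self):
        self.deps = defaultdict(set)   # task -> prerequisite tasks
        self.done = set()

    def add(self, task, after=()):
        self.deps[task].update(after)

    def complete(self, task):
        self.done.add(task)

    def runnable(self):
        """Tasks whose prerequisites are complete; these may run concurrently."""
        return {t for t, pre in self.deps.items()
                if t not in self.done and pre <= self.done}

g = WorkGraph()
g.add("draft")
g.add("review", after=["draft"])
g.add("translate", after=["draft"])
g.add("publish", after=["review", "translate"])
g.complete("draft")
```

After "draft" completes, the graph answers the questions a message thread cannot: "review" and "translate" can proceed in parallel, and "publish" is visibly gated on both.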

Principle 10 (delegation)
Optimise for steering, not only initiating

The system should support users not only in starting tasks, but also in guiding, refining, reprioritising, and correcting work while it is underway.

Prompting is an initiation mechanism. It is not, by itself, a sufficient control model for complex or consequential work. Users require the ability to steer ongoing activity without restarting the entire process.
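A steering-capable task accepts mid-flight commands without restarting. This sketch uses invented command names ("refine", "reprioritise", "correct") purely to show the control surface the principle calls for.

```python
# Hedged sketch: a running task accepts steering commands without being
# restarted. Command names and fields are assumptions for illustration.
class SteerableTask:
    def __init__(self, goal):
        self.goal = goal
        self.priority = "normal"
        self.guidance = []     # mid-flight refinements, applied in order
        self.restarts = 0      # steering never increments this

    def steer(self, command, value):
        if command == "refine":
            self.guidance.append(value)          # adjust direction, keep progress
        elif command == "reprioritise":
            self.priority = value
        elif command == "correct":
            self.guidance.append(f"undo: {value}")

task = SteerableTask("compile the release notes")
task.steer("refine", "group changes by component")
task.steer("reprioritise", "high")
```

The invariant worth noticing is that every steering path preserves accumulated progress; only an explicit cancel-and-restart, absent here by design, would discard it.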

Common failure modes

These are the recurring failures the doctrine is designed to prevent. Each failure maps to one or more principles in the grid above.

AI as interface embellishment: a conventional product is given a text input and labelled intelligent, without any meaningful change in operational model.
Simulated autonomy: the system appears autonomous in language or presentation but cannot act with meaningful independence.
Opaque execution: work occurs in the background without adequate status, accountability, or recoverability.
Excessive operational exposure: the default experience presents unnecessary internal detail, creating cognitive burden without increasing trust.
Conversation as the only coordination model: all activity is forced into a message thread, even where structured orchestration would be more appropriate.
Silent dependency failure: the system is waiting for user input, access, or approval, but this is not made visible in time.
No steering model: users can start work, but they cannot meaningfully guide or correct it once execution begins.

How should a principle page help a practitioner?

It should clarify the design judgment, expose the failure signature, and connect the principle to evidence. The page should move a team from doctrine to action, not just restate terminology.

How should a principle page help someone continue?

It should turn doctrine into navigation. Each page should connect the principle to examples, courses, labs, and assessment criteria so the user can move from concept to implementation and review.