Methodology

This page describes how the demo-to-production gap research was conducted, what evidence standards were applied, and what the research deliberately excluded. It exists because the doctrine asks the platform to apply the same scrutiny to its own claims as it asks of other systems.

Source

The evidence base was synthesised from a source document commissioned via OpenAI's deep-research feature, against a brief that defined scope, evidence standards, and exclusion rules. AI assisted with synthesis; sourcing, evidence standards, and editorial direction were human-set.

What counted as evidence

  • Primary source: the practitioner's own writing, talk, code, or issue
  • Named author with verifiable identity
  • Dated within the last 18 months where possible; older sources retained where seminal
  • Direct quote of three sentences or fewer, attributable
  • Quantified where the claim is quantitative
  • Linked URL that resolves
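
The inclusion criteria lend themselves to a simple record-and-check shape. The sketch below is illustrative only and assumes nothing about the actual review tooling; the record fields and the meets_bar check are invented for this example.

  from dataclasses import dataclass
  from datetime import date, timedelta

  # Hypothetical shape for one piece of evidence; field names are illustrative.
  @dataclass
  class Evidence:
      claim: str          # the specific claim the source supports
      author: str         # named author with a verifiable identity
      url: str            # linked URL that must resolve at review time
      published: date     # publication date of the source
      quote: str          # direct quote, three sentences or fewer
      is_primary: bool    # practitioner's own writing, talk, code, or issue
      is_seminal: bool = False  # older sources kept only where seminal

  def meets_bar(e: Evidence, today: date, window_months: int = 18) -> bool:
      # Dated within the window where possible; seminal sources are exempt.
      fresh = e.is_seminal or (today - e.published) <= timedelta(days=window_months * 30)
      return (
          e.is_primary
          and bool(e.author.strip())
          and e.url.startswith("http")
          and fresh
          and e.quote.count(".") <= 3  # crude stand-in for the three-sentence limit
      )

A check like this does not replace manual verification of URLs or quantified claims; it only illustrates that each criterion is concrete enough to record and test per source.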

What was excluded

  • Vendor landing-page marketing copy
  • Generic AI-hype thinkpieces with no operator detail
  • Industry-analyst reports with no methodology
  • Anything paywalled where the claim cannot be verified
  • AI-summarised content unless quoting a specific cited source

Open research items

Three frequently requested figures are flagged as unverified rather than filled with vendor folklore: the average time-to-abandon for stalled agent projects, the cost-overrun rate versus initial estimate, and a universal eval-to-production regression rate. The validator beta is open to teams whose case studies would help close these gaps. Public readiness reviews become public evidence.

Why this page exists

Most marketing-flavoured content does not publish its methodology. Doing so is the operational form of the platform's own doctrine: the discipline asked of others, applied here first.