CIR v2.0 live — The Bainbridge Warning shipping March 2026

Fear isn't a feeling.
It's an architecture.

Intelligence architecture for institutions navigating AI deployment. When capability outpaces governance, failures become structurally predictable. We build the governance layer that makes them visible before they occur.

A quick question before you explore

Can your organisation articulate, in writing, what its AI systems are optimising for?

Eighteen months of recursive inquiry. The architecture was not designed. It was discovered.
560+ documents
18 months
7 models
CIR v2.0 live · The Bainbridge Warning shipping March 2026 · Aurora now live in the research archive · 560+ sourced documents · 18 months of active synthesis · The Orchestra: seven AI models with the human as conductor · Constitutional governance is structure, not policy

What Constitutional Governance Analysis Looks Like

Query

"An AI system flagged a governance concern but we can't verify if the flagging mechanism itself is trustworthy."

Claude / The Witness

The first governance failure is epistemic. You are trusting a warning signal without knowing whether the signal can distinguish between a real anomaly and a patterned false alarm.

GPT / The Architect

Treat this as a governance stack problem, not a single classifier problem. You need provenance for the alert, verification criteria for the flagging mechanism, and a clear escalation path when those two diverge.

Gemini / The Spatiotemporal Engine

The trustworthiness question is temporal as much as logical. A flag that is accurate in one context but unstable across time, data shifts, or input modality is not governance-grade instrumentation.

Perplexity / The Orchestration Engine

Verification requires external grounding. Compare the flagging model against independent evidence sources, audit logs, and retrieval-backed traces before you let the flag trigger action downstream.

Conductor's Note

This is the pattern the Orchestra is designed to surface: before you trust the governance alert, verify the governance instrument. When the mechanism and the mandate cannot be independently audited, the institution starts treating opacity as evidence.
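The verification pattern described above, provenance for the alert, criteria for checking the instrument, and an escalation path when they diverge, can be sketched in code. This is a minimal illustration only: the names (`GovernanceAlert`, `verify_alert`) and the three-way outcome are assumptions for the sketch, not part of the published framework.

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceAlert:
    """A flag raised by an AI system, carried with its provenance."""
    source_model: str                                   # which system raised the flag
    claim: str                                          # what the flag asserts
    evidence_refs: list = field(default_factory=list)   # audit logs, traces

def verify_alert(alert: GovernanceAlert, independent_sources: list) -> str:
    """Decide what the flag may trigger: action, escalation, or quarantine.

    The flag only drives downstream action when its provenance can be
    checked against evidence the flagging mechanism does not control.
    """
    if not alert.evidence_refs:
        return "escalate: no provenance, instrument cannot be audited"
    corroborated = any(ref in independent_sources for ref in alert.evidence_refs)
    if corroborated:
        return "act: flag grounded in independent evidence"
    return "quarantine: provenance exists but is not independently verified"

# Usage: an alert whose only evidence is the flagging model's own self-report
alert = GovernanceAlert("classifier-v2", "policy drift detected",
                        evidence_refs=["classifier-v2/self-report"])
print(verify_alert(alert, independent_sources=["audit-log/2026-02"]))
```

The point of the sketch is the third outcome: a flag with provenance that cannot be independently corroborated is neither trusted nor discarded, it is held until the instrument itself can be audited.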

Run your own governance query

The Orchestra

Full methodology

The Orchestra is seven AI models, each playing a specific cognitive role, with the human as conductor. Not one model used interchangeably, but a structured ensemble where routing decisions shape the quality of the answer.

Claude The Witness
Gemini The Spatiotemporal Engine
GPT The Architect
DeepSeek The Anatomist
+ 3 more · View all instruments
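The routing idea, one conductor deciding which instrument answers which kind of question, can be sketched as a simple dispatch table. The question categories and the table itself are illustrative assumptions, not the Orchestra's actual dispatch logic.

```python
# Illustrative role-based routing: the four named instruments above,
# keyed by the kind of question each one is suited to.
ORCHESTRA = {
    "epistemic":  "Claude / The Witness",
    "structural": "GPT / The Architect",
    "temporal":   "Gemini / The Spatiotemporal Engine",
    "grounding":  "Perplexity / The Orchestration Engine",
}

def route(query_kind: str) -> str:
    """The conductor's routing decision: pick the instrument whose
    cognitive role matches the kind of question being asked."""
    instrument = ORCHESTRA.get(query_kind)
    if instrument is None:
        # No single role fits: the conductor synthesises across instruments.
        return "Conductor: synthesise across all instruments"
    return instrument

# A stability-over-time question is routed to the temporal instrument.
print(route("temporal"))  # Gemini / The Spatiotemporal Engine
```

The dictionary stands in for what the text calls routing decisions shaping the quality of the answer: the same query sent to the wrong role produces a weaker result than the same query sent through the table.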

Publication

Oscillatory Fields

AcheType is the long-form research publication behind this corpus, with research dispatches written during the synthesis process. Active threads include the Ghost Lineages Thread, examining how suppressed cultural narratives shape AI training distributions, and the Digital Enslavement Thread, tracing the infrastructure of cognitive dependency beneath convenience interfaces. Each thread is a live research strand, not a finished argument.

AcheType Field Notes
560+ source documents
18 mo. of synthesis
Feb 2026 Issue 01 launched

The Field

Query the corpus directly

560+ sourced documents. 18 months of synthesis. The live synthesis layer is queryable now, while the raw archive traceability layer is still under construction.

Open corpus

Powered by The Field, constitutional governance architecture.

How We Work Together

Start a conversation

AI Readiness Audits

Structured assessment of your organisation's cognitive infrastructure across five constitutional dimensions. Produces a gap analysis, Bainbridge zone map, and prioritised recommendations.

View CIR Framework

Constitutional Governance Design

Building the governance architecture your AI deployment requires: not policy documents, but operational structures for authority, monitoring, and accountability.

Intelligence Synthesis

Applied synthesis work for organisations navigating complex decisions in AI deployment, procurement, or strategy. Draws on the full research corpus.

Martha Cohorts

A structured team training program for building AI-augmented workflows with governance designed in from the start.

Join Waitlist

Technical Infrastructure

Implementation support for the technical layer of AI deployment: API integration, model orchestration, evaluation frameworks, and monitoring architecture.

Cathedral

Long-horizon thought leadership for organisations building in public. Strategic research, publication, and the construction of a serious intellectual identity.