Research
Frameworks, essays, and theoretical architecture from 18 months of active research across distributed cognition, constitutional AI governance, and the design of intelligent institutional systems.
AURORA: Architecture for Unified Relational-Ontological Reasoning and Agency
A theoretical framework for building AI systems where ethics is architectural, not aspirational, and non-coercion is constituted at the level of operational logic.
The Eigenform: What Recursive Cognition Generates
Von Foerster's eigenform -- the stable shape that a recursive operation generates when applied to itself -- as the formal object that the RSPS architecture detects and tracks.
Governance Theater: The Failure Mode Nobody Names
When AI governance systems produce the appearance of oversight without its substance -- and why this failure mode is structurally predictable, not accidental.
Maternal Architecture as Structural Attractor
Why reliable intelligence -- biological, artificial, or institutional -- converges toward relational, recursive, and sovereign structures. A theoretical account of the maternal architecture as attractor.
The R0–R3 Reversibility Classification
A practical system for classifying the irreversibility of AI-assisted actions before they execute -- the minimum viable governance primitive for anyone deploying AI in high-stakes contexts.
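A minimal sketch of how such a pre-execution gate might look in code. The tier semantics below are illustrative assumptions for this example, not the essay's actual definitions, and the function and threshold names are hypothetical:

```python
from enum import IntEnum

class Reversibility(IntEnum):
    """Hypothetical reading of the R0-R3 tiers (illustrative, not canonical)."""
    R0 = 0  # fully reversible, e.g. a draft edit in a sandbox
    R1 = 1  # reversible with effort, e.g. a config change with a backup
    R2 = 2  # partially irreversible, e.g. an email sent to an external party
    R3 = 3  # irreversible, e.g. funds transferred or data destroyed

def requires_human_approval(action: Reversibility,
                            threshold: Reversibility = Reversibility.R2) -> bool:
    """Gate an AI-assisted action before it executes: anything at or above
    the threshold tier is escalated to a human rather than run directly."""
    return action >= threshold

# An R3 action is always escalated; an R0 action can proceed unattended.
print(requires_human_approval(Reversibility.R3))  # True
print(requires_human_approval(Reversibility.R0))  # False
```

The point of the sketch is that the classification happens before execution: the gate consumes only the action's assigned tier, so it can sit in front of any tool-use pipeline.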
RSPS: The Recursive Sovereign Project Space
A multi-model cognitive architecture in which intelligence emerges from the field between a human sovereign and a structured ensemble of AI instruments -- not from any single node.
The Bainbridge Warning
A framework for understanding institutional AI failure -- named for the structural pattern in which high capability and low governance combine to produce catastrophic outcomes that were entirely predictable in retrospect.
Trust = Irreversibility Residue
A theoretical account of trust in human-AI systems -- the claim that trust is not a feeling but a structural property, measured by the degree of irreversibility that has accumulated through repeated interaction.
DCFB: Distributed Cognition as Foundational Behavior
A theoretical framework for understanding how intelligence distributes across human-AI systems -- and why the unit of cognitive analysis must shift from the individual to the field.