Candidate · epistemic-safety

Intent Specification Requirement

Lineage
DCFB Core · Constitutional AI
Organisations deploying AI systems must be able to demonstrate, not merely assert, that the system's optimisation target corresponds to their stated organisational intent. This requires live monitoring, interpretability access, and a documented theory of how alignment is maintained over time, not just at the point of deployment.
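The requirement above could be operationalised as a simple deployment gate. This is a minimal sketch, assuming a record of evidence per deployment; the field names, types, and gate logic are illustrative assumptions, not part of the clause:

```python
# Hypothetical deployment-gate sketch: the clause requires that alignment
# be demonstrated, not merely asserted. Field names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeploymentEvidence:
    stated_intent: str                      # the organisation's documented intent
    optimisation_target: str                # what the system actually optimises
    live_monitoring: bool                   # monitoring is running, not merely planned
    interpretability_access: bool           # inspection tooling is actually available
    alignment_theory_doc: Optional[str]     # documented theory of how alignment is maintained

def passes_intent_gate(e: DeploymentEvidence) -> bool:
    """True only when alignment is demonstrated with live evidence,
    not asserted at the point of deployment alone."""
    return (
        e.live_monitoring
        and e.interpretability_access
        and e.alignment_theory_doc is not None
    )
```

A mere assertion of intent (no monitoring, no documented theory) fails the gate, which is the distinction the clause draws.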

Origin

Synthesised from analysis of the intent gap in AI deployment contexts. Related to CIR Dimension 1: Intent Specification.

Applicability

Applies to organisations deploying AI systems in production contexts with consequential outputs.

Known objections

  • Live monitoring may not yet be technically feasible for all model types.
  • Interpretability access is not uniformly available across vendors.

Crystallization threshold

60% of supporting content must reach the crystallized state
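The threshold above can be sketched as a count over supporting content. The 60% cutoff comes from the clause; representing supporting content as a list of state strings, and the function name, are assumptions:

```python
# Sketch of the crystallization-threshold check. The 0.60 cutoff is from
# the clause; the "crystallized" state label is an illustrative assumption.
def meets_crystallization_threshold(states: list[str], cutoff: float = 0.60) -> bool:
    """True when at least `cutoff` of supporting content is crystallized."""
    if not states:
        return False  # no supporting content cannot satisfy the threshold
    crystallized = sum(1 for s in states if s == "crystallized")
    return crystallized / len(states) >= cutoff
```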
#intent #alignment #monitoring #deployment #governance
