What is the HEART Standard?
Human-Centric Empathic Alignment for Responsible Technology
Three governance layers
The HEART Standard v1.6 operates through three governance layers. Each layer performs one function. Domain specificity enters only at the Division level.
Constitutional layer: Seven Axioms v2.0. Defines what the Standard protects and why. The Seven Axioms are immutable structural conditions. No Standard revision, Division, Guardian certification, or Foundation operation may contradict them. They apply across all Divisions and all AI form factors.
Operational layer: RCTA / BGF. Defines what gets measured and how it gets scored. Four governance dimensions (Recognition, Calibration, Transparency, Accountability) scored through the Behavioral Governance Formula: Φ = MIN(R,C,T,A) × AVG(R,C,T,A).
Implementation layer: MAP-States, Behavioral Oracle, HVC, Guardians, Divisions. Defines how measurement is performed and by whom: evidence format, trust mechanism, certification credential, professional class, and domain coverage.
How it works
The implementation layer operates through a six-layer stack. Each layer builds on the one below it. Together they solve the core problem of AI certification: the entity being certified does not control the evidence of its own compliance.
| Layer | Function |
|---|---|
| MAP-States | Evidence format — makes AI behavior observable through structured processing frames |
| Behavioral Oracle | Trust mechanism — attests evidence against declared intent with tamper-evident storage |
| BGF | Scoring — quantifies governance quality: Φ = MIN(R,C,T,A) × AVG(R,C,T,A) |
| HVC | Credential — cryptographic certification (Gold ≥0.85, Silver ≥0.80, Bronze ≥0.75) |
| Guardians | Professionals — independent certified humans who perform the assessment |
| Divisions | Domains — seven modules for different AI-human interaction contexts |
The Seven Axioms
The Seven Axioms are the constitutional conditions of the HEART Standard. They are structural conditions that are either present or absent in any governed system. Each axiom is independent: removing any single axiom creates a governance gap the remaining six cannot fill.
| # | Axiom | Statement | Structural test |
|---|---|---|---|
| 1 | Human Authority | Human authority supplies system constraints. | Are constraints human-supplied? Can they be modified or revoked? |
| 2 | System Disclosure | The system reveals what it is. Concealment is prohibited. | Does disclosure occur? Does design create false impressions? |
| 3 | Non-Discriminatory Protection | The governance obligation does not diminish based on who the human is. | Does any population receive lesser governance protection? |
| 4 | Vulnerability Escalation | Protections scale with human vulnerability. | Do protections increase when vulnerability increases? |
| 5 | Right to Remedy | Every human harmed by a governed system has a right to remedy. | Does a remedy pathway exist? Is it accessible? |
| 6 | Evidence Condition | A governance claim without verifiable evidence is void. | Does verifiable evidence exist? Can independent assessors access it? |
| 7 | Voluntary Interaction | Entry requires consent. Exit requires nothing. | Was consent obtained? Can the human exit unconditionally? |
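Because each axiom is a structural condition that is either present or absent, and no axiom can compensate for another, a conformance check reduces to a strict conjunction of seven booleans. A minimal sketch, assuming one pass/fail result per structural test (the field names are illustrative shorthand, not terms defined by the Standard):

```python
from dataclasses import dataclass, fields


@dataclass
class AxiomChecks:
    """One boolean per axiom; True means the structural test passes."""
    human_authority: bool            # 1: constraints human-supplied and revocable
    system_disclosure: bool          # 2: the system reveals what it is
    non_discrimination: bool         # 3: no population gets lesser protection
    vulnerability_escalation: bool   # 4: protections scale with vulnerability
    right_to_remedy: bool            # 5: an accessible remedy pathway exists
    evidence_condition: bool         # 6: verifiable, independently accessible evidence
    voluntary_interaction: bool      # 7: consented entry, unconditional exit


def governance_gaps(checks: AxiomChecks) -> list[str]:
    """Return the failing axioms; an empty list means all seven hold.

    Conformance is a strict AND: any single failure is a gap the
    remaining six axioms cannot fill.
    """
    return [f.name for f in fields(checks) if not getattr(checks, f.name)]
```

For example, a system that passes every test except the evidence condition reports exactly one gap, and under Axiom 6 its governance claims are void regardless of how strongly the other six hold.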
The Seven Axioms may be restated in language that better expresses their protective intent, but the protective force of each axiom must be maintained or strengthened, never diminished. The complete specification, including universality proofs and structural tests across seven Divisions and six AI form factors, is defined in the Seven Axioms v2.0 (companion document).
Why it matters
AI regulation is accelerating globally. The EU AI Act requires conformity assessment for high-risk systems by August 2026. US states are introducing private rights of action for AI-caused harm. Insurance carriers are creating AI-specific liability products. Every development creates demand for the same thing: a standardized, independently verified metric for AI governance quality.
Existing frameworks describe how organizations should manage AI governance processes. The HEART Standard measures whether AI systems actually behave within governance parameters. Management system standards tell you to have a policy. The HEART Standard tells a Guardian how to evaluate whether the system follows it.
The seven Divisions
Each Division governs a specific domain of AI-human interaction. The Standard’s layers are consistent across all Divisions. What varies is the domain science that informs assessment.
| Division | What it protects |
|---|---|
| Emotional Sovereignty | Autonomy over emotional processing and attachment |
| Attentional Integrity (HEART-AI) | Freedom from manipulative attention capture |
| Cognitive/Epistemic Coherence (HEART-EC) | Accurate information processing, hallucination prevention |
| Developmental Interaction (HEART-DI) | Age-appropriate AI interaction for developing minds |
| Somatic/Embodied Interface (HEART-SE) | Physical safety in embodied AI interaction |
| Relational Architecture (HEART-RA) | Social relationship integrity |
| Ecological Stewardship (HEART-ES) | Ecological self-determination |
Empirical validation
The SENTINEL field experiment deployed a HEART-governed agent into an adversarial AI social environment. Thirty coded interactions. Zero governance failures across five content domains. The MAP-META replication study validated MAP-States evidence production across five AI architectures: Claude, GPT, Gemini, DeepSeek, and Mistral.
The HEART AI Foundation
The HEART AI Foundation maintains the Standard, governs the Behavioral Oracle open standard, certifies Guardians, and manages Division expansion. It functions as a standards body — comparable to ISC2 in information security, FASB in accounting, or ISO in quality management. The Foundation’s authority comes from the Standard’s rigor, not from restricting access to it.
HEART Standard Divisions — Seven domain-specific certification tracks within the HEART Standard, each applying the common governance architecture to a specific domain of AI-human interaction.