# 3 The Governance Model: Policy as Code

Governance in an AI-ready ecosystem cannot remain a manual compliance exercise. The sheer volume of automated transactions renders traditional "web-form-and-paper-based" governance obsolete. When software agents interact at machine speed, human oversight cannot effectively occur during the transaction. It must be embedded into the platform architecture itself.

### 3.1 Operationalizing ISO 42001

While frameworks such as the NIST AI RMF provide a necessary vocabulary for identifying risk, we suggest prioritizing ISO 42001 for operationalization because it functions as a certifiable management system. This distinction is critical for implementation. ISO 42001 allows government architects to translate abstract governance controls into concrete technical requirements.

* **Traditional Approach:** A PDF policy stating "Data must be retained for 5 years."
* **AI-Ready Approach:** A Policy-as-Code rule (e.g., in Open Policy Agent) enforced at the API gateway that rejects delete requests before the retention period expires.
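The contrast can be made concrete with a minimal sketch of a gateway-side retention check. This is illustrative Python rather than OPA's Rego language, and the five-year window and decision shape are assumptions for the example:

```python
from datetime import datetime, timedelta

RETENTION_PERIOD = timedelta(days=5 * 365)  # illustrative 5-year retention window

def evaluate_delete_request(record_created_at: datetime, now: datetime) -> dict:
    """Policy check at the API gateway: reject deletes inside the retention window."""
    expires_at = record_created_at + RETENTION_PERIOD
    if now < expires_at:
        return {"allow": False, "reason": f"retention in force until {expires_at.date()}"}
    return {"allow": True, "reason": "retention period elapsed"}

# A delete attempted two years after record creation is rejected automatically,
# with no human interpretation of the policy document required:
decision = evaluate_delete_request(datetime(2023, 1, 1), datetime(2025, 1, 1))
print(decision["allow"])  # False
```

The point is that the rule executes on every request; it is not a statement awaiting a compliance review.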

This transition hinges on Policy-as-Code: moving from static documents that rely on human interpretation to executable rules that enforce themselves within the infrastructure.

### 3.2 Risk-Based Classification System

We propose a three-tier risk classification that determines the technical controls required for AI decision automation:

| Tier                    | Risk Level  | Examples                                    | Required Controls                                                                                                                                                   |
| ----------------------- | ----------- | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Tier 1: Informational   | Low Risk    | Public transit schedules, library catalogs. | <p>Standard security: TLS, basic access logging.</p><p>Autonomy: High. Agents can synthesize answers freely.</p>                                                    |
| Tier 2: Transactional   | Medium Risk | Address changes, vehicle renewal.           | <p>Strong authentication (AAL2): MFA for the delegating user.</p><p>Idempotency: Must handle retry storms safely.</p>                                               |
| Tier 3: Decision-Making | High Risk   | Welfare grants, visa approvals, tax audits. | <p>Explainability: Metadata must include the logic trace/policy ID.</p><p>Human-in-the-Loop: Mandatory "break point" for human review before commit.</p>            |

### 3.3 Identity, Delegation and Mandates

In an ecosystem populated by autonomous software, the traditional concept of a "logged in" session is fundamentally insufficient for the "Internet of Agents". When a machine acts on behalf of a human, simple authentication creates a security gap because it fails to capture the nuance of intent: an agent presenting valid credentials may still act outside what the human actually authorized. Therefore, the system must manage delegation via granular context tokens that explicitly define the parameters of the relationship.

To ensure safety and privacy, these tokens must structurally bind four specific constraints:

1. **The Principal:** The citizen who owns the data.
2. **The Delegate:** The AI agent/Digital Twin.
3. **The Scope:** Specific permissions (e.g., "Read Tax History").
4. **The Validity Period:** A strict time window.

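The four constraints can be sketched as a token structure whose authorization check fails unless delegate, scope, and time window all match. This minimal Python example uses assumed names (`DelegationToken`, the `tax:read` scope string); a real deployment would issue these as signed tokens:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DelegationToken:
    principal: str          # the citizen who owns the data
    delegate: str           # the agent/Digital Twin acting on their behalf
    scope: frozenset[str]   # explicit permissions, e.g. {"tax:read"}
    not_before: datetime
    expires_at: datetime    # authority lapses automatically after this instant

    def authorizes(self, delegate: str, permission: str, now: datetime) -> bool:
        """All four constraints must hold simultaneously."""
        return (
            delegate == self.delegate
            and permission in self.scope
            and self.not_before <= now < self.expires_at
        )

token = DelegationToken(
    principal="citizen:alice",
    delegate="agent:tax-assistant",
    scope=frozenset({"tax:read"}),
    not_before=datetime(2025, 1, 1, tzinfo=timezone.utc),
    expires_at=datetime(2025, 2, 1, tzinfo=timezone.utc),
)
now = datetime(2025, 1, 15, tzinfo=timezone.utc)
print(token.authorizes("agent:tax-assistant", "tax:read", now))   # True
print(token.authorizes("agent:tax-assistant", "tax:write", now))  # False
```

Because expiry is part of the token itself, revocation-by-default is structural: an agent's authority ends without anyone remembering to switch it off.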

For highly sensitive interactions, we cannot rely on temporary data streams. Instead, architectures should implement secure message rooms. These function as virtual, auditable spaces where every exchange between the Citizen, the Agent and the human Officer is cryptographically signed. This mechanism creates a non-repudiable record of the transaction, ensuring that if a dispute arises later, the exact sequence of instructions and actions can be mathematically verified.
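A minimal sketch of such a room, assuming an append-only log where each message is signed and chained to the previous entry. HMAC with shared keys stands in here for the asymmetric signatures a production system would use, and all class and field names are illustrative:

```python
import hashlib
import hmac
import json

class MessageRoom:
    """Append-only room: each message is signed and chained to the previous
    entry, so the exact sequence of exchanges can be verified after the fact."""

    def __init__(self):
        self.log = []

    def post(self, sender: str, body: str, sender_key: bytes) -> None:
        prev_digest = self.log[-1]["digest"] if self.log else ""
        payload = json.dumps(
            {"sender": sender, "body": body, "prev": prev_digest}, sort_keys=True
        )
        digest = hmac.new(sender_key, payload.encode(), hashlib.sha256).hexdigest()
        self.log.append({"sender": sender, "payload": payload, "digest": digest})

    def verify(self, keys: dict[str, bytes]) -> bool:
        prev = ""
        for entry in self.log:
            payload = json.loads(entry["payload"])
            if payload["prev"] != prev:  # sequence reordered or truncated
                return False
            expected = hmac.new(
                keys[payload["sender"]], entry["payload"].encode(), hashlib.sha256
            ).hexdigest()
            if not hmac.compare_digest(expected, entry["digest"]):
                return False
            prev = entry["digest"]
        return True

keys = {"citizen": b"k1", "agent": b"k2", "officer": b"k3"}
room = MessageRoom()
room.post("citizen", "Authorize vehicle renewal", keys["citizen"])
room.post("agent", "Submitting renewal form", keys["agent"])
print(room.verify(keys))  # True

# Tampering with any earlier message breaks verification:
room.log[0]["payload"] = room.log[0]["payload"].replace("renewal", "transfer")
print(room.verify(keys))  # False
```

Chaining each entry to its predecessor is what makes the record usable in a dispute: neither party can later insert, drop, or alter a message without the verification failing.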
