The Seven Pillars: Mapping Universal AI Governance Principles Across Six Frameworks
After mapping six major regulatory frameworks, seven universal principles emerge — here's how they connect, where they diverge, and what this means for Singapore financial institutions.
The Seven Pillars
After spending months dissecting regulatory documents from Singapore, the EU, the US, and the UK, I’ve identified seven core principles that every major AI governance framework converges on — albeit with different emphasis and granularity.
This mapping reveals something important: despite jurisdictional differences in regulatory philosophy (prescriptive vs. principles-based vs. voluntary), there is a remarkably consistent skeleton of expectations for governing AI in financial services.
Why This Matters
If you’re a Line 2 risk professional in a Singapore bank, you’re likely grappling with multiple overlapping frameworks: MAS FEAT, the new MAS AI Risk Management consultation paper, SR 11-7 (still the gold standard for model risk), and increasingly the EU AI Act for any cross-border operations. Understanding where these frameworks converge — and where they don’t — is the foundation for building an efficient, non-duplicative governance programme.
The Knowledge Graph
Explore the interactive map below. Hexagons are the seven general principles I’ve extracted. Circles are the source documents. Lines show mapping relationships — click any node to drill into the citations.
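For readers without the interactive version, here is a minimal sketch (my own illustration, not the source data; node and citation labels are placeholders) of how the underlying mapping can be modelled as a bipartite graph of principles and documents:

```python
import networkx as nx

# Illustrative subset of the mapping graph; the full interactive version
# covers all seven principles and fourteen source documents.
g = nx.Graph()
g.add_node("Explainability", kind="principle")   # rendered as a hexagon
g.add_node("Fairness", kind="principle")         # rendered as a hexagon
g.add_node("MAS FEAT", kind="document")          # rendered as a circle
g.add_node("EU AI Act", kind="document")         # rendered as a circle

# Edges carry the citation supporting each mapping (labels are placeholders).
g.add_edge("MAS FEAT", "Explainability", citation="FEAT transparency principles")
g.add_edge("MAS FEAT", "Fairness", citation="FEAT fairness principles")
g.add_edge("EU AI Act", "Explainability", citation="Art. 13 transparency")

# "Click a node to drill into the citations", in code form:
for _, principle, data in g.edges("MAS FEAT", data=True):
    print(f"{principle} <- {data['citation']}")
```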
The Seven General Principles
1. Explainability & Interpretability
Every framework expects AI models to be understood, but each defines "understood" differently.
"AIDA systems should be designed and implemented in a manner that allows for appropriate understanding."
MAS FEAT takes the most nuanced approach, distinguishing between model-level transparency (how the algorithm works globally) and decision-level transparency (why a specific prediction was made for a specific person). The EU AI Act mandates transparency for high-risk AI but frames it from the deployer’s perspective — can the operator interpret the output? SR 11-7 requires “conceptual soundness” but predates modern XAI techniques.
The gap: SR 11-7’s conceptual soundness requirement was written for traditional statistical models. Applying it to deep learning and LLMs requires significant interpretation. This is where Line 2 judgment becomes critical.
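To make the model-level vs decision-level distinction concrete, here is a minimal sketch (my own illustration; the feature names are hypothetical) using a linear scorecard, where the log-odds decompose exactly into per-feature contributions. For non-linear models you would need post-hoc techniques such as SHAP, which is precisely where the SR 11-7 interpretation question bites.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical credit features; names are illustrative only.
FEATURES = ["income", "utilisation", "tenure_months", "late_payments"]

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
X = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X, y)

# Model-level (global) transparency: how the model weighs features overall.
for name, coef in zip(FEATURES, model.coef_[0]):
    print(f"global weight       {name:<15} {coef:+.3f}")

# Decision-level (local) transparency: why THIS applicant got THIS score.
# For a linear model, the log-odds decompose exactly into per-feature terms.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1])):
    print(f"local contribution  {name:<15} {c:+.3f}")
print(f"log-odds = {contributions.sum() + model.intercept_[0]:+.3f}")
```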
2. Fairness & Non-Discrimination
"AIDA-driven decisions should not systematically disadvantage individuals or groups."
MAS FEAT provides the richest fairness framework in my analysis, defining fairness across individual, group, and systemic dimensions. The EU AI Act makes bias testing mandatory for high-risk systems. But here’s the interesting finding: SR 11-7 has the weakest fairness coverage — it requires outcomes analysis but doesn’t explicitly address demographic fairness.
This matters in practice because many Singapore banks use SR 11-7 as their model risk management backbone. If your validation framework is purely SR 11-7-aligned, you have a fairness gap that MAS FEAT expects you to fill.
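As a starting point for closing that gap, here is a minimal sketch of a demographic-parity check a Line 2 validator might bolt onto an SR 11-7-style outcomes analysis. It is my own illustration: the column names and the four-fifths threshold are assumptions, and neither MAS FEAT nor the EU AI Act prescribes a single fairness metric.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical scored portfolio: in practice this comes from Line 1's model
# output joined to a protected attribute (column names are assumptions).
df = pd.DataFrame({
    "approved": rng.integers(0, 2, 5000),
    "group": rng.choice(["A", "B"], size=5000, p=[0.7, 0.3]),
})

# Demographic parity: compare approval rates across groups.
rates = df.groupby("group")["approved"].mean()
print(rates)

# Adverse impact ratio; the four-fifths rule is one common, if blunt, flag.
air = rates.min() / rates.max()
print(f"adverse impact ratio: {air:.2f} -> {'INVESTIGATE' if air < 0.8 else 'ok'}")
```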
3. Accountability & Governance
This principle shows the strongest cross-framework alignment of any in my taxonomy. All six frameworks explicitly require clear ownership, board-level oversight, and defined roles. The Three Lines of Defence model maps cleanly here.
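By way of illustration only (the fields and names below are hypothetical, not drawn from any of the six frameworks), "clear ownership" becomes testable when the model inventory itself encodes the Three Lines of Defence:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One model-inventory entry, encoding the Three Lines of Defence."""
    model_id: str
    business_owner: str    # Line 1: owns the model and its outcomes
    validator: str         # Line 2: independent validation and challenge
    audit_contact: str     # Line 3: internal audit coverage
    board_committee: str   # body receiving escalations and management information
    open_issues: list[str] = field(default_factory=list)

    def escalation_path(self) -> str:
        return f"{self.business_owner} -> {self.validator} -> {self.board_committee}"

record = ModelRecord(
    model_id="CR-SCORECARD-007",
    business_owner="Head of Retail Credit",
    validator="Model Validation",
    audit_contact="Group Internal Audit",
    board_committee="Board Risk Committee",
)
print(record.escalation_path())
```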
The Alignment Matrix
The matrix below shows at a glance which frameworks cover which principles strongly (● = strong coverage, ◐ = partial coverage, — = not covered):
| Principle | MAS FEAT | MAS AIRM | MAS Info Paper | MAS MindForge Exec | MAS MindForge Ops | MAS MindForge Impl | ABS GenAI | IMDA MGF | EU AI Act | Fed/OCC 11-7 | NIST AI RMF | PRA SS1/23 | ISO 42001 | OECD AI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Accountability & Oversight (C1, C2) | ● | ● | — | ● | ● | — | ● | ● | ● | ● | ● | ● | ● | ● |
| Risk-Based Governance (C3, C4, C5) | ◐ | ● | — | ● | ● | — | ● | ● | ● | ◐ | ● | ◐ | ● | ◐ |
| Lifecycle Responsibility (C7, C10, C12, C13, C14, C15) | ◐ | ● | — | ● | ● | — | ● | ● | ● | ● | ● | ● | ◐ | ◐ |
| Integrity & Soundness (C8, C9, C11, C17) | ● | ● | — | ● |