
The Seven Pillars: Mapping Universal AI Governance Principles Across Six Frameworks

After mapping six major regulatory frameworks, seven universal principles emerge — here's how they connect, where they diverge, and what this means for Singapore financial institutions.

Shi Yuan · 22 March 2026 · 12 min
#principle-mapping #governance #mas #eu-ai-act

The Seven Pillars

After spending months dissecting regulatory documents from Singapore, the EU, the US, and the UK, I’ve identified seven core principles that every major AI governance framework converges on — albeit with different emphasis and granularity.

This mapping reveals something important: despite jurisdictional differences in regulatory philosophy (prescriptive vs. principles-based vs. voluntary), there is a remarkably consistent skeleton of expectations for governing AI in financial services.

Why This Matters

If you’re a Line 2 risk professional in a Singapore bank, you’re likely grappling with multiple overlapping frameworks: MAS FEAT, the new MAS AI Risk Management consultation paper, SR 11-7 (still the gold standard for model risk), and increasingly the EU AI Act for any cross-border operations. Understanding where these frameworks converge — and where they don’t — is the foundation for building an efficient, non-duplicative governance programme.

The Knowledge Graph

Explore the interactive map below. Hexagons are the seven general principles I’ve extracted. Circles are the source documents. Lines show mapping relationships — click any node to drill into the citations.

[Interactive knowledge graph. Node types: Principle (hexagon), SG Document, Intl Document.]
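For readers who want to work with the mapping offline, here is a minimal sketch of the graph's underlying data model. The node and edge types mirror the description above; all identifiers, field names, and citation strings are illustrative, not the site's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    kind: str          # "principle", "sg_document", or "intl_document"
    label: str

@dataclass
class Edge:
    principle_id: str  # the hexagon end of the mapping
    document_id: str   # the circle end of the mapping
    citation: str      # section / page reference backing the link

# Hypothetical fragment: one principle mapped to two source documents.
nodes = [
    Node("explainability", "principle", "Explainability & Interpretability"),
    Node("mas_feat", "sg_document", "MAS FEAT"),
    Node("eu_ai_act", "intl_document", "EU AI Act"),
]
edges = [
    Edge("explainability", "mas_feat", "Section 3.2, p. 15"),
    Edge("explainability", "eu_ai_act", "transparency obligations for high-risk AI"),
]

# Drill into citations for a principle, as clicking a hexagon would.
def citations_for(principle_id: str) -> list[str]:
    labels = {n.node_id: n.label for n in nodes}
    return [f"{labels[e.document_id]}: {e.citation}"
            for e in edges if e.principle_id == principle_id]

print(citations_for("explainability"))
```

Keeping the citation on the edge rather than on the document is what makes the drill-down cheap: every principle-to-document link carries its own evidence.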

The Seven General Principles

1. Explainability & Interpretability

Every framework expects that AI models can be understood — but they define “understood” differently.

"AIDA systems should be designed and implemented in a manner that allows for appropriate understanding."
MAS FEAT, Section 3.2, p. 15 (strong alignment)

MAS FEAT takes the most nuanced approach, distinguishing between model-level transparency (how the algorithm works globally) and decision-level transparency (why a specific prediction was made for a specific person). The EU AI Act mandates transparency for high-risk AI but frames it from the deployer’s perspective — can the operator interpret the output? SR 11-7 requires “conceptual soundness” but predates modern XAI techniques.
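To make the model-level versus decision-level distinction concrete, here is a minimal sketch using a linear model, where the two views fall out directly. The dataset and feature names are invented for illustration; non-linear models would need a proper XAI method rather than coefficient arithmetic.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy credit-style dataset; feature names are illustrative only.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
features = ["income", "tenure", "utilisation", "delinquencies"]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Model-level transparency: how the algorithm weighs features globally.
print("Global coefficients:", dict(zip(features, model.coef_[0].round(3))))

# Decision-level transparency: why this applicant got this score.
# For a linear model, feature j's contribution is simply coef_j * x_j.
applicant = X[0]
contributions = model.coef_[0] * applicant
print("Per-decision contributions:", dict(zip(features, contributions.round(3))))
```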

The gap: SR 11-7’s conceptual soundness requirement was written for traditional statistical models. Applying it to deep learning and LLMs requires significant interpretation. This is where Line 2 judgment becomes critical.

2. Fairness & Non-Discrimination

"AIDA-driven decisions should not systematically disadvantage individuals or groups."
MAS FEAT, Section 2, p. 8 (strong alignment)

MAS FEAT provides the richest fairness framework in my analysis, defining fairness across individual, group, and systemic dimensions. The EU AI Act makes bias testing mandatory for high-risk systems. But here’s the interesting finding: SR 11-7 has the weakest fairness coverage — it requires outcomes analysis but doesn’t explicitly address demographic fairness.

This matters in practice because many Singapore banks use SR 11-7 as their model risk management backbone. If your validation framework is purely SR 11-7-aligned, you have a fairness gap that MAS FEAT expects you to fill.
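Here is a minimal sketch of the kind of group-outcome check that closes the gap. The data and the single metric (a disparate-impact style ratio of approval rates) are illustrative; a real validation suite would test multiple fairness definitions across MAS FEAT's individual, group, and systemic dimensions.

```python
import pandas as pd

# Hypothetical approval outcomes with a protected attribute attached;
# in practice this comes from your validation dataset.
df = pd.DataFrame({
    "group":    ["A"] * 6 + ["B"] * 6,
    "approved": [1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0],
})

# Approval rate per group: the simplest demographic check that a purely
# SR 11-7-aligned outcomes analysis would not require.
rates = df.groupby("group")["approved"].mean()
print(rates)
print("Disparate impact ratio:", round(rates.min() / rates.max(), 2))
```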

3. Accountability & Governance

This principle shows the strongest cross-framework alignment of any in my taxonomy. All six frameworks explicitly require clear ownership, board-level oversight, and defined roles. The Three Lines of Defence model maps cleanly here.
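As a sketch of what that ownership looks like operationally, a model-inventory record might carry fields like these. The field names and role titles are hypothetical; the point is that each of the three lines, plus board-level oversight, is a named, mandatory attribute.

```python
from dataclasses import dataclass

# Hypothetical model-inventory record: ownership fields that all six
# frameworks, in one form or another, expect to be populated.
@dataclass
class ModelRecord:
    model_id: str
    line1_owner: str       # business owner accountable for the use case
    line2_validator: str   # independent risk / validation function
    line3_assurance: str   # internal audit coverage
    board_committee: str   # board-level oversight forum

record = ModelRecord(
    model_id="credit-scoring-v3",
    line1_owner="Head of Retail Credit",
    line2_validator="Model Risk Management",
    line3_assurance="Group Internal Audit",
    board_committee="Board Risk Committee",
)
print(record)
```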

The Alignment Matrix

The heatmap below shows at a glance which frameworks cover which principles strongly:

[Heatmap placeholder. Columns (frameworks): MAS FEAT, MAS AIRM, MAS Info Paper, MAS MindForge Exec, MAS MindForge Ops, MAS MindForge Impl, ABS GenAI, IMDA MGF, EU AI Act, Fed/OCC SR 11-7, NIST AI RMF, PRA SS1/23, ISO 42001, OECD AI.
Rows (principles, with mapped controls): Accountability & Oversight (C1, C2); Risk-Based Governance (C3, C4, C5); Lifecycle Responsibility (C7, C10, C12, C13, C14, C15); Integrity & Soundness (C8, C9, C11, C17); Transparency & Auditability (C6, C11, C15); Capability & Culture (C16, C17).
Legend: strong alignment / moderate alignment / not addressed. SG = Singapore frameworks; Intl = international frameworks.]

Key insight: The diagonal strength from Governance → Monitoring shows that all frameworks agree on the full lifecycle expectation. But the off-diagonal weakness in Fairness × SR 11-7 is a real compliance gap for any bank using SR 11-7 as its primary model risk framework.
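That gap analysis becomes mechanical once the matrix is data rather than a picture. A minimal sketch, with invented scores for two principles and three frameworks:

```python
# Toy alignment matrix; entries are illustrative, not my actual scoring.
alignment = {
    "Explainability": {"MAS FEAT": "strong", "EU AI Act": "strong", "SR 11-7": "moderate"},
    "Fairness":       {"MAS FEAT": "strong", "EU AI Act": "strong", "SR 11-7": None},
}

# Gap report: principles a given framework does not address at all.
def gaps(framework: str) -> list[str]:
    return [p for p, scores in alignment.items() if scores.get(framework) is None]

print("SR 11-7 gaps:", gaps("SR 11-7"))  # -> ['Fairness']
```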

4-7. Validation, Monitoring, Data Quality, Documentation

(Detailed analysis of each principle continues in the full article…)

What This Means for Singapore Banks

Three actionable takeaways:

First, if you’re building a Line 2 AI risk framework, start with the seven general principles as your control taxonomy. Map your policies to these rather than to any single regulatory document — this gives you coverage across jurisdictions.

Second, the MAS AI Risk Management consultation paper’s four-pillar structure (Governance, Development, Risk Management, Monitoring) maps almost perfectly to my seven principles. Use this as your primary structure, supplemented by SR 11-7 for validation rigour and EU AI Act for risk classification methodology.

Third, invest in the cross-framework citation infrastructure. When regulators ask “how do you address explainability?”, your answer should reference multiple frameworks showing consistent coverage — not a patchwork of disconnected policies.
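As a sketch of what that citation infrastructure can look like as data, here is one control-taxonomy entry mapped to several frameworks. The control ID, wording, and citations are hypothetical; what matters is that a single internal control answers the regulator's question with evidence from every applicable framework at once.

```python
# Hypothetical control-taxonomy entry: one internal control mapped to
# citations in multiple frameworks.
controls = {
    "EXPL-01": {
        "control": "Document global and per-decision explanations for all AIDA models",
        "citations": [
            ("MAS FEAT", "Section 3.2"),
            ("EU AI Act", "transparency obligations for high-risk AI"),
            ("SR 11-7", "conceptual soundness"),
        ],
    },
}

def answer(control_id: str) -> None:
    entry = controls[control_id]
    print(entry["control"])
    for framework, ref in entry["citations"]:
        print(f"  covered by {framework}: {ref}")

answer("EXPL-01")  # one control, multiple frameworks, one answer
```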


This is the first article in my Principle Mapping series. Next: a deep dive into how the MAS consultation paper’s four pillars map to SR 11-7’s three-part structure, with practical templates for Line 2 validation.

Explore the full interactive knowledge graph at your-site.com/graph.