The Six Principles of Responsible AI: A MindForge-Anchored Framework for Singapore Banks
MAS MindForge is the new anchor for AI governance in Singapore financial services. Here's how its 17 core considerations translate into a practical six-principle taxonomy — and how every major international framework maps against it.
When MAS published the MindForge AI Risk Management trilogy in 2024, it did something unusual: it gave Singapore financial institutions a structured governance anchor rather than a principles-only framework. The 17 MindForge Considerations are not aspirational — they are the operational baseline MAS expects institutions to demonstrate.
This article extracts six governing principles from MindForge, maps the reinforcing guidance from ABS and IMDA, and shows where the international frameworks (SR 11-7, EU AI Act, NIST AI RMF, MAS FEAT, SS1/23, OECD) converge and diverge. The goal is a single reference taxonomy that a Line 2 practitioner can use to build a non-duplicative, cross-jurisdictionally coherent AI risk governance programme.
Why MindForge as the Anchor
There are good reasons to treat MindForge as the primary anchor rather than a framework-neutral taxonomy:
Jurisdictional specificity. MindForge is written by, and for, MAS-regulated financial institutions. It reflects MAS’s supervisory experience and expectations in a way that generic frameworks like NIST AI RMF cannot. When MAS examiners ask about your AI governance, they are measuring against MindForge — not NIST.
Operational granularity. The 17 Considerations in MindForge are more operationally specific than most comparable frameworks. They tell you what to have in place, not just why it matters.
Three-document depth. The Executive Handbook sets governance expectations; the Operationalisation Handbook translates them into process; the Implementation Examples ground them in practice. Together they form a complete implementation guide.
Secondary reinforcement. The ABS GenAI Handbook and IMDA MGF for Agentic AI are explicitly aligned with MindForge. Using MindForge as the anchor means these secondary documents layer on top naturally.
The Six Principles
The 17 MindForge Considerations cluster into six governing principles:
P1 — Clear Human Accountability and Oversight
MindForge Considerations C1, C2
"Financial institutions must ensure that accountability for AI systems is clearly defined, documented, and communicated across all three lines of defence. The use of AI does not diminish the accountability of human decision-makers."
This principle commands the broadest cross-framework consensus in the taxonomy. Every major framework — MindForge, FEAT, SR 11-7, EU AI Act, NIST AI RMF — requires clear human ownership of AI systems and board-level oversight. The Three Lines of Defence model maps cleanly.
What MindForge adds is operational specificity: C1 requires named roles (not committee-level diffusion), and C2 requires that oversight mechanisms be designed before deployment, not retrofitted. The ABS Handbook reinforces this with concrete expectations for Board-level GenAI risk reporting.
"Senior management and the Board should receive regular reporting on the financial institution's generative AI risk profile, including material incidents and governance gaps."
International alignment: SR 11-7’s model ownership and independent review requirements translate directly. EU AI Act Article 17 (quality management system) and Article 9 (risk management) require comparable accountability structures for high-risk AI.
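To make C1's named-role requirement concrete, here is a minimal sketch of a per-system ownership record spanning the three lines of defence. The structure and field names are illustrative assumptions, not a MindForge-prescribed schema.

```python
from dataclasses import dataclass

# Illustrative only: field names are assumptions, not a MindForge-mandated schema.
@dataclass(frozen=True)
class AISystemOwnership:
    system_id: str
    business_owner: str        # Line 1: named role accountable for outcomes (C1)
    model_validator: str       # Line 2: independent validation responsibility
    risk_officer: str          # Line 2: AI risk oversight
    internal_audit_lead: str   # Line 3: periodic assurance
    oversight_design_ref: str  # oversight mechanisms agreed pre-deployment (C2)

record = AISystemOwnership(
    system_id="credit-scoring-v3",
    business_owner="Head of Retail Credit",
    model_validator="Model Validation Unit",
    risk_officer="AI Risk Officer, Group Risk",
    internal_audit_lead="Group Internal Audit",
    oversight_design_ref="GOV-2024-117",
)
```

The point of a record like this is that accountability resolves to a named role per system, not to a committee, and that the C2 oversight design exists as a referenceable artefact before go-live.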
P2 — Proportionate, Risk-Based AI Governance
MindForge Considerations C3, C4, C5
"A risk-based approach to AI governance means calibrating oversight mechanisms to the level of risk that each AI system poses to the financial institution, its customers, and the broader financial system."
This principle has no direct equivalent in the older framework-neutral taxonomy — it is genuinely MindForge-native. The requirement to develop an institution-specific AI risk taxonomy (C3), define an AI risk appetite (C4), and apply proportionate controls (C5) is a meaningful governance obligation that goes beyond what most international frameworks require at this level of specificity.
The contrast with the EU AI Act is instructive: the EU Act imposes a top-down legislative risk classification (unacceptable / high / limited / minimal). MindForge takes a bottom-up institutional approach — FIs must build their own risk taxonomies calibrated to their specific AI portfolios. Both converge on the principle that governance intensity must be commensurate with risk.
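As a sketch of what a bottom-up taxonomy might look like in practice, the toy classifier below maps a few use-case attributes to an internal risk tier that drives control intensity. The dimensions, tier names, and scoring threshold are assumptions for illustration; a real taxonomy under C3-C5 would be calibrated to the institution's own AI portfolio and risk appetite.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"       # full validation, Board-level reporting
    MEDIUM = "medium"   # independent review, periodic monitoring
    LOW = "low"         # self-assessment, sampling-based checks

@dataclass
class AIUseCase:
    name: str
    customer_impacting: bool    # drives decisions about customers?
    autonomous: bool            # acts without human review of each output?
    financially_material: bool  # could failure breach risk appetite (C4)?

def classify(use_case: AIUseCase) -> RiskTier:
    """Toy bottom-up classifier: tier rises with impact, autonomy, materiality."""
    score = sum([use_case.customer_impacting,
                 use_case.autonomous,
                 use_case.financially_material])
    if score >= 2:
        return RiskTier.HIGH
    return RiskTier.MEDIUM if score == 1 else RiskTier.LOW
```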
International alignment: NIST AI RMF’s MAP function (understanding the context and risk of AI systems) is the closest analogue. SS1/23 (MAS Technology Risk Management) provides complementary technology risk classification methodology. OECD Principle 1.2 (risk management) aligns at the high level.
P3 — Responsible Use Across the AI Lifecycle
MindForge Considerations C7, C10, C12, C13, C14, C15
"Responsible AI use requires governance to be embedded throughout the AI lifecycle — from the earliest conceptual stages through to decommissioning. An AI system that is well-governed at deployment but poorly monitored in production poses just as much risk as one that was poorly validated."
This principle absorbs the substance of the prior GP-VALID-01 and GP-MONITOR-01 principles. MindForge’s lifecycle framing is more comprehensive: it begins with use-case definition (C7), covers independent model validation before deployment (C10), pre-deployment testing (C12), post-deployment monitoring (C13), change management (C14), and controlled retirement (C15).
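One way to picture the lifecycle framing is as a gated state machine in which each stage corresponds to a consideration and material changes loop back through validation. The sketch below is our interpretation of that framing, not a prescribed MindForge workflow.

```python
from enum import Enum, auto

class Stage(Enum):
    USE_CASE_DEFINITION = auto()     # C7: define the use case
    VALIDATION = auto()              # C10: independent model validation
    PRE_DEPLOYMENT_TESTING = auto()  # C12
    PRODUCTION_MONITORING = auto()   # C13
    CHANGE_MANAGEMENT = auto()       # C14
    RETIREMENT = auto()              # C15: controlled retirement

# Allowed forward transitions; material changes re-enter validation.
ALLOWED = {
    Stage.USE_CASE_DEFINITION: {Stage.VALIDATION},
    Stage.VALIDATION: {Stage.PRE_DEPLOYMENT_TESTING},
    Stage.PRE_DEPLOYMENT_TESTING: {Stage.PRODUCTION_MONITORING},
    Stage.PRODUCTION_MONITORING: {Stage.CHANGE_MANAGEMENT, Stage.RETIREMENT},
    Stage.CHANGE_MANAGEMENT: {Stage.VALIDATION},
    Stage.RETIREMENT: set(),
}

def advance(current: Stage, target: Stage) -> Stage:
    """Enforce lifecycle gates: no stage may be skipped or retrofitted."""
    if target not in ALLOWED[current]:
        raise ValueError(f"Gate violation: {current.name} -> {target.name}")
    return target
```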
SR 11-7 remains the gold standard for the validation dimension — its detailed requirements for independent review, challenger models, back-testing, and sensitivity analysis are more specific than MindForge’s C10. But SR 11-7 was written for statistical models; applying it to LLMs and agentic AI requires significant interpretive work. MindForge’s lifecycle framing is more fit-for-purpose for the full spectrum of modern AI.
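For the back-testing dimension, a minimal illustration: score a champion and a hypothetical challenger on the same hold-out outcomes and compare calibration via the Brier score. The data and the choice of metric are illustrative assumptions, not an SR 11-7 requirement.

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and observed 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(outcomes)

# Hypothetical hold-out data: predicted default probabilities vs realised defaults.
outcomes   = [0, 1, 0, 0, 1, 0, 1, 0]
champion   = [0.10, 0.70, 0.20, 0.15, 0.60, 0.05, 0.40, 0.25]
challenger = [0.12, 0.80, 0.10, 0.10, 0.75, 0.08, 0.55, 0.20]

print(f"champion  : {brier_score(champion, outcomes):.4f}")  # ~0.0934
print(f"challenger: {brier_score(challenger, outcomes):.4f}")  # ~0.0482
# A materially lower challenger score is evidence the champion needs review,
# which is the kind of finding a validation report would document.
```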
International alignment: SR 11-7 (strong, validation focus). EU AI Act Article 9 (risk management system — covers monitoring). NIST AI RMF MANAGE and MEASURE functions. MAS FEAT Section 4 (monitoring expectations for AIDA systems).
P4 — Data, Model, and System Integrity & Soundness
MindForge Considerations C8, C9, C11, C17
"Bias in AI systems often originates in data. Financial institutions should systematically identify and mitigate sources of bias in training data, model design, and output calibration, and document how they have addressed identified bias risks."
This principle requires careful explanation because it absorbs three prior principles: GP-DATA-01 (data quality), GP-FAIR-01 (fairness), and the model-soundness dimension of GP-EXPLAIN-01 (model-level interpretability).
On fairness: MindForge has no standalone Fairness principle. This is a deliberate design choice — MindForge treats fairness as a data-and-model-integrity obligation (C8: ethical/compliant data; C9: data quality and bias controls), not a separate governance pillar. FEAT’s Fairness chapter (Section 2) maps strongly here. For practitioners: if your SR 11-7-based validation framework doesn’t include explicit bias testing methodology, you have a gap that both FEAT and MindForge expect you to fill — but the framing is “data quality and model soundness,” not “fairness compliance.”
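As a sketch of what "explicit bias testing methodology" can mean at its simplest, the snippet below computes group-level approval rates and an adverse impact ratio. The metric choice and any trigger threshold (for example, the US "four-fifths" convention) are illustrative assumptions; neither FEAT nor MindForge mandates a specific test.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def adverse_impact_ratio(rates):
    """Lowest group approval rate divided by the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log for two groups.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates, adverse_impact_ratio(rates))  # A: 0.67, B: 0.33 -> ratio 0.5
```

A ratio this low would typically trigger investigation and a documented mitigation decision, which is exactly the "data quality and model soundness" evidence C8 and C9 ask for.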
On model interpretability: C11 addresses model-level explainability — the ability to understand how input features drive output predictions. This is a technical capability requirement, distinct from the obligation to explain decisions to affected individuals (which lives in P5).
International alignment: FEAT Section 2 (Fairness — strong). EU AI Act Article 10 (data governance for high-risk AI). NIST AI RMF MAP and MEASURE. MAS FEAT Section 4 (Accountability — data governance).
P5 — Transparency, Traceability, and Auditability
MindForge Considerations C6, C11 (decision-level), C15
"Transparency is a two-way obligation: FIs must be transparent to their customers about the use of AI in material decisions, and they must maintain the internal records necessary to be transparent to their regulators."
This principle absorbs the prior GP-DOCUMENT-01 and the decision-transparency dimension of GP-EXPLAIN-01. The two dimensions are:
External transparency (C6): Disclosure to customers when AI is meaningfully involved in material decisions. Customers are entitled to know they are being assessed by an AI system, and to receive an explanation of the outcome commensurate with its impact on them.
Internal auditability (C15): Comprehensive documentation and audit trails — design rationale, validation findings, deployment approvals, change history — sufficient for supervisory review. The documentation standard: would a technically competent reviewer, with no prior knowledge of this system, be able to reconstruct and assess the governance process from the records alone?
"The level and nature of explainability should be commensurate with the impact of the AIDA-driven decision on the individual."
International alignment: EU AI Act Article 13 (transparency for high-risk AI) and Article 12 (record-keeping). NIST AI RMF GOVERN function (documentation as risk practice). MAS FEAT Section 3.2 (transparency and explainability — now mapped here for the decision-transparency dimension).
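To make the documentation standard tangible, here is a minimal sketch of a governance record carrying the fields named above (design rationale, validation findings, deployment approvals, change history). All identifiers and field names are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal sketch of an audit-trail record; every field name is an
# illustrative assumption, not a MindForge-prescribed schema.
@dataclass
class GovernanceRecord:
    system_id: str
    design_rationale: str           # why this approach was chosen
    validation_findings: list[str]  # independent review outcomes
    deployment_approval: str        # named approver and decision reference
    approval_date: date
    change_history: list[str] = field(default_factory=list)

rec = GovernanceRecord(
    system_id="credit-scoring-v3",
    design_rationale="Gradient-boosted model chosen over scorecard; see DR-042.",
    validation_findings=["Back-test within tolerance",
                         "Bias test: adverse impact ratio 0.87, accepted"],
    deployment_approval="Approved by Model Risk Committee, minute MRC-2024-09",
    approval_date=date(2024, 9, 12),
)
rec.change_history.append("2025-01: retrained on Q4 data; revalidated")
```

The test from the paragraph above applies directly: a reviewer with no prior knowledge of the system should be able to reconstruct the governance process from records like this alone.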
P6 — Organisational Capability and Responsible AI Culture
MindForge Considerations C16, C17
"Responsible AI governance ultimately depends on people. Frameworks, policies, and tools are only as effective as the people who design, implement, and challenge them. Financial institutions must invest in building genuine AI literacy across the three lines of defence."
This principle is genuinely new — no prior principle in the framework-neutral 7-principle structure covered organisational capability and culture explicitly. MindForge is unusual among governance frameworks in making culture a first-class governance obligation.
C16 requires documented, role-appropriate training for everyone with AI governance responsibilities — from Board members to model validators. C17 requires that senior leadership actively promote a culture in which AI risks are surfaced without fear of reprisal. MAS assesses culture through thematic reviews and the quality of Board-level AI risk reporting, not just documentation.
The ABS Handbook’s training requirements reinforce this directly. Notably, SR 11-7 does not address culture — it is a technical model risk standard, not an organisational governance framework. If your AI governance programme is SR 11-7-anchored, this is a dimension you need to build from scratch using MindForge and IMDA guidance.
International alignment: ABS GenAI Handbook (strong — training requirements). IMDA MGF for Agentic AI (moderate — responsible deployment principles). OECD Principle 4 (education and capacity building). No direct equivalent in SR 11-7.
The Knowledge Graph
[Interactive graph in the original article: the six principles (hexagons) mapped against the 14 source documents (circles), with line weight and colour reflecting alignment strength.]
Cross-Framework Alignment at a Glance
The matrix below shows which frameworks provide strong (●), moderate (◐), or no (—) coverage for each principle. Singapore-jurisdiction documents appear on the left; international frameworks on the right.
| Principle | MAS FEAT | MAS AIRM | MAS Info Paper | MAS MindForge Exec | MAS MindForge Ops | MAS MindForge Impl | ABS GenAI | IMDA MGF | EU AI Act | Fed/OCC 11-7 | NIST AI RMF | PRA SS1/23 | ISO 42001 | OECD AI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Accountability & Oversight C1, C2 | ● | ● | — | ● | ● | — | ● | ● | ● | ● | ● | ● | ● | ● |
| Risk-Based Governance C3, C4, C5 | ◐ | ● | — | ● | ● | — | ● | ● | ● | ◐ | ● | ◐ | ● | ◐ |
| Lifecycle Responsibility C7, C10, C12, C13, C14, C15 | ◐ | ● | — | ● | ● | — | ● | ● | ● | ● | ● | ● | ◐ | ◐ |
| Integrity & Soundness C8, C9, C11, C17 | ● | ● | — | ● |