The 'e' is yours to explore
Principl Atlas maps AI risk governance principles across regulatory frameworks — connecting the written rule to the lived practice.
Why "Principl"?
The name is missing an 'e'. That's intentional. Principl is Principle with the exploration left to the reader.
AI governance frameworks are full of principles. What they rarely do is tell you what those principles actually mean in practice — how a Line 2 risk officer operationalises "explainability," what "proportionate governance" looks like for a credit-scoring model, or why two frameworks both say "accountability" but mean subtly different things.
This project exists to bridge that gap. The 'e' is the exploration.
The Line 1 → Line 2 Journey
I spent the first part of my career in financial services building and using quantitative models — Line 1, close to the models, focused on what they do. Over time, I moved toward risk and governance — Line 2, focused on whether what the models do is appropriate, well-controlled, and explainable to people who weren't in the room when the modelling decisions were made.
The shift changed how I read regulatory documents. A quant reads a governance framework looking for constraints. A risk professional reads the same document looking for obligations — and then has to figure out what those obligations actually mean at the level of a specific model, a specific team, a specific decision.
That interpretive work — translating "the principle" into "the practice" — is what this site is about. Singapore context, financial services focus, practitioner orientation.
Why Singapore?
Singapore is an unusually interesting jurisdiction for AI governance. MAS is one of the few regulators globally that has published both a principles framework (FEAT, 2019) and a detailed operationalisation guide (MindForge, 2024). The gap between those two documents — five years, a generative AI revolution, and a significant increase in governance specificity — is itself a story worth telling.
Singapore is also small enough that the regulatory community, the banking industry, and the AI practitioner community all overlap significantly. The people writing the frameworks and the people trying to implement them are often one conversation apart. That proximity creates accountability, and it surfaces interpretive questions that a purely document-based analysis would miss.
The Research Stack
The knowledge infrastructure behind this site runs on three tools:
- NotebookLM — "The Researcher." I ingest the source regulatory PDFs directly; NotebookLM answers cross-document questions grounded in specific passages. Useful for initial mapping, and for checking whether my interpretations are consistent with what the documents actually say.
- Obsidian — "The Architect." A permanent knowledge graph in Markdown, structured to mirror the data model in this site. The vault is the canonical source; the site renders it interactively. Synced to GitHub via Obsidian Git.
- Claude — "The Writer." Polished articles, React components, cross-platform content adaptation. Also the tool that called out when my 7-principle framework-neutral taxonomy didn't adequately reflect MindForge's specific framing — leading to the 6-principle MindForge-anchored structure this site now uses.
The research methodology is deliberately document-first. Every claim links to a specific passage in a specific document. The principle taxonomy is an analytical output, not an assumption.
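To make that concrete, a taxonomy entry could be structured so that every principle carries its own citations back to source documents. The sketch below is purely illustrative; the field names and placeholder values are my assumptions here, not the site's actual schema:

```yaml
# Illustrative sketch only -- field names and values are placeholders,
# not the real schema or real citations.
principle: explainability
definition: "Decisions made or assisted by AI can be explained to those affected."
frameworks:
  - name: FEAT
    year: 2019
    passage: "<specific section reference in the source document>"
    note: "<how this framework frames the principle>"
  - name: MindForge
    year: 2024
    passage: "<specific section reference in the source document>"
    note: "<how this framework frames the principle>"
```

The point of a structure like this is that the cross-framework mapping is auditable: each `passage` field is a checkable claim, not an editorial assertion.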
What's Here (and What's Coming)
The current content pillars:
- Principle Mapping — Cross-framework principle extraction with interactive maps. The six-principle MindForge-anchored taxonomy is the foundation.
- Practitioner's Guide — "What Line 2 actually does." Making governance tangible for people who have to implement it, not just read about it.
- Document Deep Dives — Annotated walkthroughs of key regulatory documents with tagged, citable sections.
- Current Events Commentary — New regulations, enforcement actions, industry developments — read through the lens of the principle taxonomy.
The platform is designed to expand beyond AI governance over time. The same architecture — principle taxonomy, framework mapping, interactive visualisation — applies to other knowledge domains. But AI governance is the first chapter.
Get in Touch
If you work in AI risk governance and have a perspective on something I've got wrong, or a framework I haven't mapped yet, I'd like to hear from you. The mapping exercise is only as good as the inputs.
Find the source code and data on GitHub. The YAML files behind the principle taxonomy are open — corrections and additions are welcome via pull request.