Principl Atlas: the 'e' is yours to explore
About

Principl Atlas maps AI risk governance principles across regulatory frameworks — connecting the written rule to the lived practice.

Why "Principl"?

The name is missing an 'e'. That's intentional. Principl is Principle with the exploration left to the reader.

AI governance frameworks are full of principles. What they rarely do is tell you what those principles actually mean in practice — how a Line 2 risk officer operationalises "explainability," what "proportionate governance" looks like for a credit-scoring model, or why two frameworks both say "accountability" but mean subtly different things.

This project exists to bridge that gap. The 'e' is the exploration.

The Line 1 → Line 2 Journey

I spent the first part of my career in financial services building and using quantitative models — Line 1, close to the models, focused on what they do. Over time, I moved toward risk and governance — Line 2, focused on whether what the models do is appropriate, well-controlled, and explainable to people who weren't in the room when the modelling decisions were made.

The shift changed how I read regulatory documents. A quant reads a governance framework looking for constraints. A risk professional reads the same document looking for obligations — and then has to figure out what those obligations actually mean at the level of a specific model, a specific team, a specific decision.

That interpretive work — translating "the principle" into "the practice" — is what this site is about. Singapore context, financial services focus, practitioner orientation.

Why Singapore?

Singapore is an unusually interesting jurisdiction for AI governance. MAS is one of the few regulators globally to have published both a principles framework (FEAT, 2018) and a detailed operationalisation guide (MindForge, 2024). The gap between those two documents spans six years, a generative AI revolution, and a significant increase in governance specificity; that gap is itself a story worth telling.

Singapore is also small enough that the regulatory community, the banking industry, and the AI practitioner community all overlap significantly. The people writing the frameworks and the people trying to implement them are often one conversation apart. That proximity creates accountability and also creates interpretive questions that a purely document-based analysis misses.

The Research Stack

The knowledge infrastructure behind this site runs on three tools:

The research methodology is deliberately document-first. Every claim links to a specific passage in a specific document. The principle taxonomy is an analytical output, not an assumption.

What's Here (and What's Coming)

The current content pillars:

The platform is designed to expand beyond AI governance over time. The same architecture — principle taxonomy, framework mapping, interactive visualisation — applies to other knowledge domains. But AI governance is the first chapter.

Get in Touch

If you work in AI risk governance and have a perspective on something I've got wrong, or a framework I haven't mapped yet, I'd like to hear from you. The mapping exercise is only as good as the inputs.

Find the source code and data on GitHub. The YAML files behind the principle taxonomy are open — corrections and additions are welcome via pull request.
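By way of illustration, a taxonomy entry might look something like the sketch below. The field names and values here are invented for illustration, not the repository's actual schema, so check the YAML files on GitHub for the real structure before opening a pull request.

```yaml
# Hypothetical sketch of a principle-taxonomy entry.
# Field names are illustrative assumptions, not the actual schema.
principle: explainability
definition: >
  The degree to which a model's outputs can be accounted for
  in terms a reviewer outside the modelling team can follow.
mappings:
  - framework: FEAT
    clause: "..."   # exact passage reference, per the document-first rule
  - framework: MindForge
    clause: "..."
```

The shape reflects the methodology described above: the principle is the analytical output, and each mapping carries a pointer back to a specific passage in a specific framework document.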