The call for digital transformation across government registries continues to grow louder. With the rise of AI and the increasing pressure to "move faster," it is tempting to see emerging technologies as a panacea. But when it comes to statutory registers (those foundational systems that anchor trust, identity, and accountability in society) speed must never come at the cost of integrity.
At Foster Moore, we are enthusiastic about the potential of AI. But we also understand that registers are not like other systems. They are authoritative sources of truth. And as our colleague John Murray puts it, "Let us not make the same mistakes with AI that we did with earlier regulatory reforms."
The goal is not just automation. It is responsible automation: digital transformation that reinforces the legal and civic role of registers rather than undermining it.
[This post is based on original insights shared by John Murray]
Learning from the past: the rush that went wrong
A decade ago, the global reform agenda was driven by metrics like those in the World Bank Doing Business Report (WBDBR). The push to reduce incorporation times and administrative hurdles was well-intentioned, but the long-term impact was more complicated. It prioritized speed over substance, often at the expense of legal safeguards and data accuracy.
Although these reforms delivered some positive outcomes, they also introduced significant challenges. The rapid creation of opaque legal entities, diminished regulatory oversight, and growing concerns about misuse contributed to a credibility gap, one later exposed by revelations such as the Panama Papers, and intensified the push for transparency and beneficial ownership disclosure.
Today, as governments explore AI, we face a similar fork in the road.
AI for registers: transformative, but not trivial
AI has the power to streamline processes, flag anomalies, and reduce manual work. But in the context of a corporate register, where legal rights, obligations, and economic control are at stake, any use of AI must uphold core registry principles:
- Legal Certainty: Register decisions must have legal standing. If an AI system approves or denies an application, what is the legal foundation? Who is accountable?
- Accuracy: Poorly trained models can lead to false approvals or denials. Accuracy is not a bonus feature; it is mission-critical.
- Transparency: Many AI tools operate as "black boxes." Registry authorities must be able to explain how and why a decision was made.
- Accountability: When something goes wrong, who is responsible? Human oversight and governance frameworks must be clearly defined.
- Procedural Fairness: Stakeholders must be able to respond to or appeal decisions, even when AI is in the loop.
- Auditability: Registers must log and reproduce decision-making paths to withstand legal scrutiny. AI cannot be exempt from this standard (a minimal sketch of such an audit record follows below).
These principles are not simply best practices; they are essential to the legitimacy and trustworthiness of any statutory register. Registers underpin legal and economic systems, and their decisions carry real-world consequences for businesses, governments, and citizens.
If automation or AI compromises even one of these principles, it risks eroding the public confidence that registries are built on. Upholding these standards ensures that digital transformation strengthens, rather than undermines, the register’s role as a reliable and authoritative source of truth.
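To make the auditability and accountability points concrete, here is one possible shape for an audit record of an AI-assisted decision. This is a minimal illustrative sketch only; every name in it is hypothetical and it is not part of any registry product or API. The point is that the model version, the exact input, the explanation, and the accountable officer are all captured at decision time, so the decision path can be reproduced and defended under legal scrutiny.

```python
# Illustrative only: a minimal, append-only audit record for an AI-assisted
# registry decision. All names are hypothetical, not a Foster Moore API.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)
class DecisionAuditRecord:
    application_id: str
    model_version: str    # exact model or ruleset used, so the path can be reproduced
    input_snapshot: dict  # the data the model actually saw
    recommendation: str   # e.g. "approve", "refuse", "refer-to-officer"
    explanation: str      # human-readable reasons supporting the recommendation
    decided_by: str       # accountable officer (or "system", plus the overseeing role)
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def fingerprint(self) -> str:
        """Hash of the full record, so later tampering is detectable."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

The design choice worth noting is that the record is immutable and fingerprinted: an auditor can verify that what the register relied on at the time is exactly what is produced later.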
John Murray presenting at the EBRA 2023 Conference in Paris on The Future of Registers
Designing AI for the real world of registers
This is not to say AI has no place in registry modernization; far from it. Used wisely, AI can assist in risk profiling, enhance data validation, and support proactive compliance monitoring. But success depends on aligning AI use with register-specific legal, operational, and ethical frameworks.
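To ground that in something tangible, the sketch below shows what a very simple risk flag on incoming filings might look like. Everything in it (the field names, the "address concentration" signal, the threshold) is an assumption chosen purely for illustration, not a recommended rulebook or any particular product's behaviour; signals like these should route a filing to a human officer, not decide its fate.

```python
# Illustrative only: a toy example of rules-assisted risk flagging for new
# filings. Field names, thresholds, and the "address concentration" signal
# are assumptions for illustration, not a recommended rulebook.
from collections import Counter


def flag_for_review(filing: dict, recent_filings: list[dict],
                    address_threshold: int = 50) -> list[str]:
    """Return reasons to route a filing to a human officer instead of auto-accepting."""
    reasons = []

    # Basic data validation: required fields must be present and non-empty.
    for required in ("entity_name", "registered_address", "directors"):
        if not filing.get(required):
            reasons.append(f"missing or empty field: {required}")

    # Simple risk signal: an address already used by a very large number of
    # recent incorporations may warrant a closer look.
    address_counts = Counter(f.get("registered_address") for f in recent_filings)
    if address_counts[filing.get("registered_address")] >= address_threshold:
        reasons.append("registered address shared by an unusually high number of recent filings")

    return reasons  # an empty list means no flags; anything else goes to an officer
```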
Here are three ways registry authorities can prepare:
🔍 Adopt a principle-first approach
Do not start with the technology; start with the purpose of your register. What must always remain true, regardless of the tools used? Build your AI policies from these foundational truths.
🛠️ Invest in explainable AI and human-in-the-loop models
Registers should favor systems that allow human officers to review, challenge, and override AI decisions. This balances efficiency with accountability.
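As a concrete illustration of what "human-in-the-loop" can mean in practice, the sketch below shows one possible review gate. It is a hypothetical example (the names, the confidence threshold, and the routing logic are all assumptions), not a prescribed design: the AI produces a recommendation with reasons, and a named officer can confirm, override, or take over the case, with the register recording which of these actually happened.

```python
# Illustrative only: a hypothetical human-in-the-loop gate. The AI output is a
# recommendation; a named officer confirms, challenges, or overrides it, and the
# register records which decision actually took effect and on what basis.
from dataclasses import dataclass


@dataclass
class AiRecommendation:
    outcome: str        # e.g. "approve" or "refuse"
    confidence: float   # the model's own confidence score, 0.0 to 1.0
    reasons: list[str]  # explanation surfaced to the reviewing officer


def apply_with_officer_review(rec: AiRecommendation, officer_decision: str | None,
                              officer_id: str, confidence_threshold: float = 0.9) -> dict:
    """Return the decision that takes legal effect, never the raw model output alone."""
    if officer_decision is not None:
        # An officer's confirmation or override always wins.
        final, basis = officer_decision, f"officer:{officer_id}"
    elif rec.confidence >= confidence_threshold:
        # Even auto-accepted outcomes remain attributable to an accountable role.
        final, basis = rec.outcome, f"auto-accepted under oversight of officer:{officer_id}"
    else:
        final, basis = "refer-to-officer", "below confidence threshold"
    return {"decision": final, "basis": basis, "ai_reasons": rec.reasons}
```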
🧭 Use the Register Capability Maturity Model (RCMM™)
Our diagnostic framework helps you understand not only your digital maturity but also your readiness to responsibly integrate advanced technologies. AI should not be bolted on; it must be embedded in a well-governed, future-ready operating model.
The future is not just fast. It is fair, accountable, and smart.
We are standing at the edge of another transformation wave. But this time, we have the benefit of hindsight. Registry authorities are not just service providers; they are custodians of legal truth. As such, their transformation journey must be cautious, deliberate, and values-driven.
At Foster Moore, our team of Registry People includes leaders from some of the most prestigious corporate registry operators in the world. We also work alongside governments to modernize registers in a way that upholds public trust. That includes exploring how emerging technologies like AI can be applied safely, responsibly, and meaningfully, without compromising the principles that make registers so essential in the first place.
🚀 Ready to talk about your registry modernization journey?
Contact us at info@fostermoore.com or connect directly with our registry transformation leaders:
Bill Clarke, Justin Hygate, or John Murray.
Do not overthink it. Start with a conversation.