It’s easy to be dismissive. We hear companies discuss data risk, ethical data and data sovereignty a lot. But it’s important to realise how fundamental this is going to be to the future of businesses, and to our lives in general. If the United Nations is right, AI will become a $4.8 trillion global market by 2033. That’s a lot of data, and a lot of risk to manage.
Pressures on organisations to manage data quality, provenance, and safety are growing just as steeply as AI itself, but without trust in the data, no AI strategy can scale responsibly. It becomes a mess of inconsistencies and inaccuracies. Without governance, businesses will spend more time cleaning up after AI than creating value with it. Perhaps Retro Data Manager will become a job title of the future.
This is why regulation has had to catch up quickly. The EU AI Act, for example, marks the world’s first comprehensive attempt to put guardrails around AI. It sets out strict requirements for high-risk systems, obligations for general-purpose models, and penalties for organisations that can’t demonstrate control and transparency.
For European enterprises, and anyone trading with them, knowledge of where their data resides, how it flows, and how their models behave is now key. This means governance can’t be an afterthought. It has to be built into the platform from the start, following the data and the models across clouds, borders, and business processes.
For many organisations this is a formidable challenge. We see three recurring areas of failure that undermine governance and trust. The first is residency and sovereignty gaps.
Many platforms simply can’t enforce where data lives or flows once workloads span SaaS, hyperscalers, and edge. It’s becoming untenable. The EU Data Act, which applies from September 2025, puts fresh emphasis on portability, access, and auditability. Combine that with national “sovereign AI” initiatives (from Brussels and London) and it’s clear that businesses need controls that don’t just store data securely but prove where and how it’s being processed.
The second common failing area is rushed LLMs. Large language models are being pushed into production faster than they can be validated. Without rigorous oversight, they hallucinate, drift, or misalign with policy. Regulators have noticed this too, hence the EU AI Act’s specific focus on general-purpose AI, and the UK’s new AI Safety Institute dedicated to model testing. Deploying models without governance is a reputational risk.
The third area is lifecycle blind spots: a lack of integrated tools to enforce policy from ingestion through training, deployment, and inference undermines oversight and auditability. The result is that enterprises cannot prove compliance end-to-end. And when something goes wrong, such as a data breach, a bias claim, or a regulatory inspection, they are left scrambling with fragmented logs and partial records.
So, what does good look like?
If governance gaps create risk, then ‘good’ looks like sovereignty, validation, and risk management being designed into the AI platform from the start, not bolted on afterwards. It’s no longer enough to say data is “stored securely.” Enterprises need to prove where it sits, how it moves, and who touches it, across multi-cloud and edge. That means auditable lineage, geofenced storage and processing, and policy controls that follow data wherever it goes.
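To make that concrete, here is a minimal sketch of what a residency control can look like when it is written as code rather than as a policy document. The dataset structure, classification labels, region names, and allowed-region mapping are illustrative assumptions for the example, not a reference to any particular platform’s API.

```python
# Minimal sketch of a data-residency check expressed as code.
# Classification labels, regions, and the Dataset shape are assumptions.
from dataclasses import dataclass

# Regions where EU personal data may be stored or processed (assumed policy).
ALLOWED_REGIONS = {"eu-personal-data": {"eu-west-1", "eu-central-1"}}

@dataclass
class Dataset:
    name: str
    classification: str   # e.g. "eu-personal-data"
    region: str           # where this copy physically resides

def residency_violations(datasets: list[Dataset]) -> list[str]:
    """Return human-readable violations for datasets outside their allowed regions."""
    violations = []
    for ds in datasets:
        allowed = ALLOWED_REGIONS.get(ds.classification)
        if allowed is not None and ds.region not in allowed:
            violations.append(f"{ds.name}: {ds.classification} data found in {ds.region}")
    return violations

if __name__ == "__main__":
    inventory = [
        Dataset("customer-profiles", "eu-personal-data", "eu-west-1"),
        Dataset("support-transcripts", "eu-personal-data", "us-east-1"),
    ]
    for violation in residency_violations(inventory):
        print("VIOLATION:", violation)
```

Run routinely against a live data inventory, a check like this turns “we store data securely” into evidence of where regulated data actually sits.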
‘Good’ also requires continuous validation for large models. Accuracy is a starting point, not an endpoint. LLMs need ongoing monitoring for drift, bias, and hallucinations, with retrieval-augmented generation (RAG) and audit trails strengthening confidence in outputs. Regulators are already signalling that transparency and evaluation pipelines will be expected, not optional.
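To make “continuous validation” tangible, the sketch below shows one of the simplest possible groundedness checks for RAG outputs: flag any answer that shares too little vocabulary with the passages retrieved for it. Production evaluation pipelines use far stronger techniques (entailment models, LLM-as-judge scoring); the tokenisation and threshold here are purely illustrative.

```python
# A deliberately simple groundedness check for RAG outputs.
# The 0.6 threshold and word-overlap metric are illustrative assumptions.
import re

def tokens(text: str) -> set[str]:
    """Lower-case word tokens for a rough overlap comparison."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def grounding_score(answer: str, passages: list[str]) -> float:
    """Fraction of answer tokens that appear somewhere in the retrieved passages."""
    answer_tokens = tokens(answer)
    if not answer_tokens:
        return 0.0
    context_tokens = set().union(*(tokens(p) for p in passages))
    return len(answer_tokens & context_tokens) / len(answer_tokens)

def flag_if_ungrounded(answer: str, passages: list[str], threshold: float = 0.6) -> bool:
    """Return True when the answer looks insufficiently grounded and needs review."""
    return grounding_score(answer, passages) < threshold
```

Even a crude signal like this, logged for every response, gives reviewers something auditable to act on when outputs start to drift away from the source data.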
The UK’s AI Safety Institute has gone further, warning that untested models pose risks not only to compliance but to adoption itself. Enterprises that can show their models are grounded in trusted data and checked in real time will win that trust.
The third element of ‘good’ is governance in every layer of the infrastructure. Risk management has to run through the stack. That means active metadata to track lineage, access controls for both data and models, usage monitoring to catch unsanctioned behaviour, and policy-as-code to keep environments consistent. With new certification schemes such as the EU Cloud Services Scheme (EUCS) on the horizon, and sector regulators sharpening expectations, enterprises that embed governance deeply now will find themselves ahead of the curve.
What leaders should do next
For executives, the challenge is turning governance principles into day-to-day practice. A few priorities stand out.
Adopt a recognised framework.
Standards such as ISO/IEC 42001 (the world’s first AI management system standard) and the NIST AI Risk Management Framework give boards, auditors, and regulators a common language. Using them shows intent and creates a clear structure for oversight.
Make policies portable.
Policies can’t sit in PowerPoint decks. They need to be expressed as code and enforced automatically at ingestion, training, deployment, and inference, across every cloud and edge environment.
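The sketch below illustrates the idea in miniature: the same policy rules evaluated automatically at each lifecycle stage, so a non-compliant job fails fast instead of slipping through. The stage names, rules, and metadata fields are assumptions made for the example, not a standard schema or a specific vendor’s interface.

```python
# A minimal "policy as code" sketch: rules evaluated per lifecycle stage.
# Stage names, rules, and metadata fields are illustrative assumptions.
LIFECYCLE_STAGES = ("ingestion", "training", "deployment", "inference")

POLICIES = {
    "ingestion":  [lambda m: m.get("pii_scanned") is True],
    "training":   [lambda m: m.get("data_region") in {"eu-west-1", "eu-central-1"}],
    "deployment": [lambda m: m.get("model_card_approved") is True],
    "inference":  [lambda m: m.get("logging_enabled") is True],
}

def enforce(stage: str, metadata: dict) -> None:
    """Raise if any policy for this stage fails; call from pipeline code or CI."""
    if stage not in LIFECYCLE_STAGES:
        raise ValueError(f"unknown stage: {stage}")
    for rule in POLICIES[stage]:
        if not rule(metadata):
            raise PermissionError(f"policy violation at {stage}: {metadata}")

# Example: block a training job whose data sits outside the approved regions.
# enforce("training", {"data_region": "us-east-1"})  # -> PermissionError
```

Because the rules live in code, the same checks can run in every cloud and edge environment and leave a record each time they are evaluated.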
Invest in evaluation and observability.
Build pipelines to track model drift, bias, and hallucinations, and ground outputs in enterprise data with RAG. Pair this with end-to-end audit trails so you can prove compliance, not just claim it.
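As an illustration, the following sketch pairs a simple drift check with an append-only audit record, assuming a per-response quality score (for example a groundedness or evaluation score) is already being logged. The metric, threshold, and JSON-lines log format are illustrative choices rather than a prescribed standard.

```python
# Sketch of a drift check plus audit trail over logged quality scores.
# The mean-shift metric, 0.1 threshold, and log format are assumptions.
import json, statistics, time

def mean_shift(reference: list[float], recent: list[float]) -> float:
    """Absolute shift in mean quality score between a baseline and a live window."""
    return abs(statistics.mean(recent) - statistics.mean(reference))

def check_drift(reference: list[float], recent: list[float],
                threshold: float = 0.1, audit_path: str = "audit_log.jsonl") -> bool:
    """Record the check in an append-only audit log and return True if drift is flagged."""
    shift = mean_shift(reference, recent)
    drifted = shift > threshold
    record = {
        "timestamp": time.time(),
        "metric": "mean_quality_score_shift",
        "shift": round(shift, 4),
        "threshold": threshold,
        "drift_flagged": drifted,
    }
    with open(audit_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return drifted
```

The point is less the statistics than the record: every check leaves an entry an auditor can inspect, which is what turns monitoring into provable compliance.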
Plan for sovereign readiness.
Run “what if” drills. What happens if data locality laws change? If certification schemes such as EUCS harden? If vendors shift terms? Having exit plans and testable sovereignty controls reduces exposure and strengthens resilience.
Governance doesn’t slow AI down. Done well, it’s what keeps innovation sustainable and trustworthy. And given the expected boom in agentic systems, preparing data and getting frameworks right now will save a lot of time and pain later, and will enable enterprises to take full advantage of what is coming next.