The UK’s AI Assurance Landscape: Why Assurance Matters in the Age of AI

[Image: A stylised bonsai tree with glowing network nodes and circuit-like lines integrated into its trunk and branches, symbolising artificial intelligence. One hand holds pruning shears while another waters the tree, representing careful cultivation and oversight.]

Artificial intelligence is changing how we live, work, and govern. From Internet of Things (IoT) devices and automated decision-making in public services to consumer-facing large language model (LLM) applications, AI is everywhere. But alongside the immense opportunities, the many and serious risks of these epoch-defining technologies can’t be ignored. Bias, lack of transparency, safety failures, and accountability gaps could all erode trust and cause real, potentially catastrophic, harm.

In 2025, governments, businesses, and citizens alike agree that trustworthy AI is essential. That means advanced systems must be assured: ‘assurance’ is the name given to efforts to make AI safer, less biased, and more reliable, and it can also extend to statements about what AI should be used for, such as growth or social good.

At Ekrixis, we believe assurance is important. But the language in this space can verge on the bewildering. A glance at the AI assurance landscape over recent years is enough to overwhelm: a mix of the well-meaning-but-toothless, the misguided, and the actually useful – all jostling in a not-infrequently incoherent jumble of ‘frameworks’, ‘standards’, ‘principles’, and ‘commitments’. But what’s the difference between these labels, and what actually matters?

What Are Frameworks, Standards, and Commitments, and Why Are They Important?

Frameworks are structured sets of guidance or best practices relevant to AI, covering areas such as how data is used. They’re typically non-binding and help organisations shape their approach to AI development and use, aiming to provide scaffolding for aligning with responsible AI. One example is the UK Government Digital Service (GDS) Data Ethics Framework, newly updated in 2025 to respond to advances in AI.

However, political events can dictate how long-lasting generalised AI frameworks prove to be. The US NIST’s AI Risk Management Framework, for example, was developed under the Biden administration and was by all accounts considered stifling and flat-footed by big tech companies, although perhaps that’s an indication it was on to something. The official encouragement for federal agencies to follow it, however, was later cancelled by executive order under President Trump.

Standards are formal, detailed, and often certifiable criteria developed by recognised bodies (like ISO, IEEE or BSI). These are concrete criteria that organisations can build policies around, or build into models, and be measured against. In 2023, ISO and IEC jointly released two major standards: ISO/IEC 42001 (for AI management systems) and ISO/IEC 23894 (for AI risk management). These can be highly useful and should ideally be considered from the inception of the design process. There are also standards like ISO/IEC 22989:2022 on AI concepts and the correct use of AI terminology (useful in documentation). The British Standards Institution recently released BS ISO/IEC 42005:2025 on AI system impact assessments. It’s reasonable to expect this domain to become even more crowded in future, so it’s a good idea to subscribe to email updates from the relevant institutions as new standards are added or old ones updated.
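
To give a flavour of the difference between a lofty principle and a measurable criterion, here’s a minimal Python sketch of the kind of machine-readable risk register a standard like ISO/IEC 23894 asks for in spirit: risks identified, scored, treated, and owned throughout the AI lifecycle. The field names, scoring scheme, and escalation threshold are our own illustration, not taken from the standard’s text.

```python
from dataclasses import dataclass
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskEntry:
    # Illustrative fields for one line of an AI risk register.
    risk_id: str
    system: str        # which AI system the risk relates to
    description: str   # e.g. "disparate error rates across user groups"
    likelihood: Level
    impact: Level
    treatment: str     # planned or implemented mitigation
    owner: str         # accountable person or team

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring; real schemes vary widely.
        return int(self.likelihood) * int(self.impact)

def needs_escalation(register, threshold=6):
    # Entries at or above the threshold, highest score first.
    return sorted((r for r in register if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)

register = [
    RiskEntry("R-001", "loan-triage-model", "disparate error rates across groups",
              Level.MEDIUM, Level.HIGH, "independent bias testing before release", "ML lead"),
    RiskEntry("R-002", "support-chatbot", "hallucinated policy advice to customers",
              Level.HIGH, Level.HIGH, "retrieval grounding plus human review", "Product"),
]

for r in needs_escalation(register):
    print(r.risk_id, r.system, r.score)
```

A register like this is deliberately boring, and that’s the point: every risk has an owner and a treatment, and high scores surface automatically rather than languishing unread in a spreadsheet.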

Principles and Commitments are high-level statements of intent, of wildly varying weight, importance and influence, such as the UK Department for Science, Innovation and Technology’s (DSIT) report, “Assuring a responsible future for AI”. Governments or companies make these public to demonstrate their alignment with responsible AI goals, but they aren’t specific or enforceable on their own. However, they often pave the way for frameworks and standards.

A certain amount of vagueness is, admittedly, both strategically useful and necessary in such a dynamic, rapidly evolving field. These documents contain much that is aspirational and lofty in tone, but far from being mere statements of intent, they shape the discourse around AI and can sketch out the contours of a top-down superstructure into which organisational rules and legislation will fit as the field develops.

The upshot of all this is that understanding these distinctions helps clarify where real, practical tools exist, and what can be audited or implemented meaningfully.

What’s Influencing the UK Right Now

Rather than adopting a centralised AI law like the EU’s AI Act, the UK is taking an ostensibly principles-based, sector-led approach. This relies on:

  • Cross-sector principles from the 2023 White Paper (safety, transparency, fairness, accountability, contestability)
  • Empowering regulators in each domain (health, finance, etc.) to interpret and apply the principles in context
  • Building out the AI assurance ecosystem: standards, testing methods, evaluators, and audit mechanisms that can be tailored to different risks and uses

Key priorities for UK government and business stakeholders include:

  1. ISO/IEC 42001: Becoming the gold standard for AI governance systems
  2. ISO/IEC 23894: Practical guidance for risk management through the AI lifecycle
  3. Portfolio of AI Assurance Techniques: UK-developed resource mapping real tools to principles
  4. A flourishing AI assurance services market: including third-party audit, red-teaming, bias testing, and impact assessments
  5. AI Security Institute (formerly the AI Safety Institute): Testing frontier models for national and global safety. It has been producing important and well-regarded red-teaming and safety assessment reports on frontier models, and is probably the most influential organisation in the space in the UK.

This landscape favours flexibility and pragmatism, but does place value on structured assurance methods that can demonstrate trustworthiness to regulators, partners, and the public. The unspoken central premise, however, is that there’s a lot of money to be made in AI, and therefore huge economic growth potential: the holy grail of this Labour government’s quest for domestic success and stability, as well as international prestige and relevance. It follows, then, that over-regulation of the kind the EU is perceived to have pursued is seen as stifling growth, and as a fundamentally bad approach.

Equally, the UK wants to be an influential player in developing safe and responsible AI, as an important component of its economic, hard and soft power. There are legitimate safety and ethical concerns, and the government recognises that it must be seen to do something to protect the public. How effective these efforts will be remains to be seen. But the direction of travel does guide organisations like Ekrixis that want to work with AI, embed best practices in ethics, safety and security, and help others do the same.

How Ekrixis Is Putting Assurance Into Practice

At Ekrixis, we intend to play a part in shaping the culture of AI assurance in practice.

  • We’re aligning to ISO/IEC 42001 and ISO/IEC 22989:2022 as part of our internal governance systems, building a management system that tracks and documents AI design, deployment, and oversight
  • We intend to use ISO/IEC 23894 and BS ISO/IEC 42005:2025 to conduct structured risk assessments, identifying weaknesses early and mitigating them transparently
  • We plan to conduct independent bias testing and produce model documentation aligned with UK recommendations for transparency (see the sketch after this list)
  • We will train our teams (when we scale) in assurance practices, ensuring AI ethics, risk, and compliance are integrated into development workflows, not bolted on at the end
  • We’re planning to engage with the UK’s assurance ecosystem, contributing feedback to standards development and supporting an emerging profession of AI evaluators
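
To give a flavour of what independent bias testing can involve in practice, here’s a minimal Python sketch computing one common group-fairness metric, the demographic parity difference, over a set of model predictions. The function, data, and threshold are our own illustration; a real assessment would use multiple metrics, significance testing, and domain context.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    # Largest gap in positive-outcome rates between any two groups.
    # predictions: iterable of 0/1 model outcomes
    # groups: iterable of group labels, aligned with predictions
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_difference(preds, groups)
print(f"positive rates: {rates}, gap: {gap:.2f}")

# An illustrative tolerance only; acceptable gaps are context-dependent.
if gap > 0.2:
    print("flag for review: disparity exceeds illustrative threshold")
```

A single metric is only a starting point: a gap like this would feed into a risk register of the kind sketched earlier, as an input to judgement rather than a verdict.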

Our approach is about embedding assurance in a way that builds confidence and leads to useful results, rather than signalling or performative ethics.

The taxonomy of the assurance landscape may be confusing, and its constituent organisations and guidelines are yet to face a serious test of their effectiveness. But it’s clear that the UK’s influential position in tech and its well-developed regulatory and legal institutions make it important for UK organisations working in AI to pay attention to, and contribute to, AI assurance as the field evolves nationally and internationally.

In future blogs, we’ll take a closer look at how the UK’s approach stacks up against the more heavy-handed, centralised regulatory efforts in the EU, the fragmented but influential framings coming from the US, and China’s focused, state-led approach to AI governance. Spoiler: there’s no single “right” roadmap, but the divergences are instructive.
