Markets and Economy

EU and UK diverge in the race to regulate artificial intelligence

Key takeaways

1. The EU approach: Political negotiations on the EU’s AI Act are in the final phase, with potential for new rules to apply from H1 2026.

2. The UK approach: Initially, the UK is taking a non-legislative approach to the regulation of AI, with regulators tasked with supervising against five core principles.

3. The impact on business: Regulatory divergence in Europe is likely to drive competition for business between the EU and UK.

European policymakers are racing to get to grips with the emerging opportunities and risks presented by artificial intelligence (AI), and to implement regulation that both supports the development and use of AI and safeguards against its inherent risks. The EU and UK are taking different approaches: the EU is proposing legislation, while the UK is pursuing a non-legislative approach.

European Union: Proposed legislation would regulate risky AI systems 

In April 2021, the European Commission issued draft legislation aiming – for the first time – to harmonise rules governing AI across the European Union (EU). It is hoped that a political agreement on the proposed act can be reached before the end of the year, with the bulk of new rules applying two years thereafter. Depending on how negotiations progress, we could see new rules for providers and users of AI systems in the EU apply in the first half of 2026. 

The regulations would vary based on the riskiness of each AI system:

  • Unacceptable risk: These systems would be prohibited.
  • High risk: These systems would be regulated.
  • Limited risk: Transparency would be required.
  • Low risk: There would be no regulatory obligations. 

United Kingdom: The government is taking a non-legislative approach 

Unlike the EU, the UK does not intend to legislate at this stage, but instead plans to establish a non-statutory framework requiring regulators – eventually via a statutory duty – to implement five principles designed to guide the responsible development and use of AI in all sectors of the economy: 

  1. Safety, security, and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

The UK government also recently hosted the world’s first summit on AI safety. At the summit it coordinated agreement among participating jurisdictions, including the US, the EU, and China, on the Bletchley Declaration, a joint agenda for addressing “frontier” AI risks. These include risks stemming from the misuse of AI, or from unintended consequences arising from its use, in sectors such as cybersecurity and biotechnology, as well as risks from the broader dissemination of misinformation.

Following the summit, the UK government is expected shortly to encourage domestic regulators to publish guidance on how they will apply the UK’s proposed principles-based framework.

Going forward

Both the EU and the UK have ambitions to be global leaders in the regulation of AI; it remains to be seen whether either gets it right.

Learn more about the approach the US government is taking with artificial intelligence.