As a leader in the field of artificial intelligence (AI) development, the United Kingdom began exploring approaches to AI regulation as early as 2023. The resulting framework is based primarily on soft law, prioritising innovation and providing incentives for research and businesses. Below, we summarise the key publications and outline the main aspects of the emerging UK AI regulation.
The general AI regulatory framework
The British government has adopted a so-called pro-innovation approach, led by the Department for Science, Innovation and Technology (DSIT).
It is a non-binding, principles-based framework that cuts across sectors. The goal is to balance innovation and safety by applying the existing, technology-neutral regulatory framework.
The AI Regulation White Paper, published by the Government on 29 March 2023, identifies the essential characteristics of the regulatory regime:
- Pro-innovation: enabling responsible innovation.
- Proportionate: avoiding unnecessary burdens for businesses and regulators.
- Trustworthy: addressing risks and fostering public trust in AI in order to encourage its uptake.
- Adaptable: rules that can be adjusted quickly and effectively to emerging opportunities and risks as AI technologies evolve.
- Clear: making it easy for actors in the AI life cycle, including businesses using AI, to know what the rules are, who they apply to, who enforces them, and how to comply with them.
- Collaborative: encouraging government, regulators, and industry to work together to facilitate AI innovation.
Definition of AI
The flexibility of the British soft-law approach is most evident in the fact that it does not provide a formal definition of artificial intelligence. Sectoral interpretations of what falls within the scope of AI will be based on the outcomes that systems produce and on two distinctive characteristics: adaptivity and autonomy. UK regulators, such as the Office of Communications (Ofcom) and the Financial Conduct Authority (FCA), will develop domain-specific definitions based on these interpretative criteria.
A principles-based approach
The UK’s final AI framework sets out five cross-sectoral principles for existing regulators to interpret and apply within their own remits to guide responsible AI design, development, and use:
- Safety, security and robustness of AI systems against attacks, malfunctions, errors and vulnerabilities, even under unforeseen conditions.
- Appropriate transparency and explainability of AI algorithms.
- Fairness to avoid discrimination and bias against individuals or groups.
- Accountability and governance structures to monitor AI and ensure it is used in compliance with ethical and regulatory standards, so that developers, users, and regulators are held responsible for the decisions and impacts arising from the use of AI.
- Contestability and redress, so that individuals can challenge automated decisions and have access to effective mechanisms to appeal, obtain a review of decisions made by AI systems, and seek remedies in cases of harm or injustice.
Role of individual regulators and guidance in applying the principles
The UK will not introduce a new AI regulator to oversee the implementation of the framework. Instead, existing regulators, such as the Information Commissioner’s Office (ICO), Ofcom, and the FCA, are expected to apply the five principles as they regulate and supervise the development and use of AI within their respective domains.
Regulators are expected to take a proportionate, context-based approach, drawing on existing laws and regulations. Neither the regulators’ implementation of the principles nor their obligation to collaborate is legally binding.
Nonetheless, the Government expects individual regulators to act swiftly in implementing the AI regulatory framework in their respective domains. The DSIT selected 13 leading regulators and requested that each publish a Strategic Plan setting out its own approach to AI oversight. On 1 May 2024, the Government published the 13 regulators’ strategic approaches to AI. Each Plan includes:
- An outline of the measures to align regulators’ AI-oversight approaches with the framework’s principles;
- An analysis of AI-related risks within their regulated domains;
- An explanation of their existing capacity to manage AI-related risks;
- A plan of activities for the next 12 months (until February 2025), including additional AI guidance.
The 13 Strategic Plans give businesses valuable insight into regulators’ strategic direction and forthcoming guidance and initiatives. Companies, in turn, are well advised to follow the guidelines and to support information-gathering exercises.
Building on the Strategic Plans, the DSIT published the AI Opportunities Action Plan on 13 January 2025.
The AI Opportunities Action Plan focuses on Government action to maximise the opportunities arising from the use of AI in terms of economic growth and improved social welfare. It is divided into three sections:
- “Laying the foundations to enable AI”: building AI infrastructure, sharing data across the public and private sectors, attracting new talent, and developing safe and reliable AI under the supervision of the AI Safety Institute (renamed the AI Security Institute (AISI) on 14 February 2025), a team within the DSIT dedicated to AI-related risk assessment.
- “Change lives by embracing AI”: encouraging the use of AI in the private sector and in public administration.
- “Secure our future with homegrown AI”: the importance for the UK of becoming a producer of AI (an “AI maker”) rather than a consumer of AI produced by other countries (an “AI taker”).
New supporting Central Function
The government has established a Central Function within the DSIT to facilitate coordination between regulators, monitor and assess AI-related risks, address regulatory gaps, oversee the implementation of the regulatory framework, promote the sharing of resources and best practices among regulators, and ensure that regulation remains consistent across all sectors.
Additionally, this Central Function helps optimise the use of funds: the DSIT has announced £10 million to support regulators in putting in place the tools, resources, and expertise needed to adapt and respond to AI.
Moreover, the AISI contributes to the annual International AI Safety Report, a survey of AI-related risks accessible to regulators, government departments, and external groups.
An exception: GPAI systems
The Government has acknowledged that a specific category of particularly powerful AI systems, so-called General Purpose AI (GPAI), will require more prescriptive regulation in the future. For now, however, it believes the appropriate conditions are not yet in place.
When they come, targeted regulatory interventions for these systems will address aspects such as transparency, data quality, risk management, accountability, corporate governance, and harms arising from misuse or unfair bias.
UK AI regulation vs. EU AI Act
The UK approach to AI regulation contrasts with the European one, which is more prescriptive and follows a risk-based model.
The two regulatory frameworks share some basic principles: accountability, safety, privacy, transparency, and fairness.
On the other hand, they diverge on multiple aspects, mainly in areas that are peripheral to but closely intertwined with AI, such as data governance, environmental and social well-being, human agency and oversight, non-discrimination, security, contestability, and redress.
Finally, consistent with their respective hard- and soft-law approaches, the two regimes differ in enforcement: the EU has implemented a strict sanctioning system (with fines of up to €35,000,000 or up to 7% of a company’s annual global turnover, whichever is higher), while the UK lacks any sanctioning system to ensure that companies effectively implement the Plans.
Tip: Read our guide on how the European AI Act influences businesses in the UK.
Conclusion
Soft regulation of AI in the UK consists of guidelines, funding, and initiatives for collaboration with the public and research institutions. The Government’s new Central Function plays a crucial role in ensuring effective coordination between regulators and in mitigating negative impacts arising from divergent interpretations. This is essential to give companies the regulatory clarity they need to adopt and scale their AI investments, thereby strengthening the United Kingdom’s competitive advantage.