March 13, 2026

EU AI Act and the UK: What British Organisations Need to Know

Brexit did not exempt UK organisations from the EU AI Act. If your AI systems operate in EU markets, affect EU citizens, or form part of a supply chain serving Europe, the Act applies to you.

The extraterritorial logic that made GDPR a global compliance exercise applies here: the EU AI Act follows the same model.

Does the EU AI Act Apply to UK Businesses?

The Act applies to providers and deployers located outside the EU when their AI systems are placed on the EU market, produce outputs used in the EU, or monitor the behaviour of individuals within the EU (EU AI Act, Article 2).

The Act is not part of UK domestic law, so there is no general obligation on UK organisations to comply. However, any UK organisation selling AI systems in Europe, deploying AI affecting EU individuals, or operating as part of an EU-facing AI value chain must comply with the full weight of the regulation.

UK law firm Farrer & Co noted in February 2026 that “the Act has significant extraterritorial effect and UK businesses will have to tread carefully.” (Farrer & Co, farrer.co.uk, February 2026)

| Scenario | In scope? |
|---|---|
| UK company selling AI systems to EU customers | Yes |
| UK company deploying AI that affects EU employees | Yes |
| UK company using AI internally with no EU operations | No |
| UK company providing AI tools to EU-based partners | Yes |
| UK startup with no EU market presence | No |

If your product touches EU customers, EU employees, or EU regulatory processes, you are in scope.
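The scenarios above reduce to a simple disjunction of the Article 2 triggers. As an illustrative sketch only (the function and parameter names are this article's paraphrase, not statutory language):

```python
# Illustrative only: the Article 2 scope test reduced to a boolean check.
# A UK organisation is in scope if ANY trigger applies.

def in_eu_ai_act_scope(places_on_eu_market: bool,
                       outputs_used_in_eu: bool,
                       monitors_eu_individuals: bool) -> bool:
    """True if any Article 2 extraterritorial trigger applies."""
    return places_on_eu_market or outputs_used_in_eu or monitors_eu_individuals

# UK company using AI internally with no EU operations:
print(in_eu_ai_act_scope(False, False, False))  # False

# UK company deploying AI that affects EU employees:
print(in_eu_ai_act_scope(False, True, False))   # True
```

The "any trigger" structure is the point: a single EU-facing use case pulls the whole system into scope, even if the organisation has no EU establishment.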

What the UK’s Own AI Regulation Looks Like

The UK has not passed comprehensive AI legislation. The government’s position, set out in its 2023 white paper, favours a principles-led, sector-by-sector approach rather than a single binding statute.

Secretary of State Peter Kyle announced in 2025 that a comprehensive UK AI Bill would not be introduced before the King’s Speech, expected in May 2026. (King & Spalding, kslaw.com, July 2025)

As of March 2026, whether a dedicated UK AI Bill will appear at all in 2026 remains uncertain, according to Osborne Clarke’s January 2026 regulatory outlook. (Osborne Clarke, osborneclarke.com, January 2026)

| Dimension | EU AI Act | UK Approach (current) |
|---|---|---|
| Legal status | Binding regulation | Sector-by-sector guidance |
| Single statute | Yes | No |
| Enforcement body | Market surveillance authorities + EU AI Office | ICO, FCA, CMA, Ofcom |
| Penalties | Up to €35M or 7% global turnover | Varies by sector regulator |
| Applies to UK organisations | Yes, extraterritorially | Yes, domestically |
| Timeline | Phased 2025–2027 | UK Bill not before May 2026 |

For organisations with EU exposure, the EU AI Act is the operative regime right now.

The EU AI Act Enforcement Timeline

The EU AI Act applies in phases. Each phase has already arrived or is approaching fast.

| Date | What Takes Effect |
|---|---|
| 2 February 2025 | Article 5 prohibitions: social scoring, subliminal manipulation, biometric exploitation, real-time biometric ID in public spaces |
| 2 August 2025 | GPAI model obligations: technical documentation, training data summaries, GPAI Code of Practice compliance |
| 2 August 2026 | Full high-risk AI framework: quality management systems, Annex IV documentation, conformity assessments, CE marking, post-market monitoring |
| 2 August 2027 | Transitional arrangements end for AI embedded in regulated products already on market before the Act’s adoption |

The European Commission’s Digital Omnibus proposal of November 2025 would delay some high-risk obligations until December 2027. The proposal is still working its way through the European Parliament and Council as of early 2026, and the Commission retains the right to bring forward the implementation date. (Osborne Clarke, osborneclarke.com, January 2026)

Do not plan your compliance programme around a delay that has not been confirmed.

EU AI Act Risk Classification: Where UK Organisations Need to Focus

The Act organises AI systems into four categories. The category determines the compliance obligations.

| Risk Category | Definition | Obligations | Key UK Examples |
|---|---|---|---|
| Prohibited | Article 5 practices banned outright | Cannot operate in EU | Social scoring systems, subliminal manipulation tools |
| High-Risk | Annex III application areas | Full compliance framework: Art. 9–17 | HR screening tools, credit scoring, biometric ID |
| Limited Risk | Transparency risks only | Disclose AI nature, label deepfakes | Chatbots, generative AI content tools |
| Minimal Risk | No significant risk | None mandatory | Spam filters, AI-enabled games |

For UK organisations, the high-risk categories most commonly triggered are employment and HR tools, credit scoring and financial decision systems, biometric identification, critical infrastructure components, and education and vocational training applications.
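In practice, the first compliance artefact most organisations build is an inventory that pairs each system with its tier and the obligations that follow. A minimal sketch, assuming hypothetical system names (the tiers and obligations are from the table above; the helper itself is illustrative, not from the Act):

```python
# Illustrative sketch: a minimal AI-system inventory mapping each system
# to a risk tier and the obligations that tier carries. System names are
# hypothetical examples, not a statutory list.

RISK_TIERS = {
    "prohibited": "Cannot operate in the EU (Article 5)",
    "high": "Full compliance framework (Articles 9-17)",
    "limited": "Transparency duties: disclose AI nature, label deepfakes",
    "minimal": "No mandatory obligations",
}

# Common UK triggers mapped to tiers, following the table above.
inventory = [
    ("CV screening tool", "high"),         # Annex III: employment
    ("Credit scoring model", "high"),      # Annex III: essential services
    ("Customer support chatbot", "limited"),
    ("Spam filter", "minimal"),
]

for system, tier in inventory:
    print(f"{system}: {RISK_TIERS[tier]}")
```

Even a table this small makes the audit question answerable: which systems are high-risk, and what does each one owe.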

What High-Risk Compliance Actually Requires

For any UK organisation with high-risk AI systems operating in or affecting the EU, the obligations are substantial.

| Article | Requirement | What It Means in Practice |
|---|---|---|
| Article 9 | Risk management system | Continuous process, not a document. Updated when the system changes. |
| Article 10 | Data governance | Training data quality, bias examination, representativeness assessment |
| Article 11 + Annex IV | Technical documentation | System description, design logic, training methodology, performance metrics. Must be current and available on request. |
| Article 13 | Transparency | Instructions for use, capability and limitation disclosure to deployers |
| Article 14 | Human oversight | Humans must have tools, knowledge and authority to monitor, intervene and override outputs |
| Article 15 | Accuracy and robustness | Documented performance benchmarks, resilience to adversarial inputs |
| Article 17 | Quality management system | Covers full AI lifecycle from design through post-market monitoring |
| Article 72 | Post-market monitoring | Active data collection on real-world performance after deployment |

“On request” in Article 11 means tomorrow morning, not after a two-week document retrieval exercise.

The Authorised Representative Requirement

UK organisations placing high-risk AI systems on the EU market must designate an EU-established authorised representative. This is not optional.

The authorised representative carries legal responsibility for compliance within the EU, must be named in the technical documentation and on the EU Declaration of Conformity, and acts as the point of contact for market surveillance authorities.

UK organisations that ignore this requirement lose their legal standing to operate high-risk AI systems in the EU market. Talk to us to find out more about our authorised representative service.

EU AI Act Penalties

| Violation | Maximum Penalty |
|---|---|
| Article 5 prohibited practices | €35 million or 7% of global annual turnover |
| High-risk AI obligations | €15 million or 3% of global annual turnover |
| Incorrect information to authorities | €7.5 million or 1% of global annual turnover |

These penalties apply to annual global turnover, not EU revenue. A UK company deriving 10% of its revenue from the EU faces penalties calculated against 100% of its worldwide income.

The Brussels Effect in Practice

The GDPR established that EU data protection standards become the global floor for any organisation operating internationally. The AI Act follows the same mechanism.

UK organisations that build compliance programmes to the EU AI Act standard are simultaneously preparing for whatever domestic UK regulation eventually arrives. They also position themselves competitively for enterprise procurement processes that increasingly include AI governance due diligence, and reduce their exposure in EU markets ahead of the August 2026 enforcement date.

Raconteur reported in December 2025 that “UK leaders who have not begun preparations must do so or risk regulatory penalties and operational disruption.” (Raconteur, raconteur.net, December 2025)

The organisations that will handle 2026 well started in 2025. The organisations that wait until summer will be assembling technical documentation under regulatory pressure.

What UK Organisations Should Do About the EU AI Act Now

Start with a full inventory of AI systems that touch EU markets or EU individuals. Many organisations do not know how many AI systems they have in scope.

Map each system against Annex III. Employment tools, credit scoring systems, biometric applications and customer-facing AI in regulated sectors are the most common high-risk triggers for UK organisations.

For any high-risk system, assess the gap between current documentation and the Annex IV requirements. Technical documentation, risk management records and conformity assessment evidence take months to build properly.

Appoint an EU authorised representative if you do not have one.

Review whether any of your AI practices engage Article 5 prohibitions. These have been in force since February 2025.

The enforcement clock is running. The documentation, risk management and conformity assessment work the regulation requires cannot be produced in the weeks before an audit.

Here’s how eyreACT helps: our platform automates EU AI Act compliance for UK and EU organisations, from AI system classification to audit-ready documentation and continuous compliance monitoring. Learn more at our demo or join the next pilot cohort.

Frequently Asked Questions

Does the EU AI Act apply to UK companies after Brexit?

Yes, in many cases. The Act applies extraterritorially to any organisation whose AI systems are placed on the EU market, produce outputs used in the EU, or monitor the behaviour of EU individuals. Brexit removed the UK from the EU legal system but did not remove UK organisations from the Act’s scope where their AI systems have EU reach.

What is the EU AI Act enforcement date for UK businesses?

The key date is 2 August 2026, when the full high-risk AI framework becomes enforceable. Some obligations are already in force: Article 5 prohibitions applied from 2 February 2025, and GPAI model obligations from 2 August 2025. UK organisations should not wait until summer 2026 to begin.

Does the UK have its own AI Act?

No. As of March 2026, the UK does not have a dedicated AI statute. The government favours a sector-by-sector approach, with existing regulation from the ICO, FCA, CMA and Ofcom applying to AI in their respective domains. A comprehensive UK AI Bill is not expected before May 2026 at the earliest, and its introduction in 2026 remains uncertain.

What happens if a UK company ignores the EU AI Act?

Penalties for prohibited practice violations reach €35 million or 7% of global annual turnover. For high-risk AI violations, the ceiling is €15 million or 3% of global turnover. These figures are calculated against worldwide revenue, not EU revenue. Beyond financial penalties, non-compliant organisations can be barred from EU markets.

Do UK companies need an authorised representative under the EU AI Act?

Yes, if they place high-risk AI systems on the EU market. The authorised representative must be established in the EU, carries legal responsibility for compliance, and must be named in the EU Declaration of Conformity. Operating without one removes the organisation’s legal standing in the EU market.

Is an HR screening tool high-risk under the EU AI Act?

Yes. AI systems used in employment decisions, including recruitment screening, CV filtering, promotion assessment and performance monitoring, are classified as high-risk under Annex III. They are subject to the full compliance framework including risk management, technical documentation, human oversight and conformity assessment.

What is the difference between a provider and a deployer under the EU AI Act?

A provider develops or places an AI system on the market under their own name. A deployer uses an AI system in a professional context under their authority. A UK bank that builds its own credit scoring model is a provider. A UK bank that purchases and deploys a third-party credit scoring model is a deployer. Both have compliance obligations, though the provider bears the primary burden.

How does the EU AI Act relate to GDPR for UK organisations?

The two regimes overlap significantly for high-risk AI systems that process personal data. GDPR governs personal data processing. The AI Act governs the AI system itself, including its design, documentation, risk management and human oversight, regardless of whether personal data is involved. UK organisations subject to both must map requirements across both simultaneously. A DPIA under GDPR and a Fundamental Rights Impact Assessment under the AI Act are distinct documents with overlapping subject matter.

What is an Annex IV technical documentation package?

Annex IV specifies the mandatory contents of the technical documentation that high-risk AI system providers must maintain. It covers system description and purpose, design specifications, training data and methodology, performance metrics, risk management outputs, human oversight measures, post-market monitoring plan, and instructions for deployers. It must be kept current throughout the system lifecycle and produced to market surveillance authorities on request.

Can a UK company self-certify compliance with the EU AI Act?

For most high-risk AI systems, yes. Self-assessment against the applicable requirements, followed by an EU Declaration of Conformity and CE marking, is the standard conformity assessment route. Third-party assessment by a notified body is mandatory only for specific categories including AI systems used for real-time biometric identification and certain law enforcement applications.