Last updated: March 2026
I had dinner last month with an old friend who runs AI product at a mid-size US enterprise software company. They have zero European customers. Zero European employees. Zero European revenue. They sell exclusively to the North American market.
“So the EU AI Act,” he said. “That’s a Europe thing, right? Doesn’t affect us.”
I asked him one question: “Do any of your customers have European subsidiaries, European clients, or European users?”
He went quiet. Then he said: “Probably most of them.”
That’s the Brussels Effect in action. The EU didn’t just write a law for Europe. It wrote a law that reshapes how AI is built, sold, and governed everywhere — because if you want access to 450 million consumers and the world’s largest single market, you comply. And once you’ve built your product to EU standards, you don’t build a separate, lower-standard version for everyone else. You ship the compliant version globally.
This is not theory. This is what happened with GDPR. It’s happening again with the EU AI Act. And if you’re building AI anywhere in the world, you need to understand how it works.
What Is the Brussels Effect?
The term was coined by Columbia Law School professor Anu Bradford in a 2012 article and her 2020 book, *The Brussels Effect: How the European Union Rules the World*. It describes the EU’s ability to set global standards through unilateral regulation — not through trade negotiations or diplomatic pressure, but simply by being a market so large and wealthy that companies voluntarily adopt EU rules worldwide rather than maintain different product versions for different jurisdictions.
The Brussels Effect operates through two mechanisms:
| Mechanism | How It Works | AI Act Example |
|---|---|---|
| De facto Brussels Effect | Companies adopt EU standards globally because maintaining separate product versions is more expensive than universal compliance | A US AI recruitment platform builds human oversight, documentation, and bias testing for the EU — then ships the same version to all customers worldwide because it’s cheaper than maintaining two codebases |
| De jure Brussels Effect | Other governments adopt legislation modelled on EU regulation because the EU set the precedent and framework | Colorado’s AI Act (2026), Brazil’s AI Bill, Canada’s AIDA, Singapore’s governance frameworks, and Japan’s AI guidelines all borrow structural elements from the EU AI Act |
The Brussels Effect works when five conditions are met:
- Market size (the EU is massive)
- Regulatory capacity (the EU can actually enforce)
- Preference for strict rules (the EU favours precaution over permissiveness)
- Inelastic targets (companies can’t easily exit the market)
- Non-divisibility (it’s hard to make an “EU version” and a “rest of world version” of software)
The AI Act meets all five.
The GDPR Playbook: We’ve Seen This Before
The EU AI Act’s global reach isn’t speculation. It’s repetition. The GDPR created the exact same pattern — and the numbers tell the story:
| GDPR Brussels Effect | What Happened |
|---|---|
| Global corporate adoption | Every major US tech company (Google, Apple, Meta, Microsoft, Amazon) implemented GDPR-standard privacy controls globally — not just for EU users |
| Cookie consent worldwide | You see cookie banners on websites in Kansas because of a European law |
| De jure spread | Brazil (LGPD), Japan, South Korea, Thailand, India, South Africa, and numerous US states adopted GDPR-influenced data protection laws |
| California’s CCPA/CPRA | California’s privacy laws borrowed heavily from GDPR concepts — lawful basis, consumer rights, data minimisation |
| Corporate infrastructure | DPO roles, privacy-by-design, data protection impact assessments became global corporate standard practice |
| Global fines | EU regulators issued billions in fines to US tech companies — proving extraterritorial enforcement works |
The AI Act is following the same trajectory, but faster — because the compliance infrastructure companies built for GDPR (legal teams, vendor management, impact assessments, documentation practices) already exists. The AI Act plugs into it.
How the AI Act Brussels Effect Is Already Playing Out
We’re only in early 2026 and the pattern is already visible:
De Facto: Companies Adopting EU Standards Globally
| Company | What They’re Doing |
|---|---|
| Microsoft | Updated Purview platform to automate AI Act conformity assessments globally. Positioning Azure as “safest” cloud for regulated AI. Applying EU-standard documentation practices to all enterprise AI products. |
| Google | Signed GPAI Code of Practice. Integrated “Responsible AI Transparency Reports” into Google Cloud console globally. |
| OpenAI | Signed GPAI Code of Practice. Building technical documentation, training data transparency, and safety evaluation infrastructure that applies to all models globally. |
| Anthropic | Signed GPAI Code of Practice. Safety evaluation and red-teaming practices align with AI Act systemic risk requirements globally. |
| Amazon | Signed GPAI Code of Practice. AWS AI governance tools incorporating EU-standard risk assessment globally. |
| Meta | Refused to sign Code of Practice. Facing formal EU investigation into WhatsApp Business APIs. Demonstrates the cost of non-compliance. |
The key insight: none of these companies are building “EU-only” compliance features. They’re building them into their global products. That’s de facto Brussels Effect — the EU standard becomes the global standard not through diplomacy but through product economics.
De Jure: Governments Adopting EU-Influenced AI Regulation
| Jurisdiction | Legislation | EU AI Act Influence |
|---|---|---|
| Colorado, USA | Colorado AI Act (effective 2026) | Risk-based classification, deployer obligations, impact assessments — directly mirrors EU approach |
| California, USA | Proposed AI regulation | Risk tiers, transparency requirements, discrimination protections echoing the AI Act |
| New York City | Local Law 144 (automated hiring) | AI in employment — same high-risk category as AI Act Annex III |
| Brazil | AI Bill (under debate) | Risk-based framework, prohibited practices, transparency obligations structurally similar to AI Act |
| Canada | AIDA (Artificial Intelligence and Data Act) | Risk classification, high-impact AI systems, transparency duties |
| Singapore | Model AI Governance Frameworks | Voluntary but increasingly aligned with EU principles — AI Verify testing maps to AI Act conformity concepts |
| Japan | AI Guidelines + GPAI participation | Collaborating with EU on AI safety testing standards through International Network of AI Safety Institutes |
| South Korea | AI Basic Act (2024) | Risk-based approach with EU-influenced classification |
| Vietnam | AI Law (effective March 2026) | Risk-based classification — first binding AI law in Southeast Asia, references EU approach |
| UK | Pro-innovation AI regulation | Different approach (sector-specific, not horizontal) but monitoring EU AI Act outcomes closely |
| China | Multiple AI regulations | Different governance model but converging on similar risk areas — generative AI, deepfakes, algorithmic recommendation |
The US is particularly telling: while there’s no federal AI law, the state-level pattern mirrors early GDPR adoption. California led with CCPA after GDPR. Now Colorado leads with AI regulation after the AI Act.
The patchwork is forming and the EU template is the reference architecture.
Why the Brussels Effect Works for AI
| Factor | Why It Applies to AI |
|---|---|
| Market size | The EU is the world’s largest unified single market: 450 million consumers, roughly €16 trillion in GDP. No AI company can afford to exit it. |
| Software non-divisibility | AI models and AI-powered SaaS products are fundamentally global. Building a “Europe version” with human oversight and an “everywhere else version” without it is technically possible but economically absurd. You ship one version. |
| Supply chain cascade | The AI Act applies to providers, deployers, importers, and distributors. If one link in your value chain touches the EU, compliance requirements cascade through the entire chain. A US company selling to a European deployer inherits provider obligations. |
| Extraterritorial design | The AI Act explicitly applies to non-EU companies whose AI outputs are used in the EU. Same design as GDPR Article 3. |
| First-mover advantage | The EU is the first jurisdiction with comprehensive binding AI legislation. Other countries are writing their laws in a world where the EU standard already exists. It’s easier to align than to start from scratch. |
| Corporate compliance infrastructure | Companies that built GDPR compliance programmes now have legal teams, vendor management frameworks, and impact assessment processes that naturally extend to AI Act compliance. The marginal cost of global adoption is lower than the cost of maintaining regional variations. |
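The non-divisibility argument in the table above can be made concrete with a toy sketch (all names and feature flags here are hypothetical, not drawn from any real product): per-jurisdiction variants multiply the configurations you must build, test, and support, while a single build at the strictest standard satisfies every regime at once.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceConfig:
    """Hypothetical AI governance switches a product might ship with."""
    human_oversight: bool
    decision_logging: bool
    bias_testing: bool

# Option A: per-jurisdiction variants. Each new regime adds another
# configuration to build, test, document, and support.
regional_variants = {
    "EU": GovernanceConfig(True, True, True),
    "US": GovernanceConfig(False, True, False),
    "BR": GovernanceConfig(True, False, True),
}

# Option B: one global build at the strictest standard -- the union
# of every jurisdiction's requirements.
global_build = GovernanceConfig(
    human_oversight=any(c.human_oversight for c in regional_variants.values()),
    decision_logging=any(c.decision_logging for c in regional_variants.values()),
    bias_testing=any(c.bias_testing for c in regional_variants.values()),
)

# The single global build meets or exceeds every regional requirement,
# so maintenance cost stays flat as new jurisdictions regulate --
# the de facto Brussels Effect in product-economics terms.
assert global_build == GovernanceConfig(True, True, True)
```

In this sketch the strictest-standard build happens to equal the EU configuration, which is exactly the point: once the EU version exists, shipping it everywhere is the cheapest option.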
What the EU AI Act’s Brussels Effect Means for Your Company
The Brussels Effect has a practical implication that most compliance guides miss: even if you’re not directly subject to the EU AI Act, you will increasingly be expected to meet its standards.
Here’s why:
| Scenario | How the Brussels Effect Reaches You |
|---|---|
| Your enterprise customers have EU operations | They’ll require AI Act compliance in their vendor contracts — regardless of where you’re based |
| Your competitors comply | If your competitor offers AI Act-compliant products and you don’t, EU-conscious buyers choose them. Compliance becomes a competitive advantage, then a baseline expectation. |
| Your investors ask about it | VCs and PE firms are increasingly including AI governance in due diligence. “Are you AI Act compliant?” is the new “Are you GDPR compliant?” |
| Your insurance requires it | Cyber and professional liability insurers are beginning to factor AI Act compliance into underwriting — non-compliance = higher premiums or exclusions |
| Your country adopts similar rules | Colorado, California, Brazil, Canada — the laws are coming to your jurisdiction too, modelled on the EU. Complying now means you’re prepared. |
| Your talent expects it | AI engineers increasingly prefer working at companies with responsible AI practices. The AI Act is becoming shorthand for “we take AI governance seriously.” |
Industry-Specific Brussels Effect of EU AI Act
Banking & Financial Services
The Brussels Effect is strongest in finance because banks already operate under extensive cross-border regulation (Basel, MiFID II, DORA). European banking regulators are incorporating AI Act requirements into supervisory expectations. Non-EU banks with European operations will apply the same AI governance standards globally — maintaining two standards for credit scoring models (one EU-compliant, one not) is operationally untenable.
Healthcare & MedTech
Medical AI faces a dual Brussels Effect: the AI Act and the Medical Devices Regulation. Any medtech company seeking CE marking for AI diagnostics must comply with both. And because regulators in many other markets reference EU conformity assessment outcomes, building to the EU standard eases global market access. The EU standard becomes the product specification.
Recruitment & HR
Employment AI is where the Brussels Effect hits hardest for US companies. EU clients demand AI Act-compliant recruitment tools. Colorado’s AI Act mirrors the EU’s high-risk classification for employment AI. NYC Local Law 144 targets automated hiring. The direction is clear: human oversight and bias testing in recruitment AI will be required everywhere. Companies building it now for the EU won’t need to rebuild for the US.
Foundation Models / GenAI
The GPAI Code of Practice is the clearest Brussels Effect mechanism in action. Twenty-six global providers signed voluntarily — accepting EU-defined transparency, copyright, and safety standards. These practices will be embedded in their products globally. When OpenAI publishes training data summaries for the EU, that transparency doesn’t disappear for US users. When Anthropic conducts adversarial testing per EU requirements, those safety practices apply to all models.
EdTech
Educational AI classified as high-risk under the AI Act is triggering global product redesigns. EdTech platforms can’t practically maintain an “EU version” with human oversight and a “US version” without it. The transparent, explainable, human-supervised version becomes the product — everywhere.
Automotive & Manufacturing
Vehicle AI and industrial robotics already operate under global safety standards. The AI Act adds an AI-specific layer to CE marking and type approval. Automotive manufacturers building to EU standards will ship those standards globally because vehicles and machinery are designed once and sold worldwide.
The Counter-Arguments (And Why They’re Partially Right)
Not everyone agrees the Brussels Effect will fully materialise for AI. The honest picture:
| Argument Against | Validity |
|---|---|
| “AI is different from data — you can build regional versions” | Partially true for deployment context (you can restrict which features are available where), but false for foundation models (you don’t train two versions of GPT). Overall: the de facto effect is strong for models, weaker for applications. |
| “The US will go its own way” | True at the federal level — no comprehensive US AI law is imminent. But state-level regulation is converging with EU principles. And US companies serving global markets will comply with the EU anyway. |
| “China’s approach is fundamentally different” | True — China regulates AI through sector-specific rules and party-state governance, not horizontal legislation. But even China is converging on similar risk areas (generative AI, deepfakes, algorithmic recommendation). The overlap is larger than the difference. |
| “The Digital Omnibus weakens the AI Act” | Partially true — deadline extensions and relaxed requirements reduce compliance urgency. But the core framework (risk tiers, prohibited practices, GPAI obligations) is unchanged. The structure persists even if timelines shift. |
| “Compliance costs will drive innovation out of Europe” | A real risk. Goldman Sachs estimates Chinese AI providers will invest $70 billion in data centres in 2026. The EU’s AI Continent Action Plan and AI Factories are a response, but the competitiveness gap is genuine. The Brussels Effect works for regulation — whether it works for innovation is an open question. |
How EYREACT Can Help
The Brussels Effect means AI Act compliance isn’t just a European obligation — it’s becoming the global baseline. EYREACT is built for companies navigating this reality, whether you’re based in Berlin, Boston, or Bangalore.
Our platform automates risk classification, evidence management, and audit-ready documentation to the EU standard — which is rapidly becoming the world standard.
Built at the source of EU AI law. Ready for wherever the Brussels Effect takes it. Book a demo to find out how you benefit.
FAQ
What is the Brussels Effect?
The tendency of EU regulation to become the global standard — not through diplomacy or trade agreements, but because the EU market is so large that companies adopt EU rules worldwide rather than maintaining different product versions for different jurisdictions. Coined by Columbia Law professor Anu Bradford.
Is the Brussels Effect happening with the AI Act?
Yes. Both de facto (companies adopting EU standards globally) and de jure (other jurisdictions modelling legislation on the AI Act) effects are visible as of early 2026. Major AI providers have signed the GPAI Code of Practice and are implementing EU-standard governance globally. Colorado, Brazil, Canada, Singapore, and others are adopting EU-influenced AI regulation.
Does this mean every company worldwide needs to comply with the AI Act?
Not directly. The AI Act legally applies only when AI systems or outputs are used in the EU. But the Brussels Effect means that EU standards increasingly become baseline expectations in procurement, investment, insurance, and competitive positioning — even in markets where the AI Act doesn’t legally apply.
How is this different from GDPR’s Brussels Effect?
The mechanism is identical but the speed is faster. Companies already have GDPR compliance infrastructure (legal teams, impact assessments, vendor management) that extends naturally to AI Act compliance. The marginal cost of global adoption is lower than it was for GDPR, making the de facto effect potentially stronger.
Will the US adopt a federal AI law similar to the AI Act?
Not in the near term. The US approach favours sector-specific rules and voluntary frameworks over comprehensive horizontal regulation. However, state-level AI laws (Colorado, California, NYC) are converging with EU principles. The federal vacuum may persist, but the state-level patchwork is increasingly EU-influenced.
Does the Digital Omnibus weaken the Brussels Effect?
The Digital Omnibus delays certain deadlines and relaxes specific requirements, but doesn’t change the AI Act’s fundamental structure. The risk tiers, prohibited practices, GPAI obligations, and extraterritorial reach remain intact. The Brussels Effect depends on the framework’s existence and market leverage, not on individual deadline dates.
How should companies outside the EU respond?
Treat the EU AI Act as your global compliance floor. Build to EU standards and you’ll satisfy most emerging AI regulations worldwide. This is cheaper than building to the minimum in each jurisdiction and upgrading later. The companies that learned this from GDPR are already doing it with the AI Act.
Can companies avoid the Brussels Effect by not selling to the EU?
Theoretically yes, but practically very few companies can afford to abandon a 450-million-consumer market. And even if you don’t sell directly to the EU, your customers, partners, or investors may have EU exposure that cascades compliance expectations to you.
This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.