Last updated: March 2026
A buddy of mine is VP of Engineering at a Series C SaaS company in Austin. They make an AI-powered workforce management platform — shift scheduling, performance analytics, task allocation. Last quarter, they signed their first European enterprise client. Big German logistics company. Everyone celebrated.
Two weeks later, their legal team called a meeting. The product that just landed them a trophy European deal? It’s a high-risk AI system under the EU AI Act. Employment and worker management — Annex III, category 4. Their AI decides who works what shift, flags underperforming employees, and recommends terminations. In Europe, that triggers the full compliance gauntlet: risk management, technical documentation, conformity assessment, human oversight, EU database registration, post-market monitoring.
His response: “We’re a Texas company. How can a European law apply to us?”
Same way GDPR applied to every American SaaS company that had European customers. Same legal mechanism. Same extraterritorial reach. Same rude awakening.
If you’re a US company and your AI system is used in the EU, produces outputs that affect people in the EU, or is placed on the EU market — even through a partner or reseller — the EU AI Act applies to you. Full stop.
Let me explain how this works in practice.
The Legal Trigger: EU AI Act Follows the System, Not the Company
Article 2 of the AI Act defines scope. Here’s what pulls you in:
| Trigger | What It Means for US Companies |
|---|---|
| You place an AI system on the EU market | You sell, license, or make your AI product available to EU customers — whether directly or through resellers, distributors, or app stores |
| You put an AI system into service in the EU | You deploy your AI for use within the EU — including internal tools used by EU-based employees of your own company |
| The output of your AI system is used in the EU | Even if your servers are in Virginia and your customers are in California, if the AI’s output affects an EU individual — a credit decision, a content recommendation, a hiring assessment — you’re in scope |
| You’re a provider of a GPAI model | If your foundation model is integrated into downstream AI systems that operate in the EU, GPAI obligations apply to you regardless of where you’re based |
Notice what’s missing from this list: any requirement to have a physical presence in Europe. Any requirement to directly target the EU market. Any requirement to have European customers. The trigger is the output being used in the EU — not the company’s intent or location.
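The scope logic above reduces to a disjunction: any one trigger is enough. Here is a minimal sketch of that logic — the field names (`placed_on_eu_market`, `output_used_in_eu`, and so on) are illustrative shorthand for the Article 2 triggers, not legal terms of art, and a real scope analysis needs counsel, not a boolean:

```python
from dataclasses import dataclass

# Hypothetical model of the Article 2 scope triggers summarised in the table above.
@dataclass
class AISystem:
    placed_on_eu_market: bool         # sold/licensed to EU customers, incl. via resellers
    put_into_service_in_eu: bool      # deployed for use in the EU, incl. internal tools
    output_used_in_eu: bool           # outputs affect individuals located in the EU
    is_gpai_with_eu_downstream: bool  # GPAI model integrated into EU-facing systems

def in_ai_act_scope(system: AISystem) -> bool:
    """Any single trigger suffices -- company location never appears here."""
    return any([
        system.placed_on_eu_market,
        system.put_into_service_in_eu,
        system.output_used_in_eu,
        system.is_gpai_with_eu_downstream,
    ])

# A US-only deployment whose output reaches an EU individual is still in scope.
us_tool = AISystem(False, False, True, False)
print(in_ai_act_scope(us_tool))  # True
```

Note what the function never checks: headquarters, server location, or intent to serve Europe.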
This is the same design principle as GDPR’s Article 3. And we all know how that played out.
The GDPR Playbook: You’ve Seen This Film Before
If your company went through GDPR compliance, the pattern is identical:
| GDPR (2018) | AI Act (2026) |
|---|---|
| Applied to any company processing EU personal data, regardless of location | Applies to any company whose AI systems or outputs are used in the EU, regardless of location |
| Required appointing an EU representative for non-EU companies | Requires appointing an EU authorised representative for non-EU providers of high-risk AI and GPAI |
| Created massive compliance programmes at US tech companies | Will create comparable compliance programmes for AI systems |
| Fines of up to €20M or 4% of global turnover | Fines of up to €35M or 7% of global turnover |
| Many US companies initially assumed it didn’t apply to them | Many US companies are currently assuming the same about the AI Act |
| It applied. They paid. | It will apply. They will pay. |
The difference: AI Act penalties are higher. Maximum 7% of global turnover for prohibited practices, versus GDPR’s 4%. The EU learned from GDPR that the fine ceiling needs to be high enough to make even the largest US tech companies pay attention.
Which US Companies Are Already in Scope
Let me walk through real scenarios, because the abstract legal language doesn’t capture how wide the net actually is.
SaaS Companies with EU Customers
Scenario: A San Francisco-based startup sells an AI recruitment platform. Three European companies use it to screen candidates.
AI Act status: The startup is a provider of a high-risk AI system (employment, Annex III category 4). It must comply with the full provider obligation set: risk management, technical documentation, data governance, human oversight design, conformity assessment, CE marking, EU database registration, and post-market monitoring. It must appoint an EU authorised representative.
The trap: Most SaaS companies don’t track whether their customers are European. Their AI doesn’t have a geographic boundary. The moment one EU-based company signs up and starts using the AI for a high-risk purpose, the provider is in scope — potentially without knowing it.
Foundation Model Providers
Scenario: A US company develops a large language model used by thousands of downstream developers worldwide, some of whom build applications for the EU market.
AI Act status: The company is a GPAI provider. Since August 2025, it must maintain technical documentation, publish training data summaries, comply with EU copyright law, and provide downstream providers with integration information. If the model is classified as systemic risk, additional obligations apply: adversarial testing, incident reporting, cybersecurity protections.
The scale: OpenAI, Anthropic, Google, Meta, Amazon — all GPAI providers subject to these obligations. Most have signed the voluntary AI Pact. Meta notably refused, stating it would focus on compliance directly rather than through the voluntary framework.
US Companies with EU Subsidiaries or Offices
Scenario: A New York-based financial services firm operates a London and Frankfurt office. Both offices use AI tools developed by the US parent for credit risk assessment.
AI Act status: The EU offices are deployers of high-risk AI systems (credit scoring, Annex III category 5b). The US parent, as the entity that developed the system and made it available under its brand, is the provider. Both have obligations — the parent for provider compliance, the EU offices for deployer compliance.
The complication: Post-Brexit, the UK is not subject to the EU AI Act. But the Frankfurt office is. The same AI system may need AI Act compliance for its German deployment but not for its London use. Managing this inconsistency across offices is a real operational challenge.
Cloud and Infrastructure Providers
Scenario: A US cloud company offers AI services — speech-to-text, sentiment analysis, image recognition — via API to global customers, including EU businesses.
AI Act status: If these AI services are used for high-risk purposes by EU deployers, the cloud provider may be considered a provider placing AI systems on the EU market. The classification depends on whether the cloud provider offers a general-purpose tool (which the deployer configures for a specific use) or a purpose-built AI system ready for a specific high-risk application.
The grey zone: A general-purpose sentiment analysis API isn’t inherently high-risk. But if an EU insurer uses it to assess customer emotions during claims calls, the use case may be high-risk. The cloud provider’s liability depends on how much they knew about the downstream use and whether their marketing or documentation encouraged it.
US Companies Selling Through EU Distributors
Scenario: A US medtech company develops an AI diagnostic tool and sells it through a German distributor to European hospitals.
AI Act status: The US company is the provider. The German distributor is the distributor (with its own obligations to verify CE marking and compliance). The hospitals are deployers. The US company must complete a conformity assessment under both the AI Act and the Medical Devices Regulation, appoint an EU authorised representative, and register in the EU database.
US Companies with No EU Presence or Intent
Scenario: A Boston-based analytics company builds an AI tool for the US healthcare market. They have no European customers and don’t market to Europe. But a US hospital client uses the tool to assess treatment options for a patient who is an EU citizen visiting the US.
AI Act status: This is the outer edge of scope. The AI Act applies when outputs are “used in the Union” — not when they affect EU citizens outside the EU. A tool used entirely within the US on a patient physically present in the US likely falls outside scope, even if the patient is an EU citizen. But if the tool’s output is transmitted to an EU-based healthcare provider for follow-up care, scope questions arise.
The lesson: Scope depends on where the AI system operates and where its outputs are consumed, not on the nationality of the affected person.
The Authorised Representative Requirement
Non-EU providers of high-risk AI systems and GPAI models must appoint an authorised representative (AR) within the EU before placing their systems on the market. This isn’t optional — without an AR, you cannot legally offer your high-risk AI product in Europe.
| What the AR Does | What the AR Does NOT Do |
|---|---|
| Acts as your regulatory contact point in the EU | Replace your provider obligations — you remain fully responsible |
| Maintains a copy of your technical documentation | Conduct your conformity assessment or write your documentation |
| Cooperates with market surveillance authorities on your behalf | Assume product liability for defects or failures |
| Provides information to the AI Office on request | Shield you from penalties for non-compliance |
| Can terminate the mandate if you fail to comply | Operate your AI system or manage your EU customer relationships |
A market for AI Act AR services is emerging rapidly. Annual costs range from €5,000 to €80,000+, depending on the number and risk classification of your AI systems.
We at EYREACT offer Authorised Representative services — talk to us to learn more.
The US Regulatory Contrast: EU AI Act vs US AI Regulation
Part of why American companies underestimate the AI Act is that nothing comparable exists at the federal level in the US.
| Dimension | EU AI Act | US Federal AI Regulation (as of March 2026) |
|---|---|---|
| Comprehensive AI law | Yes — horizontal regulation covering all sectors | No federal AI law. Executive orders and voluntary frameworks only. |
| Risk classification | Four tiers with escalating obligations | No mandatory classification system |
| Pre-market conformity assessment | Required for high-risk systems | No equivalent requirement |
| Mandatory registration | EU database for high-risk systems | No mandatory AI registry |
| Penalties | Up to €35M or 7% of global turnover | No federal AI-specific penalties |
| Extraterritorial reach | Yes — applies to non-EU companies | N/A |
| State-level AI laws | N/A | Growing patchwork: Colorado AI Act, California automated decision-making rules, NYC Local Law 144 (hiring), Illinois BIPA |
The irony: US companies complying with the EU AI Act will likely find themselves well-positioned for future US federal regulation when it eventually arrives. Colorado’s AI Act and California’s proposed rules already mirror elements of the EU’s risk-based approach.
Compliance with the stricter EU standard goes a long way toward satisfying most emerging US state requirements.
Industry Impact: Where US Companies Get Hit by EU AI Act the Hardest
| Industry | Why It’s Affected | Risk Level |
|---|---|---|
| Enterprise SaaS (HR, Finance, Legal) | Most B2B SaaS tools touching employment, credit, or legal decisions are high-risk once used by EU customers | High |
| Foundation models / GenAI | GPAI obligations apply regardless of downstream use; systemic risk classification for the largest models | High (GPAI-specific) |
| FinTech | Credit scoring, lending, insurance pricing — all explicitly Annex III high-risk | High |
| HealthTech / MedTech | AI diagnostics and clinical decision support — high-risk under both AI Act and MDR | High (dual regulation) |
| EdTech | Grading, admissions, adaptive learning — Annex III high-risk when affecting educational outcomes | High |
| AdTech / MarTech | Generally limited or minimal risk — but personalisation that exploits vulnerabilities or uses subliminal manipulation crosses into prohibited territory | Low to Prohibited |
| Gaming | Generally minimal risk — unless AI-driven mechanics exploit vulnerable users (children, compulsive spending patterns) | Minimal to Prohibited |
| Cybersecurity | AI threat detection generally not high-risk — unless used in law enforcement contexts | Varies |
| Cloud / Infrastructure | Risk depends on whether you’re offering general compute or purpose-built AI services for high-risk use cases | Varies |
What US Companies Should Do Right Now
| Action | Timeline | Why It Can’t Wait |
|---|---|---|
| Audit your AI portfolio for EU exposure | Immediately | You may already be in scope and not know it. Map every AI system against EU customer base and output destination. |
| Classify each system by risk tier | By Q2 2026 | Your obligations — and penalties — depend entirely on risk classification. |
| Appoint an EU authorised representative | Before placing high-risk systems on EU market | Without an AR, you cannot legally sell high-risk AI in Europe. |
| Begin technical documentation | Now | Annex IV documentation is extensive. Retrofitting it onto existing systems takes 6–12 months. |
| Update vendor contracts | By Q2 2026 | EU customers will demand AI Act compliance warranties. Your contracts need to reflect your provider obligations. |
| Build human oversight into your product | By August 2026 | European deployers need to be able to oversee, interpret, and override your AI. This is a product feature, not a legal clause. |
| Choose your conformity assessment route | By Q2 2026 | Self-assessment suffices for most Annex III systems; third-party assessment is required for biometric identification and product-embedded AI. |
| Budget for compliance | Now | Compliance costs for high-risk systems range from $450K–$1.2M for mid-stage companies. Plan accordingly. |
| Monitor the Digital Omnibus | Ongoing | Deadlines may shift to December 2027 for Annex III — but don’t bank on it. |
| Treat EU compliance as your global floor | Strategic decision | Aligning to the EU standard prepares you for Colorado, California, and eventual federal regulation. One standard to rule them all. |
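The first two actions — auditing the portfolio and classifying by risk tier — are often easiest to start as a simple inventory pass. A hedged sketch, where the system records, purpose labels, and keyword list are all illustrative (Annex III defines the actual high-risk areas, and classification ultimately needs legal review):

```python
# Illustrative AI portfolio inventory -- names and fields are hypothetical.
PORTFOLIO = [
    {"name": "shift-scheduler", "purpose": "employment",     "eu_customers": True},
    {"name": "credit-scorer",   "purpose": "credit_scoring", "eu_customers": True},
    {"name": "chat-summariser", "purpose": "productivity",   "eu_customers": False},
]

# A few Annex III areas relevant to these examples (non-exhaustive shorthand).
HIGH_RISK_PURPOSES = {"employment", "credit_scoring", "education", "biometrics"}

def classify(system: dict) -> str:
    """First-pass risk tier; anything not obviously high-risk still needs review."""
    if system["purpose"] in HIGH_RISK_PURPOSES:
        return "high-risk"
    return "minimal-or-limited"

def needs_action(system: dict) -> bool:
    # EU exposure + high-risk purpose => full provider obligations and an EU AR.
    return system["eu_customers"] and classify(system) == "high-risk"

for s in PORTFOLIO:
    print(s["name"], classify(s), "ACTION" if needs_action(s) else "monitor")
```

The point of the exercise isn’t the code — it’s forcing every AI system through the same two questions: what is it used for, and does anyone in the EU touch it or its output?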
Best Practices for EU AI Act Compliance for US Companies
| Practice | Why It Matters |
|---|---|
| Don’t assume “US-only” means safe | If any EU customer uses your product, or any EU individual is affected by your AI’s output, you may be in scope. Audit proactively. |
| Use GDPR compliance as your template | If you went through GDPR, you know the drill. Apply the same cross-functional approach — legal, engineering, product, compliance — to the AI Act. |
| Appoint the AR early | AR engagement takes time — due diligence, mandate negotiation, documentation handover. Start now, not three weeks before the deadline. |
| Design for the EU from the start | Building human oversight, explainability, and documentation into your product architecture is cheaper than retrofitting. If you have any European ambitions, build for the AI Act from day one. |
| Align US and EU compliance teams | Don’t run separate compliance programmes for Colorado, California, and the EU. Build one framework at the EU level and adapt downward. |
| Talk to your EU customers | They have deployer obligations that depend on your provider documentation. If you can’t provide what they need, they’ll switch to a provider who can. This is a competitive issue, not just a legal one. |
How EYREACT Can Help
EYREACT is built for the reality that AI regulation crosses borders. Our platform helps US companies map their EU exposure, classify their AI systems, track provider obligations, and manage the compliance evidence that European regulators and EU customers will demand.
Whether you’re a SaaS company with European customers, a foundation model provider with global reach, or a US enterprise with EU subsidiaries — EYREACT gives you the compliance infrastructure to operate in Europe with confidence.
EU-hosted. EU jurisdiction. EU Authorised Representative model. No CLOUD Act conflict. Book a demo!
FAQ
We’re a US company with no EU office. Does the AI Act still apply?
Yes, if your AI system is placed on the EU market (including through resellers or distributors), put into service in the EU, or produces outputs used within the EU. Physical presence is irrelevant. The scope trigger is the system’s connection to the EU, not your company’s location.
We don’t specifically target EU customers. Can we still be caught?
Yes. The AI Act doesn’t require you to “target” the EU market. If your AI system’s output is used in the EU — even by a customer you didn’t expect to be European — you’re in scope. This is arguably broader than GDPR, whose Article 3(2) at least requires that you “offer goods or services to” or “monitor the behaviour of” individuals in the EU.
What if we just block EU users?
Technically possible, but commercially painful. You’d need to ensure no EU-based customer or user can access your AI system, and that no output of your system reaches the EU. For SaaS companies with global customer bases, this means geofencing, contractual restrictions, and enforcement mechanisms. Most companies find compliance cheaper than market exit.
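Mechanically, the geofencing half of that answer is a country-code gate. A minimal sketch — country resolution (GeoIP lookup, billing address, contract terms) is assumed to happen upstream, and note that IP blocking alone doesn’t address the “output used in the EU” trigger:

```python
# ISO 3166-1 alpha-2 codes for the 27 EU member states.
EU_MEMBER_STATES = frozenset({
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE",
})

def allow_request(country_code: str) -> bool:
    """Reject traffic resolved to an EU member state; everything else passes."""
    return country_code.upper() not in EU_MEMBER_STATES

print(allow_request("US"))  # True
print(allow_request("DE"))  # False
```

Even done perfectly, this only handles direct access — it does nothing about a US customer forwarding your AI’s output to their Frankfurt office, which is exactly why most companies conclude compliance is cheaper than market exit.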
Do we need an authorised representative?
If you’re a non-EU provider placing high-risk AI systems or GPAI models on the EU market, yes. You must appoint an AR in the EU by written mandate before market placement. The AR must be established in one of the EU member states where your AI system is available.
How much does compliance cost for a US company?
Estimates for mid-stage SaaS companies range from $450K–$1.2M for initial compliance, covering engineering time, legal review, documentation, conformity assessment, and AR appointment. Ongoing costs (monitoring, updates, incident reporting) add $200K–$500K annually. These numbers vary significantly by system complexity and number of high-risk systems.
Can we use our SOC 2 or ISO 27001 compliance as a shortcut?
Not directly — these are information security frameworks, not AI governance frameworks. However, your existing security controls, documentation practices, audit capabilities, and vendor management processes provide a solid operational foundation. ISO 42001 (AI management systems) is more directly relevant and may help demonstrate compliance once harmonised standards are published.
What’s the penalty for non-compliance by a US company?
The same as for any company: up to €35M or 7% of global turnover for prohibited practices, up to €15M or 3% for high-risk non-compliance, up to €7.5M or 1% for incorrect information. Enforcement against non-EU companies is exercised through the authorised representative, EU-based distributors or importers, and potentially through cooperation with US authorities under future bilateral agreements.
We signed the voluntary AI Pact. Does that count?
The AI Pact signals intent and may give you a “rebuttable presumption of conformity” during the voluntary phase. But it does not replace mandatory compliance obligations. When enforcement begins, the Pact is a starting point — not a finish line.
Our EU customers are asking for compliance documentation. What should we provide?
At minimum: technical documentation per Annex IV, instructions for use per Article 13, information about your conformity assessment, CE marking status, and EU database registration. Your deployer customers need this to meet their own obligations. If you can’t provide it, they face a compliance gap — and they’ll look for a provider who can.
Will there be a US equivalent of the AI Act?
Not yet at the federal level. But the trend is clear: Colorado’s AI Act (effective 2026), California’s proposed AI regulation, NYC’s Local Law 144, and other state-level initiatives are moving toward risk-based frameworks that mirror elements of the EU approach. The EU AI Act is effectively setting the global floor, just as GDPR did for privacy. Compliance now future-proofs your company.
This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.