AI Agents and the EU AI Act: What’s Changing?

The European Commission just published preliminary guidance on how the EU AI Act applies to AI agents. If you are building with autonomous AI — systems that browse, decide, call APIs, or coordinate other AI tools — this is directly relevant to your business, and the compliance clock is already running.

Here is what the guidance says, what it does not say, and what you should do about it.

What is an AI agent in EU AI Act context?

The EU AI Act does not define “AI agent” as a separate legal category. That is not an oversight but a deliberate choice: rather than legislate for a technology still in flux, the European Commission has confirmed that the Act’s existing definitions already capture AI agents in full.

An AI agent — any system that receives input, processes it, and takes actions in the world — falls under the Act’s definition of an AI system under Article 3(1). If it runs on a foundation model like GPT-4 or Claude, that model is also covered under the GPAI model rules.

In practical terms: if you are deploying an AI agent today, you are already operating within a regulated framework. There is no grace period waiting for agent-specific rules to arrive.

Three EU AI Act obligations that apply right now

The guidance identifies three layers of compliance that activate depending on how your agent works.

First: the prohibitions. Article 5 bans certain AI behaviours outright — harmful manipulation, exploitation of vulnerabilities, social scoring. These apply to every AI system, immediately, with no phase-in date. If your agent interacts with customers, makes recommendations, or influences decisions, your design choices need to account for this now.

Second: high-risk requirements from August 2026. If your agent operates in a high-risk sector, it will be subject to the Act’s full Chapter III obligations. That means conformity assessments, technical documentation, human oversight mechanisms, and registration. August 2026 is closer than it looks for companies that have not started.

Third: transparency rules under Article 50. If your agent interacts with people or generates content — which most commercial agents do — transparency obligations apply. Users must know they are interacting with an AI. A Code of Practice to operationalise the detail is still being drafted, which means there is a window right now to get ahead of it.

What triggers what: A quick reference

| Agent characteristic | Obligation triggered | Active from |
|---|---|---|
| Any autonomous action or decision | Article 5 prohibitions | Now |
| Operates in a high-risk sector (HR, credit, health, education, infrastructure) | Chapter III full requirements | 2 August 2026 |
| Interacts with natural persons or generates content | Article 50 transparency | Now (CoP in development) |
| Built on a foundation model with systemic risk designation | GPAI model obligations on the underlying provider | Now |
| Multi-agent pipeline with autonomous tool-calling | Orchestrator-level accountability considerations | Now (guidance preliminary) |
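For teams that track this internally, the trigger table above can be sketched as a simple lookup. A minimal, illustrative sketch: the trait keys and labels are assumptions chosen for readability, not statutory terms, and this is a planning aid, not legal advice.

```python
# Illustrative mapping of agent characteristics to EU AI Act obligations,
# mirroring the quick-reference table above. Keys are informal shorthand.
OBLIGATION_TRIGGERS = {
    "autonomous_action":      ("Article 5 prohibitions", "now"),
    "high_risk_sector":       ("Chapter III full requirements", "2 August 2026"),
    "interacts_or_generates": ("Article 50 transparency", "now (CoP in development)"),
    "systemic_risk_model":    ("GPAI model obligations (underlying provider)", "now"),
    "multi_agent_pipeline":   ("Orchestrator-level accountability", "now (guidance preliminary)"),
}

def obligations_for(agent_traits):
    """Return (obligation, active_from) pairs for the traits an agent exhibits."""
    return [OBLIGATION_TRIGGERS[t] for t in agent_traits if t in OBLIGATION_TRIGGERS]
```

A customer-facing agent with autonomous tool use would, for example, surface both the Article 5 and Article 50 entries from a call like `obligations_for(["autonomous_action", "interacts_or_generates"])`.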

Who is responsible for EU AI Act compliance

The Act was written with a clean two-party model in mind: a provider builds, a deployer uses. Agentic systems routinely involve four or more parties in a single pipeline. The guidance does not fully resolve this, but it does confirm that obligations follow function, not just contract.

| Role | Who this typically is | Core obligations |
|---|---|---|
| Provider | The company that develops the agent and places it on the market | Conformity assessment, technical documentation, CE marking, registration |
| Deployer | The company integrating the agent into its product or service | Risk management in context, human oversight, staff training, incident logging |
| Importer | EU-based entity placing on the market an agent that bears the name of a non-EU provider | Verify provider compliance before placing on the market |
| Distributor | Reseller or marketplace | Must not make non-compliant agents available; limited obligations if the agent is unmodified |
| Orchestrator* | Any entity running multi-agent pipelines | Article 5, Article 50, pipeline-level accountability; obligations span provider and deployer duties |

*Not a statutory term. An operational category that reflects how agentic pipelines actually work and where the guidance is pointing.

High-risk EU AI Act classification: Does this apply to you?

High-risk classification under Annex III of the Act covers specific use cases, not entire industries. An AI agent used in one of the following contexts will almost certainly qualify:

| Sector | Examples of in-scope use cases |
|---|---|
| Employment | CV screening, interview scheduling, performance assessment |
| Credit and finance | Loan decisions, creditworthiness scoring, insurance underwriting |
| Education | Admissions, exam assessment, learning progress monitoring |
| Healthcare | Diagnostic support, treatment recommendation, triage |
| Critical infrastructure | Energy grid management, water systems, transport |
| Law enforcement | Risk profiling, evidence evaluation (significant restrictions apply) |
| Border control | Travel document verification, risk assessment |
| Administration of justice | Case outcome prediction, legal research tools used in decisions |

If your agent touches any of these use cases, Chapter III applies from August 2026 regardless of whether it is your primary product or a back-office tool.

The problem the EU AI Act guidance does not fully solve

The Commission is explicit that its position on AI agents is preliminary. The AI Office is monitoring developments and has signalled that further, more specific guidance may follow. That is not reassurance — it means the rules are going to get more specific, not less.

The structural problem is this: the Act assumes roles are stable. In agentic systems, they are not. A deployer who configures an agent with broad tool-calling rights, autonomous decision scope, or the ability to spawn sub-agents may be carrying provider-level obligations regardless of how their contracts read. The guidance confirms the obligation exists. It does not tell you exactly where the line sits.

Companies building agentic systems now and treating compliance as something to handle once the rules settle are making a bet that the rules will settle in their favour. That is not a safe bet.

Building or deploying AI agents? Here’s what to do now

Three things, in order of priority:

One — classify your agent. Understand whether your system is high-risk under Annex III. This is a structured exercise, not a legal opinion. The outcome determines everything else.

Two — map your role in the chain. Identify whether you are a provider, a deployer, or operating as an orchestrator across a multi-agent pipeline. Your obligations differ materially depending on the answer.

Three — start building your evidence trail. The audit documentation requirements under Chapter III are substantial. Building them retroactively is expensive and incomplete. Every sprint you run without compliance logging is a gap you will have to reconstruct later.
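One low-cost way to start that evidence trail is to append a structured audit record for every consequential agent action. A minimal sketch, assuming a JSON-lines log file: the field names are illustrative, and what Chapter III logging actually requires for your system should be mapped with counsel.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_agent_decision(path, *, agent_id, action, inputs, human_reviewer=None):
    """Append one audit record per consequential agent action (sketch only).

    Stores a digest of the inputs rather than the raw data, plus a field
    recording whether a human reviewed the action (evidence of oversight).
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "inputs_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "human_reviewer": human_reviewer,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Even a rough append-only log like this is far cheaper to keep from day one than to reconstruct retroactively sprint by sprint.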

How eyreACT helps

eyreACT is an EU AI Act compliance platform built by regulatory lawyers who worked directly on drafting the Act. Our platform automates the classification, evidence collection, and audit trail generation that agentic AI deployments require — across every development stage, mapped to the specific Articles that apply to your system.

If you are building with AI agents and want to understand your compliance position before it becomes a problem, we would like to show you how the platform works.

Book a demo today!

FAQ

Does the EU AI Act actually apply to AI agents right now, or only from August 2026?

Both, depending on the obligation. Article 5 prohibitions and Article 50 transparency requirements apply now. The full Chapter III high-risk requirements apply from 2 August 2026. Every part of the Act can reach AI agents; the question is which parts apply to yours, and when.

We use a third-party AI model (OpenAI, Anthropic, Google). Does that mean the compliance obligation sits with them?

Partially. The foundation model provider carries GPAI model obligations. But as the deployer — the company integrating that model into your product — you carry your own set of obligations independently. You cannot outsource compliance by pointing to your model provider. If your agent makes consequential decisions affecting users, that is your liability to manage.

Our agent is internal-facing only. Does the Act still apply?

Yes, if it meets the definition of an AI system and is used in a high-risk context. Internal HR tools are explicitly in scope under Annex III. An agent used to shortlist candidates, assess performance, or manage workforce decisions is high-risk whether it faces customers or only your own employees.

What does “high-risk” actually require in practice?

At minimum: a conformity assessment before deployment, technical documentation covering your system’s design and intended purpose, a risk management system, human oversight mechanisms, logging of operations sufficient for post-hoc audit, and registration in the EU database before you place the system on the market. For most companies that have not started, this is three to six months of structured work.
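Those minimum items can be tracked as a simple readiness checklist. The item names below are illustrative shorthand for the requirements listed above, not the Act’s own terminology.

```python
# Hypothetical readiness checklist mirroring the minimum Chapter III items
# listed above; names are informal shorthand for illustration.
CHAPTER_III_CHECKLIST = [
    "conformity_assessment",
    "technical_documentation",
    "risk_management_system",
    "human_oversight_mechanisms",
    "operational_logging",
    "eu_database_registration",
]

def readiness(completed):
    """Summarise checklist progress: count of items done and which remain."""
    done = set(completed)
    missing = [item for item in CHAPTER_III_CHECKLIST if item not in done]
    return {
        "complete": len(CHAPTER_III_CHECKLIST) - len(missing),
        "total": len(CHAPTER_III_CHECKLIST),
        "missing": missing,
    }
```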

We are a small company. Are there any concessions for SMEs?

The Act includes some procedural accommodations for SMEs and startups — reduced fees, simplified registration in some cases, sandbox access. It does not reduce the substantive obligations. If your agent is high-risk, the requirements apply in full. The sandbox provisions are useful for testing compliance before market entry, not for avoiding compliance altogether.

The guidance says the Commission’s position is “preliminary.” Should we wait for it to finalise?

No. The obligations that exist now exist now regardless of whether further guidance comes. What the preliminary status means is that additional, more specific obligations for agentic systems are likely coming. Waiting means you will be catching up to a moving target rather than building from a stable foundation.

What is an “orchestrator” and why does it matter?

The Act does not use this term. It describes the functional reality of multi-agent pipelines: an entity that designs and operates a system in which AI systems direct, invoke, or constrain other AI systems. If that describes your architecture, you are likely carrying obligations that span both provider and deployer duties, because by operating the pipeline as a whole you are effectively placing an AI system on the market, even if each component individually looks like mere integration.

eyreACT is an EU AI Act compliance automation platform. Our Living Compliance Binder™ generates audit-ready documentation from deterministic legal logic — not AI judging AI.