A few months ago I sat down with the CTO of a Berlin-based healthtech startup. Smart guy. Great product — an AI-powered diagnostic assistant that helps radiologists spot anomalies in chest X-rays. They’d raised a Series A, had paying customers across Germany and Austria, and were expanding into France.
I asked him a simple question: “Under the EU AI Act, are you the provider or the deployer of your AI system?”
He thought about it for a second. “We build it and sell it. So… provider?”
“Correct. Do you know what that means?”
He did not.
Over the next forty minutes I walked him through the full set of provider obligations under the AI Act. Risk management system. Technical documentation per Annex IV. Data governance. Human oversight design. Quality management system. Conformity assessment. CE marking. EU database registration. Post-market monitoring. Serious incident reporting.
By the end he looked like someone who’d just been told his house needed a new foundation while he was already living in it.
Here’s the thing: if you develop an AI system and place it on the EU market under your own name or trademark — whether you charge for it or give it away — you are the provider. And the provider carries the heaviest compliance burden in the entire AI Act.
Not the deployer. Not the importer. Not the distributor. You.
Let me walk you through exactly what that involves.
What Makes You an AI Provider Under the EU AI Act
The AI Act defines a provider in Article 3(3). Let me translate from legalese:
You are a provider if you:
- Develop an AI system yourself, OR
- Have an AI system developed on your behalf (outsourced development still counts), AND
- Place it on the EU market or put it into service under your own name or trademark
It doesn’t matter where you’re based. A company in San Francisco that sells AI software to European customers is a provider under the AI Act. A startup in Tel Aviv whose AI is used by a deployer in Madrid — provider. A team in Bangalore building AI for a European parent company that brands and sells it — the European parent is the provider.
What “placing on the market” means: Making the AI system available for the first time on the EU market for distribution or use in the course of a commercial activity. Even if it’s free.
What “putting into service” means: Supplying the AI system for first use directly to a deployer, or using it yourself for its intended purpose within the EU.
| Scenario | Are You the Provider? |
|---|---|
| You build an AI product and sell it to EU customers under your brand | Yes |
| You outsource AI development to a contractor but sell it under your brand | Yes |
| You build AI and license it white-label to a company that rebrands it | You’re the original provider; the company rebranding it also becomes a provider under Article 25 |
| You build internal AI tools used only by your own EU-based employees | Yes — you’re putting it into service |
| You build an AI model and open-source it | Yes, if it’s a GPAI model (separate obligations apply); for specific AI systems, it depends on how it’s used downstream |
| You fine-tune a third-party model and deploy it under your brand | Yes — potentially with both GPAI downstream-provider and high-risk provider obligations |
| You build AI but only sell it outside the EU | No — unless the output is used within the EU |
The Article 25 Trap: When Someone Else Becomes a Provider Too
This is the part that catches companies off guard. Under Article 25, a deployer, distributor, importer, or any other third party automatically becomes a provider — with full provider obligations — if they:
| Trigger | What Happens |
|---|---|
| Put their name or trademark on a high-risk AI system already on the market | They’re now a provider alongside or instead of you |
| Make a substantial modification to a high-risk system | They’ve created a new version they’re responsible for |
| Change the intended purpose so a non-high-risk system becomes high-risk | They just made themselves the provider of a high-risk system |
| Integrate AI into their own product regulated by EU product safety law | They’re the provider of the AI component |
Why does this matter to you as the original provider? Because when someone triggers Article 25, you have an obligation to cooperate with them — providing documentation, technical access, and assistance so they can meet their new provider obligations. But you also stop being considered the provider for that modified version.
The practical implication: your contracts with customers, resellers, and integrators need to address this. If a customer modifies your AI system, the compliance responsibility shifts — and both parties need to be clear about that before it happens.
The Full AI Provider Obligation Set
Here’s everything you must do as a provider of a high-risk AI system. I’m going to be thorough because this is where most of the compliance effort sits.
Before You Place the System on the Market
| Obligation | Article | What It Actually Involves |
|---|---|---|
| Risk management system | Art. 9 | A continuous, iterative process running throughout the AI system’s entire lifecycle. Identify risks. Assess them. Mitigate them. Document residual risks. Test for new risks. Update the assessment. This isn’t a one-time exercise — it’s a living system. |
| Data governance | Art. 10 | Your training, validation, and testing datasets must be relevant, representative, and as free from errors as possible. You must document collection methods, data preparation, labelling, quality checks, and bias assessment. If you’re processing personal data, GDPR applies in parallel. |
| Technical documentation | Art. 11 + Annex IV | Comprehensive documentation covering: general system description, intended purpose, and capabilities; detailed technical description of development process; design choices and architecture; training methodology and data; testing and validation procedures and results; risk management decisions; human oversight measures; accuracy, robustness, and cybersecurity specifications. This must be prepared before market placement and kept updated. |
| Record-keeping design | Art. 12 | Design your system to automatically log events relevant for identifying risks and substantial modifications throughout its lifecycle. Logs must enable traceability and monitoring. (See the logging sketch after this table.) |
| Transparency & instructions for use | Art. 13 | Provide deployers with clear instructions including: intended purpose and limitations; provider identity and contact; performance characteristics (accuracy, robustness); input data specifications; output interpretation guidance; human oversight measures; computational resource requirements; expected lifetime and maintenance needs. |
| Human oversight design | Art. 14 | Build technical measures into the system that enable deployers to effectively oversee it. This means: capability to understand the system; ability to interpret outputs; ability to decide not to use or override the system; ability to intervene or stop operation. |
| Accuracy, robustness & cybersecurity | Art. 15 | Achieve appropriate levels throughout the lifecycle. Robustness includes resilience against errors, faults, and adversarial attacks. Cybersecurity must protect against unauthorised access and manipulation. |
| Quality management system | Art. 17 | A documented QMS covering: compliance strategy and procedures; design, development, and testing techniques; data management; risk management; post-market monitoring; incident reporting; communication with authorities; record-keeping; resource management; and an accountability framework. |
| Conformity assessment | Art. 43 | Before market placement: self-assessment (Annex VI) for most Annex III systems, or third-party assessment when required by product legislation or for biometric identification systems. |
| EU declaration of conformity | Art. 47 | Draw up a written declaration stating the system meets all requirements. Keep it for 10 years after market placement. |
| CE marking | Art. 48 | Affix the CE marking visibly, legibly, and indelibly to the system or its accompanying documentation. |
| EU database registration | Art. 49 | Register the system in the EU public database before placing it on the market. |
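To make the record-keeping duty concrete, here's a minimal sketch of what Article 12-style automatic event logging might look like. The schema and field names are my own illustration; the Act mandates the logging capability and traceability, not any particular format:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit trail for Art. 12 traceability. The field names are
# my own choices; the Act requires logging capability, not this schema.
logger = logging.getLogger("ai_audit_trail")
handler = logging.FileHandler("inference_events.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_inference_event(model_version: str, input_ref: str,
                        output_summary: str, confidence: float,
                        operator_id: str | None = None) -> None:
    """Append one traceable inference event to the audit trail."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the event to a specific release
        "input_ref": input_ref,          # a reference, not raw data (GDPR minimisation)
        "output_summary": output_summary,
        "confidence": confidence,
        "operator_id": operator_id,      # who was overseeing, if anyone
    }
    logger.info(json.dumps(event))

log_inference_event("scorer-2.3.1", "case-8842", "score=612, band=C", 0.87)
```

The point isn't the logging library; it's that every event carries enough context (model version, input reference, who was overseeing) to reconstruct what the system did months later.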
After Market Placement (Ongoing)
| Obligation | Article | What It Actually Involves |
|---|---|---|
| Post-market monitoring | Art. 72 | Establish and document a monitoring system proportionate to the AI system’s nature and risks. Actively collect and analyse data on performance, compliance, and emerging risks throughout the system’s operational life. (See the monitoring sketch after this table.) |
| Serious incident reporting | Art. 73 | If the AI system causes or contributes to a serious incident (death, serious health damage, serious disruption to critical infrastructure, fundamental rights violation), report to the relevant market surveillance authority immediately and no later than 15 days after becoming aware. Shorter deadlines apply: 10 days in the event of death; 2 days for a widespread infringement or serious disruption to critical infrastructure. |
| Corrective actions | Art. 16(j) | Take immediate corrective action if the system is not in conformity. Withdraw or recall if necessary. Inform the distributor, deployer, importer, and relevant authority. |
| Documentation maintenance | Art. 18–19 | Keep technical documentation, conformity assessment results, and the EU declaration of conformity available for national competent authorities for 10 years after market placement (Art. 18). Keep automatically generated logs for a period appropriate to the intended purpose, minimum 6 months (Art. 19). |
| Authority cooperation | Art. 16(k) | Demonstrate conformity upon reasoned request from any national competent authority. Cooperate with market surveillance activities. |
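What does post-market monitoring look like at its simplest? A hedged sketch, assuming you documented a baseline error rate at conformity assessment. The threshold logic is illustrative, not prescribed:

```python
from statistics import mean

# Minimal Art. 72-style performance check. Baseline and margin are
# illustrative values, assumed to come from your technical documentation.
BASELINE_ERROR_RATE = 0.042   # documented at conformity assessment
ALERT_MARGIN = 0.5            # flag if the live rate exceeds baseline by 50%

def performance_degraded(recent_outcomes: list[bool]) -> bool:
    """recent_outcomes: True where the AI output was later judged wrong."""
    return mean(recent_outcomes) > BASELINE_ERROR_RATE * (1 + ALERT_MARGIN)

# 7% observed errors against a 4.2% baseline trips the alert. A real alert
# feeds your QMS review and, where it qualifies, the Art. 73 reporting chain.
if performance_degraded([True] * 70 + [False] * 930):
    print("ALERT: live error rate exceeds documented baseline")
```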
Provider Obligations by Risk Category
Not every provider has the same burden. It depends on what kind of AI system you’re providing.
| Risk Category | Your Obligations |
|---|---|
| High-risk (Annex III) | Full obligation set above. Self-assessment conformity. August 2026 deadline. |
| High-risk (Annex I product-embedded) | Full obligation set above PLUS product-specific conformity assessment. August 2027 deadline. |
| GPAI model (standard) | Technical documentation, training data transparency, copyright compliance, downstream provider information. Applicable since 2 August 2025. |
| GPAI model (systemic risk) | All standard GPAI obligations PLUS adversarial testing, incident monitoring and reporting, cybersecurity, energy consumption reporting. Also applicable since 2 August 2025. |
| Limited risk | Transparency obligations only — disclose AI nature to users (chatbots), label AI-generated content (deepfakes, synthetic media). August 2026. |
| Minimal risk | No specific obligations. Voluntary codes of conduct encouraged. |
Industry Examples: Provider Compliance in Practice
SaaS Credit Scoring (FinTech)
You’ve built an AI-powered credit scoring platform that banks license via API. You’re the provider. Every bank using your API is a deployer.
Your obligations: full high-risk compliance including risk management, technical documentation, data governance, human oversight design (you must build in the capability for the bank to override your score), conformity assessment, CE marking, EU registration, and post-market monitoring across every deployment.
The catch: each bank feeds different input data into your model. Your instructions for use must be clear enough that deployers understand data requirements, limitations, and failure modes — because if a bank feeds in garbage data and a customer gets wrongly denied credit, the regulator will come to you first and ask what guidance you provided.
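What does “capability for the bank to override your score” look like in practice? Here's a minimal sketch with illustrative names. The pattern to take away: every decision exposes the score, the reasons behind it, and a logged override path.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative Art. 14 oversight hook. The API is my own sketch; the Act
# requires the capability (understand, interpret, override), not this shape.
@dataclass
class CreditDecision:
    applicant_ref: str
    model_score: int
    key_factors: list[str]              # explainability for the human reviewer
    final_score: int | None = None
    override_log: list[dict] = field(default_factory=list)

    def accept(self) -> None:
        """Reviewer confirms the model's score as the final decision."""
        self.final_score = self.model_score

    def override(self, reviewer_id: str, new_score: int, reason: str) -> None:
        """Reviewer replaces the model's score; the event is logged for audit."""
        self.override_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "reviewer": reviewer_id,
            "from": self.model_score,
            "to": new_score,
            "reason": reason,
        })
        self.final_score = new_score

decision = CreditDecision("app-1931", 540, ["thin credit file", "recent arrears"])
decision.override("analyst-07", 600, "Arrears traced to a verified billing error")
```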
AI Recruitment Platform (HR Tech)
You sell an AI system that screens CVs, analyses video interviews, and ranks candidates. Employment is Annex III high-risk. You’re the provider.
The unique challenge: your system processes biometric data (facial analysis in video interviews) and makes decisions that directly affect people’s livelihoods. This puts you at the intersection of AI Act high-risk obligations, GDPR data protection requirements (including a DPIA), and potentially the prohibition on emotion recognition in workplace settings. If your video analysis infers candidates’ emotional states, you may have a prohibited practice embedded in a high-risk system — the worst possible combination.
Best move: strip out any emotion inference features, ensure your system provides explainable rankings, build genuine human override capability (not just a rubber-stamp “confirm” button), and document everything obsessively.
Medical Diagnostic AI (HealthTech)
You build AI that analyses medical images for signs of disease. You’re the provider under both the AI Act and the Medical Devices Regulation.
Double conformity assessment: your system needs MDR CE marking through a notified body AND must meet AI Act high-risk requirements. The documentation requirements overlap significantly but aren’t identical — you’ll need to map between the two frameworks and ensure both are satisfied.
The human oversight requirement is particularly critical here: a clinician must be able to understand the AI’s output, interpret its confidence level, and make an independent clinical judgment. “The AI says cancer” cannot be the basis of a diagnosis.
Open-Source AI Model (GPAI)
You develop and release an open-source language model. You’re a GPAI provider.
For standard GPAI models released under free and open-source licences with publicly available parameters: you're exempt from most documentation obligations; you still need a copyright compliance policy and a publicly available training-data summary. But if your model is classified as having systemic risk (based on computing power, reach, or capability), the full GPAI systemic risk obligations apply regardless of licensing.
The downstream trap: if someone takes your open-source model, fine-tunes it for a high-risk use case (say, credit scoring), and deploys it in the EU, they become the provider of the high-risk system under Article 25. But you, as the GPAI provider, have an obligation to cooperate and provide information that helps them comply. Your documentation, model cards, and safety evaluations become their compliance foundation.
Industrial IoT + AI (Manufacturing)
You build AI that monitors and controls manufacturing equipment — predicting maintenance needs, optimising production parameters, managing quality control. If the AI is a safety component of machinery covered by EU product safety legislation (the Machinery Regulation), you’re a high-risk provider under Annex I.
Your deadline is August 2027, not 2026. But the conformity assessment may require third-party assessment through a notified body, which takes time. Starting now is not early — it’s necessary.
Chatbot / Virtual Assistant (Limited Risk)
You build a customer service chatbot deployed on EU websites. This is limited risk, not high-risk — unless the chatbot makes decisions that affect access to essential services (insurance claims, credit applications, public benefits), in which case it slides into Annex III territory.
For a straightforward chatbot, your provider obligation is simple: ensure users know they’re interacting with AI. Article 50 transparency. But if your chatbot generates synthetic text for public information purposes, you also need machine-readable content marking.
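As a rough illustration of what Article 50 disclosure plus machine-readable marking could look like at the API level (every field name and the schema reference are placeholders, since harmonised marking standards are still settling):

```python
import json

# Hypothetical response wrapper: a visible disclosure for the user plus
# machine-readable marking of AI-generated content. The schema URL and
# field names are placeholders, not an established standard.
def wrap_chatbot_reply(reply_text: str, model_id: str) -> str:
    payload = {
        "content": reply_text,
        "disclosure": "You are chatting with an AI assistant.",  # shown to the user
        "metadata": {
            "ai_generated": True,        # machine-readable marking
            "generator": model_id,
            "schema": "example.org/ai-content-marking/v1",  # placeholder
        },
    }
    return json.dumps(payload, ensure_ascii=False)

print(wrap_chatbot_reply("Your parcel is due Thursday.", "support-bot-1.4"))
```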
The mistake I see: companies assume “it’s just a chatbot” and never classify it. Then the chatbot evolves, gains decision-making capability, starts handling claims or applications, and nobody updates the risk classification. What was limited risk is now high-risk — and nobody noticed.
The Non-EU AI Provider Problem
If you’re based outside the EU and placing AI systems on the EU market, you must appoint an authorised representative within the EU before market placement (Article 22). Your AR becomes your regulatory face in Europe — maintaining documentation, cooperating with authorities, and serving as the contact point for compliance.
| Your Situation | What You Need |
|---|---|
| Non-EU provider, high-risk AI system | Mandatory AR appointment before market placement |
| Non-EU provider, GPAI model | Mandatory AR appointment before market placement |
| Non-EU provider, limited risk AI | AR not mandatory but recommended |
| EU-based provider | No AR needed — you are the entity |
Your AR does not replace your obligations — they facilitate them. You remain fully responsible for compliance. But without an AR, you technically cannot place a high-risk AI system on the EU market at all.
Best Practices for Providers
| Practice | Why It Matters |
|---|---|
| Build compliance into the development lifecycle from day one | Retrofitting documentation onto a deployed system costs far more than building it in. Every sprint should produce compliance artefacts alongside code. |
| Treat technical documentation as a living product | Annex IV documentation isn’t a one-time deliverable. It evolves with your system. Version it, review it, keep it current. |
| Design human oversight as a first-class feature | Don’t bolt it on as an afterthought. The ability for deployers to understand, interpret, and override your system should be architected from the start. |
| Over-communicate with deployers | Your instructions for use are your deployers’ compliance lifeline. Be specific about intended use, limitations, data requirements, failure modes, and what oversight they need to implement. Vague guidance creates liability for both of you. |
| Monitor your system after deployment | Post-market monitoring isn’t optional. Build telemetry that lets you detect performance degradation, emerging biases, and unexpected use patterns. |
| Control downstream modifications contractually | If a customer modifies your system, they may become a provider under Article 25. Your contracts should address this — defining what modifications are permitted, requiring notification, and allocating compliance responsibilities. |
| Prepare for incident reporting | Know your reporting chain before an incident happens. Have templates ready. Know the 2-, 10-, and 15-day deadlines (see the sketch after this table). Practice the process. |
| Get your conformity assessment right | For self-assessment: be thorough, honest, and documented. A superficial self-assessment is worse than none — it creates a false sense of compliance that won’t survive regulatory scrutiny. |
| Keep everything for 10 years | Technical documentation, conformity assessment, EU declaration of conformity, logs — all must be available for a decade after market placement. Plan your document retention now. |
| Engage with standards early | CEN/CENELEC harmonised standards for the AI Act are under development. When published, compliance with these standards creates a presumption of conformity. Track their progress and align your practices early. |
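On the incident deadlines specifically: a tiny helper like the one below is worth having ready before anything goes wrong. The tier mapping reflects my reading of Article 73; confirm it with counsel before relying on it.

```python
from datetime import date, timedelta

# Illustrative Art. 73 deadline helper. Tiers as I read the Act: 2 days for
# widespread infringement or critical-infrastructure disruption, 10 days for
# death, 15 days otherwise. Verify the mapping before relying on it.
REPORTING_DAYS = {
    "widespread_or_critical_infrastructure": 2,
    "death": 10,
    "other_serious_incident": 15,
}

def report_due(awareness_date: date, incident_type: str) -> date:
    """Latest date the initial report must reach the surveillance authority."""
    return awareness_date + timedelta(days=REPORTING_DAYS[incident_type])

print(report_due(date(2026, 9, 1), "death"))  # 2026-09-11
```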
How EYREACT Can Help
Building AI is hard enough without drowning in compliance paperwork. EYREACT automates the provider’s compliance journey — from risk classification through to post-market monitoring — so you can focus on building the product while the platform tracks your obligations.
Living Compliance Binders map every provider requirement to evidence, timelines, and ownership. The Rule Engine monitors 400+ rules derived directly from the AI Act. Gap analysis shows you exactly what’s missing before a regulator has to tell you.
Because the best time to start was when you wrote the first line of code. The second best time is now. Book a demo!
FAQ
I outsource my AI development. Am I still the provider?
Yes. If you have the AI system developed on your behalf and place it on the market under your name or trademark, you are the provider regardless of who wrote the code. Your outsourcing contract should ensure the developer delivers documentation, data governance records, and testing results that you need for compliance.
We give our AI away for free. Does the AI Act apply?
Yes. “Placing on the market” includes making the system available “whether for payment or free of charge.” If your free AI tool is used in the EU for its intended purpose, you are the provider with full obligations.
What’s the difference between a provider and a deployer?
The provider develops the AI system (or has it developed) and places it on the market. The deployer uses it in a professional capacity. The provider has the heavier obligations: risk management, technical documentation, conformity assessment, CE marking, registration, post-market monitoring. The deployer has independent but lighter obligations: human oversight, monitoring, transparency, incident reporting. Both are accountable to regulators.
Can I contractually transfer my provider obligations to someone else?
No. The AI Act assigns obligations by functional role, not by contract. You cannot contractually eliminate your provider obligations. You can contractually require partners to support your compliance (e.g., requiring a data supplier to warrant data quality), but the regulatory obligation remains yours.
What if my AI system is used for something I didn’t intend?
If a deployer uses your system for a purpose you didn’t intend — particularly one that makes it high-risk — that’s on them under Article 25. However, you should clearly document your intended purpose, communicate it in your instructions for use, and consider technical safeguards against misuse. If you know your system is being misused and do nothing, regulators will have questions.
Do I need a conformity assessment for every version update?
Not for every minor update. But if an update constitutes a “substantial modification” — changing the system’s performance, intended purpose, or compliance status — a new conformity assessment may be required. Define clear change management procedures that classify updates by impact level.
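One way to operationalise that answer is a release gate that forces someone to answer the classification questions before anything ships. A sketch with illustrative criteria; "substantial modification" has a legal definition in the Act and ultimately needs case-by-case assessment:

```python
from enum import Enum

# Illustrative change-management gate. The criteria below are my own
# shorthand, not the legal test for "substantial modification".
class ChangeImpact(Enum):
    MINOR = "ship under the existing conformity assessment"
    REVIEW = "document and reassess the affected requirements"
    SUBSTANTIAL = "new conformity assessment before release"

def classify_update(changes_intended_purpose: bool,
                    affects_performance_claims: bool,
                    retrained_on_new_data: bool) -> ChangeImpact:
    if changes_intended_purpose:
        return ChangeImpact.SUBSTANTIAL
    if affects_performance_claims or retrained_on_new_data:
        return ChangeImpact.REVIEW
    return ChangeImpact.MINOR

print(classify_update(False, True, False).value)
```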
How long does conformity assessment take?
For self-assessment (most Annex III systems): plan for 3-6 months if you’ve been building compliance artefacts alongside development. If you’re starting from scratch for an existing system, 6-12 months is realistic. Third-party assessment (product-embedded AI, biometric ID): add 3-6 months for notified body engagement and review.
What happens if I don’t register in the EU database?
You technically cannot place a high-risk AI system on the EU market without registering. Failure to register is a compliance violation subject to penalties. The registration must happen before market placement, not after.
I’m a small startup. Are there any exemptions?
The AI Act provides reduced penalties for SMEs and startups (the lower of the fixed amount or the percentage of turnover applies). The Digital Omnibus proposes extending these benefits to small mid-cap enterprises (under 750 employees, under €150M revenue). Regulatory sandboxes offer priority access for SMEs. But the substantive obligations are the same — no SME gets to skip risk management or documentation.
What about GPAI models — am I a provider of those too?
If you develop a general-purpose AI model and place it on the EU market, you’re a GPAI provider with specific obligations (technical documentation, transparency, copyright compliance). If someone integrates your GPAI model into a high-risk AI system, you must cooperate with them so they can meet their high-risk obligations. You don’t inherit the high-risk obligations yourself, but you’re part of the compliance chain.
This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.