February 18, 2026 · 9 min read

AI Vendor Compliance: What the EU AI Act Says You Must Do

Let me tell you a story I’ve heard three times this month.

A fintech company in Berlin buys an AI-powered credit scoring tool from a US vendor. The tool works beautifully. Customers get instant decisions. Default rates drop. The board is happy.

Then someone in legal reads the EU AI Act and realises: credit scoring is explicitly listed as high-risk in Annex III. The tool needs a risk management system, technical documentation, human oversight, conformity assessment, and EU database registration.

The Berlin company calls the vendor. The vendor says “we’re working on it.” There’s no documentation. No conformity assessment. No CE marking. And the August 2026 deadline is not going to move.

Here’s the punchline: under the AI Act, the regulatory risk doesn’t sit with the vendor. It sits with you.

This is the story playing out across Europe right now. Companies that thought buying AI from a reputable vendor meant compliance was someone else’s problem are discovering that the EU AI Act doesn’t care who built the system. It cares who deploys it.

The Trap: “We Just Use It, We Don’t Build It”

Most companies don’t develop AI in-house. They buy it. A recruitment platform powered by AI. A chatbot from a SaaS provider. A fraud detection model from a third-party vendor. An AI diagnostic tool integrated into a healthcare workflow.

Under the EU AI Act, that makes you a deployer. And deployers have their own obligations — separate from, and in addition to, whatever the provider (the vendor) is supposed to do.

The most dangerous assumption in AI compliance right now is: “Our vendor will handle it.”

They might. They might not. But either way, if they don’t, you’re the one facing the regulator.

What the AI Act Actually Requires from You as a Deployer

If your third-party AI system is classified as high-risk, here’s what falls on your shoulders — not your vendor’s:

  • Human oversight (Article 26): You must assign competent people to oversee the AI system’s operation and intervene when necessary. Your vendor can’t do this for you — it’s your staff, your processes, your accountability.
  • Input data quality (Article 26): If you control the input data, you’re responsible for ensuring it’s relevant and representative. Garbage in, garbage out — and the regulator blames you, not the vendor.
  • Monitoring and logging (Article 26): You must monitor the system’s operation and keep automatically generated logs for at least 6 months. If something goes wrong, you need to prove you were watching.
  • Transparency to affected individuals (Article 26): You must inform people when decisions affecting them are made or assisted by AI. Your vendor won’t knock on your customer’s door — you will.
  • Incident reporting (Article 26): If you detect a serious incident or malfunction, you must report it to the provider and the relevant authority. “We didn’t know” is not a defence.
  • Fundamental Rights Impact Assessment (Article 27): Certain deployers (public bodies, essential services, credit, insurance) must conduct an FRIA before deploying high-risk AI. This is your assessment, not the vendor’s.
  • Suspension of use (Article 26): If you have reason to believe the system poses a risk, you must stop using it and notify the provider. You can’t keep running a system you know is non-compliant.
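
If you want to track these duties as structured data rather than in a spreadsheet, a minimal sketch might look like the following. Everything in it (the class, the field names, the 183-day constant) is illustrative shorthand of our own, not terminology from the Act, and it assumes Python 3.10+.

```python
# Illustrative deployer-obligations checklist for a single AI system.
# Class and field names are our own shorthand, not terms from the Act.
from dataclasses import dataclass

@dataclass
class DeployerChecklist:
    system_name: str
    oversight_owner: str | None = None     # named, competent human overseer
    informs_affected_people: bool = False  # transparency to individuals
    fria_completed: bool = False           # Article 27, where it applies
    log_retention_days: int = 0            # configured log retention window

    def gaps(self) -> list[str]:
        out = []
        if self.oversight_owner is None:
            out.append("no human oversight owner assigned")
        if not self.informs_affected_people:
            out.append("affected individuals not informed")
        if not self.fria_completed:
            out.append("FRIA missing (check whether Article 27 applies)")
        if self.log_retention_days < 183:  # "at least 6 months"
            out.append("log retention window shorter than six months")
        return out

credit_tool = DeployerChecklist("vendor credit scoring", oversight_owner="Risk team")
print(credit_tool.gaps())
```

Run as-is, the example flags the missing transparency notice, FRIA, and log retention window for the credit tool: the same gaps the list above describes in prose.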

The Provider-Deployer Trap: When You Accidentally Become the Provider

Here’s where it gets really dangerous. Under Article 25, you automatically become a provider — with all the provider’s much heavier obligations — if you do any of the following:

  • You put your name or brand on the AI system: You’re now the provider. Full Article 16 obligations apply.
  • You make a substantial modification: You changed the system in a way that affects compliance. You’re now the provider.
  • You change the intended purpose: You took a system not classified as high-risk and used it for a high-risk purpose. You’re now the provider.
  • You integrate AI into your own product: If your product is regulated by EU product safety legislation, you’re the provider of the AI component.

This happens more often than people think. A company buys a general-purpose language model, fine-tunes it for HR screening, and deploys it internally. That fine-tuning and redeployment for a high-risk purpose (employment decisions, Annex III) means they’ve just become the provider under Article 25 — with obligations for risk management, technical documentation, conformity assessment, and everything else.
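
A rough way to see why that example flips roles: the Article 25 test reduces to a handful of yes/no triggers. The sketch below is illustrative only; the class and field names are invented for the sketch, and real reclassification turns on legal analysis, not booleans.

```python
# Rough sketch of the Article 25 "deployer becomes provider" triggers.
# The Deployment fields are invented labels, not statutory language.
from dataclasses import dataclass

@dataclass
class Deployment:
    rebranded: bool                # your name or trademark on the system
    substantially_modified: bool   # a change that affects compliance
    repurposed_to_high_risk: bool  # new intended purpose that is high-risk

def effective_role(d: Deployment) -> str:
    """Any single trigger flips the role from deployer to provider."""
    if d.rebranded or d.substantially_modified or d.repurposed_to_high_risk:
        return "provider"  # full Article 16 obligations now apply
    return "deployer"

# The fine-tuned HR screening example from the text:
hr_screening = Deployment(rebranded=False,
                          substantially_modified=True,
                          repurposed_to_high_risk=True)
print(effective_role(hr_screening))  # -> "provider"
```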

Industry Examples: How AI Vendor Compliance Works in Practice

Banking & Financial Services

A European bank uses a third-party AI system for credit scoring. Under Annex III, this is explicitly high-risk. The bank is the deployer. It must conduct a Fundamental Rights Impact Assessment, ensure human oversight of credit decisions, monitor system performance, and retain logs.

If the vendor hasn’t completed a conformity assessment, the bank can’t legally deploy the system after August 2026. The bank also faces DORA requirements for third-party ICT risk management — meaning the AI vendor is now an ICT third-party risk that needs contractual governance.

Healthcare

A hospital integrates an AI diagnostic tool from a US medtech vendor. The tool is both a medical device (regulated under MDR) and a high-risk AI system (Annex I). The hospital is the deployer.

If the vendor hasn’t completed both MDR and AI Act conformity assessments, the hospital is using a non-compliant system.

The hospital must also ensure human oversight — a doctor must be able to interpret and override AI recommendations.

Recruitment & HR

A company uses an AI-powered recruitment platform to screen CVs. Employment and recruitment is listed as high-risk in Annex III. The company is the deployer. It must inform candidates that AI is involved in their application process, ensure human oversight of hiring decisions, and monitor for bias.

If the company has customised the platform’s scoring criteria, it may have triggered Article 25 — making it the provider.

Education

A university uses an AI system to grade exams or determine admissions. Education is listed as high-risk in Annex III. The university is the deployer. Students and parents must be informed. Human oversight must be in place.

And if the vendor can’t provide technical documentation, the university faces a compliance gap it can’t fill itself.

Insurance

An insurer uses AI for risk assessment and pricing in life and health insurance. Explicitly high-risk under Annex III. The insurer must conduct an FRIA, ensure transparency to policyholders, and monitor for discriminatory outcomes.

If the vendor’s model is a black box with no explainability, the insurer can’t meet its transparency obligations: Article 13 requires providers to build in enough interpretability for deployers to explain the system’s output, and a black box offers none.

Retail & E-commerce

A retailer deploys an AI recommendation engine. This is likely minimal or limited risk — but if the retailer also uses AI for dynamic pricing that exploits consumer vulnerabilities (age, economic situation), it could cross into prohibited territory under Article 5. The line between “personalisation” and “manipulation” is thinner than most marketing teams think.

The AI Vendor Due Diligence Checklist

Before August 2026, every company using third-party AI should be asking their vendors these questions:

  • What is the risk classification of your AI system under the EU AI Act? If they can’t answer this, they haven’t started compliance.
  • Have you completed a conformity assessment? Without this, you can’t legally deploy a high-risk system.
  • Can you provide technical documentation per Annex IV? You need this to meet your own deployer obligations.
  • Is your system registered in the EU database? Required for high-risk systems before market placement.
  • Do you have a post-market monitoring system? If they’re not monitoring, you’re flying blind.
  • What are the system’s known limitations and failure modes? You need this for your human oversight design.
  • Will you notify us of material updates or changes? A silent model update could change your risk profile overnight.
  • Do you have an EU authorised representative? Required for non-EU providers of high-risk systems and GPAI models.
  • Can you provide instructions for use per Article 13? You need these to implement human oversight and inform affected individuals.
  • What is your incident reporting procedure? You have to report serious incidents — you need your vendor’s cooperation.
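
Teams that track vendor answers in code rather than email threads sometimes reduce this checklist to a simple evidence record. Here’s one illustrative way to do that; every identifier below is our own invention, and the questions map to the list above.

```python
# Illustrative due-diligence record: one entry per checklist question,
# each pointing at the evidence the vendor actually supplied.
from dataclasses import dataclass, field

@dataclass
class Answer:
    question: str
    evidence: str | None = None  # e.g. a document reference; None = unanswered

@dataclass
class VendorDueDiligence:
    vendor: str
    answers: list[Answer] = field(default_factory=list)

    def open_gaps(self) -> list[str]:
        return [a.question for a in self.answers if a.evidence is None]

dd = VendorDueDiligence("ExampleVendor GmbH", answers=[
    Answer("Risk classification under the EU AI Act?", "classification memo v2"),
    Answer("Conformity assessment completed?"),       # no evidence yet
    Answer("Technical documentation per Annex IV?"),  # no evidence yet
])
print(dd.open_gaps())  # the list you chase before August 2026
```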

If your vendor can’t answer these questions, you have a problem. And the problem is yours, not theirs.

What to Put in Your Vendor Contracts

The AI Act doesn’t prescribe specific contractual terms, but smart deployers are already updating their vendor agreements:

  • Compliance warranty: vendor warrants that the AI system complies with applicable EU AI Act requirements
  • Documentation obligation: vendor must provide and maintain technical documentation, instructions for use, and conformity assessment evidence
  • Change notification: vendor must notify you before any material change to the system
  • Incident cooperation: vendor must cooperate in incident reporting and provide information within defined timescales
  • Audit rights: you have the right to audit or request evidence of the vendor’s compliance
  • Indemnification: vendor indemnifies you for losses arising from their non-compliance
  • Termination trigger: you can terminate if the vendor fails to maintain compliance

How EYREACT Can Help

This is exactly the problem EYREACT was built to solve. Our platform gives you a single view across all your AI systems, whether built in-house or bought from third-party vendors, with automated risk classification, Living Compliance Binders, evidence management, and real-time gap analysis.

For every vendor AI system, EYREACT tracks which obligations are yours as deployer, which are the provider’s, and where the gaps are. You’ll know before the regulator does. Book a demo!


FAQ

My vendor says they’re “AI Act ready.” Is that enough?

No. “Ready” is marketing, not compliance. Ask for specific evidence: conformity assessment results, technical documentation per Annex IV, EU database registration, CE marking. If they can’t produce these, they’re not compliant — they’re planning to be.

We only use the AI system as-is, without modifications. Are we still responsible?

Yes. As a deployer, you have independent obligations for human oversight, monitoring, transparency, incident reporting, and (in some cases) fundamental rights impact assessments. These obligations exist regardless of whether you modify the system.

What if our vendor is based outside the EU?

Non-EU vendors placing high-risk AI systems on the EU market must appoint an authorised representative in the EU. If they haven’t, the system can’t legally be placed on the market at all. You should verify the AR appointment as part of vendor due diligence.

Can we transfer compliance responsibility to the vendor through a contract?

You can contractually require the vendor to fulfil their provider obligations, but you cannot contractually eliminate your deployer obligations. The AI Act assigns obligations by role, not by contract. A regulator will hold you accountable for your deployer duties regardless of what your vendor agreement says.

We customised the vendor’s AI system for our use case. Does that change anything?

Potentially, yes. If the customisation constitutes a “substantial modification” or changes the system’s intended purpose to a high-risk application, you become the provider under Article 25. This triggers the full set of provider obligations including conformity assessment and technical documentation.

What happens if our vendor’s system is found non-compliant after we’ve deployed it?

You must suspend use of the system and notify the relevant authority. Continuing to deploy a system you know is non-compliant exposes you to penalties of up to €15 million or 3% of global turnover. This is why ongoing monitoring and vendor communication are essential.

How do we handle AI systems from multiple vendors?

Each system needs its own risk classification, compliance assessment, and monitoring. You can’t assume that because one vendor is compliant, others are too. A centralised compliance platform helps manage the portfolio view across multiple vendors and systems.

Does this apply to AI features embedded in SaaS tools we already use?

Yes. If a SaaS tool incorporates AI features that fall under the AI Act’s scope — for example, AI-powered analytics, automated decision-making, or chatbots — you have deployer obligations for those features. The fact that the AI is “embedded” doesn’t exempt it.


This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.