February 12, 2026

High-Risk AI Systems Under the EU AI Act: The Complete Guide for Those Who Build and Deploy AI

Let me paint a picture for you.

A friend of mine runs engineering at a mid-size fintech in Amsterdam. Last September, I asked him whether he’d started thinking about EU AI Act compliance. He laughed. “We’re a payment company, not a robot manufacturer. High-risk AI is for self-driving cars and facial recognition.”

I pulled up Annex III on my phone and showed him section 5(b): AI systems intended to be used for creditworthiness assessment. His company runs three of them.

Then 5(a): AI systems used for evaluating eligibility for public assistance benefits. They have a partnership with a government agency that does exactly that.

He stopped laughing.

The thing about high-risk AI under the EU AI Act is that it doesn’t match most people’s intuition. It’s not about how sophisticated the technology is. It’s not about whether the AI is “dangerous” in a sci-fi sense. It’s about what the AI is used for and who it affects.

A simple logistic regression model that decides whether someone gets a loan is high-risk. A cutting-edge generative AI model that writes marketing copy is not.

And if your AI system is high-risk, the compliance burden is substantial. Risk management, technical documentation, data governance, human oversight, conformity assessment, EU database registration, post-market monitoring — all mandatory, all enforceable under the EU AI Act from August 2026, all carrying penalties of up to €15 million or 3% of global turnover.

So let’s figure out if this applies to you.

Two Pathways to High-Risk Classification

The AI Act classifies AI systems as high-risk through two separate pathways. You need to check both.

Pathway 1: Product Safety (Annex I)

Your AI system is high-risk if it’s a safety component of a product — or is itself a product — covered by existing EU harmonisation legislation AND is required to undergo a third-party conformity assessment under that legislation.

| Product Category | Examples |
|---|---|
| Medical devices | AI diagnostic algorithms, AI-powered imaging analysis, clinical decision support |
| Machinery | AI controlling industrial robots, automated manufacturing systems |
| Toys | AI-enabled interactive toys with voice assistants |
| Vehicles | Autonomous driving systems, ADAS components |
| Civil aviation | AI flight management systems, air traffic control components |
| Radio equipment | AI-powered telecommunications systems |
| Lifts | AI-controlled elevator management |
| Marine equipment | AI navigation and safety systems |
| Rail systems | AI-based train control and signalling |
Deadline: August 2027 (one year later than Annex III systems).

Pathway 2: Use Case Classification (Annex III)

Your AI system is high-risk if it falls within one of eight specific use case categories listed in Annex III — and poses a significant risk of harm to health, safety, or fundamental rights.

Deadline: August 2026.

This is where most companies discover they’re affected.

The Eight Annex III Categories: What’s Actually on the List

| Category | What’s Covered | Real-World Examples |
|---|---|---|
| 1. Biometrics | Remote biometric identification, biometric categorisation, emotion recognition (where permitted) | Facial recognition at building access points; identity verification at bank onboarding; emotion detection in customer service (non-workplace) |
| 2. Critical Infrastructure | AI as safety components in management of road traffic, water, gas, heating, electricity, or critical digital infrastructure | Smart grid management AI; traffic light optimisation; water treatment control systems; digital infrastructure monitoring |
| 3. Education & Vocational Training | AI determining access to education, evaluating learning outcomes, monitoring exam behaviour, adapting teaching level | University admissions algorithms; automated essay grading; proctoring software; adaptive learning platforms that determine student progression |
| 4. Employment & Worker Management | Recruitment, CV screening, interview evaluation, task allocation, performance monitoring, promotion/termination decisions | ATS systems with AI screening; video interview analysis; workforce management AI; automated performance scoring |
| 5. Access to Essential Services | Credit scoring, creditworthiness assessment, life/health insurance risk and pricing, eligibility for social benefits, emergency call evaluation and dispatch, healthcare triage | Bank credit scoring models; insurance risk assessment AI; benefits eligibility algorithms; emergency services dispatch; hospital triage systems |
| 6. Law Enforcement | Victim risk assessment, polygraph/lie detection, evidence reliability evaluation, crime risk assessment, profiling for investigation | Predictive analytics for law enforcement; AI-powered evidence analysis; risk assessment tools for suspect prioritisation |
| 7. Migration, Asylum & Border Control | Risk assessment of security/health/irregular migration, visa and residence permit application examination, irregular migrant identification | Automated visa screening; border control risk assessment; asylum application processing AI |
| 8. Administration of Justice & Democratic Processes | AI to research and interpret facts and law, AI to apply law to facts | Legal research AI used in courts; case outcome prediction for judicial support; AI drafting legal documents for court proceedings |

Important note on exceptions: An AI system listed in Annex III is NOT automatically high-risk if it doesn’t pose a significant risk of harm. Specifically, the AI Act carves out three situations where an Annex III system may avoid high-risk classification:

| Exception | What It Means |
|---|---|
| Narrow procedural task | The AI performs a purely administrative function that doesn’t influence the substantive decision |
| Pattern detection | The AI identifies deviations from prior decision-making patterns, flagging them for independent human review without replacing or influencing the human’s assessment |
| Preparatory task | The AI prepares information for a human assessor but doesn’t filter, recommend, or rank options |

However — and this is the trap many companies fall into — if your AI system profiles individuals (automated processing of personal data to assess work performance, economic situation, health, preferences, behaviour, location, or movement), it is always high-risk regardless of exceptions.

No escape clause.
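
To make the decision flow concrete, here is a minimal Python sketch of the two-pathway check described above, including the Annex III exceptions and the profiling override. Every name and field here is a hypothetical simplification: the real legal test is a documented, case-by-case assessment, not a boolean flag.

```python
from dataclasses import dataclass

# Hypothetical triage helper mirroring the two classification pathways.
# Field names are illustrative assumptions, not terms from the Act.

@dataclass
class AISystem:
    name: str
    annex_i_safety_component: bool         # safety component under Annex I legislation
    annex_i_third_party_assessment: bool   # that legislation mandates third-party assessment
    annex_iii_category: str | None         # e.g. "5b-creditworthiness", or None
    profiles_individuals: bool             # automated profiling of natural persons
    qualifies_for_exception: bool          # narrow procedural / pattern detection / preparatory

def is_high_risk(system: AISystem) -> bool:
    # Pathway 1: product safety (Annex I) -- both conditions must hold.
    if system.annex_i_safety_component and system.annex_i_third_party_assessment:
        return True
    # Pathway 2: use case (Annex III).
    if system.annex_iii_category is not None:
        # Profiling override: the exceptions never apply to profiling systems.
        if system.profiles_individuals:
            return True
        # Article 6(3) exceptions may remove the classification, but only
        # with a documented assessment made before market placement.
        return not system.qualifies_for_exception
    return False

# Example: the fintech credit model from the opening anecdote.
credit_model = AISystem(
    name="loan-scoring-v3",
    annex_i_safety_component=False,
    annex_i_third_party_assessment=False,
    annex_iii_category="5b-creditworthiness",
    profiles_individuals=True,
    qualifies_for_exception=False,
)
assert is_high_risk(credit_model)  # profiling override: always high-risk
```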

What “High-Risk” Actually Requires You to Do

Here’s the full obligation set for providers of high-risk AI systems. This is not optional. This is not “best practice.” This is law.

| Requirement | Article | What You Must Do |
|---|---|---|
| Risk management system | Art. 9 | Continuous, iterative risk identification, assessment, and mitigation throughout the entire AI lifecycle. Not a one-time exercise — this is a living process. |
| Data governance | Art. 10 | Training, validation, and testing datasets must be relevant, representative, and as free of errors as possible. You must document data collection, preparation, and quality measures. |
| Technical documentation | Art. 11 | Comprehensive documentation per Annex IV — system description, design choices, architecture, training methodology, testing results, risk management decisions. Must be prepared before market placement. |
| Record-keeping | Art. 12 | Automatic logging of events throughout the system’s lifetime. Logs must enable traceability and monitoring. Minimum retention: 6 months (longer if required by sector law). A logging sketch follows this table. |
| Transparency & instructions for use | Art. 13 | Clear instructions enabling deployers to interpret outputs, understand limitations, implement oversight, and use the system appropriately. Must include input data specs, output formats, confidence levels, known failure modes. |
| Human oversight | Art. 14 | Technical measures enabling effective human oversight. Must allow humans to understand capabilities and limitations, monitor operation, interpret outputs, and intervene or override when necessary. |
| Accuracy, robustness & cybersecurity | Art. 15 | Appropriate levels of accuracy, robustness against errors and adversarial attacks, and cybersecurity protection throughout the lifecycle. |
| Quality management system | Art. 17 | Documented QMS covering compliance strategy, design and development procedures, testing and validation, data management, risk management, post-market monitoring, incident reporting, communication with authorities, and record-keeping. |
| Conformity assessment | Art. 43 | Assessment of compliance before market placement. Self-assessment (Annex VI) for most Annex III systems. Third-party assessment required for biometric identification and when mandated by product legislation. |
| CE marking | Art. 48 | Affixing CE marking to the AI system or its documentation indicating conformity. |
| EU database registration | Art. 49 | Registration in the EU public database before market placement. |
| Post-market monitoring | Art. 72 | Proportionate monitoring system that actively collects and analyses data on system performance and compliance throughout its operational life. |
| Serious incident reporting | Art. 73 | Report serious incidents to market surveillance authorities without undue delay and no later than 15 days after becoming aware. |
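
As an illustration of what Article 12 record-keeping can look like in practice, here is a minimal Python sketch of structured, append-only decision logging: every inference gets a timestamp, an input reference, the output, and the model version, so individual decisions can be traced later. The schema is our assumption; the Act mandates traceability and retention, not any particular format.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Append-only audit log for a high-risk AI system (illustrative schema).
logger = logging.getLogger("ai_audit_log")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("high_risk_ai_events.jsonl"))

def log_decision(model_version: str, input_ref: str, output, operator_id: str) -> str:
    """Record one traceable decision event; returns the event ID."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,      # pointer to the input data, not the data itself
        "output": output,
        "operator_id": operator_id,  # who or what invoked the system
    }
    logger.info(json.dumps(event))
    return event["event_id"]

# Usage (hypothetical credit model):
# log_decision("credit-v3.2", "application/84421",
#              {"score": 0.31, "decision": "refer"}, "svc-loan-api")
```

Remember that logs like these must be retained for at least six months, and longer where sector law requires it.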

Industry Deep Dives: Where Companies Actually Get Caught in the High-Risk AI Trap

Banking & Financial Services: The Ultimate High-Risk AI

The financial sector is ground zero for high-risk AI. Credit scoring (Annex III, 5b) is the most obvious case, but banks typically run dozens of AI systems that need the same classification review: anti-money laundering models, fraud detection (noting that 5(b) expressly carves out AI used to detect financial fraud), customer risk profiling, loan approval automation, insurance pricing, and mortgage eligibility assessment.

The challenge I see most often: banks built these systems years ago, long before the AI Act. The models work. They’re embedded in production. And nobody documented the risk management process, data governance decisions, or design rationale at the time.

Retrofitting compliance documentation onto a model that’s been running for five years is significantly harder than building it in from the start.

The Monetary Authority of Singapore’s November 2025 AI Risk Management Guidelines are useful context here — they require AI inventories, risk materiality assessments, and lifecycle controls that closely mirror what the EU AI Act demands. Banks operating globally should align their governance frameworks across both jurisdictions.

Healthcare and High-Risk AI

Medical AI sits at the intersection of two regulatory frameworks: the AI Act (high-risk under both Annex I via the Medical Devices Regulation, and potentially Annex III for healthcare triage) and the MDR itself. The conformity assessment is the most complex in any sector because you’re satisfying two sets of requirements simultaneously.

A hospital deploying a third-party AI diagnostic tool must verify that the provider has completed both MDR conformity assessment and AI Act compliance. The deployer obligations are also significant: human oversight means a clinician must be able to interpret and override the AI’s recommendation. “The AI said so” is not an acceptable basis for a clinical decision.
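
To make "interpret and override" concrete, here is a minimal Python sketch of a human-in-the-loop decision gate, assuming the system surfaces its confidence and known limitations per the Article 13 instructions for use. All types and names are hypothetical illustrations, not an API from any real clinical system, and the Act requires effective oversight rather than this particular design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    diagnosis: str
    confidence: float
    known_limitations: str   # surfaced to the clinician per Art. 13

@dataclass
class FinalDecision:
    diagnosis: str
    decided_by: str          # clinician identifier, never "model"
    overrode_ai: bool
    rationale: str

def clinical_decision(
    rec: Recommendation,
    review: Callable[[Recommendation], FinalDecision],
) -> FinalDecision:
    # The system cannot emit a decision on its own: a human reviewer is a
    # mandatory step, and "the AI said so" is not a valid rationale.
    decision = review(rec)
    if not decision.rationale or decision.decided_by == "model":
        raise ValueError("A human decision-maker and rationale are required")
    return decision
```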

High-Risk AI Systems in Recruitment & HR

Employment AI is probably the sector where the gap between industry practice and regulatory requirement is widest. The recruitment industry has enthusiastically adopted AI for CV screening, video interview analysis, personality assessment, skill matching, and workforce optimisation — often without realising that virtually all of these are now high-risk.

The most common pitfall: a company buys an AI recruitment platform from a vendor, customises the scoring criteria, and deploys it internally. The vendor may or may not be compliant. But the company, as deployer, has independent obligations for human oversight, transparency to candidates, monitoring for bias, and (if it’s a public body or essential service) a Fundamental Rights Impact Assessment (FRIA).

Worse — if the company modified the scoring criteria substantially or is using the system for a purpose different from the vendor’s stated intended use, it may have triggered Article 25 and become the provider with the full provider obligation set.

Education High-Risk AI Systems

EdTech AI is an emerging battleground. Admissions algorithms, automated grading, adaptive learning platforms, exam proctoring — all potentially high-risk under Annex III category 3. The deployer is typically the educational institution, which must ensure human oversight of consequential decisions (admissions, final grades, disciplinary actions based on proctoring) and transparency to students and parents.

The proctoring use case is particularly sensitive because it often involves biometric processing (facial recognition to verify identity, eye-tracking to detect cheating) which may also trigger biometric obligations under Annex III category 1 — meaning the system could be high-risk under two separate categories simultaneously.

High-Risk AI in Insurance

AI risk assessment and pricing in life and health insurance is explicitly listed in Annex III (5c). Insurers using AI to determine premiums, assess claims, or evaluate risk profiles must comply with the full high-risk obligation set. The FRIA requirement is particularly relevant for insurance: if AI pricing systematically disadvantages certain demographic groups, the insurer needs to demonstrate that the fundamental rights impact has been assessed and mitigated.

Critical Infrastructure & Energy: High-Risk AI No One Mentions

Smart grid management, traffic control systems, water treatment AI, and digital infrastructure monitoring are all caught under Annex III category 2. These systems often have long operational lifetimes and were deployed before the AI Act was contemplated. Operators need to audit existing systems and determine whether retrofitting compliance is feasible, or whether replacement is necessary.

The “Not High-Risk” Escape Hatch: Use It Carefully

Article 6(3) allows providers to argue that an Annex III system is not high-risk if it “does not pose a significant risk of harm.” But this is not a free pass. You must:

  • Document your assessment before placing the system on the market
  • Register the assessment (though the Digital Omnibus proposes removing this registration requirement)
  • Make the documentation available to regulators on request
  • Reconsider the assessment if the system’s purpose, deployment context, or use cases change

And remember the profiling override: if your system profiles individuals by processing personal data to assess aspects of their life (work, finances, health, behaviour, location), it is always high-risk regardless of exceptions.

My advice: use the exception sparingly and document your reasoning thoroughly. A regulator who disagrees with your self-assessment will have strong enforcement powers.
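
For teams that do rely on the exception, the documented assessment might be captured as a structured record like the sketch below. The fields are illustrative assumptions on our part; the Act requires the assessment to exist before market placement, be available to regulators, and be revisited when circumstances change, not any particular format.

```python
from datetime import date

# Hypothetical Article 6(3) assessment record (illustrative schema only).
article_6_3_assessment = {
    "system": "claims-routing-v1",
    "annex_iii_category": "5-essential-services",
    "conclusion": "not-high-risk",
    "exception_relied_on": "preparatory task",
    "rationale": (
        "System pre-fills claim metadata fields for a human assessor; it does "
        "not filter, recommend, or rank claims and does not profile individuals."
    ),
    "profiles_individuals": False,  # if True, the exception is unavailable
    "assessed_by": "compliance@company.example",
    "assessed_on": date(2026, 3, 2).isoformat(),
    "reassess_if": ["purpose change", "deployment context change", "new use cases"],
}
```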

Best Practices for High-Risk AI Compliance

| Practice | Why It Matters |
|---|---|
| Start with a complete AI inventory | You cannot classify what you haven’t catalogued. Include internal tools, vendor systems, embedded AI features, and prototypes. A sample inventory record follows this table. |
| Classify early, classify conservatively | When in doubt, classify as high-risk. The cost of over-compliance is paperwork. The cost of under-classification is €15 million. |
| Build compliance into the development lifecycle | Retrofitting documentation onto existing systems is five times harder than building it in. Integrate risk management, data governance, and documentation from sprint one. |
| Don’t ignore third-party systems | Your vendor’s AI is your compliance problem. Audit vendor systems against the same criteria as internal ones. |
| Invest in human oversight design | This is not a checkbox exercise. Design real mechanisms that enable real humans to meaningfully oversee, interpret, and override AI decisions. |
| Prepare for ongoing obligations | Compliance doesn’t end at market placement. Post-market monitoring, incident reporting, and periodic review are continuous requirements. |
| Document everything | Every risk assessment, data governance decision, testing result, and design choice should be documented, timestamped, and traceable. This is your evidence when the regulator asks. |
| Watch the Digital Omnibus | Deadlines may shift. Exceptions may broaden. But don’t count on it — plan for August 2026 as binding. |
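
As a starting point for that inventory, here is a sketch of one inventory record with the minimum fields worth capturing per system. The schema is our assumption, not a prescribed format: use whatever GRC tooling you have, but capture at least this much.

```python
# One AI-inventory record (illustrative field names and values).
inventory_entry = {
    "system_name": "cv-screening-vendor-x",
    "owner": "hr-tech-team",
    "source": "third-party",            # internal | third-party | embedded feature
    "intended_purpose": "rank inbound CVs against job requirements",
    "annex_i": None,                    # no product-safety pathway
    "annex_iii_category": "4-employment",
    "classification": "high-risk",
    "role": "deployer",                 # provider | deployer (check Article 25!)
    "profiles_individuals": True,
    "evidence": {
        "risk_assessment": "docs/ra-2026-01.pdf",
        "vendor_conformity_declaration": "docs/vendor-doc-ce.pdf",
        "human_oversight_procedure": "docs/hr-oversight-sop.md",
    },
    "review_due": "2026-08-01",
}
```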

How EYREACT Can Help

EYREACT automates the hard part. Our platform classifies your AI systems against both Annex I and Annex III, maps every applicable obligation to Living Compliance Binders, tracks evidence collection in real time, and tells you exactly where your gaps are — across your entire AI portfolio, including third-party vendor systems.

400+ rules. Every Article. Every Annex. One dashboard. Get a demo!

FAQ

How do I know if my AI system is high-risk?

Check two things. First, is your AI system a safety component of a product covered by EU harmonisation legislation in Annex I? If yes, and it requires third-party conformity assessment, it’s high-risk. Second, does your AI system fall within any of the eight use case categories in Annex III? If yes, and it poses a significant risk of harm, it’s high-risk. If you’re unsure, document your assessment and classify conservatively.

What’s the deadline for high-risk compliance?

For Annex III systems (use case classification): 2 August 2026. For Annex I systems (product-embedded AI): 2 August 2027. The Digital Omnibus proposes extending these to December 2027 and August 2028 respectively, but this hasn’t been adopted yet.

What are the penalties for non-compliance?

Up to €15 million or 3% of global annual turnover for failing to meet high-risk obligations. Up to €7.5 million or 1% for supplying incorrect information to authorities. Penalties for prohibited practices are higher: €35 million or 7%.

Can I argue my Annex III system is not high-risk?

Yes, under Article 6(3), if the system doesn’t pose a significant risk of harm and doesn’t materially influence decision-making outcomes. However, you must document this assessment, and it doesn’t apply if the system profiles individuals. Use this exception carefully and conservatively.

Do I need a third-party conformity assessment?

For most Annex III high-risk systems: no. You can use internal conformity assessment (self-assessment) per Annex VI. Third-party assessment is required for remote biometric identification systems and when mandated by product-specific legislation. However, the self-assessment must be thorough and documented — it’s not a rubber stamp.

Does the AI Act apply to AI systems already deployed?

Yes. High-risk systems already on the market must comply by the relevant deadline. There is no blanket grandfather clause. However, systems lawfully on the market before the rules apply can continue operating without new certification if no significant design changes occur.

We use a vendor’s AI system. Who is responsible for compliance?

Both of you, for different things. The vendor (provider) is responsible for risk management, technical documentation, conformity assessment, and post-market monitoring. You (deployer) are responsible for human oversight, input data quality, monitoring, transparency, incident reporting, and (in some cases) fundamental rights impact assessment. If you’ve modified the system substantially, you may have become the provider under Article 25.

What counts as a “substantial modification” that triggers provider status?

The AI Act doesn’t define this precisely, but the Commission’s guidance indicates that changes affecting the system’s compliance with requirements (changes to training data, model architecture, intended purpose, or performance characteristics) could constitute substantial modifications. Minor parameter tuning within the provider’s documented specifications is generally not a substantial modification.

Is a recommendation engine high-risk?

It depends entirely on what it recommends and the consequences. A product recommendation engine on an e-commerce site is minimal risk. An AI system that recommends whether to approve a loan application is high-risk (credit scoring, Annex III 5b). An AI system that recommends educational paths that determine access to institutions is high-risk (education, Annex III 3a). Context and consequence determine classification, not the underlying technology.

Will the Commission add more high-risk categories?

The Commission has the power to amend Annex III by adding or modifying use case categories, based on evidence of emerging risks. Any changes must maintain or increase the level of protection. The February 2026 review under Article 112 may signal areas under consideration for expansion.

This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.