The EU AI Act introduces a precise legal vocabulary that determines whether your AI system is regulated, how strictly, and what you must do about it. Misunderstanding even one term — “deployer” versus “provider,” “high-risk” versus “limited risk” — can mean the difference between a proportionate compliance programme and a €35 million penalty.
This glossary covers every term that matters. Bookmark it. Share it with your legal team. Return to it when regulators ask questions.
A
AI System
The foundational definition that determines whether the Act applies at all. Under Article 3(1), an AI system is a machine-based system designed to operate with varying levels of autonomy that infers, from the input it receives, how to generate outputs — predictions, recommendations, decisions, content — that can influence physical or virtual environments. The key word is “infers.” A simple rule-based decision tree, whose behaviour is fixed entirely by human-written rules, does not qualify. A machine learning model that infers patterns from data does. If you are unsure whether your system meets this definition, assume it does and work backwards.
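If it helps to make the scoping exercise concrete, here is a minimal self-assessment sketch in Python. It is illustrative only, not legal advice: the field names and the "all elements present" rule are assumptions for demonstration, not a test prescribed by the Act.

```python
# Illustrative scoping sketch, not legal advice: field names and the
# "all elements present" rule are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    machine_based: bool               # runs as software/hardware, not a purely human process
    operates_with_autonomy: bool      # acts without step-by-step human control
    infers_from_inputs: bool          # derives how to generate outputs (ML, logic, statistics)
    generates_outputs: bool           # predictions, recommendations, decisions or content
    can_influence_environments: bool  # outputs affect physical or virtual environments

def likely_ai_system(p: SystemProfile) -> bool:
    """Rough screen against the Article 3(1) elements; if in doubt, treat as in scope."""
    return all([p.machine_based, p.operates_with_autonomy, p.infers_from_inputs,
                p.generates_outputs, p.can_influence_environments])

# A hand-written rule tree typically fails the inference test; a trained
# credit-scoring model typically passes all five elements.
print(likely_ai_system(SystemProfile(True, True, True, True, True)))  # True
```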
AI Literacy
AI literacy refers to the skills, knowledge and understanding that enable providers, deployers and affected persons to make informed use of AI systems and to understand their capabilities and limitations. Under Article 4, from 2 February 2025, providers and deployers must ensure their staff have a sufficient level of AI literacy for the roles they perform. This is not a one-time training exercise — it is an ongoing organisational obligation.
Annex I — Union Harmonisation Legislation
The annex listing the EU product safety legislation (machinery, toys, lifts, medical devices, vehicles and more) that provides the second route into the high-risk category. An AI system that is a safety component of a product covered by Annex I legislation, or is itself such a product, and is subject to third-party conformity assessment under that legislation, is classified as high-risk under Article 6(1). The separate annex of "AI techniques" that appeared in the 2021 proposal was dropped from the final text; the techniques it listed are now captured by the Article 3(1) definition of an AI system itself.
Annex III — High-Risk AI Systems
The list of application areas that automatically classify an AI system as high-risk regardless of intent. It covers eight domains: biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration and border control, and administration of justice. If your system operates in any of these domains and makes consequential decisions about natural persons, start here.
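As a first-pass screen, the domain list can be turned into a simple checklist. The sketch below is illustrative: the keyword matching and the wording of the outcomes are assumptions, and a formal Article 6 classification assessment is still required.

```python
# Hypothetical screening helper: the domain list mirrors the eight Annex III
# areas named above; the string matching is an illustrative assumption.
ANNEX_III_DOMAINS = {
    "biometric identification",
    "critical infrastructure",
    "education",
    "employment",
    "essential services",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

def annex_iii_screen(use_case_domain: str, affects_natural_persons: bool) -> str:
    """First-pass classification only; a formal Article 6 assessment is still required."""
    if use_case_domain.lower() in ANNEX_III_DOMAINS and affects_natural_persons:
        return "Presumptively high-risk: start the Articles 9-15 compliance workstream."
    return "Not in an Annex III domain: check prohibited practices and transparency duties instead."

print(annex_iii_screen("employment", affects_natural_persons=True))
```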
Annex IV — Technical Documentation
The mandatory contents list for the technical documentation that high-risk AI system providers must maintain. It covers system description, design specifications, training data, performance metrics, risk management outputs, human oversight measures, post-market monitoring plan and more. Annex IV is what a market surveillance authority will request. Your documentation must map to it line by line.
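A simple way to operationalise this is to treat Annex IV as a completeness checklist. The sketch below is a minimal illustration; the section labels paraphrase the headings listed above and the dictionary layout is an assumption, not a template prescribed by the Act.

```python
# Illustrative completeness check against the Annex IV themes named above.
ANNEX_IV_SECTIONS = [
    "general description of the system",
    "design specifications and architecture",
    "training, validation and testing data",
    "performance metrics and accuracy",
    "risk management outputs",
    "human oversight measures",
    "post-market monitoring plan",
]

def documentation_gaps(technical_file: dict[str, str]) -> list[str]:
    """Return Annex IV sections that are missing or empty in the technical file."""
    return [s for s in ANNEX_IV_SECTIONS if not technical_file.get(s, "").strip()]

draft_file = {"general description of the system": "Credit-risk scoring model v2.3",
              "human oversight measures": ""}
print(documentation_gaps(draft_file))  # everything except the general description
```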
Authorised Representative
A natural or legal person established in the EU who is designated by a non-EU provider of a high-risk AI system to act on their behalf. The authorised representative is the provider's point of legal accountability within the EU and must be named in the technical documentation and on the EU Declaration of Conformity. Non-EU providers who ignore this requirement put their entire EU market access at risk.
B
Biometric Categorisation System
An AI system that assigns natural persons to categories based on biometric data — including categories relating to race, ethnicity, political opinion, religion, sexual orientation or trade union membership. Biometric categorisation systems that infer these sensitive characteristics are prohibited under Article 5. Other biometric categorisation systems are classified as high-risk under Annex III and subject to the full Article 9–15 compliance framework.
Brussels Effect
Not a defined term in the Act itself, but the mechanism by which EU regulation becomes a global standard. Because multinational organisations cannot realistically operate one compliance regime in Europe and another everywhere else, the EU AI Act is de facto setting the floor for global AI governance. Organisations that build to the EU AI Act standard are simultaneously building to the emerging global standard.
C
CE Marking
The conformity marking that high-risk AI systems must carry before they can be placed on the EU market or put into service. CE marking for AI systems requires completion of the applicable conformity assessment procedure, registration in the EU database, and issuance of an EU Declaration of Conformity. It is the visible signal to regulators and customers that the system meets AI Act requirements.
Conformity Assessment
The process by which a provider demonstrates that a high-risk AI system meets the requirements of the AI Act. Most high-risk systems can self-assess. Systems in specific sensitive areas — particularly biometric identification and AI in law enforcement — require third-party assessment by a notified body. The conformity assessment must be documented and retained for ten years after the system is placed on the market.
GPAI Model (General-Purpose AI Model)
An AI model trained on broad data, designed to perform a wide range of tasks, and made available for integration into downstream systems. Large language models are the paradigm case. GPAI models are regulated under Chapter V of the Act (Articles 51–56). All GPAI model providers must maintain technical documentation, comply with EU copyright law, and publish a summary of the content used to train the model. Models with systemic risk face additional obligations including adversarial testing and incident reporting.
GPAI Model with Systemic Risk
A GPAI model that poses systemic risk due to its high-impact capabilities — defined as models trained using more than 10^25 FLOPs of computational power, or models designated by the Commission based on capability assessments. Providers of systemic-risk models must conduct model evaluations and adversarial testing, assess and mitigate systemic risks, track and report serious incidents, and ensure an adequate level of cybersecurity protection.
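To see what the threshold means in practice, here is a back-of-envelope check. The 6 × parameters × training tokens estimate is a common heuristic for dense transformer training compute, not a method prescribed by the Act, so treat the numbers as illustrative.

```python
# Back-of-envelope check against the 10^25 FLOP systemic-risk threshold.
# The 6 * parameters * tokens estimate is a common heuristic, not a method
# prescribed by the Act.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# e.g. a 70B-parameter model trained on 15 trillion tokens:
flops = estimated_training_flops(70e9, 15e12)          # ~6.3e24 FLOPs
print(f"{flops:.2e}", presumed_systemic_risk(70e9, 15e12))  # below the threshold
```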
D
Data Governance
Required under Article 10 for high-risk AI systems. Data governance covers the practices, processes and policies governing training, validation and testing datasets. Requirements include examination of datasets for biases, relevance and representativeness; data collection processes; and measures to detect and address data gaps. Regulators will ask not just what data you used, but how you ensured it was appropriate.
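The sketch below shows one narrow slice of this in practice: a representativeness check on a training dataset. The attribute name and the 5% threshold are assumptions for illustration, not values from Article 10.

```python
# Illustrative dataset check for the Article 10 themes named above
# (representativeness, data gaps); the threshold is an assumption.
from collections import Counter

def representation_report(records: list[dict], attribute: str,
                          minimum_share: float = 0.05) -> dict[str, str]:
    """Flag groups that are absent or under-represented in a training dataset."""
    counts = Counter(r.get(attribute, "missing") for r in records)
    total = sum(counts.values())
    return {
        group: ("under-represented" if count / total < minimum_share else "ok")
        for group, count in counts.items()
    }

training_sample = [{"age_band": "18-30"}] * 900 + [{"age_band": "65+"}] * 30
print(representation_report(training_sample, "age_band"))
# {'18-30': 'ok', '65+': 'under-represented'}
```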
Declaration of Conformity (EU)
The formal document in which the provider declares that a high-risk AI system complies with all applicable requirements of the AI Act. The EU Declaration of Conformity must be signed before CE marking is affixed, kept for ten years, and made available to market surveillance authorities on request. It names the system, the provider, the applicable requirements and the conformity assessment procedure used.
Deployer
Any natural or legal person who uses an AI system under their authority in a professional context — excluding personal non-professional use. Deployers are not the same as providers. A bank that purchases a credit scoring model from a third-party vendor and deploys it for loan decisions is a deployer. Deployers have their own obligations under the Act: conducting fundamental rights impact assessments, implementing human oversight, informing affected individuals, and monitoring for unexpected risks.
DORA (Digital Operational Resilience Act)
EU Regulation 2022/2554, applicable to financial entities and their ICT service providers. DORA and the AI Act create overlapping obligations for financial institutions deploying AI: DORA’s ICT risk management requirements intersect with the AI Act’s risk management, technical documentation and human oversight provisions. Compliance teams in financial services must map requirements across both regimes simultaneously.
E
Emotion Recognition System
An AI system that infers or predicts the emotional states of natural persons based on biometric data. Under the AI Act, deployers of emotion recognition systems must inform the natural persons exposed to them, subject to limited exceptions for law enforcement purposes. Emotion recognition in workplace and educational settings is prohibited under Article 5, except where the system is put in place for medical or safety reasons.
Enforcement Deadline
The AI Act applies in phases. The prohibition on unacceptable risk AI practices (Article 5) applied from 2 February 2025. GPAI model obligations and governance provisions applied from 2 August 2025. The full framework for high-risk AI systems — including Articles 9–15, technical documentation, conformity assessment and CE marking — applies from 2 August 2026. This is the date that determines whether your compliance programme is ready or exposed.
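Because the phases stack rather than replace each other, a simple date lookup makes the timeline easy to check. The sketch below just encodes the application dates listed above; which duties actually bind a given organisation still depends on its role and the risk classification of its systems.

```python
# Simple phase lookup using the application dates listed above.
from datetime import date

PHASES = [
    (date(2025, 2, 2), "Article 5 prohibitions on unacceptable-risk practices apply"),
    (date(2025, 8, 2), "GPAI model obligations and governance provisions apply"),
    (date(2026, 8, 2), "Full high-risk framework applies (Articles 9-15, conformity assessment, CE marking)"),
]

def obligations_in_force(on: date) -> list[str]:
    return [label for start, label in PHASES if on >= start]

print(obligations_in_force(date(2025, 9, 1)))
# ['Article 5 prohibitions on unacceptable-risk practices apply',
#  'GPAI model obligations and governance provisions apply']
```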
F
Fundamental Rights Impact Assessment (FRIA)
Required under Article 27 for deployers of high-risk AI systems in certain public and private contexts. The FRIA assesses the impact of the AI system on fundamental rights — including non-discrimination, privacy, dignity and procedural fairness. It must be completed before the system is first used, and the market surveillance authority must be notified of its results. The FRIA is distinct from a data protection impact assessment (DPIA) under GDPR, though the two assessments overlap substantially and should be conducted in coordination.
G
Gap Analysis
Not a defined term in the Act but a critical compliance tool. A gap analysis maps your current documentation, processes and technical measures against the specific requirements of Articles 9–15 and identifies what is missing, incomplete or non-compliant. For high-risk AI systems, a gap analysis is the starting point for any structured compliance programme. Without one, you are guessing.
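A gap analysis can start as something as simple as a requirements-to-evidence map. The sketch below is a minimal illustration; the requirement labels follow Articles 9–15 as summarised in this glossary, and the evidence register format is an assumption.

```python
# Minimal gap-analysis sketch: map each Article 9-15 requirement to the
# evidence an organisation currently holds, and flag what is missing.
ARTICLE_REQUIREMENTS = {
    "Article 9": "risk management system",
    "Article 10": "data governance",
    "Article 11": "technical documentation",
    "Article 12": "record-keeping / logging",
    "Article 13": "transparency and instructions for deployers",
    "Article 14": "human oversight",
    "Article 15": "accuracy, robustness and cybersecurity",
}

def gap_analysis(evidence: dict[str, bool]) -> dict[str, str]:
    """Return 'evidenced' or 'gap' for each Article 9-15 requirement."""
    return {
        f"{article} ({label})": "evidenced" if evidence.get(article, False) else "gap"
        for article, label in ARTICLE_REQUIREMENTS.items()
    }

current_state = {"Article 9": True, "Article 11": True}  # what exists today
for requirement, status in gap_analysis(current_state).items():
    print(f"{requirement}: {status}")
```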
H
High-Risk AI System
The core regulatory category of the Act. An AI system is high-risk if it is a product, or a safety component of a product, covered by the Union harmonisation legislation in Annex I and subject to third-party conformity assessment, or if it falls within an application area listed in Annex III and poses a significant risk of harm to health, safety or fundamental rights. High-risk systems are subject to the full compliance framework: risk management (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency (Article 13), human oversight (Article 14), accuracy, robustness and cybersecurity (Article 15), and quality management (Article 17). Non-compliance carries penalties of up to €15 million or 3% of global annual turnover.
Human Oversight
Required under Article 14 for all high-risk AI systems. Human oversight measures must enable natural persons to understand the system’s capabilities and limitations, monitor its operation, intervene or override outputs, and refuse to act on outputs where appropriate. Article 14 is one of the most operationally demanding provisions in the Act — it requires not just that a human is nominally present but that the human has the tools, knowledge and authority to exercise meaningful control.
I
Importer
A natural or legal person established in the EU that places a high-risk AI system from a non-EU provider on the EU market. Importers must verify that the provider has completed the conformity assessment, that technical documentation exists, that CE marking is affixed and that the provider’s contact details are included with the system. Importers share legal responsibility with providers for systems placed on the EU market.
Incident Reporting
Required under Article 73 for providers and deployers of high-risk AI systems and GPAI models with systemic risk. Serious incidents — defined as incidents that cause or could cause death, serious harm to health or safety, or significant damage to property — must be reported to market surveillance authorities. GPAI systemic risk providers must also report to the AI Office. Incident reporting obligations create a continuous monitoring requirement that does not end at launch.
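The reporting routes can be captured in a few lines. In the sketch below, the incident fields and the routing rule (market surveillance authority for high-risk systems, the AI Office additionally for GPAI systemic-risk models) paraphrase the entry above; everything else is an illustrative assumption.

```python
# Illustrative incident record and routing sketch; names are assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SeriousIncident:
    system_id: str
    occurred_at: datetime
    description: str
    is_gpai_systemic_risk: bool

def report_recipients(incident: SeriousIncident) -> list[str]:
    recipients = ["national market surveillance authority"]
    if incident.is_gpai_systemic_risk:
        recipients.append("European AI Office")
    return recipients

incident = SeriousIncident("credit-model-v2", datetime.now(),
                           "Systematic mis-scoring affecting loan decisions", False)
print(report_recipients(incident))
```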
L
Limited Risk AI System
AI systems that pose limited risk — primarily transparency risks — are subject only to specific transparency obligations. Chatbots must disclose that the user is interacting with AI. Deepfake content must be labelled. Emotion recognition and biometric categorisation systems must notify affected persons. Limited risk systems do not require conformity assessment or CE marking but must meet these disclosure requirements consistently.
M
Market Surveillance Authority
The national competent authority responsible for monitoring and enforcing AI Act compliance within a member state. Market surveillance authorities have broad investigative powers: they can request documentation, conduct audits, require system modifications, and impose market access restrictions. They are the regulators who will knock on the door. Your compliance documentation must be ready for that knock without notice.
Minimal Risk AI System
AI systems that pose minimal or no risk — including AI-enabled video games, spam filters and most consumer applications — face no mandatory compliance obligations under the Act. Providers may voluntarily apply codes of conduct. The distinction between minimal risk and limited or high risk is determined by the system’s application area and the nature of its outputs, not by the provider’s characterisation.
N
National Competent Authority
Each EU member state must designate one or more national competent authorities to supervise the application of the AI Act. National competent authorities act as market surveillance authorities for high-risk AI systems and are responsible for registering notified bodies, investigating complaints, and coordinating with the European AI Office. The UK equivalent post-Brexit is the emerging AI governance framework under the ICO, CMA and sector regulators.
Notified Body
A third-party conformity assessment body designated by a member state to perform mandatory conformity assessments for specific high-risk AI systems — primarily those involving biometric identification and AI systems in law enforcement. Notified bodies must be accredited, independent and technically competent. The number of accredited notified bodies for AI Act purposes remains limited, creating potential bottlenecks for organisations requiring third-party assessment.
O
Operator
A collective term used in the Act for providers, product manufacturers, deployers, authorised representatives, importers and distributors. Context determines which obligations apply to which operator. In practice, the provider/deployer distinction governs the allocation of most compliance responsibilities and liability.
P
Post-Market Monitoring
Required under Article 72 for providers of high-risk AI systems. Post-market monitoring means actively collecting and analysing data on the system’s performance in real-world conditions after deployment. It must cover whether the system continues to meet the requirements of the Act, whether unintended risks have emerged, and whether any corrective action is needed. Post-market monitoring is not a one-time review — it is a continuous operational obligation integrated into the quality management system.
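In practice, post-market monitoring often reduces to recurring checks of live metrics against the thresholds documented at conformity assessment. The sketch below illustrates the idea; the metric names and thresholds are assumptions, not values taken from the Act.

```python
# Sketch of a recurring monitoring check feeding the post-market monitoring
# plan: metric names and thresholds are illustrative assumptions.
def post_market_check(metrics: dict[str, float],
                      thresholds: dict[str, float]) -> list[str]:
    """Compare live performance metrics against documented acceptance thresholds."""
    findings = []
    for name, minimum in thresholds.items():
        value = metrics.get(name)
        if value is None:
            findings.append(f"{name}: metric not collected - monitoring gap")
        elif value < minimum:
            findings.append(f"{name}: {value:.3f} below documented threshold {minimum:.3f}")
    return findings

live = {"accuracy": 0.87, "demographic_parity": 0.78}
documented = {"accuracy": 0.90, "demographic_parity": 0.80, "robustness_score": 0.75}
for finding in post_market_check(live, documented):
    print(finding)  # each finding is a candidate corrective action or incident trigger
```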
Prohibited AI Practices (Article 5)
The absolute prohibitions that apply from 2 February 2025 regardless of risk classification or sector. They include: subliminal manipulation, exploitation of vulnerabilities, social scoring (by public and private actors alike), real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), biometric categorisation inferring sensitive characteristics, untargeted scraping of facial images to build recognition databases, emotion recognition in workplaces and schools, and predicting the risk of criminal offending based solely on profiling or personality traits. Violation of Article 5 carries the highest penalties in the Act: up to €35 million or 7% of global annual turnover.
Provider
Any natural or legal person who develops an AI system or has an AI system developed and places it on the market or puts it into service under their own name or trademark, whether for payment or free of charge. Providers bear the primary compliance burden under the EU AI Act — including technical documentation, conformity assessment, CE marking, post-market monitoring and incident reporting. An organisation that builds an AI system in-house and deploys it is simultaneously a provider and a deployer.
Q
Quality Management System (QMS)
Required under Article 17 for providers of high-risk AI systems. The QMS must cover the entire AI system lifecycle: design, development, testing, deployment, post-market monitoring and incident management. It must include documented policies, procedures and responsibilities, version control for system updates, and processes for handling non-conformities. The QMS is the organisational backbone of AI Act compliance — without it, the technical documentation and risk management requirements cannot be sustainably maintained.
R
Real-Time Remote Biometric Identification
The use of AI to identify natural persons in publicly accessible spaces without their active involvement, by comparing their biometric data against a reference database in real time. Its use for law enforcement purposes is prohibited under Article 5, subject to narrow exceptions (targeted searches for victims of serious crimes, prevention of terrorist threats, locating suspects of specific serious offences). Post-remote identification — where the match occurs after the fact — is classified as high-risk and subject to strict conditions.
Regime 28
Not a defined term in the Act but a compliance concept borrowed from the EU idea of a "28th regime": a single EU-level framework operating alongside the 27 national ones. Because the Act is directly applicable EU law, a single conformity assessment, EU Declaration of Conformity and EU database registration covers all member states. Organisations that structure their compliance programme around this principle avoid the cost and complexity of 27 parallel national compliance exercises.
Risk Management System
Required under Article 9 for high-risk AI systems. The risk management system is a continuous, iterative process — not a document — covering the full lifecycle of the AI system. It must identify and analyse known and foreseeable risks, estimate and evaluate risks that may emerge during deployment, adopt risk mitigation measures, and test residual risk after mitigation. The risk management system must be updated when the system is modified, retrained or redeployed in a new context.
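One way to keep the iteration honest is a small risk register that is re-scored after each mitigation and testing cycle. The sketch below is illustrative; the severity and likelihood scales and the acceptance threshold are assumptions that your own risk management system would have to define and document.

```python
# Minimal risk-register sketch of the iterative Article 9 cycle described above.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: int        # 1 (low) .. 5 (critical) - illustrative scale
    likelihood: int      # 1 (rare) .. 5 (frequent) - illustrative scale
    mitigations: list[str] = field(default_factory=list)
    residual_score: int | None = None  # re-scored after mitigation and testing

def needs_further_mitigation(risk: Risk, acceptance_threshold: int = 6) -> bool:
    """Iterate until the tested residual risk falls below the documented threshold."""
    score = risk.residual_score if risk.residual_score is not None else risk.severity * risk.likelihood
    return score >= acceptance_threshold

bias_risk = Risk("Disparate error rates across age groups", severity=4, likelihood=3)
bias_risk.mitigations.append("Rebalance training data and add a fairness test suite")
bias_risk.residual_score = 4  # from post-mitigation testing
print(needs_further_mitigation(bias_risk))  # False - document and keep monitoring
```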
S
Serious Incident
An incident involving a high-risk AI system that directly or indirectly leads to death or serious harm to health, a serious and irreversible disruption of critical infrastructure, serious harm to property or the environment, or an infringement of obligations under Union law intended to protect fundamental rights. Serious incidents must be reported to the relevant market surveillance authority without undue delay. For GPAI systemic risk models, reporting goes to the AI Office. Organisations must have incident detection and reporting processes in place before deployment — not after an incident occurs.
Social Scoring
The evaluation or classification of natural persons based on their social behaviour or personal characteristics, leading to detrimental or unfavourable treatment in unrelated social contexts or to treatment disproportionate to the behaviour in question. Social scoring is prohibited under Article 5 for public authorities and private actors alike. Scoring systems used in employment or financial services that fall outside the prohibition may still be classified as high-risk under Annex III.
Subliminal Manipulation
The use of AI techniques to influence a person’s behaviour by exploiting subconscious processes — below the threshold of conscious awareness — in ways that impair their ability to make an informed decision and that cause or are likely to cause harm. Subliminal manipulation is prohibited under Article 5 regardless of intent. The prohibition covers both direct subliminal messaging and indirect manipulation through personalisation algorithms designed to exploit psychological vulnerabilities.
Systemic Risk
The category of risk associated with GPAI models whose capabilities could cause widespread harm across the EU — including disruption to critical sectors, large-scale discriminatory outcomes, or destabilisation of democratic institutions. The primary indicator of systemic risk is training compute above 10^25 FLOPs, though the Commission can designate additional models based on capability assessments. Systemic risk triggers the most demanding tier of GPAI obligations.
T
Technical Documentation
The comprehensive record that providers of high-risk AI systems must compile, maintain and make available to market surveillance authorities on request. Technical documentation must cover the system’s general description, design specifications, training methodology and data, performance benchmarks, risk management outputs, human oversight measures, and post-market monitoring plan. The required contents are specified in Annex IV. Technical documentation must be kept up to date throughout the system’s lifecycle and retained for ten years after the system has been placed on the market or put into service.
Transparency Obligation
The requirement under Article 50 that certain AI systems disclose their AI nature to users. AI systems that interact directly with natural persons must inform those persons that they are interacting with AI — unless this is obvious from context. AI-generated content must be marked in a machine-readable format, and deepfakes must be disclosed as artificially generated or manipulated. Deployers of emotion recognition and biometric categorisation systems must inform affected persons. Transparency obligations apply regardless of risk classification.
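The different disclosure triggers can be summarised in a small decision helper. The sketch below paraphrases the obligations listed above; the function name, parameters and message wording are illustrative assumptions, not text from the Act.

```python
# Hypothetical decision helper for Article 50 disclosures.
def required_disclosures(is_chatbot: bool, generates_synthetic_content: bool,
                         is_deepfake: bool,
                         uses_emotion_or_biometric_categorisation: bool) -> list[str]:
    disclosures = []
    if is_chatbot:
        disclosures.append("Tell users they are interacting with an AI system (unless obvious).")
    if generates_synthetic_content:
        disclosures.append("Mark generated content as AI-made in a machine-readable format.")
    if is_deepfake:
        disclosures.append("Disclose that the content has been artificially generated or manipulated.")
    if uses_emotion_or_biometric_categorisation:
        disclosures.append("Inform affected persons that the system is in use.")
    return disclosures

print(required_disclosures(is_chatbot=True, generates_synthetic_content=True,
                           is_deepfake=False, uses_emotion_or_biometric_categorisation=False))
```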
U
Unacceptable Risk
The highest risk category under the Act, subject to outright prohibition under Article 5. Unacceptable risk AI practices are those whose potential for harm to individuals and society is considered so severe that no regulatory framework can adequately mitigate it. The list includes subliminal manipulation, exploitation of vulnerabilities, social scoring, real-time biometric surveillance and criminal recidivism profiling. These prohibitions applied from 2 February 2025 — before the rest of the Act’s framework came into force.
V
Vulnerability Exploitation
The use of AI to target and exploit specific vulnerabilities of persons or groups — including age, disability, social or economic situation — in ways that distort their behaviour and cause or are likely to cause harm. Exploitation of vulnerabilities is prohibited under Article 5. The prohibition covers AI systems specifically designed to exploit known psychological, cognitive or social weaknesses, including targeted manipulation of elderly persons, individuals with mental health conditions, or economically distressed populations.
W
Watermarking (AI-Generated Content)
The technical measure required under Article 50 for providers of AI systems (including general-purpose AI systems) that generate synthetic images, audio, video or text. AI-generated content must be marked in a machine-readable format that allows detection of its AI origin. The requirement is intended to support the enforcement of transparency obligations and the detection of disinformation. The technical standards for marking and watermarking are being developed through the European standardisation process led by CEN-CENELEC.
Every definition in this glossary maps to a specific article, annex or recital of EU Regulation 2024/1689. The regulation is live. The enforcement clock is running.
eyreACT automates AI Act compliance — from risk classification to audit-ready documentation — so your legal and compliance teams spend their time on decisions, not paperwork. Learn more at our demo.