February 20, 2026 · 7 min read

EU AI Act: What Is an AI System? Quick Definition

The EU AI Act is now in force — and whether your technology falls within its scope depends entirely on one question: does it meet the legal definition of an “AI system”? Get this wrong, and you risk either unnecessary compliance costs or penalties of up to €35 million or 7% of global annual turnover, whichever is higher. Here’s what you need to know.

Why the AI System Definition Matters More Than You Think

Before you can classify risk levels, build compliance binders, or prepare for audits, you need to answer the most fundamental question under the EU AI Act: is your product actually an AI system?

The AI system definition under the AI Act is the gateway to the entire regulation. If your system falls within scope, you’re subject to a cascade of obligations depending on its risk classification — from transparency requirements to full conformity assessments. If it doesn’t, the AI Act simply doesn’t apply to you.

This isn’t an academic distinction. The European Commission published formal guidelines on the AI system definition in February 2025 specifically because organisations were struggling to determine whether their technologies qualified. And with the full enforcement deadline of 2 August 2026 now months away, getting clarity on this definition is no longer optional — it’s urgent.

The Official AI System Definition: Article 3(1) Explained

Article 3(1) of the EU AI Act (Regulation (EU) 2024/1689) defines an AI system as:

A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

That’s a dense piece of legal text. Let’s break it down into the seven elements the European Commission has identified as the building blocks of this definition.

The Seven Elements of the AI System Definition

1. Machine-Based System

The system must be computationally driven, integrating both hardware and software components. This is a broad starting point — essentially any digital technology running on a computer qualifies as “machine-based.” The key point here is that the AI Act only regulates systems that operate on machines, not human decision-making processes that happen to be assisted by technology.

2. Designed to Operate with Varying Levels of Autonomy

Your system must have some degree of independence from human involvement. According to the Commission’s guidelines, this means the definition excludes systems designed to operate solely with full manual human control, whether that control is direct (manual operation) or indirect (through automated rule-based controls set entirely by humans).

This is where many organisations find their first compliance relief — if your system requires constant human direction to function, it likely falls outside the AI Act’s scope.

3. May Exhibit Adaptiveness After Deployment

The definition states a system “may” exhibit adaptiveness — this is a critical word. Adaptiveness, which the Commission interprets as self-learning capabilities that allow behaviour to change during use, is not mandatory. A system can still be an AI system under the AI Act even if it doesn’t adapt or learn after deployment.

This means you cannot dismiss AI Act obligations simply because your system is “static” or doesn’t learn from new data in production.

4. For Explicit or Implicit Objectives

The system must operate towards some goal or purpose, whether clearly stated (explicit) or embedded in its design and training (implicit). Most commercial AI systems easily satisfy this element, as they’re built to accomplish specific tasks.

5. Infers, From the Input It Receives, How to Generate Outputs

This is the most important element of the definition — and the one the Commission spent the most time clarifying in its guidelines.

Inference is the distinguishing characteristic that separates AI systems from traditional software. The Commission has clarified that inference refers primarily to techniques used during the build phase of a system, where a system derives outputs through AI techniques such as machine learning (supervised, unsupervised, self-supervised, and reinforcement learning) or logic- and knowledge-based approaches.

The critical test: does the system go beyond rules defined solely by humans to generate its outputs? If your system can only execute pre-defined, human-authored rules, it’s traditional software, not AI. If it derives patterns, learns relationships, or generates outputs through computational techniques that weren’t explicitly programmed step-by-step by humans, it likely qualifies as an AI system.
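To make that test concrete, here is a minimal, hypothetical sketch in Python. The credit-check scenario, the feature names, and the training data are all invented for illustration, and the learned side assumes scikit-learn’s LogisticRegression; the point is only to contrast a human-authored rule with a decision boundary derived from data during the build phase.

```python
from sklearn.linear_model import LogisticRegression

def rule_based_credit_check(income: float, debt: float) -> bool:
    # Traditional software: every rule is authored by a human,
    # so the system only executes pre-defined logic -- no inference.
    return income > 30_000 and debt / income < 0.4

# Hypothetical training data: [income, existing debt] -> approve (1) / decline (0).
X_train = [[45_000, 5_000], [20_000, 15_000], [60_000, 10_000], [25_000, 20_000]]
y_train = [1, 0, 1, 0]

# Here the decision boundary is derived from data during the build phase,
# not written step by step by a human -- the Article 3(1) notion of inference.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

print(rule_based_credit_check(40_000, 8_000))  # human-authored rule
print(model.predict([[40_000, 8_000]]))        # machine-learned inference
```

On the Commission’s reading, the first function stays in traditional-software territory; the second exhibits exactly the kind of inference that can pull a system into scope.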

6. Outputs Such as Predictions, Content, Recommendations, or Decisions

The outputs listed are illustrative, not exhaustive. The AI Act covers systems that generate any type of output through inference, whether that’s a credit score prediction, a generated text, a product recommendation, or an automated decision.

7. That Can Influence Physical or Virtual Environments

The outputs must have some potential impact — they need to be capable of influencing something, whether that’s a physical process (like controlling machinery) or a virtual environment (like personalising a user’s digital experience).

What Is NOT an AI System Under the AI Act?

The Commission’s guidelines are equally valuable for what they exclude. Understanding what falls outside the definition can save your organisation significant compliance effort and cost.

According to the guidelines, the following are explicitly not AI systems under the AI Act:

Mathematical optimisation tools — Systems like linear regression models or traditional physics-based simulations, even if enhanced by machine learning for processing speed, are not considered AI systems.

Basic data processing tools — Software that sorts, filters, or organises data without any learning or inference capability. Think Excel formulas, SQL queries, and standard database operations.

Classical heuristics — Rule-based tools that follow fixed logical patterns without any learning capability. The classic example is a chess engine using a minimax algorithm — sophisticated, but not AI under this definition. A short sketch after this list shows why.

Simple prediction systems — Systems that generate predictions using basic statistical rules (such as calculating average temperatures from historical data) and that do not adapt or evolve over time.

Fully human-controlled systems — Any system that operates solely under direct human direction, without any independent inference or autonomous operation.

The common thread? These systems lack the ability to infer outputs. They execute human-defined rules rather than deriving their own patterns or generating outputs through computational learning.
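To see why a classical heuristic stays out of scope, here is a minimal minimax sketch; the toy game tree and its leaf scores are invented for illustration. Every evaluation rule is fixed and human-authored, and nothing is derived from data.

```python
def minimax(node, maximising: bool) -> int:
    # Leaf nodes (plain integers) carry a fixed, human-defined score.
    if isinstance(node, int):
        return node
    # Branch nodes: recurse, alternating between the two players.
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# A toy game tree: nested lists are branches, integers are leaf scores.
game_tree = [[3, 5], [2, [9, 1]]]
print(minimax(game_tree, maximising=True))  # -> 3
```

However deep the search goes, the system only executes the scoring and selection rules a human wrote down — the defining mark of traditional software under the guidelines.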

Why This Definition Creates Real Compliance Challenges

In practice, the AI system definition under the AI Act creates several thorny issues for organisations:

The boundary is blurry. Many modern software systems incorporate elements of machine learning or statistical inference alongside traditional rule-based logic. Determining whether these hybrid systems cross the threshold into “AI system” territory requires careful, case-by-case analysis — exactly what the Commission acknowledged in its guidelines.

Components vs. systems. If your product includes an AI component (say, a machine learning model for one specific feature) within a larger traditional software platform, is the whole product an AI system, or just that component? The answer has significant implications for the scope of your compliance obligations.

The definition is technology-neutral by design. The AI Act deliberately avoids listing specific technologies. This future-proofs the regulation but makes it harder for organisations to get a definitive answer without expert analysis.

International implications. The AI Act applies to AI systems placed on the EU market or whose outputs are used within the EU, regardless of where the provider is based. This means the definition has global reach — non-EU companies need to assess their systems against it too.

What You Should Do Now: A Practical Compliance Roadmap

With the August 2026 enforcement deadline approaching, here’s what your organisation should prioritise:

Step 1: Inventory your systems. Create a comprehensive catalogue of all technology systems your organisation develops, deploys, or uses. Don’t pre-filter — capture everything.

Step 2: Apply the seven-element test. For each system, assess it against the seven components of the AI system definition. Pay particular attention to the inference element — this is where most borderline cases are decided. A simple triage sketch follows this roadmap.

Step 3: Classify your AI systems by risk. Once you’ve identified which systems qualify as AI systems, determine their risk category under the AI Act (unacceptable, high-risk, limited/transparency, or minimal risk). Each category carries different obligations.

Step 4: Build your compliance evidence. For systems that fall within scope, you’ll need to document your compliance systematically. This means establishing evidence binders that track your conformity across the entire AI lifecycle — from development and testing through deployment and monitoring.

Step 5: Establish ongoing governance. Compliance isn’t a one-time exercise. The AI Act requires continuous monitoring, post-market surveillance, and regular updates to your compliance documentation as your systems evolve.
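As promised in Step 2, here is a minimal, hypothetical triage sketch in Python. The field names, the likely_in_scope helper, and the example recommendation engine are all invented for illustration; recording a yes/no judgement per definitional element is a starting point for your inventory, not a substitute for legal analysis.

```python
from dataclasses import dataclass, fields

@dataclass
class AISystemTest:
    machine_based: bool
    some_autonomy: bool
    adaptive_after_deployment: bool  # optional under Article 3(1), see below
    has_objectives: bool
    infers_outputs: bool             # the decisive element in most borderline cases
    generates_outputs: bool
    influences_environment: bool

    def likely_in_scope(self) -> bool:
        # Adaptiveness is recorded but deliberately excluded from the check:
        # a static system can still be an AI system under the Act.
        required = [f.name for f in fields(self)
                    if f.name != "adaptive_after_deployment"]
        return all(getattr(self, name) for name in required)

# Hypothetical example: a recommendation engine built on a trained model.
recommender = AISystemTest(
    machine_based=True,
    some_autonomy=True,
    adaptive_after_deployment=False,  # static model, yet potentially in scope
    has_objectives=True,
    infers_outputs=True,
    generates_outputs=True,
    influences_environment=True,
)
print(recommender.likely_in_scope())  # -> True
```

Running this kind of checklist across your full inventory makes the borderline cases visible quickly, so expert review can focus where it matters.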

How eyreACT Automates This Process

At eyreACT, we built our platform specifically to solve the compliance challenge the AI system definition creates.

Our Living Compliance Binders™ automate the evidence collection and validation process across all risk categories, tracking your compliance posture from development through deployment and beyond. Instead of wrestling with spreadsheets and manual documentation, eyreACT gives you a systematic, audit-ready framework that evolves with your AI systems.

Whether you’re a provider developing AI systems, a deployer integrating them into your operations, or a distributor bringing them to market — eyreACT helps you move from uncertainty to compliance confidence.

The August 2026 deadline is approaching. Don’t wait until enforcement begins to find out whether your systems are in scope. Start your free assessment!