Regulatory Context

Understanding the EU AI Act

A practical overview of what the EU AI Act means for organisations developing, deploying, or using AI systems in the European Union.

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the European Union's comprehensive regulation governing artificial intelligence. It establishes a legal framework for the development, deployment, and use of AI systems within the EU market.

The regulation takes a risk-based approach, categorising AI systems based on their potential impact on health, safety, and fundamental rights. Different risk categories carry different compliance obligations.

Timeline

The EU AI Act entered into force on 1 August 2024, with different provisions becoming applicable in stages: prohibitions on unacceptable-risk practices apply from February 2025, obligations for general-purpose AI models from August 2025, and most remaining obligations, including the bulk of the high-risk requirements, from August 2026.

Risk-based classification

The EU AI Act categorises AI systems based on the level of risk they pose.

  • Unacceptable Risk

    AI systems that pose clear threats to safety, livelihoods, or rights. These are prohibited outright. Examples include social scoring systems and certain types of biometric surveillance.

  • High Risk

    AI systems with significant potential impact on health, safety, or fundamental rights. These are subject to extensive compliance requirements, including conformity assessments, technical documentation, and post-market monitoring. Examples include AI used in employment, credit scoring, and critical infrastructure.

  • Limited Risk

    AI systems subject to transparency obligations: people must be informed that they are interacting with an AI system. Examples include chatbots and emotion recognition systems.

  • Minimal Risk

    AI systems with minimal potential for harm. These carry no specific regulatory requirements, though general good practices still apply. Examples include spam filters and AI in video games.
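
As an illustration, the four tiers and their headline obligations can be modelled as a simple lookup table. The following Python sketch paraphrases the summaries above; the names and wording are illustrative, not the regulation's own text.

    from enum import Enum

    class RiskTier(Enum):
        """The four risk tiers of the EU AI Act (illustrative labels)."""
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    # Headline obligation per tier, paraphrased from the overview above;
    # not a legal mapping.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
        RiskTier.HIGH: "Conformity assessment, documentation, monitoring.",
        RiskTier.LIMITED: "Transparency: inform people they are interacting with AI.",
        RiskTier.MINIMAL: "No specific requirements; general good practice applies.",
    }

    print(OBLIGATIONS[RiskTier.HIGH])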

Why this matters for your organisation

Legal obligations

Organisations deploying or providing AI systems in the EU may be subject to specific legal requirements depending on their role and the risk level of their systems.

Documentation requirements

Many AI systems will require technical documentation, risk assessments, and records of compliance activities.

Significant penalties

Non-compliance can result in substantial fines: up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations.
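
For scale: because the cap is the higher of the two amounts, a company with a hypothetical global annual turnover of €2 billion would face a maximum fine of max(€35 million, 7% × €2 billion) = €140 million. A minimal Python sketch of that calculation:

    def max_fine(global_annual_turnover_eur: float) -> float:
        """Top-tier fine cap: the higher of EUR 35 million and
        7% of global annual turnover."""
        return max(35_000_000, 0.07 * global_annual_turnover_eur)

    # Hypothetical turnover of EUR 2 billion -> EUR 140 million cap.
    print(max_fine(2_000_000_000))  # 140000000.0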

Ongoing obligations

Compliance is not a one-time activity. Organisations must maintain documentation, conduct ongoing monitoring, and respond to changes in their systems and in the regulatory landscape.

Market access

Compliance may be required to place AI systems on the EU market or to continue serving EU customers.

Due diligence expectations

Even where direct obligations are limited, customers, partners, and investors may still expect evidence of sound AI governance practices.

What organisations are expected to evidence

Depending on your role and your AI systems' risk classification, you may need to demonstrate:

    • AI system inventory and classification
    • Risk assessment processes
    • Data governance practices
    • Human oversight mechanisms
    • Technical documentation
    • Quality management systems
    • Monitoring and logging
    • Incident response procedures
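
To make the first item concrete, a minimal inventory entry might capture what the system does, your organisation's role, and a provisional risk tier. The field names in this Python sketch are illustrative assumptions, drawn neither from the regulation nor from eyreACT:

    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        """One row of an AI system inventory (illustrative fields only)."""
        name: str       # internal system identifier
        purpose: str    # what the system is used for
        role: str       # your role, e.g. "provider" or "deployer"
        risk_tier: str  # provisional classification, pending legal review
        owner: str      # accountable team or individual

    # Hypothetical entry: employment uses are listed among high-risk examples.
    record = AISystemRecord(
        name="cv-screening-v2",
        purpose="Shortlisting job applications",
        role="deployer",
        risk_tier="high",
        owner="HR Operations",
    )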

How eyreACT supports compliance efforts

eyreACT is a platform designed to help organisations structure and evidence their EU AI Act compliance activities. It provides:

  • Structure

    Organised frameworks for documentation and assessment

  • Traceability

    Clear records of decisions and activities

  • Evidence

    Exportable reports demonstrating compliance efforts

Note: eyreACT is a supporting tool, not a substitute for legal advice or professional compliance guidance. Organisations should work with qualified advisors on their specific compliance strategies.

Assess your readiness today

Take the free assessment to understand your current compliance posture and receive actionable recommendations.