March 6, 2026 · 15 min read

GDPR vs AI Act: Why You Need Two Compliance Layers and How They Actually Work Together

I had a call last month with a DPO at a mid-size German insurance company. She’d spent the last six years building what she considered a bulletproof GDPR compliance programme. Privacy by design. Data protection impact assessments. Consent management. Vendor agreements. Regular audits. The works.

Then I asked her about the AI Act.

“We’re GDPR compliant,” she said. “Doesn’t that cover it?”

It doesn’t. And this misconception is everywhere right now. Companies that did the hard work on GDPR in 2018 are assuming that their existing privacy compliance programme extends to the AI Act. It doesn’t. They’re two different laws, protecting different things, through different mechanisms, enforced by different authorities, with different penalties.

But here’s what makes it interesting — and what most guides miss: they’re not separate islands. They’re layers.

  • The GDPR is your data protection foundation.
  • The AI Act is the product safety layer that sits on top.

And if you’ve done GDPR properly, you’ve already built roughly 40% of your AI Act compliance programme without knowing it.

Let me explain how.

Two Laws, Two Different Jobs

The most important thing to understand is what each law actually regulates.

| Dimension | GDPR | EU AI Act |
| --- | --- | --- |
| What it regulates | Processing of personal data | Development, placement on market, and use of AI systems |
| Legal nature | Fundamental rights law (data protection) | Product safety law (derived from medical device legislation) |
| Core concern | Protecting individuals’ privacy and data rights | Protecting health, safety, fundamental rights, democracy, and the rule of law |
| Applies when | Personal data of EU individuals is processed | AI systems are placed on the EU market or used within the EU — even if no personal data is involved |
| Risk approach | Risk-based obligations around data processing (DPIA for high-risk processing) | Risk-based classification of AI systems (four tiers: prohibited, high-risk, limited, minimal) |
| Key actors | Data controller, data processor | Provider, deployer, importer, distributor, authorised representative |
| Pre-market requirements | None — GDPR applies to processing activities, not products | Conformity assessment, CE marking, EU database registration for high-risk systems |
| Enforcement | National data protection authorities (DPAs) | National market surveillance authorities + EU AI Office (for GPAI) |
| Maximum penalties | €20M or 4% of global turnover | €35M or 7% of global turnover (prohibited practices) |
| In force since | 25 May 2018 | Phased: Feb 2025 (prohibitions), Aug 2025 (GPAI), Aug 2026 (high-risk) |

The critical distinction: GDPR only applies when personal data is being processed. The AI Act applies to AI systems regardless of whether they process personal data.

An AI system that optimises energy grid distribution using only anonymous sensor data? GDPR doesn’t care. The AI Act does — if it’s a safety component of critical infrastructure, it’s high-risk under Annex III.

But in practice, most AI systems that fall under the AI Act also process personal data. Credit scoring uses personal financial data. Recruitment AI processes candidates’ CVs and video interviews. Healthcare AI analyses patient records. Biometric systems process the most sensitive personal data that exists.

Which means most organisations will need to comply with both laws simultaneously for the same systems.

Where EU AI Act and GDPR Overlap: The Compliance Synergies

If you’ve been through GDPR, you have a head start. Here’s where existing GDPR work maps directly onto AI Act requirements.

| GDPR Requirement | AI Act Requirement | How They Connect |
| --- | --- | --- |
| Data Protection Impact Assessment (DPIA) — Art. 35 GDPR | Fundamental Rights Impact Assessment (FRIA) — Art. 27 AI Act | The AI Act explicitly allows the FRIA to complement and build on an existing DPIA. Do the DPIA first, then expand it to cover broader fundamental rights (non-discrimination, freedom of expression, human dignity). One assessment, two compliance boxes. |
| Data minimisation — Art. 5(1)(c) GDPR | Data governance — Art. 10 AI Act | GDPR requires you to collect only what’s necessary. The AI Act requires your training data to be relevant, representative, and as error-free as possible. Tension exists here — the AI Act sometimes needs more data for fairness testing than GDPR’s minimisation principle would prefer. The Digital Omnibus proposes resolving this by permitting sensitive data processing for bias detection. |
| Transparency — Arts. 13-14 GDPR | Transparency and instructions for use — Art. 13 AI Act | GDPR requires you to tell people what data you collect and why. The AI Act requires you to tell deployers how the system works, its limitations, and how to oversee it. Different audiences (data subjects vs deployers) but the same underlying principle: no black boxes. |
| Right to explanation of automated decisions — Art. 22 GDPR | Human oversight — Art. 14 AI Act | GDPR gives individuals the right not to be subject to solely automated decisions with legal effects, and to obtain meaningful information about the logic involved. The AI Act goes further — requiring that humans can understand, interpret, and override AI decisions. The AI Act’s human oversight is broader than GDPR’s Article 22 because it applies even when a human is technically “in the loop.” |
| Data protection by design and default — Art. 25 GDPR | Risk management system — Art. 9 AI Act | Both require you to build protections into the system from the start. GDPR focuses on privacy protections. The AI Act focuses on safety, fairness, and fundamental rights protections. Same philosophy, broader scope. |
| Record of processing activities — Art. 30 GDPR | Technical documentation — Art. 11 + Annex IV AI Act | GDPR requires you to document what personal data you process, why, and how. The AI Act requires comprehensive technical documentation of how the AI system was designed, built, trained, tested, and deployed. Your GDPR records of processing are a starting point for the data-related sections of AI Act documentation. |
| Data breach notification — Arts. 33-34 GDPR | Serious incident reporting — Art. 73 AI Act | GDPR: notify the DPA within 72 hours of a personal data breach. AI Act: report serious incidents to the market surveillance authority without undue delay, and at the latest within 15 days of becoming aware (10 days where a person has died, 2 days for widespread infringements or disruption of critical infrastructure). Different triggers, different timelines, different authorities — but the muscle of “detect, assess, report” is the same. |
| Data Protection Officer (DPO) — Arts. 37-39 GDPR | No mandatory “AI Officer” — but the QMS requires accountability (Art. 17 AI Act) | GDPR requires a DPO for certain organisations. The AI Act doesn’t mandate a specific role, but the quality management system must include clear accountability structures. Many organisations are extending their DPO’s remit to cover AI governance, or creating a parallel AI compliance function. |
| Vendor due diligence (processor agreements) — Art. 28 GDPR | Supply chain compliance — Arts. 16, 25, 26 AI Act | GDPR requires data processing agreements with vendors handling personal data. The AI Act requires providers to give deployers the documentation they need, and deployers to verify their vendors’ compliance. If you already run GDPR vendor due diligence, extending it to cover AI Act requirements is a natural evolution. |

Where They Diverge: The New Obligations

Now here’s what the AI Act demands that GDPR simply doesn’t cover.

| AI Act Requirement | Why GDPR Doesn’t Cover It |
| --- | --- |
| Risk classification of AI systems (Art. 6) | GDPR assesses risk of data processing activities. The AI Act classifies entire AI systems by risk tier. These are fundamentally different assessments. |
| Conformity assessment (Art. 43) | GDPR has no concept of pre-market product certification. The AI Act requires formal conformity assessment before a high-risk AI system can be placed on the market. This is a product safety mechanism, not a data protection one. |
| CE marking (Art. 48) | A physical (or digital) marking of conformity. Nothing equivalent exists in GDPR. |
| EU database registration (Art. 49) | High-risk AI systems must be registered in a public EU database. GDPR has no comparable public registry. |
| Post-market monitoring (Art. 72) | GDPR requires ongoing compliance with data processing principles, but doesn’t mandate a formal monitoring system for the product itself. The AI Act requires active, continuous monitoring of the AI system’s performance in the real world. |
| Prohibited practices (Art. 5) | GDPR restricts certain data processing (e.g., large-scale profiling, processing of special categories). The AI Act outright bans specific AI use cases — social scoring, subliminal manipulation, emotion recognition in workplaces. These are absolute prohibitions, not conditions on processing. |
| Instructions for use (Art. 13) | GDPR requires transparency to data subjects. The AI Act requires detailed technical instructions for deployers covering system performance, limitations, input data requirements, output interpretation, and human oversight measures. This is an instruction manual, not a privacy notice. |
| Accuracy, robustness, cybersecurity (Art. 15) | GDPR requires data accuracy (Art. 5(1)(d)) and security of processing (Art. 32). The AI Act requires accuracy, robustness against adversarial attacks, and cybersecurity of the AI system as a product. GDPR protects data. The AI Act protects the system. |

Industry Deep Dives: Mapping GDPR Against EU AI Act

Banking: The Double Assessment Problem

A European bank deploys an AI credit scoring system. Under GDPR, it needs a DPIA because automated credit decisions affect individuals’ legal rights (Article 22). Under the AI Act, it’s high-risk (Annex III, 5b) and requires full compliance including risk management, conformity assessment, and post-market monitoring. The deployer also needs an FRIA (Article 27) because credit scoring affects access to essential financial services.

The smart move: conduct the DPIA first (you probably already have one), then expand it into the FRIA by adding fundamental rights beyond privacy — non-discrimination, right to explanation, proportionality of the decision. One document, two frameworks satisfied.
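
For teams that like to see the structure, here is a minimal sketch in Python of that “expand, don’t duplicate” idea: the existing DPIA record becomes the core of the FRIA, and only the genuinely new fundamental-rights sections are added. The schema and field names are entirely hypothetical — neither regulation prescribes a document format — so treat this as an illustration of the workflow, not a template.

```python
from dataclasses import dataclass, field

# Hypothetical schema: neither Art. 35 GDPR nor Art. 27 AI Act prescribes a
# document structure. The point is reuse -- the FRIA wraps the DPIA.

@dataclass
class DPIA:                                   # content most teams already have
    system: str
    processing_purpose: str
    lawful_basis: str
    data_categories: list[str]
    risks_to_data_subjects: list[str]
    mitigations: list[str]

@dataclass
class FRIA:                                   # DPIA core plus broader rights sections
    dpia: DPIA                                # referenced, not copied
    affected_groups: list[str] = field(default_factory=list)
    fundamental_rights_at_risk: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)
    complaint_and_redress: str = ""

def extend_dpia_to_fria(dpia: DPIA) -> FRIA:
    """Start the FRIA from the existing DPIA; the new sections still need filling in."""
    return FRIA(
        dpia=dpia,
        fundamental_rights_at_risk=["non-discrimination", "access to essential services"],
    )
```

Modelling it this way means the DPIA is referenced rather than copied, so any update to the underlying privacy assessment flows into the combined document instead of drifting out of sync.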

The additional complication: supervisory AI risk management expectations (in the mould of Singapore’s MAS proposals from November 2025, for banks with operations there) and DORA’s requirements for ICT third-party risk add a third and fourth compliance layer. The bank isn’t dealing with two regulations; it’s dealing with four or more overlapping frameworks for the same AI system.

Healthcare: Consent vs Classification

A hospital deploys an AI diagnostic tool that analyses patient medical images. GDPR requires an Article 6 lawful basis plus an Article 9 condition for processing health data — typically explicit consent or the provision-of-healthcare exemption. The AI Act classifies the system as high-risk under Annex I (as a regulated medical device) and potentially under Annex III (emergency healthcare triage).

The tension: GDPR’s data minimisation says collect only what’s necessary. The AI Act’s data governance (Article 10) says training data must be representative and comprehensive enough to avoid bias. A diagnostic AI trained on too little data may be GDPR-compliant but AI Act non-compliant because it produces biased results across demographic groups.

The Digital Omnibus proposes allowing processing of sensitive personal data (including health data) for bias detection and correction — specifically to resolve this tension. But until it’s adopted, providers navigate an uncomfortable gap between the two frameworks.

Recruitment: The Triple Overlap

An HR tech company provides AI-powered candidate screening. This triggers three simultaneous compliance regimes:

  • GDPR: processing candidate personal data (CVs, video interviews, assessment results) requires lawful basis, transparency, DPIA, and Article 22 safeguards against solely automated decisions.
  • AI Act: employment and recruitment is explicitly high-risk under Annex III. Full provider obligations apply — risk management, data governance, technical documentation, human oversight, conformity assessment.
  • National employment law: many EU member states have additional rules on automated decision-making in employment (France’s CNIL guidance, Germany’s works council requirements, the Netherlands’ algorithmic management rules).

The practical reality: the company needs a DPIA for GDPR, a conformity assessment for the AI Act, a FRIA if deployed by public bodies or essential services, and potentially works council approval under German law. Four separate compliance processes for one AI system. This is why integrated compliance platforms exist.

Insurance: Profiling Meets Product Safety

An insurer uses AI for life insurance risk assessment and pricing. GDPR Article 22 applies because automated profiling affects insurance terms. The AI Act classifies this as high-risk (Annex III, 5c). Both laws require transparency about how the decision is made — but to different audiences.

GDPR: the policyholder has the right to meaningful information about the logic involved and the right to contest the decision.

AI Act: the provider must supply instructions for use to the deployer (the insurer) explaining the system’s capabilities, limitations, and how to implement human oversight.

One system, two transparency obligations, two different audiences. The insurer needs to build both: a customer-facing explanation (GDPR) and an internal governance framework with human oversight capability (AI Act).

The Unified Compliance Approach: GDPR Meets EU AI Act

Here’s how smart organisations are building an integrated framework instead of running two parallel compliance programmes.

| Step | What To Do | GDPR Benefit | AI Act Benefit |
| --- | --- | --- | --- |
| 1. AI system inventory | Catalogue every AI system, what data it processes, and what decisions it influences (a minimal data-model sketch follows this table) | Maps to record of processing activities (Art. 30) | Maps to AI system registry and risk classification (Art. 6) |
| 2. Unified impact assessment | Conduct the DPIA first, then expand it into the FRIA | DPIA satisfies Art. 35 GDPR | FRIA satisfies Art. 27 AI Act |
| 3. Integrated data governance | Document data collection, quality, representativeness, and bias assessment for each AI system | Supports data minimisation, accuracy, and lawful basis documentation | Satisfies Art. 10 data governance requirements |
| 4. Combined transparency documentation | Create layered transparency: public-facing (for data subjects) and deployer-facing (instructions for use) | Satisfies Arts. 13-14 GDPR transparency | Satisfies Art. 13 AI Act instructions for use |
| 5. Unified oversight model | Implement human oversight that satisfies both Article 22 GDPR and Article 14 AI Act | Ensures human involvement in automated decisions | Ensures effective human oversight of AI system operation |
| 6. Integrated vendor management | Extend GDPR processor agreements to include AI Act compliance warranties and documentation requirements | Satisfies Art. 28 GDPR | Satisfies deployer obligations for third-party AI systems |
| 7. Combined incident response | Build one incident response process with branching for GDPR breach notification (72 hours to the DPA) and AI Act serious incident reporting (2/10/15 days to the market surveillance authority, depending on severity) | Satisfies Arts. 33-34 GDPR | Satisfies Art. 73 AI Act |
| 8. Unified audit trail | Maintain one compliance evidence base covering both personal data processing and AI system governance | Supports accountability (Art. 5(2) GDPR) | Supports technical documentation and record-keeping (Arts. 11-12 AI Act) |
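
Step 1 is where the two layers physically meet. As a purely illustrative sketch (the inventory schema and field names below are assumptions, not anything either regulation prescribes), one record per AI system can carry both the GDPR view and the AI Act view, so a single query answers “which obligations apply to this system?”:

```python
from dataclasses import dataclass
from enum import Enum

class AIActRisk(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    decisions_influenced: str
    processes_personal_data: bool          # triggers the GDPR layer
    special_category_data: bool            # Art. 9 GDPR
    solely_automated_decisions: bool       # Art. 22 GDPR
    ai_act_risk: AIActRisk                 # Art. 6 + Annex III classification
    role: str                              # "provider" or "deployer"

def applicable_layers(r: AISystemRecord) -> list[str]:
    """Rough first-pass triage for the inventory -- not a substitute for legal analysis."""
    layers = []
    if r.processes_personal_data:
        layers.append("GDPR: record of processing (Art. 30)")
        if r.solely_automated_decisions or r.special_category_data:
            layers.append("GDPR: DPIA likely required (Art. 35)")
    if r.ai_act_risk is AIActRisk.HIGH:
        layers.append("AI Act: high-risk obligations apply (Arts. 8-27)")
    elif r.ai_act_risk is AIActRisk.PROHIBITED:
        layers.append("AI Act: prohibited practice -- do not deploy (Art. 5)")
    return layers

credit_scoring = AISystemRecord(
    name="retail credit scoring",
    decisions_influenced="loan approval and pricing",
    processes_personal_data=True,
    special_category_data=False,
    solely_automated_decisions=True,
    ai_act_risk=AIActRisk.HIGH,
    role="deployer",
)
print(applicable_layers(credit_scoring))
```

The design choice that matters is having one record, not two: the same entry feeds the Art. 30 register and the Art. 6 classification, which is exactly the “single source of truth” argued for in the best practices below.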

The Penalty Comparison

Both laws have teeth. Here’s how they compare:

| Violation Category | GDPR Penalty | AI Act Penalty |
| --- | --- | --- |
| Highest tier | €20M or 4% of global turnover, whichever is higher (violations of data processing principles, rights of data subjects, international transfers) | €35M or 7% of global turnover, whichever is higher (prohibited AI practices) |
| Middle tier | €10M or 2% of global turnover (violations of technical/organisational measures, DPO obligations) | €15M or 3% of global turnover (non-compliance with high-risk obligations) |
| Lower tier | Varies by member state | €7.5M or 1% of global turnover (supplying incorrect information to authorities) |
| Enforcement authority | National data protection authority | National market surveillance authority + EU AI Office (for GPAI) |
| Private right of action | Yes — individuals can claim compensation (Art. 82) | Right to explanation of individual decision-making (Art. 86); right to lodge a complaint with the market surveillance authority (Art. 85) |

The overlap risk: a single AI system failure could trigger both a GDPR fine and an AI Act fine. An AI credit scoring system that uses personal data improperly (GDPR violation) and lacks proper risk management (AI Act violation) faces cumulative exposure of up to 11% of global turnover. That’s not theoretical — it’s the regulatory reality of deploying AI in the EU without integrated compliance.
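
The arithmetic behind the 11% figure is simple: each regime’s top tier caps fines at the higher of a fixed amount or a percentage of worldwide annual turnover, and the two ceilings are imposed by different authorities, so they can stack. A quick illustration — theoretical maximums only, since real fines are calibrated to the infringement:

```python
def max_exposure(annual_turnover_eur: float) -> dict:
    """Worst-case ceilings if both regimes' top tiers applied to the same AI system."""
    gdpr_cap = max(20_000_000, 0.04 * annual_turnover_eur)    # top GDPR tier, Art. 83(5)
    ai_act_cap = max(35_000_000, 0.07 * annual_turnover_eur)  # top AI Act tier (prohibited practices), Art. 99(3)
    return {"gdpr": gdpr_cap, "ai_act": ai_act_cap, "combined": gdpr_cap + ai_act_cap}

# A company with EUR 2bn turnover: 80M + 140M = EUR 220M theoretical ceiling, i.e. 11%.
print(max_exposure(2_000_000_000))
```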

Best Practices for Meeting Both GDPR and EU AI Act Demands

| Practice | Why It Matters |
| --- | --- |
| Don’t run parallel programmes | One integrated compliance function covering both GDPR and the AI Act reduces cost, avoids contradictions, and creates a single source of truth. |
| Start from your GDPR foundation | If your GDPR programme is mature, you already have roughly 40% of what the AI Act needs. Build on it, don’t rebuild from scratch. |
| Map roles carefully | “Data controller” doesn’t automatically equal “provider.” “Data processor” doesn’t automatically equal “deployer.” Map both frameworks’ role definitions to your actual organisational structure. |
| Combine impact assessments | The DPIA and FRIA can and should be one exercise. The AI Act explicitly supports this approach. |
| Resolve the data minimisation tension proactively | Document why your AI system needs the data it uses for training. If you need more data than GDPR’s minimisation principle would suggest, justify it through the bias prevention argument and document the balancing exercise. |
| Use one evidence management system | Compliance evidence for GDPR and the AI Act should live in the same platform — not in separate folders managed by separate teams. |
| Coordinate incident response | A serious AI incident involving personal data may trigger both GDPR breach notification and AI Act incident reporting — to different authorities, with different timelines. Your incident response plan must handle both simultaneously (a minimal branching sketch follows this table). |
| Watch the Digital Omnibus | The proposed GDPR amendments (legitimate interest for AI training, relaxed sensitive data processing for bias detection, extended breach notification timeline) could significantly change the GDPR-AI Act interface. |
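
To make the coordination point concrete, here is a minimal sketch of the branching an incident response runbook needs, using the notification triggers and timelines discussed earlier (72 hours under Art. 33 GDPR; 2/10/15 days under Art. 73 AI Act depending on severity). The incident fields are hypothetical simplifications; deciding whether something is a “personal data breach” or a “serious incident” still needs legal judgement.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    involves_personal_data_breach: bool       # triggers Art. 33 GDPR
    is_serious_ai_incident: bool              # triggers Art. 73 AI Act
    caused_death: bool
    critical_infrastructure_disruption: bool

def notification_duties(i: Incident) -> list[str]:
    """Which notifications are due, to whom, and by when."""
    duties = []
    if i.involves_personal_data_breach:
        duties.append("Notify the DPA within 72 hours (Art. 33 GDPR)")
    if i.is_serious_ai_incident:
        if i.critical_infrastructure_disruption:
            deadline = "2 days"
        elif i.caused_death:
            deadline = "10 days"
        else:
            deadline = "15 days"
        duties.append(f"Report to the market surveillance authority within {deadline} (Art. 73 AI Act)")
    return duties

# A serious AI incident that also exposes personal data triggers both branches at once.
print(notification_duties(Incident(True, True, False, False)))
```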

How EYREACT Can Help

EYREACT is built for the reality that AI compliance doesn’t exist in isolation. Our platform maps AI Act obligations alongside their GDPR touchpoints, so you can see both compliance layers for every AI system in one view.

Living Compliance Binders track evidence across both frameworks. The Rule Engine flags where AI Act requirements overlap with existing GDPR obligations — and where they diverge, so you know exactly what new work is needed.

One platform. Two regulatory layers. Zero guesswork. Book a demo!

FAQ

If I’m GDPR compliant, am I automatically AI Act compliant?

No. GDPR compliance provides a foundation — particularly around data governance, impact assessments, transparency, and vendor management. But the AI Act introduces entirely new obligations: risk classification, conformity assessment, CE marking, EU database registration, post-market monitoring, and prohibited practice screening. These have no GDPR equivalent.

Can the same person manage both GDPR and AI Act compliance?

In many organisations, especially mid-size ones, the DPO is the natural choice to take on AI Act oversight. The skill sets overlap significantly. However, the AI Act’s technical requirements (conformity assessment, system testing, accuracy evaluation) may require engineering input that goes beyond a typical DPO’s expertise. Consider pairing your DPO with a technical lead.

Do I need separate impact assessments for GDPR and the AI Act?

No. The AI Act explicitly allows the Fundamental Rights Impact Assessment to build on an existing DPIA. Conduct one integrated assessment that covers both data protection risks (GDPR) and broader fundamental rights impacts (AI Act). This is more efficient and produces a more comprehensive risk picture.

What happens if the AI Act and GDPR conflict?

The AI Act states that it does not affect GDPR obligations. When both apply, you must comply with both. The main tension point is data minimisation vs data representativeness for training — the Digital Omnibus proposes resolving this by permitting sensitive data processing for bias detection. Until adopted, document your balancing decisions carefully.

Does the AI Act apply to AI systems that don’t process personal data?

Yes. The AI Act applies to AI systems based on their function and risk level, not based on whether they process personal data. An AI system managing electricity grid distribution using only anonymous sensor data is still high-risk under Annex III if it’s a safety component of critical infrastructure.

Which authority enforces what?

GDPR: national data protection authorities. AI Act: national market surveillance authorities (which may or may not be the same body — Germany is likely to use the Federal Network Agency for AI Act enforcement, not the data protection authorities). GPAI models: the EU AI Office. If an AI incident involves both personal data and AI system failure, you may be dealing with two separate regulators simultaneously.

Can I be fined under both laws for the same incident?

Potentially, yes. A GDPR fine and an AI Act fine are imposed by different authorities for different violations. However, the principle of ne bis in idem (not punished twice for the same offence) may limit cumulative penalties in practice. This is untested territory — don’t be the test case.

How does the Digital Omnibus affect the GDPR-AI Act relationship?

The proposed changes include: legitimate interest as a lawful basis for AI training under GDPR, processing of sensitive personal data for bias detection in AI systems, extended breach notification timeline from 72 to 96 hours, and a unified incident reporting portal. These changes, if adopted, would significantly reduce the friction between the two frameworks.

This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.