Just a few days ago, I was at a founder meetup in London. One of the speakers — CTO of a well-funded generative AI startup — gave a confident talk about how the EU AI Act was “mainly a problem for deployers.” Their company builds a foundation model. They sell API access. Downstream customers build applications.
“The compliance obligation flows down the chain,” he said. “We provide the model. They deal with the regulation.”
I caught him after the talk. “Have you read Chapter V?”
He hadn’t.
Chapter V of the EU AI Act is dedicated entirely to general-purpose AI models. It’s been enforceable since 2 August 2025. It imposes obligations on GPAI providers that are completely separate from, and in addition to, the high-risk system rules that apply to downstream deployers. Technical documentation. Training data transparency. Copyright compliance. Downstream provider support. And if your model is classified as having systemic risk — adversarial testing, incident reporting, cybersecurity, and a direct reporting relationship with the EU AI Office.
The CTO’s assumption that compliance “flows down” was exactly backwards. For GPAI, it flows up. The model provider is the first link in the chain, not the last.
Let me walk you through how this actually works.
What Counts as a GPAI Model Under the EU AI Act
The EU AI Act defines a GPAI model as an AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. The Commission’s July 2025 guidelines added an indicative compute threshold: a model trained using more than 10²³ FLOPs that can generate language, images, or video.
| Criterion | Explanation |
|---|---|
| Significant generality | The model can competently perform a wide range of distinct tasks — not just one narrow function |
| Versatile integration | It can be integrated into various downstream AI systems, not locked to a single application |
| Training threshold | Trained using more than 10²³ FLOPs (floating point operations) |
| Output modality | Generates language (text or audio), text-to-image, or text-to-video outputs |
What qualifies: GPT-4, Claude, Gemini, Llama, Mistral, Stable Diffusion, and similar foundation models. Also smaller models that meet the generality and integration criteria.
What doesn’t qualify: Specialised models trained above the FLOP threshold but lacking general capabilities — purpose-built transcription models, image upscaling tools, weather forecasting systems, or game-specific AI. If the model can only do one thing, it’s not “general-purpose” regardless of how much compute went into training it.
When it kicks in: Obligations begin from the start of pre-training and extend through the entire lifecycle, including post-market modifications.
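A rough way to sanity-check the compute criterion is the community rule of thumb that dense-transformer training costs roughly 6 × parameters × training tokens FLOPs. To be clear, this heuristic is my illustration, not the Commission’s calculation method (the guidelines set out their own estimation approaches), but it is useful for triage:

```python
# Back-of-envelope training-compute estimate using the common
# "FLOPs ~ 6 * parameters * training tokens" rule of thumb for dense
# transformers. A triage aid only; NOT the Commission's official method.

GPAI_DEFINITION_FLOPS = 1e23  # indicative threshold from the July 2025 guidelines
SYSTEMIC_RISK_FLOPS = 1e25    # systemic risk presumption (Art. 51)

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

# Example: a 7B-parameter model trained on 2T tokens.
flops = estimate_training_flops(7e9, 2e12)
print(f"~{flops:.1e} FLOPs")                                           # ~8.4e+22
print("Meets GPAI compute criterion:", flops > GPAI_DEFINITION_FLOPS)  # False
print("Presumed systemic risk:", flops >= SYSTEMIC_RISK_FLOPS)         # False
```

On this heuristic, a 7B model trained on 2T tokens lands just under 10²³ FLOPs, which is exactly the kind of borderline case where the generality and integration criteria, not compute alone, decide classification.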
The Two Tiers: Standard GPAI vs Systemic Risk Under the EU AI Act
The AI Act splits GPAI providers into two categories with different obligation levels.
| Tier | Threshold | Who This Covers |
|---|---|---|
| Standard GPAI | Meets the GPAI definition (generality + versatility + 10²³ FLOPs) | Most foundation model providers, including smaller open-source models |
| GPAI with systemic risk | Trained with ≥10²⁵ FLOPs, OR classified by the Commission based on high-impact capabilities | GPT-4, Gemini Ultra, Claude (largest variants), and other frontier models. Providers must notify the Commission within 2 weeks of reaching or foreseeing the threshold. |
The 10²⁵ FLOPs threshold is a presumption, not an absolute line. A model below the threshold could still be classified as systemic risk if the Commission determines it has high-impact capabilities based on benchmarks, reach, or real-world effects.
Conversely, a provider above the threshold can contest the classification with evidence that their model doesn’t present systemic risk — though obligations remain in effect during the review period.
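To make the two-tier logic concrete, here is a minimal classification sketch. The thresholds come from the table above; `commission_designated` stands in for the Commission’s power to classify below-threshold models, and the whole thing is illustrative, not a compliance determination:

```python
from dataclasses import dataclass

SYSTEMIC_RISK_FLOPS = 1e25  # Art. 51 presumption threshold

@dataclass
class GpaiModel:
    name: str
    training_flops: float
    commission_designated: bool = False  # Commission can designate below-threshold models
    rebuttal_accepted: bool = False      # provider successfully contested the presumption

def classify(model: GpaiModel) -> str:
    """Return the obligation tier for a model that already meets the GPAI definition."""
    presumed = model.training_flops >= SYSTEMIC_RISK_FLOPS
    if (presumed and not model.rebuttal_accepted) or model.commission_designated:
        return "GPAI with systemic risk"  # Art. 55 obligations on top of Art. 53
    return "standard GPAI"                # Art. 53 obligations only

print(classify(GpaiModel("frontier-model", 3e25)))   # GPAI with systemic risk
print(classify(GpaiModel("mid-size-model", 5e23)))   # standard GPAI
print(classify(GpaiModel("mid-size-model", 5e23, commission_designated=True)))  # systemic risk
```

Note that `rebuttal_accepted` flips only once the Commission actually accepts the evidence: while a rebuttal is under review, the systemic risk obligations stay in force.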
What Standard GPAI Providers Must Do
These obligations are enforceable now — since 2 August 2025.
| Obligation | Article | What It Involves |
|---|---|---|
| Technical documentation | Art. 53(1)(a) + Annex XI | Comprehensive documentation covering: model description, architecture, training process and methodology, computational resources used, training data characteristics, evaluation results, and known limitations. Must be maintained, updated, and provided to the AI Office on request. Keep for 10 years. A documentation skeleton is sketched just after this table. |
| Downstream provider information | Art. 53(1)(b) + Annex XII | Provide downstream AI system providers with documentation enabling them to understand capabilities, limitations, and integration requirements. This includes: intended tasks, acceptable use policies, technical specifications, input/output formats, training data provenance, and integration instructions. Deliver within 14 days of request. |
| Copyright compliance policy | Art. 53(1)(c) | Establish and implement a policy to comply with EU copyright law, particularly the text and data mining opt-out provisions of the Copyright Directive. This means respecting rights holders’ machine-readable opt-outs from text and data mining. A minimal opt-out detection sketch appears below. |
| Training data summary | Art. 53(1)(d) | Publish a sufficiently detailed summary of the content used to train the model, using the AI Office’s template. This is public-facing transparency — the market and rights holders should be able to understand what data went into your model. |
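The Annex XI categories in the documentation row translate naturally into a living document your engineering team maintains alongside the model. Here is the promised skeleton; the field names paraphrase the categories listed above and are not the AI Office’s actual template:

```python
# Illustrative Annex XI-style documentation skeleton. Field names
# paraphrase the categories in the table above; the official template
# may differ. Version it with the model and retain it for 10 years.

ANNEX_XI_SKELETON = {
    "model_description": "",        # what the model is and does
    "architecture": "",             # e.g. dense decoder-only transformer, parameter count
    "training_process": "",         # methodology and stages (pre-training, fine-tuning, RLHF)
    "computational_resources": "",  # total FLOPs, hardware, training duration
    "training_data": "",            # sources, characteristics, curation and filtering
    "evaluation_results": "",       # benchmarks, internal evals, versions tested
    "known_limitations": "",        # failure modes, out-of-scope uses
    "last_updated": "",             # documentation must be maintained, not written once
}

def missing_fields(doc: dict) -> list[str]:
    """Flag empty sections before an AI Office information request arrives."""
    return [k for k in ANNEX_XI_SKELETON if not doc.get(k)]

print(missing_fields({"architecture": "7B dense decoder-only transformer"}))
```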
The open-source exception: If your GPAI model is released under a free and open licence, with parameters, weights, architecture, and usage information publicly available, you only need to comply with the copyright policy and training data summary obligations. You’re exempt from the technical documentation and downstream provider information requirements. The exemption falls away, however, if your open-source model has systemic risk: then the full obligation set applies regardless of licensing.
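On the copyright side, the machine-readable opt-out is the part you can actually engineer, and it binds open-source providers too. Two commonly cited signals are robots.txt and the W3C TDM Reservation Protocol (TDMRep) `tdm-reservation` response header; the sketch below checks both, but treat the exact TDMRep semantics as something to verify against the current spec, and note that a production pipeline also needs the TDMRep well-known file, caching, and error handling:

```python
import urllib.robotparser
import urllib.request
from urllib.parse import urlsplit

def tdm_reserved(url: str, user_agent: str = "my-training-crawler") -> bool:
    """Best-effort check for machine-readable TDM opt-outs before ingesting a URL.

    Illustrative sketch only: checks robots.txt and the TDMRep
    `tdm-reservation` header, skipping the well-known file, caching,
    and error handling a real pipeline needs.
    """
    parts = urlsplit(url)
    root = f"{parts.scheme}://{parts.netloc}"

    # 1. robots.txt: the de facto crawl opt-out channel.
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"{root}/robots.txt")
    robots.read()
    if not robots.can_fetch(user_agent, url):
        return True

    # 2. TDMRep header: "tdm-reservation: 1" signals reserved rights.
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        if response.headers.get("tdm-reservation") == "1":
            return True

    return False
```

Logging every decision, including the URLs you skipped and why, is what turns a filter like this into documented evidence of the policy Article 53(1)(c) requires.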
What Systemic Risk GPAI Providers Must Do
Everything above, plus:
| Additional Obligation | Article | What It Involves |
|---|---|---|
| Model evaluations | Art. 55(1)(a) | Conduct and document comprehensive model evaluations, including adversarial testing, to identify and mitigate systemic risks. This means red-teaming, stress testing, and structured evaluation against safety benchmarks. |
| Systemic risk assessment and mitigation | Art. 55(1)(b) | Assess and mitigate possible systemic risks at Union level, including their sources. Maintain ongoing risk management that evolves with the model and its deployment context. |
| Serious incident tracking and reporting | Art. 55(1)(c) | Track, document, and report serious incidents to the AI Office and relevant national competent authorities without undue delay. Also report possible corrective measures taken. |
| Cybersecurity protections | Art. 55(1)(d) | Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure. This includes protection against model theft, unauthorised access, and adversarial manipulation. |
| Safety and Security Framework | Code of Practice | Develop a comprehensive Safety and Security Framework before model release, covering evaluation triggers, risk categories, mitigation strategies, and organisational responsibilities. Update regularly. |
| Commission notification | Art. 51(2) | Notify the AI Office within 2 weeks of reaching or reasonably foreseeing the 10²⁵ FLOPs threshold. |
The Code of Practice: Your Compliance Shortcut
Published on 10 July 2025, the GPAI Code of Practice is the bridge between obligations and implementation. It’s voluntary — but signing it creates a “rebuttable presumption of conformity,” meaning the AI Office will presume you’re compliant unless evidence suggests otherwise.
| Chapter | Applies To | What It Covers |
|---|---|---|
| Transparency | All GPAI providers | Model documentation, downstream provider information, public summaries |
| Copyright | All GPAI providers | Copyright compliance policies, opt-out mechanisms, rights holder engagement |
| Safety and Security | Systemic risk GPAI providers only | Risk governance, evaluations, red teaming, incident reporting, cybersecurity |
Who signed: As of early 2026, 26 major AI providers have signed, including Microsoft, Google, Amazon, OpenAI, and Anthropic. xAI signed only the Safety and Security chapter, committing to demonstrate transparency and copyright compliance through alternative means.
Who didn’t sign: Meta stated it would focus on direct compliance rather than joining the voluntary pact. Non-signatories face enhanced scrutiny — they must demonstrate compliance through “alternative adequate means,” which the AI Office will evaluate on a case-by-case basis.
The practical benefit of signing: During the first year (August 2025–August 2026), the AI Office is taking a collaborative approach with Code signatories. If you’ve signed and haven’t fully implemented every commitment yet, the AI Office will work with you rather than immediately penalise you. After August 2026, full enforcement begins regardless.
The Downstream Chain: How GPAI Obligations Connect to High-Risk Systems
This is the part that confuses most people. GPAI providers and high-risk AI system providers have separate but interconnected obligations.
| Your Role | Your Obligations | How You Connect to the Chain |
|---|---|---|
| GPAI model provider | Technical documentation, downstream info, copyright, training summary. If systemic risk: evaluations, incident reporting, cybersecurity. | You provide documentation and information that downstream providers need to comply with their high-risk obligations. You must cooperate with them. |
| AI system provider (using a GPAI model) | Full high-risk obligations if the system is high-risk: risk management, technical documentation, conformity assessment, CE marking, registration, post-market monitoring. | You rely on the GPAI provider’s documentation to understand model capabilities and limitations. You’re responsible for the complete system — not just your wrapper around the model. |
| Deployer | Human oversight, monitoring, transparency, incident reporting. | You rely on the AI system provider’s instructions for use. The GPAI provider is two steps removed from you — but their documentation cascades down to inform your deployment. |
The critical implication: if you build a high-risk AI system on top of a GPAI model, you can’t outsource your compliance to the model provider. You inherit responsibility for the complete system. The GPAI provider’s documentation helps you — but doesn’t replace your own risk management, technical documentation, or conformity assessment.
And conversely: if you’re the GPAI provider, you can’t claim ignorance of downstream use. Your documentation and cooperation obligations exist precisely because your model is the foundation that others build on.
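The division of labour in that table can be summarised as a lookup: role plus a few flags determines the obligation set. A conceptual sketch of the structure described above, not legal logic to rely on:

```python
def obligations(role: str, *, systemic_risk: bool = False,
                open_source: bool = False, high_risk: bool = False) -> list[str]:
    """Conceptual map of how obligations attach along the value chain.

    Mirrors the table above; not a substitute for legal analysis.
    """
    if role == "gpai_provider":
        duties = ["copyright policy", "training data summary"]
        if not open_source or systemic_risk:  # open-source exemption, unless systemic risk
            duties += ["technical documentation (Annex XI)",
                       "downstream provider information (Annex XII)"]
        if systemic_risk:
            duties += ["model evaluations / adversarial testing",
                       "systemic risk mitigation", "serious incident reporting",
                       "cybersecurity", "AI Office notification"]
        return duties
    if role == "system_provider":
        if high_risk:
            return ["risk management", "technical documentation",
                    "conformity assessment", "CE marking",
                    "registration", "post-market monitoring"]
        return ["transparency obligations where applicable"]
    if role == "deployer":
        return ["human oversight", "monitoring", "transparency",
                "incident reporting"]
    raise ValueError(f"unknown role: {role}")

# The CTO's mistake from the opening: obligations exist at every link in the chain.
print(obligations("gpai_provider", systemic_risk=True))
print(obligations("system_provider", high_risk=True))
```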
Industry Examples
OpenAI / GPT
GPT-4 and successors are GPAI models with systemic risk (trained above 10²⁵ FLOPs). OpenAI must provide technical documentation, training data summaries, downstream integration information, copyright compliance, model evaluations, adversarial testing, incident reporting, and cybersecurity protections. OpenAI signed the Code of Practice.
A startup that builds a credit scoring application using GPT-4’s API is the provider of a high-risk AI system. The startup can’t point to OpenAI’s compliance and say “they handled it.” The startup must conduct its own risk management, produce its own technical documentation, and complete its own conformity assessment for the credit scoring system.
Meta / Llama
Llama models are open-source GPAI. Under the open-source exemption, Meta only needs to maintain copyright compliance and publish training data summaries — unless a Llama model crosses the systemic risk threshold, in which case full obligations apply regardless of open-source licensing. Meta has not signed the Code of Practice, meaning it must demonstrate compliance through alternative means that the AI Office will evaluate.
A company that takes Llama, fine-tunes it for medical triage, and deploys it in EU hospitals becomes the provider of a high-risk AI system under Article 25. Meta’s open-source exemption doesn’t cascade downstream. The company bears full high-risk provider obligations.
Mistral
Paris-based Mistral develops GPAI models and has signed the Code of Practice. As an EU-based provider, Mistral deals directly with the AI Office without needing an authorised representative. Downstream providers building on Mistral’s models are entitled to integration documentation within 14 days of a request.
Stability AI / Stable Diffusion
Image generation models are GPAI if they meet the generality and FLOP thresholds. A marketing agency using Stable Diffusion to generate advertising images has limited-risk transparency obligations (AI-generated content must be labelled) but is not operating a high-risk system. The GPAI provider’s obligations (documentation, copyright) are separate from the limited-risk user’s obligations (labelling).
Enterprise Fine-Tuning Scenario
A European bank takes a GPAI model and fine-tunes it on proprietary financial data to build a credit risk assessment tool. The bank is now the provider of a high-risk AI system. The GPAI provider remains the GPAI provider, with its own obligations.
The bank cannot rely on the GPAI provider’s documentation alone. It must produce its own technical documentation covering the fine-tuning process, the proprietary data, the system’s performance characteristics, and the risk management decisions specific to credit scoring.
If the bank’s fine-tuning constitutes a significant modification to the GPAI model itself (changing capabilities, performance, or risk profile), the bank could also become a GPAI provider in its own right, inheriting GPAI-specific obligations as well.
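How much fine-tuning tips a downstream modifier into GPAI provider status is partly a compute question in the Commission’s guidelines. The sketch below parameterises that test; the one-third fraction is my assumption about the guidelines’ approach, so confirm the current figure before relying on it:

```python
# Illustrative test for when a fine-tuner becomes a GPAI provider in its
# own right. The one-third fraction is an ASSUMPTION about the July 2025
# guidelines; verify the current figure before relying on this.

GPAI_DEFINITION_FLOPS = 1e23
MODIFICATION_FRACTION = 1 / 3  # ASSUMPTION: confirm against current guidance

def becomes_gpai_provider(modification_flops: float) -> bool:
    """Presume GPAI provider status when the modification's compute is
    large relative to the definitional threshold."""
    return modification_flops > MODIFICATION_FRACTION * GPAI_DEFINITION_FLOPS

print(becomes_gpai_provider(1e20))  # False: a typical fine-tune, downstream provider only
print(becomes_gpai_provider(5e22))  # True: presumed a GPAI provider as well
```

Either way, the bank’s high-risk system obligations for the credit tool are unaffected; the question is only whether GPAI obligations stack on top.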
Timeline: What’s Already Enforceable and What’s Coming
| Date | What Happened / Happens |
|---|---|
| 10 July 2025 | GPAI Code of Practice published |
| 18 July 2025 | Commission published its GPAI guidelines on scope and definitions |
| 2 August 2025 ✅ | GPAI obligations enforceable. New models must comply. AI Office operational. Systemic risk models must notify. |
| 2 August 2026 | Full enforcement powers: AI Office can request information, order model recalls, mandate mitigations, impose fines. GPAI-specific penalties enforceable. |
| 2 August 2027 | Legacy GPAI models (placed on market before August 2025) must be fully compliant. |
The enforcement grace period: Between August 2025 and August 2026, the AI Office is collaborating with providers rather than immediately fining them. Code of Practice signatories who haven’t fully implemented every commitment won’t be considered in breach — the AI Office will work with them. After August 2026, this grace period ends.
Penalties for GPAI Violations Under the EU AI Act
| Violation | Maximum Penalty (whichever is higher) |
|---|---|
| Non-compliance with GPAI obligations | Up to €15M or 3% of global annual turnover |
| Supplying incorrect, incomplete, or misleading information to the AI Office | Up to €7.5M or 1% of global annual turnover |
| Violations of prohibited practices (applies to GPAI systems, not models) | Up to €35M or 7% of global annual turnover |
GPAI-specific penalties become enforceable from August 2026. Penalties for prohibited practices applied to AI systems built on GPAI models have been enforceable since August 2025.
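Because each cap is the higher of a fixed amount and a turnover percentage, exposure scales with company size. A quick worked example using the GPAI figures above:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """AI Act fines are capped at whichever is higher: the fixed amount
    or the percentage of global annual turnover."""
    return max(fixed_cap_eur, pct_cap * turnover_eur)

# GPAI non-compliance: up to EUR 15M or 3% of turnover, whichever is higher.
print(f"EUR {max_fine(2_000_000_000, 15e6, 0.03):,.0f}")  # EUR 60,000,000 at EUR 2B turnover
print(f"EUR {max_fine(100_000_000, 15e6, 0.03):,.0f}")    # EUR 15,000,000 floor for smaller firms
```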
Best Practices for GPAI Models Under the EU AI Act
| Practice | Why It Matters |
|---|---|
| Sign the Code of Practice | Presumption of conformity is a significant advantage during enforcement ramp-up. Non-signatories face enhanced scrutiny. |
| Prepare documentation now, not after enforcement | Technical documentation per Annex XI is extensive. Start before the AI Office comes asking. |
| Build downstream provider support into your operations | You must deliver integration documentation within 14 days of request. Have it ready, not drafted on demand. |
| Take copyright compliance seriously | The text and data mining opt-out isn’t optional. Implement machine-readable opt-out detection in your training pipeline. Document your compliance. |
| If you’re near 10²⁵ FLOPs, notify proactively | The two-week notification window is tight. If you’re approaching the threshold, engage the AI Office early. Surprises go badly. |
| Cooperate with downstream high-risk providers | They need your documentation to comply. If you’re unresponsive or unhelpful, they’ll switch to a GPAI provider who cooperates — and you’ll face scrutiny for obstructing their compliance. |
| Treat the grace period as preparation, not vacation | August 2025 to August 2026 is your window to build compliance infrastructure. The AI Office is watching who’s making progress and who’s waiting. |
| Track the standardisation process | CEN/CENELEC standards for GPAI are still under development. When published, compliance with these standards will create a stronger presumption of conformity than the Code of Practice alone. |
How EYREACT Can Help
EYREACT tracks your obligations across the entire AI value chain — whether you’re the GPAI model provider, the downstream system builder, or the deployer. Our platform maps GPAI-specific obligations alongside high-risk system requirements, so you can see exactly where your responsibilities begin and where your downstream partners’ begin.
For companies building on third-party GPAI models, EYREACT’s Living Compliance Binders document both the GPAI provider’s contribution and your own system-level compliance — creating the audit trail that proves you didn’t just assume the model provider had it covered. Book a demo!
FAQ
We build a foundation model but don’t sell it — we only use it internally. Are we a GPAI provider?
If you use the model only for internal purposes and don’t place it on the market or make it available to third parties, GPAI provider obligations likely don’t apply. But if you put it into service within the EU — even internally — other AI Act obligations may apply depending on the use case (e.g., high-risk if used for employment decisions).
The guidelines clarify that the trigger is “placing on the market,” which means making the model available for distribution or use.
We fine-tune an open-source model and build an application. What are our obligations?
You’re likely the provider of the downstream AI system. If your application is high-risk, you have full high-risk provider obligations. The open-source GPAI provider has its own (reduced) obligations. If your fine-tuning constitutes a significant modification to the model itself, you may also inherit GPAI provider obligations.
Does uploading a model to Hugging Face make us a GPAI provider?
The guidelines clarify that uploading a model to a hosting platform does not by itself transfer provider status to the platform. The entity that developed the model and makes it available remains the provider. The hosting platform facilitates access but doesn’t become the provider simply by hosting.
What’s the difference between a GPAI model and a GPAI system?
A GPAI model is the underlying AI model — the trained weights, architecture, and capabilities. A GPAI system is an AI system built on top of a GPAI model that serves various purposes. The model provider has GPAI-specific obligations. The system provider has obligations that depend on the system’s risk classification. They may be the same entity or different entities.
We signed the Code of Practice. Does that mean we’re compliant?
The Code creates a presumption of conformity — meaning the AI Office will presume compliance unless evidence suggests otherwise. But you must actually implement the commitments, not just sign. The AI Office will monitor adherence and can revoke the presumption if you’re not following through.
Can we contest systemic risk classification?
Yes. If your model exceeds 10²⁵ FLOPs but you believe it doesn’t present systemic risk, you can provide evidence (benchmark results, scaling laws, deployment constraints) to the Commission. They may accept or reject your rebuttal. Obligations remain in effect during the review. Initial reassessment can be requested six months after designation.
What counts as a “serious incident” for reporting purposes?
The AI Act doesn’t define this precisely for GPAI models, but the Code of Practice and Commission guidance indicate incidents that cause or could cause significant harm at scale — widespread generation of harmful content, systematic bias affecting large populations, security breaches enabling model misuse, or failures with cascading effects across downstream systems.
How do GPAI obligations interact with GDPR?
GPAI models trained on personal data must comply with GDPR for data processing aspects and with the AI Act for model governance aspects. Training data summaries must balance transparency with data protection — you can’t disclose personal data in your training summary. The Digital Omnibus proposes allowing legitimate interest as a legal basis for AI training under GDPR.
We’re a small startup building a foundation model. Are there any exemptions?
The AI Act provides reduced penalties for SMEs and startups. The Digital Omnibus proposes extending simplified requirements to small mid-cap enterprises. Regulatory sandboxes offer priority access for startups. But the substantive GPAI obligations are the same regardless of company size — documentation, transparency, copyright, and cooperation requirements apply to everyone.
This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.