May 10, 2026

AI in Schools: What the EU AI Act and GDPR Actually Require

Most institutions still treat AI governance as a future problem. The frameworks that govern it are already in force.

The EU AI Act entered into force in August 2024. GDPR has applied since 2018. Together, they create a specific and demanding set of obligations for any organisation deploying AI in education — obligations that most schools, multi-academy trusts, and higher education institutions are not yet meeting.

This piece sets out what the frameworks actually require, where the accountability gaps sit, and what responsible governance looks like in practice.

Why education is treated differently

The EU AI Act does not apply uniformly across sectors. It uses a risk-based classification system, and education sits near the top of it.

Annex III of the Act classifies AI systems used in education and vocational training as high-risk by default. This is not a borderline judgment. The drafters made a deliberate choice, and the reasoning is straightforward: AI systems in education make or influence decisions that affect life trajectory — admissions, assessment, progression, pastoral support. The subjects of those decisions are largely minors. The power asymmetry between institution and student is extreme.

High-risk classification under Annex III is not triggered by how sophisticated a system is. It is triggered by where it is deployed and what decisions it influences. A simple scoring model used in admissions is high-risk. A sophisticated generative tool used only for administrative drafting may not be.

What counts as high-risk AI in education

AI use | Classification | Primary obligation trigger
Admissions screening tools | High-risk | Annex III, point 3(a)
Automated grading or assessment | High-risk | Annex III, point 3(b)
Proctoring software | High-risk | Annex III + GDPR Art. 9 (biometric)
Learning analytics influencing progression | High-risk | Annex III, point 3(b)
Adaptive learning platforms | Context-dependent | Depends on decision output
AI chatbots for pastoral/wellbeing support | Context-dependent | GDPR Art. 9, safeguarding law
Administrative drafting tools | Likely limited risk | Transparency obligations only
Timetabling and back-office scheduling | Likely limited risk | Standard data processing rules

Back-office AI versus in-classroom AI

The distinction matters legally, not just operationally. Back-office AI — timetabling, HR processes, budget forecasting — rarely touches students directly. The risk profile is lower, oversight requirements are lighter, and liability follows conventional data protection principles.

In-classroom AI is categorically different. Where a system generates outputs that directly influence decisions about individual students, the full high-risk compliance stack applies: conformity assessment before deployment, technical documentation, human oversight mechanisms, and registration in the EU AI database.

GDPR adds a further layer. Solely automated decision-making with legal or similarly significant effects on individuals — which can include grading, progression, and exclusion decisions — is restricted under Article 22, requires a lawful basis under Article 6, and, where special category data is involved, a condition under Article 9. Consent is rarely a viable basis where there is a structural power imbalance between institution and student.

Who is responsible — and who is liable

The AI Act creates a chain of responsibility. Developers and vendors — providers, in the Act's terminology — bear liability for the fundamental design of a system. Deployers — schools and universities — bear liability for how it is implemented, monitored, and used. A vendor contract that does not address this allocation does not remove institutional exposure. It obscures it.

Party | Primary obligations | Where exposure sits
AI developer / vendor | System design, conformity assessment, technical documentation | Fundamental design failures
Institution (deployer) | Implementation controls, human oversight, DPIA, staff training | Deployment without proper oversight; misclassification
Named responsible person | Internal governance; documenting oversight decisions | Personal accountability where designated role exists

The AI Act requires a designated responsible person for high-risk systems. Collective oversight does not satisfy this requirement. Most institutions do not currently have a named individual with documented authority over AI governance.

Where transparency is falling short

Some institutions have published AI use registers or algorithmic impact statements, usually in response to pressure from student unions or governors. These are a start. They are not sufficient.

In almost every case where disclosure has been attempted, it stops at the procurement boundary. The institution can document that a third-party tool is in use. It cannot explain how that tool works, because the vendor’s logic is proprietary and the training data is undisclosed. Transparency obligations fall on the deployer. The information needed to meet those obligations sits with someone else.

Until procurement frameworks require vendors to supply AI Act-compliant technical documentation as a contract condition, disclosure in education will remain performative. Publishing a register is not the same as having accountability.

Practical steps for MAT CEOs and data protection officers

1. Map what you have. Before any governance framework can operate, you need an inventory of every AI tool in use — including tools adopted informally by teachers without central procurement. You cannot govern what you have not found.

2. Classify before deploying anything new. If a tool influences a decision about a student — assessment, progression, behaviour, admissions — treat it as high-risk until you have evidence otherwise. The cost of over-caution is some additional paperwork. The cost of misclassification is institutional liability. A minimal sketch of this default, together with the register from step 1, follows these steps.

3. Fix your contracts. Current vendor agreements almost certainly do not require AI Act-compliant documentation, do not allocate liability clearly, and do not give you the information you need to meet your own transparency obligations. New procurement should close this gap as standard.
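
A minimal sketch, in Python, of how steps 1 and 2 might be encoded internally: one register entry per tool, plus a conservative classification default. The field names, decision areas, thresholds, and the product and vendor names are illustrative assumptions, not terms drawn from the Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskClass(Enum):
    HIGH_RISK = "high-risk"          # Annex III trigger assumed until disproven
    LIMITED_RISK = "limited-risk"    # transparency obligations only
    UNCLASSIFIED = "unclassified"    # not yet reviewed


# Decision areas that, under the rule of thumb in step 2, default to high-risk.
STUDENT_DECISION_AREAS = {"admissions", "assessment", "progression", "behaviour"}


@dataclass
class AIToolRecord:
    """One row in an institutional AI register (field names are illustrative)."""
    name: str
    vendor: str
    procured_centrally: bool                    # False = adopted informally by staff
    decision_areas: set[str] = field(default_factory=set)
    dpia_completed: bool = False
    responsible_person: str | None = None
    classification: RiskClass = RiskClass.UNCLASSIFIED


def classify_conservatively(tool: AIToolRecord) -> RiskClass:
    """Treat any tool that influences a decision about a student as high-risk
    until documented evidence says otherwise (step 2 above)."""
    if tool.decision_areas & STUDENT_DECISION_AREAS:
        return RiskClass.HIGH_RISK
    return RiskClass.LIMITED_RISK


# Example: an informally adopted grading assistant defaults to high-risk.
tool = AIToolRecord(
    name="EssayGraderX",                # hypothetical product
    vendor="ExampleVendor Ltd",         # hypothetical vendor
    procured_centrally=False,
    decision_areas={"assessment"},
)
tool.classification = classify_conservatively(tool)
print(tool.name, tool.classification.value)   # -> EssayGraderX high-risk
```

The value of the register is less the code than the discipline: informally adopted tools only become governable once they sit in the same inventory, with the same fields, as centrally procured ones.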

Frequently asked questions

Does the EU AI Act apply to UK schools?

Not directly. The EU AI Act applies to systems placed on the EU market or used within the EU. UK schools operating solely in the UK are not subject to it as a matter of EU law. However, many AI vendors serving UK education will be EU AI Act compliant as a baseline, and the UK is developing its own AI governance framework referencing similar risk principles. UK institutions should monitor this closely.

If a vendor says their product is compliant, does that cover the institution?

No. Vendor compliance covers the developer’s obligations. The deploying institution has its own separate obligations under the Act — including implementing human oversight, conducting DPIAs, and documenting governance decisions. A vendor certificate does not satisfy institutional duties.

What is a DPIA and when is one required in education?

A Data Protection Impact Assessment is a structured analysis of privacy risks before processing begins. Under GDPR, it is mandatory where processing is likely to result in a high risk to individuals — which includes systematic monitoring of students, automated decision-making affecting progression, and processing of special category data such as health information or biometrics. Most high-risk AI systems in education trigger this obligation.
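
To make those triggers concrete, a rough screening check might look like the sketch below. It compresses the examples given in this answer into a single function; it is an illustration, not a substitute for the full Article 35 analysis.

```python
def dpia_required(
    systematic_student_monitoring: bool,
    automated_decisions_with_significant_effect: bool,
    special_category_data: bool,   # health, biometrics, etc. (GDPR Art. 9)
) -> bool:
    """Simplified screening check mirroring the triggers named above:
    any single trigger means a DPIA is needed before processing begins."""
    return any((
        systematic_student_monitoring,
        automated_decisions_with_significant_effect,
        special_category_data,
    ))


# A proctoring tool: systematic monitoring plus biometric data -> DPIA required.
print(dpia_required(True, False, True))   # True
```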

Can a school use student data to train or improve an AI system?

Only with a clear lawful basis under GDPR Article 6, and a condition under Article 9 where special category data is involved. Consent from minors is generally not valid as a standalone basis. Purpose limitation also applies: data collected for educational delivery cannot be repurposed for model training without a fresh legal analysis.

What does human oversight actually mean in practice?

For high-risk AI systems, the Act requires that a human can review, override, and halt the system’s outputs. It is not satisfied by a general appeals process that operates after the fact. Oversight must be genuinely operative — meaning a qualified person reviews AI-influenced decisions before they take effect, at least on a risk-sampled basis.
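
As a sketch of what "genuinely operative" could mean in a deployment pipeline, the following holds AI-influenced decisions until a qualified reviewer confirms them, with routine outputs risk-sampled rather than waved through. The decision types, threshold, and sampling rate are assumptions for illustration, not figures from the Act.

```python
import random
from dataclasses import dataclass


@dataclass
class AIDecision:
    student_id: str
    kind: str           # e.g. "grade", "progression", "admissions"
    ai_output: str
    risk_score: float   # 0.0 (routine) to 1.0 (high-stakes); assumed upstream signal


ALWAYS_REVIEWED = {"progression", "admissions", "exclusion"}


def needs_human_review(decision: AIDecision, sample_rate: float = 0.2) -> bool:
    """High-stakes decision types are always reviewed before taking effect;
    routine outputs are risk-sampled."""
    if decision.kind in ALWAYS_REVIEWED or decision.risk_score >= 0.7:
        return True
    return random.random() < sample_rate


def apply_decision(decision: AIDecision, reviewer_approved: bool | None) -> str:
    """An AI output only takes effect once a qualified person has confirmed it,
    unless it fell outside the review sample."""
    if needs_human_review(decision):
        if reviewer_approved is None:
            return "held: awaiting human review"
        if not reviewer_approved:
            return "overridden by reviewer"
    return f"applied: {decision.ai_output}"


print(apply_decision(AIDecision("s-001", "progression", "promote", 0.4), None))
# -> held: awaiting human review
```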

When does enforcement begin?

Prohibited AI practices were banned from February 2025. High-risk system obligations apply from August 2026. Institutions that wait for enforcement to begin before addressing compliance are already late: technical documentation, conformity assessments, and procurement changes take time. Liability accrues from the point of deployment, not from the point of being investigated.