February 14, 2026

European Declaration on Digital Rights and Principles: The Values Document Behind Every EU Digital Law You’re Now Complying With

A few weeks ago I was explaining the EU AI Act to a room of startup founders in Brussels. One of them asked a question I get surprisingly often: “Where does all this come from? Who decided that AI needs human oversight, or that emotion recognition in the workplace should be banned? Is there some kind of… philosophy document, like Satoshi Nakamoto’s Bitcoin whitepaper?”

There is. And almost nobody’s read it.

The European Declaration on Digital Rights and Principles for the Digital Decade was signed on 23 January 2023 by the Presidents of the European Parliament, the Council, and the Commission. It’s the first document of its kind anywhere in the world — a political commitment by all three EU institutions to a set of values that should govern digital transformation across Europe.

It’s not law. It doesn’t impose obligations on companies. You won’t get fined for violating it. But if you want to understand why the AI Act, GDPR, the Digital Services Act, the Data Act, and every other piece of EU digital legislation works the way it does — this is the source code.

What It Is (and What It Isn’t)

| Aspect | Detail |
|---|---|
| Official name | European Declaration on Digital Rights and Principles for the Digital Decade |
| Adopted | 23 January 2023 |
| Signed by | Presidents of the European Parliament, Council of the EU, and European Commission |
| Legal status | Non-binding political declaration, not legislation |
| Enforceability | None; it's a commitment, not a regulation |
| Purpose | Define the values and principles guiding EU digital policy through 2030 |
| Monitoring | Commission publishes annual monitoring reports alongside the State of the Digital Decade Report |
| Relationship to law | Informs and underpins the AI Act, GDPR, DSA, DMA, Data Act, NIS2, DORA, and other EU digital legislation |
| Scope | All digital transformation, not just AI |
| International dimension | Guides EU positions in international digital governance negotiations |

The Six Chapters of the European Declaration on Digital Rights and Principles

The Declaration is structured around six principles. Each one maps directly to binding obligations you’re already dealing with — or about to.

| Chapter | Principle | What It Says | Where It Becomes Law |
|---|---|---|---|
| I | People at the Centre | Technology must serve people, not the other way around. Developed and used in full respect of fundamental rights, for the benefit of every individual. | AI Act (human oversight, prohibited practices), GDPR (data subject rights), EU Charter of Fundamental Rights |
| II | Solidarity and Inclusion | Digital transformation must leave nobody behind. Universal access to connectivity, digital skills, fair working conditions, and digital public services. | Digital Decade targets (80% basic digital skills by 2030, full 5G coverage), Platform Work Directive, AI Act (AI literacy) |
| III | Freedom of Choice | People must be empowered to benefit from AI while being protected from harmful systems. Human-centred, trustworthy, ethical AI. Transparency in algorithmic decisions. Freedom to choose which online resources to use. | AI Act (transparency, human oversight, prohibited practices), DMA (platform contestability), DSA (algorithmic transparency) |
| IV | Participation in Digital Public Space | A fair online environment free from illegal and harmful content. Protection of democratic discourse. Freedom of expression balanced with safety. | DSA (content moderation, illegal content), AI Act (deepfake labelling), election integrity legislation |
| V | Safety, Security and Empowerment | Safe, secure, privacy-protective digital technologies. Protection against cybercrime. Effective control over personal data. Special protection for children and young people. | GDPR, NIS2 (cybersecurity), Cyber Resilience Act, AI Act (accuracy, robustness, cybersecurity), children's data protection |
| VI | Sustainability | Digital technologies should support sustainability and the green transition. Minimise environmental footprint. | AI Act (GPAI energy consumption reporting), Data Act, EU Green Deal digital provisions |

Why the European Declaration on Digital Rights and Principles Matters for AI Act Compliance

The Declaration isn’t on your compliance checklist. But it’s the interpretive lens through which EU regulators, courts, and policymakers understand the AI Act.

When a market surveillance authority evaluates whether your AI system’s human oversight measures are “effective”, they’re measuring against the Declaration’s vision of people being in control of technology, not the other way around.

Here’s how the Declaration’s principles translate directly into AI Act obligations you’re dealing with right now:

| Declaration Principle | AI Act Obligation |
|---|---|
| "AI should serve as a tool for people" | Human oversight (Article 14): humans must be able to understand, interpret, and override AI |
| "Trustworthy and ethical AI systems" | Risk management (Article 9), accuracy and robustness (Article 15) |
| "Transparency in how AI operates" | Transparency obligations (Article 13, Article 50), instructions for use |
| "Suitable data to prevent discrimination" | Data governance (Article 10), bias detection and mitigation |
| "Human oversight to prevent rights violations" | Human oversight design (Article 14), Fundamental Rights Impact Assessment (Article 27) |
| "Personal autonomy regarding AI outcomes" | Right to explanation (Article 86), deployer transparency obligations (Article 26) |
| "Safe technology in line with ethical and legal principles" | Prohibited practices (Article 5), conformity assessment (Article 43) |
| "Protection of children and young people" | Exploitation of vulnerabilities prohibition (Article 5(1)(b)), high-risk education AI (Annex III) |
| "Fair competition in the digital environment" | GPAI obligations (Chapter V), interoperability requirements |
| "Effective control over personal data" | GDPR integration, data governance requirements (Article 10) |
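If you're building internal compliance tooling, the mapping above is essentially a lookup table. Here is a minimal sketch in Python; the key names and helper function are my own illustrative choices, not terminology from the Act, and the article references are taken from the mapping above.

```python
# Illustrative encoding of the Declaration-principle -> AI Act article
# mapping. The dict keys are shorthand labels I invented for this sketch;
# the article numbers come from the mapping in the text.
PRINCIPLE_TO_OBLIGATIONS = {
    "ai_serves_people": ["Article 14"],
    "trustworthy_ai": ["Article 9", "Article 15"],
    "transparency": ["Article 13", "Article 50"],
    "non_discrimination": ["Article 10"],
    "rights_protection": ["Article 14", "Article 27"],
    "personal_autonomy": ["Article 86", "Article 26"],
    "safe_technology": ["Article 5", "Article 43"],
    "child_protection": ["Article 5(1)(b)", "Annex III"],
}


def obligations_for(principles):
    """Collect the AI Act provisions touched by a set of principles,
    preserving order and dropping duplicates (e.g. Article 14 appears
    under two principles)."""
    articles = []
    for principle in principles:
        for article in PRINCIPLE_TO_OBLIGATIONS.get(principle, []):
            if article not in articles:
                articles.append(article)
    return articles
```

A system touching both "AI serves people" and "rights protection" resolves to Articles 14 and 27, which matches the intuition that human oversight and fundamental-rights impact assessment travel together for high-risk systems.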

Industry Context: What the Declaration Signals for Your Sector

Banking & Financial Services

The Declaration’s emphasis on non-discrimination and human autonomy in algorithmic decisions is the philosophical backbone of why credit scoring is high-risk under the AI Act. When regulators evaluate your AI lending decisions, they’re applying the Declaration’s principle that technology must serve people and that individuals should have autonomy regarding AI outcomes.

This is also why the right to explanation (Article 86) matters — it implements the Declaration’s transparency commitment.

Healthcare

“Safe technology” and “people at the centre” are the principles driving the AI Act’s dual regulation of medical AI (AI Act plus MDR). The Declaration’s commitment to ensuring technology benefits health and well-being directly informs why AI diagnostics need human oversight — a clinician must always be the decision-maker, not the algorithm.

Employment & HR

The Declaration’s commitment to “fair working conditions” in the digital age is why workplace AI is so heavily regulated. Emotion recognition in the workplace is prohibited (Article 5(1)(f)) because it violates the Declaration’s principle that workers should be protected from harmful digital practices. The Platform Work Directive extends this into gig economy AI.

Education

“Digital skills” and “leaving nobody behind” — the Declaration’s inclusion principles are why educational AI is high-risk. AI that determines access to education or evaluates learning outcomes directly impacts the Declaration’s vision of equitable digital opportunity.

Technology & SaaS

The Declaration’s emphasis on “interoperability, open technologies, and standards” underpins the Data Act’s switching requirements, the DMA’s contestability rules, and the AI Act’s standardisation approach. If you’re building AI products for the EU market, the regulatory environment assumes technology should be open, transparent, and contestable — not proprietary black boxes.

The European Declaration on Digital Rights and Principles and the 2030 Targets

The Declaration exists within the broader Digital Decade programme, which sets concrete targets for 2030:

| Target | Current Status (2025) | Goal by 2030 |
|---|---|---|
| Citizens with basic digital skills | 55.6% | 80% |
| ICT specialists employed | ~9 million | 20 million |
| 5G coverage (populated areas) | Growing | 100% |
| Gigabit connectivity | Growing | Universal |
| EU-produced semiconductors (global share) | ~10% | 20% |
| Edge nodes deployed | Early stage | 10,000+ |
| Businesses using AI | ~8% | 75% |
| Digital public services | Partial | 100% accessible online |
| Citizens with electronic ID | Partial | 100% |

The AI Act exists partly to enable the “75% of businesses using AI” target — you can’t get there if the public doesn’t trust AI. The Declaration’s values framework is designed to build that trust.
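For the targets with numeric values, the distance left to travel is plain arithmetic. A quick sketch, using only the figures from the table above (the function names and dict layout are my own):

```python
# Numeric 2030 Digital Decade targets from the table above,
# as (current value, 2030 goal) pairs. Only the rows with
# quantified current status are included.
TARGETS_2030 = {
    "basic_digital_skills_pct": (55.6, 80.0),
    "ict_specialists_millions": (9.0, 20.0),
    "semiconductor_share_pct": (10.0, 20.0),
    "businesses_using_ai_pct": (8.0, 75.0),
}


def remaining_gap(name):
    """Absolute distance between current status and the 2030 goal."""
    current, goal = TARGETS_2030[name]
    return round(goal - current, 1)


def share_of_goal_reached(name):
    """Current status as a percentage of the 2030 goal."""
    current, goal = TARGETS_2030[name]
    return round(100 * current / goal, 1)
```

Run against the table, the AI-adoption target stands out: at ~8% of businesses against a 75% goal, it is by far the largest gap, which is the context for the trust-building argument in the paragraph above.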

How EYREACT Can Help

The Declaration is the values framework. The AI Act is the law. EYREACT is how you comply with the law while embodying the values.

Our platform automates AI Act compliance — risk classification, Living Compliance Binders, evidence management, and 400+ rules — built by a team that works directly with the policymakers who shaped both the Declaration and the Act.

FAQ

Is the Declaration legally binding?

No. It’s a political declaration — a shared commitment by the EU institutions. It doesn’t impose obligations on companies and there are no penalties for non-compliance. However, it shapes how binding legislation (AI Act, GDPR, DSA, etc.) is interpreted and enforced.

How does it relate to the EU AI Act?

The Declaration is the values framework. The AI Act is the legal implementation. Every major AI Act obligation — human oversight, transparency, prohibited practices, data governance — traces back to a principle in the Declaration. Understanding the Declaration helps you understand the regulatory intent behind the AI Act’s requirements.

Do I need to comply with the Declaration?

Not directly. But if you’re complying with the AI Act, GDPR, DSA, or any other EU digital legislation, you’re already implementing the Declaration’s principles through those binding laws.

Is this the same as the EU Charter of Fundamental Rights?

No, but it builds on it. The EU Charter (2000) establishes fundamental rights including privacy, data protection, and non-discrimination. The Declaration extends these rights into the digital context and adds principles specific to digital transformation — AI transparency, algorithmic fairness, digital skills, and sustainable technology.

Why should I care about a non-binding document?

Because it’s the North Star for EU digital regulation. When the Commission writes delegated acts, when courts interpret the AI Act, when member states designate competent authorities — they look to the Declaration for interpretive guidance. Understanding it gives you a strategic advantage in anticipating where regulation is heading next.

Is there monitoring or enforcement?

The Commission publishes annual monitoring reports alongside the State of the Digital Decade Report. Member states submit roadmaps outlining implementation measures. Eurobarometer surveys track citizen perceptions. None of this creates enforcement mechanisms against companies — the monitoring tracks EU-wide progress against the 2030 targets.

Will it change?

The Declaration itself is unlikely to be amended — it’s a political commitment, not legislation. But the legislation it informs is evolving. The Digital Omnibus, the AI Act’s periodic reviews, the upcoming Digital Fairness Act, and the Cloud and AI Development Act (CADA) all draw from the Declaration’s principles and extend them into new areas.

This article is for informational purposes only and does not constitute legal advice.