Under-16s offline? How Europe’s push could bring decentralised ID into the picture

A policy shift driven by concern for minors’ safety

Across Europe, concern about minors’ mental health and online safety has reached a tipping point. Policymakers, parents, and educators are increasingly alarmed by the impact of social media on under-16s, from addictive design patterns and algorithmic content loops to exposure to harmful or inappropriate material. These concerns are no longer confined to academic studies or parental advocacy; they are now shaping political debate at the highest levels of the EU.

In recent months, members of the European Parliament have signalled growing support for stricter age-related safeguards online, including plans to restrict or even prohibit minors under the age of 16 from using social media sites. This debate has resurfaced at a particularly charged moment: European elections have sharpened focus on digital regulation, the Digital Services Act (DSA) is moving from legislation to enforcement, and AI-driven content systems are making it harder than ever to control what young users see and engage with.

This is where the tension lies. How can regulators better protect minors online without normalising intrusive identity checks, mass data collection, or expanded surveillance of users? As calls for tougher rules grow louder, questions about how age restrictions would actually be enforced, and at what cost to privacy, are pushing digital identity and age assurance technologies into the spotlight.

What is the European Parliament proposing?

So far, the European Parliament has not moved to introduce a single, binding law that would ban under-16s from social media outright. Instead, a clear shift in political mood is emerging. Recent parliamentary discussions and resolutions have encouraged the European Commission to examine stricter age thresholds for social networks, with under-16 access repeatedly raised as a concern. These proposals are less immediate legal requirements than a signal of intent: stricter regulation, more precise guidelines, or more vigorous enforcement is likely in the near future.

Importantly, this debate builds on regulatory foundations that are already in place. The Digital Services Act requires platforms to identify and reduce systemic risks, including those affecting minors’ wellbeing, while the GDPR sets out age-related consent rules for data processing, allowing Member States to set the threshold anywhere between 13 and 16. The result is a patchwork of expectations across Europe, with inconsistent approaches to age protection and enforcement.

What is changing is the level of scrutiny on whether existing safeguards actually work. Policymakers are increasingly sceptical that self-declared ages or light-touch controls are enough to protect younger users in practice. As a result, the conversation is shifting from abstract principles to practical enforcement, and to the reality that any meaningful restriction on under-16 access will ultimately depend on reliable ways to assure a user’s age.

That shift brings a difficult question into focus. If platforms are expected to do more, how can they prove compliance without defaulting to invasive identity checks or excessive data collection? It is here that age assurance and the tools used to deliver it become central to the policy debate.

The enforcement challenge: Why age checks are harder than they sound

On paper, limiting access for under-16s sounds simple enough. In reality, age checks have always been one of the weakest links in online safety. Most platforms already have some kind of age gate, but they’re often easy to get around and hard to enforce properly at scale.

The most common approach is still self-declared age: asking users to enter a date of birth when they sign up. It’s quick and low-effort, but it doesn’t work particularly well, because it depends on honesty in spaces where people have plenty of reasons to lie. Tighter checks do exist, but they bring their own problems. Some platforms ask for government ID, while others use facial age estimation tools that analyse a selfie or short video to guess how old someone is.

None of these options are without trade-offs. Asking for ID or biometric data often means collecting far more information than you actually need to answer a simple question: “Is this person over 16?” That data usually ends up stored in centralised systems, which makes it valuable if breached and raises real concerns about how it might be reused or repurposed later. For both parents and young people, these checks can feel heavy-handed and invasive.

Then there’s the friction. Uploading documents, taking selfies, or repeating the same checks on multiple platforms all wear users down. Instead of encouraging compliance, that friction can push people to look for shortcuts or move to less regulated spaces. In trying to protect minors, platforms risk making intrusive verification feel like the cost of being online at all.

This is where blunt enforcement starts to backfire. If stricter age rules are not carefully designed, they could quietly expand surveillance, turning routine identity checks into a normal part of everyday internet use. The real challenge for regulators and platforms is enforcing age limits in a way that keeps minors safe without undermining privacy and trust for everyone else.

Where decentralised identity technology enters the conversation

It is against this backdrop that digital identity technology starts to feature more prominently in the policy discussion. In practice, the European Parliament is leaning towards the EU’s official digital identity infrastructure as the most likely way to support age assurance at scale. Parliamentary resolutions and supporting materials point to tools such as the European Digital Identity (EUDI) Wallet and EU backed age verification solutions as mechanisms that platforms could rely on to demonstrate compliance with stricter age thresholds.

This approach reflects a desire for harmonisation and legal certainty. An EU-level digital identity framework offers governments and regulators a standardised, recognisable system that can be deployed across Member States, reducing fragmentation and easing enforcement under the Digital Services Act. From a policymaker’s perspective, using an official digital identity infrastructure appears to offer a straightforward way to prove that platforms are taking “reasonable steps” to prevent under-16s from accessing restricted services.

However, this also brings the original tension back into focus. Even when designed with safeguards, state-backed digital ID systems tend to rely on centralised issuance, persistent identifiers, and institutional trust anchors. If used poorly, they risk normalising identity checks for routine online activity and expanding the amount of personal data that flows through platforms and intermediaries, even when the underlying requirement is simply to verify age.

This is where decentralised identity offers a stronger long-term model for digital age assurance. Decentralised identity is not an alternative to digital identity, but a better form of it. Instead of proving who you are, it allows you to prove what is true about you — such as being over 16 — without revealing anything else. Using verifiable credentials and selective disclosure, individuals can present cryptographic proof of age without sharing names, dates of birth, document numbers, or creating new data trails across platforms.
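
To make that concrete, below is a minimal sketch of the salted-hash pattern behind selective disclosure, similar in spirit to SD-JWT-style credentials. Everything here is illustrative: the claim names and helper functions are invented for the example, and the HMAC stands in for what would be an asymmetric issuer signature (e.g. Ed25519) in a real credential, so that verifiers would never need the issuer’s secret key.

```python
# Sketch of hash-based selective disclosure for an over-16 check.
# Assumptions: claim names and helpers are hypothetical; a real system
# would replace the HMAC with an asymmetric issuer signature so the
# verifier does not hold the issuer's secret key.
import hashlib
import hmac
import json
import os

ISSUER_KEY = os.urandom(32)  # stand-in for the issuer's signing key


def commit(salt: bytes, name: str, value) -> str:
    """Salted hash committing to one claim without revealing it."""
    return hashlib.sha256(salt + json.dumps([name, value]).encode()).hexdigest()


# --- Issuance: commit to every claim, then sign only the digests ---
claims = {"name": "Alice Example", "date_of_birth": "2008-03-14", "over_16": True}
salts = {k: os.urandom(16) for k in claims}
digests = sorted(commit(salts[k], k, v) for k, v in claims.items())
signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(), hashlib.sha256).hexdigest()

# --- Presentation: the holder reveals ONLY the over_16 claim and its salt ---
presentation = {
    "digests": digests,  # all commitments, covered by the signature
    "signature": signature,
    "disclosed": [(salts["over_16"].hex(), "over_16", True)],
}


# --- Verification: check the signature, then match the disclosed claim ---
def verify(p: dict) -> bool:
    expected = hmac.new(ISSUER_KEY, json.dumps(p["digests"]).encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, p["signature"]):
        return False  # digests were tampered with or not issued under this key
    return all(
        commit(bytes.fromhex(salt), name, value) in p["digests"]
        for salt, name, value in p["disclosed"]
    )


print(verify(presentation))  # True: age proven; name and birth date never leave the wallet
```

The detail worth noticing is that the verifier sees only opaque digests plus the single disclosed claim; the name and date of birth stay hashed on the holder’s side. Production systems aim for the same property through standardised credential formats rather than this toy construction.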

Crucially, decentralised identity shifts control away from platforms and central databases and back to users. Credentials are held by the individual, reused across services, and shared only when necessary. This aligns far more closely with GDPR principles of data minimisation, purpose limitation, and user control. In the context of protecting minors, it offers a way to enforce age limits without turning identity checks into a permanent feature of everyday internet use.

As Europe continues to refine how under-16 protections should work in practice, the choice is not whether to use digital identity, but what kind. A decentralised, privacy-preserving approach provides the same regulatory assurances policymakers are seeking, while avoiding the long-term risks of over-collection, surveillance, and centralised control that purely institutional digital ID systems can introduce.

Australia’s experience: A useful point of comparison

Looking outside Europe, Australia offers a really interesting example of how stricter age checks can work without making everyone use a government-issued digital ID. Its approach, overseen by Australia’s eSafety Commissioner, is all about making sure platforms take “reasonable steps” to keep under-16s off social media, rather than forcing a one-size-fits-all system.

Platforms have plenty of options for how they verify age, from checking IDs or using selfies and facial age estimation, to bank-linked checks or even inferring age from online behaviour. The idea is to give platforms flexibility: they can pick the mix that works best for their users, as long as it actually stops underage accounts.

What’s nice about the Australian model is that it keeps choice front and centre. Using a government ID is totally optional, and platforms have to offer other ways to verify age for anyone who doesn’t want to share official documents. It’s a smart way to balance minor safety with privacy and accessibility, proving that stricter rules don’t have to mean hoarding sensitive data.

For Europe, the takeaway is clear: age checks can work without going all in on a single digital ID. With the right mix of flexibility, accountability, and user choice, it’s possible to protect young people online while still respecting everyone else’s privacy.

A more balanced path forward

Banning access for under-16s can’t do it all. Tighter regulations can shield minors from harmful content, but they don’t address every issue that arises in the digital age. Platform responsibility, privacy-conscious age checks, and support for parents and educators are all part of a more successful strategy.

Platforms should design services with minors’ safety in mind, from content moderation to clear reporting systems and policies. At the same time, privacy-focused age verification tools can enforce limits without collecting unnecessary personal data. These tools let users prove their age without revealing their full identity, making digital identity a help rather than a hurdle.

Parents and educators also play a key role. Teaching digital literacy and having open conversations about risks are both essential alongside technical safeguards. Technology alone can’t replace human judgement and guidance.

When combined, these actions offer a well-rounded strategy that respects privacy, safeguards minors, and is consistent with broader EU principles. Used responsibly, digital identity supports compliance and helps keep minors safe online.
