Under-16s offline? How Europe’s push could bring decentralised ID into the picture

A policy shift driven by minor safety

Across Europe, concern about minors' mental health and online safety has reached a tipping point. Policymakers, parents, and educators are increasingly alarmed by the impact of social media on under-16s, from addictive design patterns and algorithmic content loops to exposure to harmful or inappropriate material. These concerns are no longer confined to academic studies or parental advocacy; they are now shaping political debate at the highest levels of the EU.

In recent months, members of the European Parliament have signalled growing support for stricter age-related safeguards online, including plans to restrict or even prohibit under-16s from using social media platforms. This debate has resurfaced at a particularly charged moment: European elections have sharpened focus on digital regulation, the Digital Services Act (DSA) is moving from legislation to enforcement, and AI-driven content systems are making it harder than ever to control what young users see and engage with.

This is where the tension lies. How can regulators better protect minors online without normalising intrusive identity checks, mass data collection, or expanded surveillance of users? As calls for tougher rules grow louder, questions about how age restrictions would actually be enforced, and at what cost to privacy, are pushing digital identity and age assurance technologies into the spotlight.

What is the European Parliament proposing?

So far, the European Parliament has not moved to introduce a single, binding law that would ban under-16s from social media outright. Instead, a clear shift in political mood is emerging.  Recent parliamentary discussions and resolutions have encouraged the European Commission to examine stricter age thresholds for social networks, with under-16 access repeatedly raised as a concern. These proposals serve more as a signal of intent toward stricter regulations, more precise guidelines, or more vigorous enforcement in the near future than as immediate legal requirements.

Importantly, this debate builds on regulatory foundations that are already in place. The Digital Services Act requires platforms to identify and reduce systemic risks, including those affecting minors' wellbeing, while the GDPR sets out age-related consent rules for data processing, allowing Member States to set the threshold between 13 and 16. The result is a patchwork of expectations across Europe, with inconsistent approaches to age protection and enforcement.

What is changing is the level of scrutiny on whether existing safeguards actually work. Policymakers are increasingly sceptical that self-declared ages or light-touch controls are enough to protect younger users in practice. As a result, the conversation is shifting from abstract principles to practical enforcement and to the reality that any meaningful restriction on under-16 access will ultimately depend on reliable ways to assure a user’s age.

That shift brings a difficult question into focus. If platforms are expected to do more, how can they prove compliance without defaulting to invasive identity checks or excessive data collection? It is here that age assurance and the tools used to deliver it become central to the policy debate.

The enforcement challenge: Why age checks are harder than they sound

On paper, limiting access for under-16s sounds simple enough. In reality, age checks have always been one of the weakest links in online safety. Most platforms already have some kind of age gate, but they’re often easy to get around and hard to enforce properly at scale.

The most common approach is still self-declared age: asking users to enter a date of birth when they sign up. It’s quick and low-effort, but it doesn’t work particularly well, because it depends on honesty in spaces where people have plenty of reasons to lie. Tighter checks do exist, but they bring their own problems. Some platforms ask for government ID, while others use facial age estimation tools that analyse a selfie or short video to guess how old someone is.

None of these options is without trade-offs. Asking for ID or biometric data often means collecting far more information than is actually needed to answer a simple question: “Is this person over 16?” That data usually ends up stored in centralised systems, which makes it valuable if breached and raises real concerns about how it might be reused or repurposed later. For both parents and young people, these checks can feel heavy-handed and invasive.

Then there’s the friction. Uploading documents, taking selfies, or repeating the same checks across multiple platforms all wear users down. Instead of encouraging compliance, it can push people to look for shortcuts or move to less regulated spaces. In trying to protect minors, platforms risk making intrusive verification feel like the cost of being online at all.

This is where blunt enforcement starts to backfire. If it’s not carefully designed, stricter age rules could quietly expand surveillance, turning routine identity checks into a normal part of everyday internet use. The real challenge for regulators and platforms is to enforce age limits in a way that keeps minors safe without undermining privacy and trust for everyone else.

Where decentralised identity technology enters the conversation

It is against this backdrop that digital identity technology starts to feature more prominently in the policy discussion. In practice, the European Parliament is leaning towards the EU’s official digital identity infrastructure as the most likely way to support age assurance at scale. Parliamentary resolutions and supporting materials point to tools such as the European Digital Identity (EUDI) Wallet and EU backed age verification solutions as mechanisms that platforms could rely on to demonstrate compliance with stricter age thresholds.

This approach reflects a desire for harmonisation and legal certainty. An EU-level digital identity framework offers governments and regulators a standardised, recognisable system that can be deployed across Member States, reducing fragmentation and easing enforcement under the Digital Services Act. From a policymaker’s perspective, using an official digital identity infrastructure appears to offer a straightforward way to prove that platforms are taking “reasonable steps” to prevent under-16s from accessing restricted services.

However, this also brings the original tension back into focus. Even when designed with safeguards, state-backed digital ID systems tend to rely on centralised issuance, persistent identifiers, and institutional trust anchors. If used poorly, they risk normalising identity checks for routine online activity and expanding the amount of personal data that flows through platforms and intermediaries, even when the underlying requirement is simply to verify age.

This is where decentralised identity offers a stronger long-term model for digital age assurance. Decentralised identity is not an alternative to digital identity, but a better form of it. Instead of proving who you are, it allows you to prove what is true about you — such as being over 16 — without revealing anything else. Using verifiable credentials and selective disclosure, individuals can present cryptographic proof of age without sharing names, dates of birth, document numbers, or creating new data trails across platforms.
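
To make that concrete, here is a minimal sketch of what selective disclosure looks like in data terms. The types and the deriveAgeProof helper below are purely illustrative assumptions (real deployments use standards such as W3C Verifiable Credentials with selective-disclosure or zero-knowledge proofs rather than these made-up fields), but they show the key point: the platform only ever receives the over-16 claim, never the date of birth.

```typescript
// Illustrative sketch only: the shapes below are invented for this example,
// not any specific wallet or SDK API.

interface AgeCredential {
  issuerDid: string;    // e.g. a national eID scheme or bank acting as issuer
  subjectId: string;    // pairwise identifier, not a real-world name
  dateOfBirth: string;  // ISO date, held privately in the user's wallet
  proof: string;        // issuer's signature over the full credential
}

// What the platform actually receives: just the claim it needs, plus a proof
// that the claim was derived from a credential signed by a trusted issuer.
interface AgePresentation {
  claim: { ageOver16: true };
  issuerDid: string;
  proof: string;        // in practice, a selective-disclosure or ZK proof
}

// The wallet derives the presentation locally; the date of birth never leaves it.
function deriveAgeProof(cred: AgeCredential, today: Date): AgePresentation | null {
  const birth = new Date(cred.dateOfBirth);
  const sixteenthBirthday = new Date(birth.getFullYear() + 16, birth.getMonth(), birth.getDate());
  if (today.getTime() < sixteenthBirthday.getTime()) return null; // under 16: nothing is disclosed at all
  return {
    claim: { ageOver16: true },
    issuerDid: cred.issuerDid,
    proof: `derived-from:${cred.proof}`, // placeholder for a real cryptographic proof
  };
}
```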

Crucially, decentralised identity shifts control away from platforms and central databases and back to users. Credentials are held by the individual, reused across services, and shared only when necessary. This aligns far more closely with GDPR principles of data minimisation, purpose limitation, and user control. In the context of protecting minors, it offers a way to enforce age limits without turning identity checks into a permanent feature of everyday internet use.

As Europe continues to refine how under-16 protections should work in practice, the choice is not whether to use digital identity, but what kind. A decentralised, privacy-preserving approach provides the same regulatory assurances policymakers are seeking, while avoiding the long-term risks of over-collection, surveillance, and centralised control that purely institutional digital ID systems can introduce.

Australia’s experience: A useful point of comparison

Looking outside Europe, Australia is a genuinely interesting example of how stricter age checks can be done without making everyone use a government-issued digital ID. Its approach, led by the Office of the Australian Information Commissioner (OAIC), is all about making sure platforms take “reasonable steps” to keep under-16s off social media, rather than forcing a one-size-fits-all system.

Platforms have plenty of options for how they verify age, from checking IDs or using selfies and facial recognition, to bank-linked checks or even looking at online behaviour. The idea is to give platforms flexibility: they can pick the mix that works best for their users, as long as it actually stops underage accounts.

What’s nice about the Australian model is that it keeps choice front and centre. Using a government ID is entirely optional, and platforms have to offer other ways to verify age for anyone who doesn’t want to share official documents. It’s a smart way to balance the safety of minors with privacy and accessibility, proving that stricter rules don’t have to mean hoarding sensitive data.

For Europe, the takeaway is clear: you can make age checks work without going all in on a single digital ID. With the right mix of flexibility, accountability, and user choice, it’s possible to protect young people online while still respecting everyone else’s privacy.

A more balanced path forward

Banning access for under-16s can’t do it all. Tighter regulations can shield minors from harmful content, but they don’t address every issue that arises in the digital age. Platform responsibility, privacy-conscious age checks, and support for parents and educators are all part of a more effective strategy.

Platforms should design their services with minors' safety in mind, from content moderation to clear reporting systems and policies. At the same time, privacy-focused age verification tools can enforce limits without collecting unnecessary personal data. These tools let users prove their age without revealing their full identity, making digital identity a help rather than a hurdle.

Parents and educators also play a key role. Teaching digital literacy and having open conversations about risks are all essential alongside technical safeguards. Technology alone can’t replace human judgement and guidance.

When combined, these measures offer a well-rounded strategy that respects privacy, safeguards minors, and is consistent with broader EU principles. Used responsibly, digital identity supports compliance and helps keep minors safe online.

2025 in Review: cheqd’s Year of Building Trust, Identity, and Verifiable AI

As 2025 comes to a close, it’s the perfect time to reflect and celebrate everything we’ve achieved together at cheqd. This year has been one of growth where we’ve expanded our partnerships, strengthened our product offerings, and continued to build the foundations for a more trustworthy digital ecosystem.

From breaking new ground in verifiable AI and agent trust solutions, to driving real adoption of verifiable credentials, this year has been full of big steps for us in digital identity, privacy, and decentralised trust. We’ve teamed up with amazing organisations across different industries, shown up at major global events, launched new tools and protocols, and supported our community in building things that genuinely make a difference.

In this end-of-year recap, we’re pulling together the highlights: our key milestones, partnerships, product updates, awards, and some of the stories that show what cheqd’s mission looks like in action: bringing self-sovereign identity and verifiable trust into the real world.

Product & Protocol Updates

One of the biggest things we pulled off this year was launching our MCP-enabled Agentic Trust solution, built on the Model Context Protocol. In simple terms, it gives AI agents a secure, accountable way to operate by giving them cryptographically verifiable identities and permissions. Developers can now build apps where AI agents actually manage their own DIDs and Verifiable Credentials, all backed by cheqd’s trust graph and the TRAIN validation engine to make sure everything lines up with proper governance rules. Our Verifiable AI demos with Claude and the MCP server really brought this to life. They showed that secure, accountable AI agents aren’t some far-off idea. They’re doable right now.
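
To give a flavour of the pattern in code, here’s a small hypothetical sketch: a service only acts on an AI agent’s request if the agent presents a valid credential whose issuer is anchored in a trust registry and whose permissions cover the requested action. The interfaces and names below are illustrative assumptions, not our actual MCP server or TRAIN interfaces.

```typescript
// Hypothetical sketch of agentic trust: names and shapes are illustrative,
// not cheqd's actual MCP or TRAIN validation APIs.

interface AgentCredential {
  agentDid: string;       // the AI agent's decentralised identifier
  issuerDid: string;      // the organisation accrediting the agent
  permissions: string[];  // e.g. ["read:calendar", "book:travel"]
  signature: string;      // issuer's signature over the credential
}

type TrustRegistry = Set<string>; // DIDs of accredited issuers

function isActionAllowed(
  cred: AgentCredential,
  registry: TrustRegistry,
  action: string,
  verifySignature: (c: AgentCredential) => boolean,
): boolean {
  return (
    verifySignature(cred) &&           // credential is cryptographically valid
    registry.has(cred.issuerDid) &&    // issuer is anchored in the trust registry
    cred.permissions.includes(action)  // agent is scoped to this action
  );
}
```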

On the protocol side, we pushed out several upgrades focused on making everything more scalable, more usable, and more interoperable. The v3.1.9 release introduced fee abstraction, which basically means people can make transactions using IBC-enabled tokens like USDC. Later, v4.1.1 brought improvements across identity management, token transfers, and IBC functionality. And then v4.1.4 tightened things up even further with better state sync, cleaner error-log pruning, and a dedicated relayer channel to support USDC through Osmosis. Altogether, these updates made transactions way more flexible, boosted cross-chain capabilities, and generally improved network performance.
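
As a rough illustration of what fee abstraction means for users (a conceptual sketch only, not the actual module interface), the network can keep accounting in CHEQ while a user settles the fee in an IBC token such as USDC at the prevailing rate:

```typescript
// Conceptual sketch of fee abstraction: a fee accounted for in CHEQ is paid
// in USDC by converting at an exchange rate. Numbers are illustrative only.

interface FeeQuote {
  denom: "CHEQ" | "USDC";
  amount: number;
}

function quoteFeeInUsdc(feeInCheq: number, cheqPriceUsd: number): FeeQuote {
  // USDC is treated as 1:1 with USD for the purposes of this sketch.
  return { denom: "USDC", amount: feeInCheq * cheqPriceUsd };
}

// e.g. a 50 CHEQ transaction at a hypothetical $0.04/CHEQ would cost 2 USDC
const fee = quoteFeeInUsdc(50, 0.04);
console.log(fee); // { denom: "USDC", amount: 2 }
```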

Outside the protocol layer, our ecosystem and tooling moved forward in a big way too. cheqd Studio was upgraded to make credential issuance simpler, including direct integration with Dock and its wallet, allowing developers to issue and manage credentials more easily. The update also improved how trust registries and trust graphs are managed, giving organisations clearer control over who can issue and verify credentials. And with deeper integrations into AI agent systems, developers can now use verifiable credentials in much more creative ways, letting autonomous agents authenticate, transact, and interact securely while keeping trust and accountability baked into every step.

Partnerships & Alliances

Throughout 2025, cheqd continued to build partnerships and alliances to create interoperable trust networks:

  • Dock Labs: Migrated their entire credentialing setup and production traffic to cheqd; since the migration, they have brought on a new influx of partners, including Telefonica.
  • Hovi: They used our infrastructure to make issuing and verifying credentials easier for regulated pilots in education and security; two pilots are already underway, one in the UK with Salibo (Security sector) and another in Brazil with G7MED (EdTech).
  • Anonyome Labs: We joined forces to push forward better consumer privacy, cyber safety, and verifiable digital identity tools; one client is already running on the network, though not yet announced publicly, with more to be shared later.
  • OriginVault: Built a platform for generating content credentials using cheqd DIDs and DID-Linked Resources.
  • VERA: We partnered to bring secure digital identity and trusted messaging to businesses across Africa, helping grow verifiable ID adoption globally.
  • JuliaOS: They integrated cheqd so their ecosystem can issue credentials, support verifiable trust, and provide auditor attestations natively.
  • PlatformD: Built a privacy-preserving compliance layer for DeFi using cheqd, enabling real-time validator credential checks without any central authority.
  • Sovereign AI Alliance (SAIA): Alongside Datahive, Nuklai, and Datagram, we formed this alliance to build frameworks for decentralised, user-owned AI.
  • Artificial Superintelligence Alliance (ASI): Collaborated with Fetch.ai, SingularityNET, Ocean Protocol, and CUDOS to implement the Agentic Trust Solution across decentralised AI ecosystems.

Accelerator Programs

This year was a big one for us on the accelerator front. cheqd got into three really prestigious programmes, and each one helped us grow in different ways.

First up was the JPMorgan Chase Fintech Forward Program, where we got access to mentorship, industry connections, and support that helped us sharpen how we position our trust infrastructure for the financial sector. It was a great chance to validate our market approach and refine the product in a really focused way.

We also joined the Barclays Eagle Labs Scaleup Program, which gave us guidance on our commercial strategy, plus tons of networking and validation opportunities. The workshops, peer sessions, and introductions to corporate partners were genuinely valuable for helping us scale.

And on top of that, we were selected for Tech4Trust Season 7 in Switzerland, an accelerator focused on digital trust and cybersecurity. It provides coaching across sales, legal, and operations, along with direct access to corporate decision-makers. We also have the chance to compete for prizes worth up to 150k CHF, which adds a nice competitive edge to the whole experience.

Verifiable AI Hackathon

The Verifiable AI Hackathon 2025, which we hosted together with Dorahacks, Verida, and Sprite+, brought a whole community of builders together to explore how cheqd’s infrastructure can power secure, verifiable, and trustworthy systems. Participants tackled everything from identity verification to how AI agents interact with each other.

The projects this year were seriously impressive.

In the Agentic Economy & AI Agents track:

  • Identone took first place with a really clever approach to verifiable, voice-based agent interactions.
  • Kith came in second for building AI agent passports.
  • SNAILS grabbed third with a solution focused on identity and content verification inside Telegram.

In the Content Credentials & Other track:

  • CheqDeep won first place with a tool for proving whether media is authentic.
  • Trusty Bytes placed second by helping AI agents access trusted datasets.
  • crdbl came third with a system for making digital content auditable and verifiable.

And on top of that, Viskify won the Verida Bounty for its AI-powered talent verification platform. A great example of how verifiable credentials are starting to make a real impact in professional settings.

All in all, the hackathon really showed what’s possible when verifiable data meets creative builders.

Community & Governance

Our community continued to play a central role in shaping cheqd’s network and ecosystem throughout 2025. A total of 11 proposals were launched, with 10 of them passing successfully, as shown below.

  • Proposal 59: Put 1,000,000 $CHEQ into boosting liquidity on both Osmosis and Uniswap.
  • Proposal 60: Updated the mainnet to v3.1.5 to make things run smoother and add a few improvements.
  • Proposal 61: Added the USDC IBC denom so people can send USDC across the network.
  • Proposal 62: Fixed an expired IBC connection with Secret Network to keep cross-chain transfers working.
  • Proposal 63: Did the same for Gravity Bridge; renewed the expired IBC client to keep everything interoperable.
  • Proposal 64: Upgraded to v3.1.9, which included general improvements and fee abstraction for IBC tokens.
  • Proposal 65: Moved up to v4.1.1, adding better identity features, improved token transactions, and smoother IBC transfers.
  • Proposal 66: Updated to v4.1.4 to sort out state sync issues, clean up logs, and open a new relayer channel for USDC via Osmosis.
  • Proposal 68: Reduced network inflation to roughly 1.5% to stay on track for the long-term 1 billion $CHEQ supply target.
  • Proposal 69: Approved 1,000,000 $CHEQ for OriginVault’s Public Utility Tool to onboard up to 10,000 Verified Person Credentials and support the launch of a C2PA registry under OpenVerifiable.

Market Recognition & Awards

This year, we were genuinely honoured to get some recognition from the wider tech community for what we’re building in digital trust. In HackerNoon’s Startups of the Year 2024 (announced in 2025), cheqd came in 10th and Creds took 18th in the London region. A really nice nod from the community and a sign that our work on self-sovereign identity and verifiable credentials is resonating.

We were also named one of the Top 100 Web3 & Blockchain Companies by the World Future Awards 2025, which highlighted the work we’re doing around secure, decentralised, privacy-preserving infrastructure. On top of that, Startups.co.uk ranked cheqd 34th in their Startups100, pointing to another great milestone for us.

And to cap things off, at South Summit Korea 2025, cheqd was awarded “Most Scalable Business”, a recognition that really speaks to the momentum we’re building with verifiable credentials and Agentic AI across different industries.

All of this meant a lot to us, and it’s a credit to the team, our partners, and the community around cheqd.

Events & Conferences

In the first half of the year, we stayed pretty active on the events front. cheqd sponsored the didx Unconference Africa, DICE, and the Internet Identity Workshop (IIW) Spring, helping move conversations forward around verifiable credentials, digital trust, and AI-driven ecosystems. At the European Identity & Cloud Conference, Fraser delivered a keynote on how identity ecosystems actually grow and create value, and Ankur joined panels on Verifiable AI and Personhood Credentials, connecting with a lot of the key players in the space. Later on, at Identity Week Europe, we had the chance to speak directly with policymakers and enterprise teams, and our Verifiable AI work was also highlighted at MIT’s Decentralised AI event (Project NANDA).

In the second half of the year, the momentum continued. We showed up at Identity Week America and Korea Blockchain Week, digging into regional trends around digital identity and decentralised trust. We also sponsored IIW Fall, which gave us another great opportunity to showcase real-world implementations of DIDs, SSI, and Verifiable AI. Fraser co-presented a session there on how OpenID Federation can blend with decentralised identifiers, which sparked some really productive conversations. And at the Global Digital Collaboration Conference, our team explored new trust frameworks and open standards for scaling verifiable credentials across borders.

The year wrapped up on a high note at South Summit Korea 2025, where cheqd was selected as a finalist and awarded “Most Scalable Business.”

Media, Thought Leadership & Education

All through 2025, we kept sharing our thoughts on digital trust, self-sovereign identity, and AI, and it was great to see that work picked up by the wider community. Fraser was featured in places like Cointelegraph, Binance, Messari, and Blockster, which helped bring more attention to the ideas we’ve been pushing forward.

We also put out a bunch of blogs covering everything from AI Agents and Digital Product Passports to Credential Payments and the different layers of the SSI industry. The aim was to break things down in a way that made both the technical and economic sides of verifiable credentials easier to understand.

On top of that, our team joined a lot of podcasts and AMAs, including Cryptopolitan, Unfungible, INATBA, Aeonix, and Agentic AI X Spaces. We also saw coverage in outlets like TNGlobal, Thales Cloud Security IAM360, Deloitte, and Hackernoon, which was a really nice boost for the broader digital identity work we’re doing.

And at the Sprite+ showcase, we took things a step further by showing real, practical examples of how our AI Agent Frameworks and Relationship Guides work in practice, including both the Human-Centred and Entangled versions that help developers and designers build better agent interactions.

All in all, it’s been a strong year for getting our ideas and our tech out into the world.

Next Horizons in 2026

Moving into 2026, we aim to double down on verifiable AI, developing narrower products across segments of the AI Agent market. This focus will draw on the technology stacks from cheqd Studio, applying them coherently to the different problems AI companies are facing. For example, we aim to use our trust graph model to create an algorithmically defined, pluggable reputation score for companies and agents across multiple marketplaces. We will also continue exploring how to apply DID-Linked Resources to Agent Passports, cryptographically linking associated metadata that requires persistent and highly available storage.
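
As a rough sketch of the kind of reputation scoring described above (the weighting below is an illustrative assumption, not a finalised algorithm), a score can be computed by aggregating the accreditation edges pointing at an agent or company in the trust graph, discounted by distance from a governance root:

```typescript
// Hypothetical sketch of a pluggable, trust-graph-derived reputation score.
// The discounting scheme and DIDs are assumptions for illustration only.

interface TrustEdge {
  from: string;          // DID of the accrediting party
  to: string;            // DID of the agent or company being accredited
  depthFromRoot: number; // hops between the accreditor and a governance root
  weight: number;        // strength or scope of the accreditation
}

function reputationScore(subjectDid: string, edges: TrustEdge[]): number {
  return edges
    .filter((edge) => edge.to === subjectDid)
    .reduce((score, edge) => score + edge.weight / (1 + edge.depthFromRoot), 0);
}

// e.g. two accreditations, one from a root authority and one from a second-level issuer
const score = reputationScore("did:example:agent-123", [
  { from: "did:example:root", to: "did:example:agent-123", depthFromRoot: 0, weight: 1 },
  { from: "did:example:issuer", to: "did:example:agent-123", depthFromRoot: 1, weight: 1 },
]); // 1 + 0.5 = 1.5
```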

We will launch our Oracle release at the start of 2026, with a guide for validators to securely transition to the next major version in January, upgrading first on testnet and then on mainnet. This release will stabilise pricing for all identity transactions on the network against fixed dollar values, giving cheqd’s customers and partners far more confidence in using the network without price volatility. We expect this release to pave the way for new clients to launch live on cheqd mainnet with strong guarantees around pricing.
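
Conceptually, dollar-stable pricing works like the sketch below: the price of each identity operation is fixed in USD and converted into $CHEQ at transaction time using an oracle-supplied rate. The operation names and prices here are illustrative assumptions, not the final implementation:

```typescript
// Conceptual sketch of oracle-based, dollar-stable pricing.
// Operation names and USD prices are invented for illustration.

const USD_PRICE_PER_OPERATION: Record<string, number> = {
  createDid: 2.0,
  createResource: 0.5,
};

function chargeInCheq(operation: string, cheqUsdOracleRate: number): number {
  const usdPrice = USD_PRICE_PER_OPERATION[operation];
  if (usdPrice === undefined) {
    throw new Error(`unknown operation: ${operation}`);
  }
  // The token amount floats with the market so the dollar cost stays constant.
  return usdPrice / cheqUsdOracleRate;
}

// e.g. a $2.00 DID creation costs 50 CHEQ at $0.04/CHEQ, or 25 CHEQ at $0.08/CHEQ
console.log(chargeInCheq("createDid", 0.04)); // 50
console.log(chargeInCheq("createDid", 0.08)); // 25
```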

Concurrently, we will continue supporting our core SSI partners as their clients continue moving into production environments, optimising the ledger to continually improve transaction speeds and state handling. If 2025 was the year of first adoption, 2026 will be the year of scale.