Bringing Decentralised Identity to the Interchain at Scale | Announcing Our Cross-IBC ZK-Proofs

At cheqd, with the initiative and creativity of NYMLAB, we’re pleased to announce that our leading Decentralised Identity functionality is now available across IBC, bringing DID to the Interchain at scale.

Expanding the reach of the cheqd network and of our products is a crucial enabler of the user-centred world of identity we are striving towards at cheqd, and we firmly believe this is only achievable by working on the two goals in parallel.

The introduction of cross-IBC proofs to the cheqd network, made possible by NYMLAB’s community-funded work and our recent upgrade to v2.0.1, is a perfect example.

NYMLAB’s core belief underpinning this project is that dApp developers looking to include SSI primitives in their product should not have to worry about the entire stack. They should be able to pick and choose what they need in a plug-and-play fashion.

Specifically, for this product, projects should be able to determine which issuers’ credentials can be trusted, check that the credential underpinning a Zero-Knowledge presentation has not been revoked, and verify it according to custom requirements. This is what has been built and released this week, and what we’ll uncover in this blog.

First up, some key definitions. IBC stands for the Inter-Blockchain Communication Protocol, an open-source protocol that handles the authentication and transport of data between blockchains. Deployed in March 2021, IBC has been adopted by 110+ sovereign chains and can be thought of as the blockchain equivalent of the internet’s TCP/IP, which is where the ecosystem gets its common name, “the Interchain”.

Tl;dr - the critical bit

Until recently, cheqd network-specific modules, such as DID (x/did) or DID-Linked Resources (x/resource), could only be consumed by third-party applications over traditional transports such as HTTP, using serialisation formats such as JSON or Protobuf (Protocol Buffers).

The latest cheqd network upgrade (v2) allows these modules’ functionality to be called by other IBC-enabled chains, meaning cheqd’s native identity features can be used across the Interchain. Additionally, the feature set includes types for packaging smart contract messages, thus enabling on-chain lookups even for smart contracts deployed on other IBC-enabled chains.
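For illustration, here is a minimal sketch of what the pre-IBC consumption path looks like from a client’s perspective: a third-party application builds HTTP URLs to fetch a DID Document or a DID-Linked Resource as JSON. The resolver base URL and path layout are assumptions modelled on the common W3C DID Resolution HTTP convention, not a confirmed cheqd endpoint:

```python
# Hypothetical client-side URL construction for resolving a DID Document
# and a DID-Linked Resource over plain HTTP/JSON. The base URL and path
# shape are illustrative assumptions, not a documented cheqd API.

RESOLVER_BASE = "https://resolver.example.com/1.0/identifiers"  # placeholder endpoint


def did_url(did: str) -> str:
    """Build the resolution URL for a DID Document."""
    return f"{RESOLVER_BASE}/{did}"


def resource_url(did: str, resource_id: str) -> str:
    """Build the URL for a DID-Linked Resource anchored under a DID."""
    return f"{RESOLVER_BASE}/{did}/resources/{resource_id}"
```

With the v2 upgrade, the same lookups can instead be carried as IBC packets between chains, so a smart contract on another Cosmos chain no longer needs an off-chain HTTP round trip.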

This is where NYMLAB’s vision comes into play. Currently, service providers (dApps) can use on-chain data such as addresses and NFT ownership to provide gated services, such as access to exclusive groups and purchase opportunities. However, they cannot use credentials, as these are inherently off-chain, making it challenging to offer the benefits of DID to other blockchain networks. NYMLAB believes that bringing the information proven in off-chain credentials on-chain can unlock cheqd’s potential on-chain and across the Interchain.

To do this, zero-knowledge proofs are required, so that the underlying data remains private while the conclusion it supports can still be proven. NYMLAB has, therefore, built a mechanism and toolkit that enables developers to create on-chain zero-knowledge proofs for off-chain verifiable credentials, a novel approach to blending DID with the broader blockchain landscape.

Go deeper…how does it work?

In 2023, we introduced zkCreds (Anonymous Credentials) to our offerings, becoming one of the first chains to support this credential format outside of Hyperledger Indy. In doing so, we inadvertently built a module that evolved to become much more than we initially intended.

Our DID-Linked Resources (DLR) module allows developers to improve how resources are stored, referenced and retrieved, in line with the existing W3C DID Core standard (learn more about ‘resources’ in the context of SSI here). Beginning with the resources necessary to support AnonCreds (Credential Definitions and Schemas), we’ve since found other novel use cases for them, including the Status Lists that enable our Credential Payments, Trust Registries and Verifiable Accreditations. Now, the team at NYMLAB is using the module once more to offer AnonCreds to dApps/Appchains via IBC.

Architecture for how NYMLAB’s IBC zkProofs work with cheqd network

How this could all tie together in the future

The mainnet upgrade of cheqd network to v2.x shipped many of the building blocks for how IBC zkProofs could combine the best of cheqd network and bring it to other Cosmos chains. This work is proposed to be carried out through future stages of community funding, but will have three critical elements: on-chain trust infrastructure, a wallet to store the credentials in, and smart contracts that can verify these zkProofs.

  1. Trust infrastructure on cheqd network: cheqd network provides the DID and DID-Linked Resource (DLR) functionality, where issuers can anchor trusted data as they do today.
  2. A smart contract framework to verify/consume on-chain zkProofs: In this Stage 1 development, this was accomplished using NYMLAB’s AVIDA (Atomic Verification of Identity for Decentralised Applications), which uses DID-Linked Resources published on the cheqd network. AVIDA lets users create a privacy-preserving zkProof with a self-derived binding between the proof presentation, their smart contract account and their Passkey credentials, through AnonCreds Linked Secrets. dApps that want to utilise zkProofs can request a proof presentation through their UI as part of the user’s transactions to be signed and broadcast on-chain. After the zkProof is shared, dApps can verify the presentation using the AVIDA AnonCreds Verifier, a CosmWasm contract deployable on any Cosmos SDK chain that supports CosmWasm, which provides the core verification logic for dApps. This contract can be customised to the dApp’s requirements, e.g. which issuer DIDs are trusted, which revocation checks to run, and which attributes must be selectively disclosed.
  3. A wallet to generate zkProofs from: In the current code developed for testing, this was NYMLAB’s Vectis wallet, which was built to combine non-custodial crypto and SSI wallets and is extensible with plugins (Passkey authentication, automation, account recovery, etc.). In theory, this could be any other wallet capable of interacting with the cheqd ledger.
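To make the verifier’s role concrete, here is a small Python model of the policy checks a verifier contract in this architecture might apply per presentation. The real component is a CosmWasm contract written in Rust; the field names and the simplified booleans standing in for cryptographic proof verification and revocation-registry lookups are assumptions for illustration only:

```python
# Illustrative model of a configurable zkProof verifier's policy checks.
# Not AVIDA's actual interface: field names and the boolean stand-ins for
# cryptographic verification and revocation lookups are invented here.
from dataclasses import dataclass


@dataclass
class VerifierConfig:
    trusted_issuers: set        # issuer DIDs the dApp accepts
    required_attributes: set    # attributes that must be selectively disclosed
    check_revocation: bool = True


@dataclass
class ProofPresentation:
    issuer_did: str
    disclosed_attributes: dict  # attribute name -> revealed value
    revoked: bool = False       # stand-in for a revocation-registry lookup
    zk_proof_valid: bool = True # stand-in for cryptographic verification


def verify(presentation: ProofPresentation, config: VerifierConfig) -> bool:
    """Return True only if every configured policy check passes."""
    if not presentation.zk_proof_valid:
        return False
    if presentation.issuer_did not in config.trusted_issuers:
        return False
    if config.check_revocation and presentation.revoked:
        return False
    # All required attributes must have been selectively disclosed.
    return config.required_attributes <= presentation.disclosed_attributes.keys()
```

A dApp would set its trusted issuers and required attributes when instantiating the contract, then run these checks as part of each transaction that carries a proof presentation.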

In this community grant, NYMLAB adds IBC interfaces for dApps to verify presentations against issuers’ committed data, expanding the use of cheqd resources by other chains.

Follow the progress on NYMLAB’s GitHub here.

Use Cases: Bringing the Code to Life

We’re looking at three initial directions for how this can be applied in practice:

  1. Verifiable AI: A range of “verifiable AI” (vAI) use cases, such as Content Credentials to combat deepfakes, and credentials for AI/ML training data and trained models, require ZKProofs/credentials that persist on-chain and can be verified using on-chain smart contracts.
  2. Interchain ID/KYC: A ZKproof of a KYC credential issued by a cheqd DID, consumable via another AppChain (KYC-4-IBC). The powerful differentiator here is our network’s ability to support both “full KYC” credentials (with personal details) that are shared on an extremely selective basis by the user, and “basic/zkKYC” proofs that are consumable by other Cosmos chains and dApps.
  3. Creds-gated Governance: Using a blend of off-chain reputation credentials and on-chain zkProofs, held by Collectors, to gate access to voting and/or proposing a vote in popular Cosmos governance tools.

Next Steps

We’re continuing to work with NYMLAB to bring this work to life. As a reminder, the development of this community-funded project was broken down into three stages within Prop 31.

Stage 1, completed and now distributed, added the ability to publish a ZKproof as a DID-Linked Resource. Of the total amount allocated for this project (45,000 EUR), 27,500 EUR has been paid to NYMLAB across two transfers (10,000 EUR at the start of the project, plus 17,500 EUR on completion). With these features now implemented and available on the cheqd network, our focus shifts towards creating an accessible developer experience.

Stage 2 will see the development of a Command Line Interface (CLI) allowing developers to jump into the code and start playing around with creating ZKproofs in seconds. This stage also includes developing and automating smart contracts that consume the proofs and verify them across IBC, the outcome being a fully functioning demonstration for issuers, verifiers, dApps and holders interacting on separate chains with cheqd’s resources. We’re also exploring running a hackathon once the CLI tools are available to encourage developers to experiment with what’s on offer.

Stage 3 wraps up the project, with all tooling open-sourced and tutorials plus extensive documentation published and available to all.

If this project is of interest, reach out to us directly, either on X (@rosspower), Telegram (ross_power) or via email ([email protected]).

Harnessing Verifiable AI to Defend Against Deepfakes

Reestablishing Trust with Content Credentials

In October 2023, Slovakia went to the polls to elect a new government. Just days before the vote, an audio recording purportedly of Michal Šimečka, leader of the Progressive Slovakia party, speaking to a journalist about how they planned to rig the election spread like wildfire through social media apps like Telegram. The audio was immediately denounced as fake by both Šimečka and the journalist, but in the final two days of the election, in which politicians were not allowed to speak to the press, the damage was done. With few in-app tools available to consumers to verify the accuracy of what they saw online, only those willing to leave their platform of choice and actively check against other news sources would have been able to confirm that the recording was fake. In a very close-run campaign, the election swung in favour of the rival party, and the deepfake may have directly contributed to this shift.

With more than 64 countries around the world going to the polls this year, it seems likely that misinformation created with Generative AI will unfortunately play a huge part in the future of democratic states. We are quickly reaching a point where we are losing our ability to believe what we see with our eyes and hear with our ears. Deepfakes have the potential to directly impact society in a multitude of ways. Photoshop may have existed for a long time, but the ability to create misinformation at scale has only just arrived. Trust is quickly becoming the most important currency we have. In a digital world, where 90% of the material we consume may soon be AI-generated, we need to be able to trust the information in front of our eyes.

Could you tell if this latest video version was real or AI-generated?

Although this problem has rushed upon us, that is not to say we do not have the tools to begin combating it. It is a problem that some of the biggest companies in the world have been thinking about for some time. Through using Content Credentials, a form of Verifiable Credentials, we can embed trust within the metadata of the content we see online so we know where that image or video was taken, who took it (whilst accounting for privacy concerns), and how (and by whom) it was edited. Verifiable AI (vAI) offers a way to rebuild trust and offer consumers the tools necessary to evaluate what they see and hear. A world where a 38-second deepfake that cost $522 in 2019 can now be created for free is in dire need of a solution.

Deepfake: A History of Visual Manipulation

1917 – Cardboard cutouts made to look like fairies are published in the UK magazine The Strand and convince Sir Arthur Conan Doyle of the existence of magical beings

1988 – Photoshop is first developed

1992 – The word ‘Photoshop’ enters the Oxford English Dictionary as both a noun and a verb

2003 – Kate Winslet speaks out against GQ magazine after they significantly alter her image, halving the size of her legs.

2004 – An image of Jane Fonda is edited into a picture of John Kerry to make it appear he was at an Anti-War protest during the Presidential Campaign

2008 – An image of Vice-Presidential candidate, Sarah Palin in a bikini is widely circulated by media and social media, before being revealed to have been photoshopped to discredit her

2017 – A user on Reddit called u/deepfakes publicly releases algorithms on the site, allowing anyone with the skills and computing power to have a go. The term ‘deepfake’ becomes popularised from the user’s name

2018 – FakeApp, an app to make Deepfakes more accessible, is released

2019 – Popular application FaceApp goes live with an age filter that allows users to see older versions of themselves

2019 – The first fraud case using Deepfake technology is recorded. Criminals mimic the voice of a chief executive and walk away with €220,000 ($243,000).

2020 – Channel 4 broadcasts a deepfake Queen’s speech on Christmas Day

2021 – A lawyer in the United States goes viral with the phrase “I’m not a cat” when his video feed gets stuck on a talking-cat filter during a court hearing

2022 – Stable Diffusion is released, enabling non-technical users to create generative AI images using textual prompting

2023 – An app known for its suite of AI tools for photo editing launches significant enhancements, including advanced face-swapping and photo manipulation tools

2024 – OpenAI announces Sora – a video-generation AI platform capable of generating photo-realistic videos with realistic physics, up to one minute long

The Inherited Problem with Content

Can you spot the 11 telltale signs that this image was manipulated?

Image manipulation has been a problem for some time. Even before the advent of Generative AI and deepfake generation, Photoshop had already eroded trust in what we see with our eyes. The Princess of Wales, Kate Middleton, for example, invited weeks of social media speculation when she posted a Photoshopped image of herself to social media, leading the Associated Press and other press agencies to issue a ‘kill notice’, stating that “it appears that the source has manipulated the image…” and requesting that media organisations remove it from their media lists and online articles. Kill notices are issued when an image or story an agency has shared is flagged as untrustworthy – a huge embarrassment for the Royal Family. Fuelled by the speculation already building around her prolonged absence from the media, this kill notice sent the internet into a frenzy as people endlessly discussed, theorised and farmed engagement trying to work out what had happened to the Princess.

This situation goes to show that consumers and citizens have a strong desire to know when they are being lied to, and that tools which reduce opacity around where our content comes from are needed. While in this situation it was easy to spot the clear signs of Photoshopping, what happens when it is not so obvious? When transparency is built into the system, it becomes a lot harder to be dishonest and a lot easier for organisations or individuals to judge an image’s origin and accuracy.

Now people are even questioning whether this video is AI generated! Without Content Credentials, telling the difference will soon be almost impossible.

Although Generative AI offers huge benefits to all layers of society, it also makes our problems with trust in content ten times worse. Not only does Generative AI make the production of photorealistic images possible (soon the days of Generative AI struggling with hands, feet and shadows will be gone), but it can also do so at scale. This means that we could soon see a digital world where companies can spin up deepfake adverts with ‘real humans’ selling you a product with a script based around you and what your social media apps say about you. With Sora by OpenAI already capable of creating very realistic videos, it will not be long before these tools are in the hands of nefarious actors and fooling humans with video content becomes entirely possible. The implications for society are huge, as now almost anyone, from a terrorist or rogue state actor to a teenager in their bedroom, has access to tools capable of significantly moving markets, affecting election results, or defrauding someone.

The Solution: Verifiable Content Credentials

The problem of verifying content is not a new one; it is something industry giants have been working on for some time. Even Fox News has been developing its own verification system on the Polygon blockchain, known as Verify. Additionally, a larger group of media organisations has come together to create a standards body capable of dealing with the complexities of the technology and of international coordination. Known as the Coalition for Content Provenance and Authenticity (C2PA), with founding members including Adobe, BBC, Google, Intel, Microsoft and Sony, the coalition aims to set interoperable standards which enable consumers of content to understand where the content was made, and by whom or what it was made.

By using a form of Verifiable Credential, it becomes possible to record, at the moment a picture is taken, important metadata about its provenance, such as the location, the time, the fact that it was taken by a camera and not AI-generated, and possibly the owner of the camera in question.

Any edits then made to the picture are also added to the metadata. It is important to note that not all edits are bad: news publications regularly retouch pictures to fix lighting or redact faces. Even metadata needs editing: it may be that a photographer wishes to remain anonymous, and instead, the press agency publishing it removes the Personal Identifying Information (PII) from the metadata. Editing is a necessary part of the journey; what is important is that these edits are recorded. The use of Verifiable Credentials creates a chain of trust which enables the privacy of individuals and the production of content ready for publication. This enables the creation of established trust anchors able to digitally sign and verify information, including that which has been removed or changed.

Content Credentials are a great tool to protect against misinformation while also protecting individual privacy. They can establish a verifiable chain of custody for digital media, documenting its origins and any subsequent modifications. This information can be valuable for forensic analysis and attribution, helping to trace the source of deepfake content and identify the individuals or entities responsible for its creation. In the examples below, we illustrate how Content Credentials would work with both truth and falsehood if they become widely adopted:

  1. Content Creation: A protestor in a state governed by an authoritarian regime records acts of police brutality at a peaceful protest with their smartphone. Their smartphone is enabled with C2PA-enabled hardware, which records important metadata like the location and time, as well as information such as the owner of the phone’s Personal Identifying Information (PII). This information can be removed from the picture’s metadata at a later date (though this action will itself be recorded).
  2. Editing: The photographer edits the photo using C2PA-enabled editing software to blur the faces of some of the protestors; these changes are logged in the image’s metadata and signed by their Decentralised Identifier (DID). They then send the image, including its metadata to a journalist at a foreign publication, such as the BBC or al-Jazeera.
  3. Signing: The journalist looks at the image’s metadata and verifies that the photo they are looking at was shot on a C2PA-enabled device and therefore not AI-generated, as well as checking what edits were made by the photographer. He signs a content credential with his organisation’s DID, attesting to the accuracy of the photo.
  4. Redaction: He redacts the photojournalist’s PII, signing a record of these changes into the content credential, attesting that they are a trustworthy source.
  5. Publication: After final edits are made to get the picture publication-ready, the journalist posts the image on his news publication’s website and social media.
  6. Verification: Readers are then able to look at the image metadata and check the Content Credentials to see that:
    1. The image was not AI-generated.
    2. It was taken at the time and location they claim it was taken at.
    3. The faces of the protestors were blurred.
    4. The photographer’s PII was removed.
    5. No other edits were made.
    6. The news agency has vouched for the trustworthiness of the photographer.

Meanwhile, a government misinformation officer from the authoritarian regime wishes to create evidence showing the protestors being violent to justify the police attack:

  1. Content Creation: The misinformation officer quickly generates images and videos of protestors holding weapons at the same rally using an image generator such as DALL·E or Midjourney. The image generator attaches Content Credentials to the metadata attesting that the image is AI-generated.
  2. Editing: They then use Photoshop to make further edits, which are logged as Content Credentials in the image’s metadata.
  3. Signing: As the information officer does not have a good reputation, they must sign with a DID from an unknown account without a public reputation (or may not attest to the accuracy at all).
  4. Publishing: The misinformation actor posts their image on social media and amplifies the post through multiple bot accounts. 
  5. Verification: When readers check the still-intact metadata of the image, they can see that:
    1. The image does not have a credential proving that it was taken by a device.
    2. The image has a credential showing that it was generated by AI.
    3. The image has a credential showing that multiple manipulative edits were made.
    4. The image has not been attested to by any trust anchor of good standing.
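The hash-chaining idea underlying both walkthroughs can be sketched in a few lines. This is a toy model: real C2PA Content Credentials are signed manifests (COSE signatures embedded in the asset itself), and the entry fields and example DIDs below are invented purely for illustration:

```python
# Toy model of a content-provenance chain: each entry commits to the hash
# of the entry before it, so any later tampering breaks the chain.
# Real C2PA manifests are cryptographically signed; signatures are omitted
# here to keep the hash-chaining idea visible.
import hashlib
import json


def entry_hash(entry: dict) -> str:
    """Stable hash of one provenance entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


def append_entry(chain: list, action: str, signer_did: str) -> list:
    """Record an action (capture, edit, attestation) linked to the previous entry."""
    prev = entry_hash(chain[-1]) if chain else None
    chain.append({"action": action, "signer": signer_did, "prev": prev})
    return chain


def chain_is_intact(chain: list) -> bool:
    """Each entry must commit to the hash of the entry before it."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != entry_hash(chain[i - 1]):
            return False
    return True


# The honest workflow above, condensed:
chain = []
append_entry(chain, "capture:C2PA-enabled-device", "did:example:photographer")
append_entry(chain, "edit:blur-faces", "did:example:photographer")
append_entry(chain, "attest:accuracy", "did:example:news-agency")
```

If the misinformation officer alters an earlier entry (or strips entries entirely), every later link's `prev` hash stops matching, which is exactly what lets a verifier detect that the chain of custody is broken or missing.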

Demonstrably, Content Credentials in this case make it much easier for people to spot misinformation, though a smart misinformation officer would remove the Content Credentials from the image’s metadata. However, even if they are removed, the lack of Content Credentials proving the image’s provenance will increase scepticism, at the very least preventing it from being publicised by major news organisations. By creating tools that enable us to inspect content and establish a base standard for information (e.g. this is AI-generated, this was taken at the stated location, etc.), we can start relearning how to trust the information we see, or apply greater scepticism to content which lacks a chain of custody or provenance of how it was generated.

Why might an organisation or individual be incentivised to adopt Content Credentials?

Content credentials solve huge problems that we face in the wake of the proliferation of at-scale deepfakes and AI-generated misinformation, but they also offer huge opportunities.

Reducing the spread of misinformation: The greatest benefit of this kind of technology is how it helps to better inform the public about the provenance of what they are seeing online. The better informed the public is, the greater their capacity to critically examine what they are consuming, and the less likely people are to be taken in by misinformation.

Brand protection: The use of a company’s brand in a fake news article or screenshot can cause a lot of damage to public trust in that organisation. Content Credentials can create a useful source of truth that brands can point to for their official facts, or use to establish whether an image was generated by an AI model with the correct licensing.

Brand Legitimacy: Being a trust anchor in an established chain of content provenance also improves the legitimacy of brands by enabling them to become established sources of truth that people trust to give them the facts.

Meeting compliance requirements: Many industries have legal requirements for content authenticity and attribution. For example, the UK Advertising Standards Authority (ASA) recently reminded brands that ads using AI-generated content will need to comply with existing advertising rules such as the rules on misleading advertising, especially the rules concerning testimonials and endorsements. Content Credentials can support compliance by providing a verifiable trail of the content’s origin and claims. With codes of practices being refined and the political spotlight likely to once again fall on disinformation over the coming years, it seems likely that many companies will use Content Credentials to improve compliance with regulations and standards.

Agreeing to disagree: Organisations on different sides of the political spectrum, for example, CNN and Breitbart, may have very different perspectives and facts on an unfolding situation. However, it is still important that users have the opportunity to choose which organisation they should trust within the same framework. As long as everyone agrees on shared rules, for example, that an image should show if it is a genuine picture or something created or adjusted by AI, then we can begin having more trust in the images shared across the political spectrum.

Decentralisation of Trust: Many people today get their news from social media influencers, freelance journalists or other non-traditional news sources. By agreeing to a shared system of attestation, independent reporters, photographers and fact-checkers can establish themselves as trust anchors outside of traditional news media organisations and networks.

Financialisation of Trust: Establishing oneself as a ‘trust anchor’ may enable new commercial models for organisations currently struggling with their commercial model. Being a Trust Anchor holds value, and payment systems capable of microtransactions will enable new ways to monetize one’s credibility.

Seamless integration: Content credentials can seamlessly integrate into existing authentication pipelines. By embedding content credentials into the metadata of digital content, verification is made much simpler. Deepfake detection systems can cross-reference Content Credentials with detection results to confirm or refute suspicions about the content’s authenticity.

What challenges might we still face with Content Credentials?

Although Content Credentials present huge opportunities, they are not a deus ex machina that can solve all our problems related to establishing Verifiable AI (vAI). Many issues must still be dealt with and explored to ensure a working system of trust based on Content Credentials.

Metadata can be redacted: The ability to remove information from the metadata, or just to not have any metadata in a piece of content at all is up to the person sharing that content. If the majority of the media does not adopt these standards, then it will become a mark of establishment-approved content, rather than an overarching system used by everybody to establish a shared truth.

Who do you trust? Unless you are getting your news directly from the source, or the source is happy to be public, you still have to trust whatever organisation is attesting to a fact before believing it. Just because a picture of a UFO has been confirmed as real by InfoWars, it does not follow that you should believe it is a UFO. However, it may be that others think an image attested to by InfoWars is more trustworthy than one attested to by the BBC. Although Content Credentials do create more space for the truth, who you believe will always remain a factor.

Can the system still be gamed? A smart misinformation spreader could generate a picture with AI, then take a picture of it using a C2PA-enabled camera, thus starting the chain of provenance later than where it actually began.

Do you trust the tech? Owning the right Content Credentials may become synonymous with being accurate, but if this can be gamed, or the technology can be hacked, it creates opportunities for misinformation that has been verified to spread. 

Are Content Credentials fake news? Those who have an interest in spreading misinformation to large audiences are incentivised to discredit content credential systems (just as many attempt to discredit peer-reviewed scientific papers), which may potentially lead to mistrust in the technology as an ‘establishment surveillance tool’.

How can the cheqd network help?

Here at cheqd, we have been working on Verifiable Credential technology for over three years. We have done seminal work in the creation of trust registries and helped to create the W3C standards on which Verifiable Credentials are based. Our products are fully compliant with the EU’s eIDAS2 regulations on identity, and we are in the process of becoming an EU-recognised Electronic Distributed Ledger. As well as being fully compliant, we are one of the most interoperable DID methods on the market, meaning our Verifiable Credentials can interact with multiple DID networks, and we have the capability of enabling payments for microtransactions.

Our unique privacy-preserving payment rails for Verifiable Credentials unlock the possibility of new commercial models for trust anchors involved in the issuance of Content Credentials. For example, perhaps a news organisation can use its established reputation to charge for fact-checking an independent journalist’s work or improve the possibility of royalty payments for photojournalists not previously associated with a news organisation.

Coming back to our first example of a photojournalist getting a photo from a violent protest verified and published, payment rails enable the commercialization of this model at the republishing stage. Websites that wish to redistribute or republish the image could verify that they have the stamp of approval from the publishing press agency, and once established as attested for, pay for the rights to the image. Due to the customisable nature of cheqd’s payment rails for Verifiable Credentials, this would also allow for split payments for the multiple parties involved, enabling both the news agency and the photojournalist to be paid for their work, creating a new potential automated royalties commercial model built around microtransactions. 


Content Credentials are poised to become a significant growth area and a major use of Verifiable AI (vAI) in the coming years. Given the mushrooming of deepfake misinformation and the breakdown of trust in our societies, a new way to track trust is desperately needed to provide some kind of faith in a shared version of the truth. We believe that the cheqd network offers a range of tooling as an infrastructure partner that will be of use to any organisation which sees itself as part of the content verification process.

Contact us

Are you a content creation platform, a publisher, news aggregator or media agency? Contact cheqd to see how you can use Verifiable AI to add trust and authenticity to content and your brand, and how we can help to unlock new business models. We are always up for a chat – contact us at [email protected]!

Andromeda Partners with cheqd: Empowering Developers with Trusted Data Markets

We are pleased to share a partnership between Andromeda and cheqd, a leading innovator in data management and trust infrastructure. This collaboration marks a significant milestone in our mission to enable the creation of marketplaces for Trusted Data while empowering developers and enhancing the Andromeda community with secure and user-controlled data solutions.

cheqd is revolutionising the Trusted Data landscape by providing a powerful platform for managing data while prioritising user control and privacy. Our technology enables the creation of Trusted Data Markets across various industries, allowing for data exchange and monetisation in a verifiable, portable and privacy-preserving manner. By leveraging Self-Sovereign Identity (SSI) and blockchain technologies, cheqd is building the payment infrastructure and trust layer necessary to create Trusted Data marketplaces.

At the heart of cheqd’s offering is a commitment to empowering consumers to fully own and control their data. With our public permissionless network, cheqd provides first-of-its-kind payment rails, decentralised identity, customizable commercial models, and governance structures for Trusted Data. From consumer credit data to Web3 lending environments, manufacturing, education, DAOs, and gaming, cheqd’s network serves as the foundation for businesses to build upon.

Through the partnership with cheqd, Andromeda aims to empower developers within their community with access to secure and user-controlled data solutions. By integrating cheqd’s technology into their platform, developers can build decentralised applications (dApps) with enhanced data management capabilities, creating more efficient and user-centric experiences.

One key benefit of this partnership is that developers on the Andromeda platform can leverage Trusted Data Markets powered by cheqd. By utilising cheqd’s technology, developers can enhance their applications’ security, privacy, and transparency, ultimately building users’ trust and confidence. This improves the user experience and opens up new opportunities for innovation and growth within the Andromeda ecosystem.

Furthermore, the partnership with cheqd aligns seamlessly with Andromeda’s commitment to decentralisation and user empowerment. By providing developers with the tools and infrastructure needed to build secure and user-controlled data solutions, Andromeda and cheqd are driving the adoption of decentralised technologies and empowering individuals to take control of their digital identities and data.

Core Contributor at Andromeda Labs, Mant Hawkins said, “Andromeda is excited to be partnering with cheqd. The needed capability for everyone to have trusted data fits a core belief of the Andromeda Team where privacy, trust, and sovereignty reign supreme. Moving forward together our partnership will bring trusted platform composability to not only Web3, but to the Fortune 500.”

“We at cheqd are firm believers in Andromeda’s mission to simplify and speed up the developer experience for anyone wishing to build decentralised applications. We have been focusing on simplifying that same experience for self-sovereign / decentralised identity companies and developers, and it is extremely exciting to find a partner building to this same target for an even greater community.

Together, we will enable much greater trust between individuals, companies and things at scale, with the ultimate aim of creating the trusted data economy.” said Fraser Edwards, CEO and co-founder of cheqd.

The partnership between Andromeda and cheqd represents a significant step towards advancing the adoption of decentralised data solutions within the blockchain space. We are fostering a more secure, transparent, and user-centric ecosystem for decentralised applications by empowering developers with access to Trusted Data Markets. Andromeda and cheqd are paving the way for a future where individuals have full ownership and control over their data.

About Andromeda

Andromeda is an all-on-chain suite of products, tools, and utilities enabled by a decentralized operating system called aOS, or the Andromeda Operating System. aOS is designed to make Web3 simpler and building on-chain Easier, Better, Faster.

aOS allows users, creators, and developers to rapidly build dApps, dropping development time from months to minutes. Developers can compose ADOs and dApps across the Cosmos Ecosystem and beyond to maximize their total addressable market and interoperability with the best Web3 projects and purpose-built blockchains.

aOS is where Web3 starts.


About cheqd

cheqd is the only privacy-preserving payment and credential network that empowers users and organizations with ownership, portability, and control over their data. Building upon Decentralised Identity (DID) and Verifiable Credentials, data can be transacted while prioritizing individual privacy.

Creds is cheqd’s first product, a no-code decentralized reputation platform for communities to build trust and loyalty. Creds also provides the ability to verify someone’s reputation, including who they work for, directly within Telegram, i.e. without either party needing to leave the app, to help prevent scams and fraud.

First vAI Partnership Unleashed: cheqd Collaborates with Nuklai to Power the AI Industry with Trusted Data

Last week, we made an exciting announcement regarding a strategic partnership between cheqd and Nuklai, aimed at revolutionising the landscape of data credibility and authenticity. It’s time to dive deeper into the infrastructures behind this collaboration and explore how we will reshape the future of data verification and trust in the AI sphere.

The First vAI Partnership — What does it entail?

At the heart of this partnership lies a shared commitment to enhancing the credibility of datasets within the Nuklai smart data ecosystem. By joining forces, cheqd and Nuklai aim to add an indispensable layer of trust and reliability to datasets, empowering data contributors and consumers alike.

With this collaboration, data contributors on the Nuklai platform will now have the option to attach verifiable credentials to their datasets, significantly enhancing their authenticity. This infusion of authenticity translates into increased trust in the data, fostering greater adoption and, ultimately, driving the growth of the Nuklai ecosystem.

Moreover, the partnership will introduce the option to have validated contributors to datasets, further bolstering the credibility and value of Nuklai datasets. This initiative will enable Nuklai datasets to feature a ‘credibility score’ based on the trustworthiness of their data sources, providing users with tangible proof of the reliability of the data.

How does cheqd fuel the collaboration?

cheqd’s robust infrastructure serves as the backbone for verifying the authenticity and credibility of data sources. Contributors to Nuklai’s datasets can obtain verifiable credentials from trusted issuers, which attest to their identity, qualifications, and the reliability of their data. These credentials are cryptographically signed, providing cryptographic proof of their validity.

Verifiable credentials ensure that only credible and trustworthy information is utilised for training and inference purposes in the Nuklai ecosystem. Additionally, cheqd’s infrastructure enables traceability and auditability, allowing users to trace the lineage of data inputs and verify the integrity of AI-generated outputs.
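As a simplified illustration of this sign-then-verify flow: real Verifiable Credentials on cheqd use asymmetric signatures (e.g. Ed25519) whose public keys are resolved via the issuer’s DID document, but the same tamper-evidence property can be sketched with a stdlib HMAC standing in for the signature. The DID, key and claim names below are all hypothetical.

```python
import hashlib
import hmac
import json

# Placeholder for the issuer's signing key; a real issuer would hold an
# asymmetric private key, with the public half published in its DID document.
ISSUER_KEY = b"issuer-secret"

def sign_credential(claims: dict, key: bytes) -> dict:
    """Canonicalise the claims and attach a proof over them."""
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict, key: bytes) -> bool:
    """Recompute the proof; any edit to the claims invalidates it."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

cred = sign_credential(
    {"issuer": "did:cheqd:example:issuer", "subject": "dataset-42",
     "claim": "contributor-identity-verified"},
    ISSUER_KEY,
)
print(verify_credential(cred, ISSUER_KEY))   # True
cred["claims"]["claim"] = "tampered"
print(verify_credential(cred, ISSUER_KEY))   # False
```

The key point is the second check: once any claim is altered, the proof no longer verifies, which is what makes credential-backed data lineage auditable.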

Are you part of an AI project or its community and keen to leverage Verifiable Credentials and Decentralised Identifiers to verify the source of your data? Contact us, or get your favourite team to contact us, at [email protected]!

Enabling Nuklai Powered AI to Become Verifiable AI

This integration of cheqd’s infrastructure empowers Nuklai to establish a trusted environment in which AI models can confidently operate on Trusted Data, giving users assurance in the accuracy and reliability of AI-driven insights generated from Nuklai’s datasets.

Through this collaboration, cheqd and Nuklai are poised to advance the AI industry towards verifiable AI, driving innovation and excellence in the field of artificial intelligence while upholding the highest standards of data integrity and trust.

What’s in Store for Verifiable AI?

We have released, and will continue to release, a series of vAI blogs to guide you through the interplay between AI and verifiable credentials, its use cases across industries, and vAI as designed by cheqd.

Here’s a list of vAI materials you can dig into at the time of writing. More guides will be published soon.

Get in touch with cheqd at [email protected] to verify the source of data for your AI projects!

What is Nuklai

Nuklai is an innovative layer 1 blockchain infrastructure to host a collaborative data ecosystem that will fuel the next generation of AI and Large Language Models (LLMs) with world-class data.

Nuklai’s data ecosystem is built on two blockchain networks, supporting the data ecosystem and AI with data management and computational power. Through data enhancement tools, a public data marketplace, and a private data-sharing solution for businesses, they’re building the most vibrant data ecosystem in the world.

The high-quality data developed by Nuklai’s ecosystem and the computation power provided by the decentralised infrastructure accelerate AI model training and utilisation.

Kicking Off the Year at cheqd – A Product Roundup for 2024 Q1

As we round off the first quarter in what’s shaping up to be a momentous year for the crypto and identity market, we reflect on the progress of the Product & Engineering team so far.

At the end of January, our product roadmap and accompanying product vision blog laid out a bold vision for what we wanted to achieve this year.

As we round up the first quarter of the year, and just back from the brief Easter break, we’re reflecting on what we’ve achieved so far, what we’ve learnt, and assessing the direction we’re moving in as we head into Q2.

Highlights & Key Achievements

1. cheqd Network Upgrade

First up, our initial network upgrade of the year which feeds into Goal 1:

1️⃣ Enhance cheqd Network through tokenomics improvements and additional identity functionality through ecosystem integrations

Central to our success this year is maintaining our excellence as a network while adding features that give us a competitive edge in the identity space.

With this in mind, our mainnet upgrade process is well underway. Our testnet was successfully upgraded today, and we’re now running the necessary checks ahead of the mainnet upgrade, to be voted upon and completed in due course. We have a more detailed breakdown of this upgrade in our governance forum here.

A key feature we’re particularly excited about with the upgrade is the introduction of on-chain ZK proofs for off-chain credentials. This is somewhat of a first, particularly in the IBC ecosystem, and one we think speaks to the industry’s general direction towards modularity and interoperability (more on this trend in Ross’ ETHDenver write-up).

Through the work completed by NYMLAB, all IBC-enabled projects will be able to leverage our DID modules, including our novel DID-Linked Resources, meaning companies across the fast-growing and quickly evolving IBC space can issue and verify credentials, including ZKCreds.

2. Standards and industry alignment

In the background, we’ve continued to innovate with technical standards and to align ourselves with trust frameworks such as eIDAS 2.0, in line with our goal:

2️⃣ Keep up-to-date with standards and industry best practices to maintain product differentiation

With a number of our partners and industry leaders, we have begun working on a solution to use DID-Linked Resources to build “trust chains” or “trust registries” that roll up into traditional trusted lists. This is intended to improve the scalability, flexibility, and ease of integration of building a fully trusted identity ecosystem under the new eIDAS regulation. 

We will provide more on this architecture and approach as the work progresses. Still, it is a very exciting development that positions cheqd to solve a major industry challenge, and as such, creates a huge competitive advantage for organisations transitioning to verifiable credentials.

3. Credential Service Portal

Next up is our Credential Service progress. To release our credential payments into the wild, we need the tech to make it as easy as possible to integrate and play with them. Onboarding developers is our highest priority for Credential Service, and this requires removing as much friction as possible to get going.

Goal 3 for 2024 is about this:

3️⃣ Enhance Credential Service with a dashboard, feature additions and new regulated payment schemes

Our portal to onboard developers is now nearing a beta version, with basic features to manage pricing plans, generate and manage API keys, view credential usage metrics and more. Whilst this remains pre-launch, here’s a sneak peek at a few screens that’ll be coming soon to our partners and prospects…

Credential Service developer pages

We have also advanced in our work on credential payment schemes, progressing in multiple partnership conversations with large existing industry consortia and payment schemes.

Payment Scheme operating flow

4. Creds Insights & Analytics

As we’ve been carving out our niche with Creds, our attention has shifted increasingly towards providing a credential offering that is much more than the issuance of credentials alone. 

A hypothesis we’ve been testing with the market is that community managers lack data on their community members, making it difficult to gain insights into how to activate their best members and drive engagement and loyalty. As such, we structured one of our 2024 goals around this:

5️⃣ Enable Creators to build journeys to gain insights into their members so that they can offer more personalised experiences, products & service offerings for their Collectors


With Creds, we foresaw a means of providing insights to Creators/Issuers whilst respecting the privacy of the members themselves. This quarter, we’ve laid the groundwork for this, completing a migration from our initial database – selected when Creds was merely a cheqd network demo – to a scalable, production-ready database that’ll enable the wide variety of features we want to offer.

These features include extensive “Insights” on both Credentials and Collectors, plus an overarching view of community engagement and activity. We’re still finalising some APIs to retrieve the data from the new DB and present it whilst respecting member privacy, but we thought we’d share some initial designs, coming to the Creator Studio soon…

Creator Studio Insights pages

This database migration is also set to accelerate our ability to support Quests & Campaigns, creating a stickier and even easier user journey for Creators as they get started with Creds.

5. Our experimental Ad Tech use case is ready to test with the market

Whilst we’re keeping our cards close to our chest on this one, we do want to mention the work that has been achieved in this quarter to turn this idea into something real we can test with the market.

Core to our mission at cheqd has been to disrupt the entrenched ways Web 2 has been built and architected, both technically and in its commercial models, where the user generally gets the worse end of the deal. And nothing is more deeply embedded in the psyche and operating models of the web than the way advertising is managed.

With our credential payments, we envisage new business models that incentivise issuers to release users’ data, and advertising, with its already hugely profitable commercial models, is a powerful means to achieve this.

To get there, we’ve built out some initial designs that help demonstrate the opportunity. We’ve also completed in-depth research into what the post-third-party cookies era looks like and how we can capitalise on it while respecting users’ privacy and sovereignty. Expect more on this as we move into Q2…

6. Affirmed our instinct on the opportunities in AI

Through conference attendance and ongoing discussions with Decentralised AI organisations, we’ve made strong headway in carving out a niche that we believe will differentiate our offerings from the market and, most importantly, provide trust and sovereignty for individuals in a time of immense uncertainty and distrust in digital media and other areas. 

Much of our research is being published over the coming weeks, with our first and second blogs already out now. 

As these shape up, we’ll look to get building in this area, bringing both our leading technology, which is already mature enough to solve a number of these problems, and our expertise in identity and verification.

Q2 and beyond… what’s next?

As you will have noted from our roadmap, we fell short on a few goals purely because the prerequisite work required – our upgrade and database migrations – was more complex and time-consuming than anticipated.

That said, these are already proving invaluable to us in terms of speed of development, given the improvements made in optimising the codebase and underlying databases. 

Moving forward, Q2 is going to be an exciting time, as we see various workstreams in cheqd converging, ultimately enabling us to demonstrate the potential of our combined offering.

Roadmap items for payments incorporated into Creds

With this in mind, our top priority is to incorporate credential payments into Creds, delivering a first-of-its-kind product-ready solution to the chicken-and-egg problem of incentivising issuers to release data. Through Creds, Creators will be able to choose to monetise credentials, charging either for verification or for purchase. In parallel, we’ll be finalising the insights mentioned to give Creators greater visibility into their communities, so they can learn what’s working best and where they can improve.

Preview screens for Creators to set price for verification and purchase of a cred

We’ll also release the first version of Quests in Creds to streamline the creator experience and make it easier to build end-to-end Collector journeys that drive consistent and reliable community engagement.

Preview screens for Quests build journeys

On the Credential Service side, in addition to improving the developer onboarding experience, we’re also looking to go deeper into our differentiated Trust Registries offering, leveraging our DLR module to build “trust chains” or “trust registries” that roll up into traditional trusted lists as mentioned.

We’re also excited to start working on a few major network items. With our first upgrade soon to be complete, we’re looking at what will further improve our tokenomics, accruing greater value to cheqd token holders and, of course, offering more identity features to builders.

With this in mind, on the network side, this quarter is going to be all about payments and fees, looking to further simplify the experience of working with the cheqd network by enabling payments in Noble USDC, plus introducing fee abstraction, which would allow payments in any Cosmos token.

Roadmap items for the cheqd network in Q2

Thanks as ever to our incredible community at cheqd. As a product team, we want to give a special shoutout to our ambassadors, who have been incredibly generous with their time and engagement, helping to drive leads, offer ideas, and crucially continue to spread the message for cheqd.

2024 is already shaping up to be a year like none other in crypto, and we feel more confident than ever that we’re positioned to seize a vast array of opportunities. 

These opportunities will ultimately give users control over their data and confidence that the future of technology is bright.

As always, you can reach out to the product team at [email protected] or tag any of us in the cheqd Telegram community.

Verifiable AI in Action: Challenges and Opportunities

This is the second article in a series of five.

In our previous article, we explored the critical need for Verifiable AI in a world where artificial intelligence use is ubiquitous. We introduced the concept of the Information Supply Chain, in which data is transformed into working AI models, and how Verifiable Credentials (VCs) can embed trust at almost every stage within this chain, ensuring data provenance and integrity and creating transparent frameworks across the entire AI lifecycle.

In this second article, we delve further into the challenges previously touched upon and the opportunities they present for builders looking to be first movers in creating trustworthy, transparent systems that can be relied upon both to hold and to produce Trusted Data.

Recap: The Information Supply Chain

The production of any useful artificial intelligence model output is a complex, multi-step process involving a large number of dependencies, which can broadly be divided into four stages.

The first stage is collecting and collating the Data used to develop the model, as an AI model is only as good as the data it is trained on. AI models require astronomical amounts of data to work successfully, but this data must be of good quality: AI-generated content can introduce hallucinations, and many datasets are inherently biased. Ensuring that a model is trained on high-quality data is essential to producing a good model.

The second stage is the actual training process, in which models use this data to discover patterns and begin ‘learning’ through the use of various machine learning algorithms. Although these fall broadly into three categories – unsupervised learning, supervised learning and reinforcement learning – the specific algorithms used may be proprietary information that builders do not wish to share. This stage is the most computationally heavy, with large clusters of GPUs or TPUs needed to sustain the countless hours of training required to produce any kind of useful output.

The third stage is known as Inference, the actual usage of the model for its intended purpose. The output from a ChatGPT prompt is an example of inference. At this stage, trust becomes incredibly relevant as this is where models begin to interact directly with humans, whether through the use of AI agents to perform tasks previously done by humans or the creation and consumption of AI-generated images or text. 

At the fourth stage, actual deployment, ensuring that the model itself is high quality, built using good datasets, regulatorily compliant, and well-reputed may be key for anyone looking to deploy a model – making the aggregated reputation of a model, through its datasets, training methods and ongoing reviews, incredibly important to any decision-maker.

Each stage along the Information Supply Chain requires verification in multiple ways, almost all of which can leverage Verifiable Credentials and Decentralised Identifiers to build Trusted Data packages, baking trust into the process in a chain-agnostic, privacy-preserving, and fast-moving way.

Challenges and Opportunities at the Data Level

Challenge One: Size matters, but so does quality

Known as the ‘Bitter Lesson’ within Artificial Intelligence circles, the observation is that the best way to get the most out of an AI model is simply to use more computing power and more data. Training an AI to think the way a human does is not, by itself, enough to create an effective tool. With more data and more computing power, AI models are much better able to learn effectively on their own, often discovering patterns unseen by humans. The more data, the more computing power, and the more time an AI has to learn, the better it will be. However, this data needs to be quality data: incomplete datasets may reflect societal biases or miss important nuances, rendering an AI model useless.

The implications of this at the data level are clear: creating an effective model will require incredibly large, high-quality datasets. These are not easy to come by – there are only so many available. Wikipedia, for example, amounts to only around 20 GB of data, not enough on its own to train an effective AI model. This makes it likely that, in the future, a sign of a quality AI model will be the datasets on which it has been trained, without which the model may not be considered of any value. Verifying which datasets an AI model was trained on may become key to ensuring that any model used by a person or enterprise is useful at all.

Opportunity: Verified Datasets

Verifiable Credentials have a perfect use-case here in ensuring that any model can clearly be labelled with the datasets used, without having to trust the producers of the model themselves, or waste time cross-referencing directly with the dataset producers. In practical terms, this would work as follows:

  1. The AI model producer would download or license and use a dataset in the training of their AI model
  2. Along with the dataset itself, its owners would also ‘issue’ a Verifiable Credential, ‘signed’ with their Decentralised Identifier
  3. The AI model itself would essentially be the ‘holder’ of the Verifiable Credential within the Trust Triangle, able to showcase when requested by a ‘verifier’ which datasets have been used to train it.

This allows for a quick, trusted way to prove that a model has been trained on the required datasets.
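The three-step flow above can be sketched as a simple verifier-side check; the DIDs, dataset names and trusted-issuer list here are entirely hypothetical, standing in for real DID resolution against the cheqd network.

```python
# A verifier maintains (or resolves on-chain) a list of dataset issuers it trusts.
TRUSTED_DATASET_ISSUERS = {
    "did:cheqd:example:commoncrawl",
    "did:cheqd:example:wiki",
}

# Credentials the AI model "holds", each issued and signed by a dataset owner.
model_credentials = [
    {"issuer": "did:cheqd:example:commoncrawl", "dataset": "cc-2023-snapshot"},
    {"issuer": "did:cheqd:example:wiki", "dataset": "wiki-dump-2024"},
]

def verified_datasets(credentials: list[dict], trusted: set[str]) -> list[str]:
    """Return the datasets whose credentials come from a trusted issuer DID."""
    return [c["dataset"] for c in credentials if c["issuer"] in trusted]

print(verified_datasets(model_credentials, TRUSTED_DATASET_ISSUERS))
# ['cc-2023-snapshot', 'wiki-dump-2024']
```

In a real deployment the verifier would also check each credential’s cryptographic proof and revocation status rather than trusting the issuer field alone.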

Challenge Two: You must comply

As AI models become more prevalent, the scrutiny to which their creators and users are held will increase. Ensuring that models comply with existing rules around Environmental, Social and Governance (ESG) standards and regulations will become an important label for any procurement officer to look at before choosing a model for their company. For example, many early AI predictive policing models were subject to large biases due to incomplete datasets, brought about by discriminatory policing practices that were then fed into the data on which the models were trained. This may be an extreme example, but the same dynamic can play out in other ways, for example when an AI model is used to screen job candidates or decide who is eligible for a government benefits scheme. Even data privacy must be considered by anyone using an AI model.

AI models are often black boxes, where the exact ‘thinking process’ that leads to a decision is not clear, making it important that the data on which a model is trained is tracked and verified as much as possible. Although the sector may currently be in a ‘wild west’ period, with incoming EU regulation this situation will change very quickly.

Opportunity: Verifiably Compliant Datasets

The EU’s incoming ‘AI Act’ specifically refers to the importance of the “high quality of the datasets feeding the system to minimise risks and discriminatory outcomes” when it comes to ‘High Risk’ uses of AI – areas which can have a significant impact on someone’s life, such as education, public services or law enforcement. Verifiable Credentials can be used here by standards-setting organisations – governmental bodies, companies, or even open-source groups such as DAOs – to quickly show whether a training dataset is compliant.

In practice:

  1. A dataset is inspected by a regulatory, or industry standards body, to verify that it is following all required regulations and/or standards.
  2. The regulatory or standards body issues this dataset with a Verifiable Credential signed with the body’s Decentralised Identifier
  3. This Verifiable Credential is then ‘held’ by the dataset
  4. AI model trainers can check that the dataset they are looking to train their model on is fully compliant for specific jurisdictions or industries.
  5. When this dataset is then used by a model, a Verifiable Credential can then be issued by the standards body to the model, showing that it is trained with a regulatory-compliant dataset
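The trainer-side check in step 4 could be sketched as below. The credential schema, DIDs and dataset names are purely illustrative; a real check would verify signatures against the standards body’s DID document.

```python
# Compliance credentials attached to ("held by") each dataset,
# keyed by dataset name. Schema is a hypothetical simplification.
dataset_credentials = {
    "medical-imaging-v2": [
        {"issuer": "did:cheqd:example:eu-regulator",
         "jurisdiction": "EU",
         "standard": "AI-Act-high-risk"},
    ],
    "scraped-forum-posts": [],  # no compliance credential issued
}

def is_compliant(dataset: str, jurisdiction: str,
                 credentials: dict[str, list[dict]]) -> bool:
    """True if the dataset holds a compliance credential for the jurisdiction."""
    return any(c["jurisdiction"] == jurisdiction
               for c in credentials.get(dataset, []))

print(is_compliant("medical-imaging-v2", "EU", dataset_credentials))   # True
print(is_compliant("scraped-forum-posts", "EU", dataset_credentials))  # False
```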

Challenge Three: Where did you get that idea?

The use of enormous datasets for the production of Large Language Models (LLMs) is, as mentioned in the previous challenge, necessary for producing good-quality models. ChatGPT and other state-of-the-art LLMs, for example, were trained on Common Crawl, which contains 450 TB of data – essentially the entirety of the public internet – to reach the level of quality they have currently achieved. However, this has come at a cost to holders of intellectual property: OpenAI essentially crawled every newspaper and paywalled media organisation, scraping their sites for information and insights to feed into its training process without paying any of these organisations for the privilege. This means that potentially every time ChatGPT provides an answer, it may be plagiarising, or using analysis taken from, another entity’s Intellectual Property (IP). The New York Times, for example, is currently suing OpenAI for training its LLM on millions of its articles, enabling it to compete with the NYT as a source of reliable information.

This is likely to grow as an issue as artists, journalists and organisations find their Intellectual Property used to create imitations of their own work. The use of others’ IP is not necessarily always a bad thing; what matters is that, if it is used, the IP holders are correctly compensated and appropriate licensing is acquired.

Opportunity: IP-compliant credentials

Verifiable Credentials which showcase legitimate use of data from IP holders may present a great opportunity for both holders of intellectual property and AI model developers. Just as websites can decide whether they are happy for a search engine to scrape their site for information, it should be possible for website owners to grant permission for their IP data to be used for training LLMs. This would allow AI models to show that they have a legal right to produce inferences drawing on high-quality sources, as well as providing a way to monetise the use of IP in services which ‘generalise’ information without citation. If verifiable credentials are used across the information supply chain, it would also allow IP holders to check that an AI-produced inference was produced by a compliant LLM which had permission to use the intellectual property.

In practice:

  1. AI model trainers purchase the right to use an entity’s IP 
  2. The entity supplies the model trainer with the required dataset and issues a Verifiable Credential confirming that the organisation has the right to use their data
  3. The AI model would then hold this credential, which could be showcased both at the deployment level and potentially at the inference level
  4. Users or IP holders could then double-check that a model’s output or another entity’s output has used IP-compliant data
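The final check in this flow – a user or IP holder confirming that a model’s training sources were licensed – could look roughly like this. All DIDs and the credential schema are hypothetical.

```python
# Licence credentials issued by IP holders to the model producer
# (a hypothetical "did:cheqd:example:model-co") when rights were purchased.
licence_credentials = [
    {"issuer": "did:cheqd:example:nyt",
     "licensee": "did:cheqd:example:model-co",
     "scope": "llm-training"},
    {"issuer": "did:cheqd:example:stock-photos",
     "licensee": "did:cheqd:example:model-co",
     "scope": "llm-training"},
]

def unlicensed_sources(training_sources: list[str],
                       credentials: list[dict]) -> list[str]:
    """Return training sources with no matching training-scope licence."""
    licensed = {c["issuer"] for c in credentials if c["scope"] == "llm-training"}
    return [s for s in training_sources if s not in licensed]

sources = ["did:cheqd:example:nyt", "did:cheqd:example:unlicensed-blog"]
print(unlicensed_sources(sources, licence_credentials))
# ['did:cheqd:example:unlicensed-blog']
```

An empty result means every declared source carried a licence credential; anything returned flags a potential IP-compliance gap.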

Challenges and Opportunities at the Training Level

Challenge Four: Coordinating the Compute

Within the emerging world of Decentralised AI, Decentralised Computing and Decentralised Physical Infrastructure Networks (DePIN), in which different devices must interact with each other to form a network, coordination is of extreme importance. The ‘Bitter Lesson’, as mentioned above, shows that an AI model is only as good as the amount and quality of its data and the amount and quality of the GPUs used for training. In the context of networks training AI, such as Bittensor, mistakes made on one machine may affect the entire LLM, massively increase training time, or reduce the quality of inference. This means that before setting up any training cluster capable of competing with a centralised service, picking your GPUs is key. For example, machine learning training is much faster when the hardware doing the computing is geographically close together – this reduces the communication and latency overhead that can massively bottleneck training.

Reputation and information here matter, and given the large number of different protocols, subnets and compute hardware which can potentially get involved in the process of training artificial intelligence (or forming a network of decentralised compute), a way of identifying different players in an interoperable way becomes necessary to keep things coordinated. 

Opportunity: Know Your GPU

Ensuring a verifiable reputation for network participants in computing clusters is an excellent opportunity to increase the efficacy of any Decentralised Compute network requiring a large degree of coordination. Verifiable Credentials could be issued by actors within the network for specifications such as geographical location, RAM and memory bandwidth. Additionally, credentials could be issued for good performance, such as excellent uptime, or to label compliance with standards such as SOC 2, ISO standards and HIPAA.

An advantage of this over a more protocol-specific approach is that, once verified, a GPU can hold this credential indefinitely and use it across multiple platforms, allowing its reputation to transfer over to other protocols and creating a more efficient marketplace for owners of valuable GPU processors.

In practice:

  1. A GPU achieves 100% uptime for 6 months
  2. It receives a verifiable credential signed by the DAO running the Decentralised Computing network it is part of
  3. The DAO has an important training project to run and requires reliable GPUs with no downtime
  4. The GPU owner can showcase their 100% uptime verifiable credential in order to access more lucrative training projects
  5. If the GPU owner is no longer achieving 100% uptime, their verifiable credential can be revoked by the original issuer, preventing them from showing a false reputation
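The flow above can be sketched in a few lines of Python. This is a minimal illustration only: the HMAC key stands in for the DAO’s real DID key pair, and the in-memory revocation set stands in for an on-ledger status list; all names are hypothetical.

```python
import hashlib
import hmac
import json

DAO_KEY = b"dao-secret-signing-key"  # assumption: mock for the DAO's DID key
REVOCATION_LIST = set()              # assumption: stands in for an on-ledger status list

def sign(claims: dict, key: bytes) -> str:
    """Mock signature over canonicalised claims."""
    body = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def issue_uptime_credential(gpu_did: str, uptime_pct: float) -> dict:
    # Steps 1-2: the DAO attests to the GPU's uptime record
    claims = {"issuer": "did:example:dao", "subject": gpu_did,
              "uptime_pct": uptime_pct}
    return {"claims": claims, "proof": sign(claims, DAO_KEY)}

def verify(credential: dict) -> bool:
    # Step 5: a revoked credential fails verification
    if credential["proof"] in REVOCATION_LIST:
        return False
    return hmac.compare_digest(credential["proof"],
                               sign(credential["claims"], DAO_KEY))

vc = issue_uptime_credential("did:example:gpu-42", 100.0)
assert verify(vc)                 # step 4: the owner can showcase the record
REVOCATION_LIST.add(vc["proof"])  # uptime slips, so the DAO revokes
assert not verify(vc)
```

A real deployment would use asymmetric DID keys so that any verifier can check the proof without holding the issuer’s secret; HMAC is used here only to keep the sketch dependency-free.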

Challenges and Opportunities at the Inference Level

Challenge Five: Is seeing still believing?

The quality of generative AI image and video generation has improved exponentially in the past few years. Software such as MidJourney is now capable of creating images almost indistinguishable from reality, and OpenAI’s Sora is capable of creating video clips with realistic physics and photorealism. As the technology continues to improve, it will become increasingly difficult to distinguish the fake from the real. This can, in some regards, be considered old news – Photoshop has allowed this kind of misinformation to spread for some time – but what is different with the advent of generative AI is the sheer scale of the potential images generated and the potential to create tailored misinformation for any one person.

With a small amount of data, people’s voices or faces can be replicated, and soon the tell-tale signs of AI content will no longer be visible to the human eye. This has huge implications for society and the future of democracy, as well as creating new attack vectors for fraud. In this ‘Year of Elections’, a year in which more people are going to the polling booth than in any other in history, this is a problem that needs solving as soon as possible. The recent elections in Slovakia, for example, may have been swayed by an audio deepfake of a leading candidate appearing to discuss rigging the election, which was circulated just two days before the country went to the vote.

Image and video provenance needs to improve if these media forms are to retain any real societal trust. Just as a blockchain notes every step in a Bitcoin’s journey, so too should a record of edits exist for anyone to be able to inspect. Blockchains, however, do not usually have the privacy features necessary in a world in which journalists are killed every day for reporting the truth, and neither do most have the storage requirements to hold the provenance of every digital image created. 

Opportunity: Content Credentials

Verifiable credentials have already found a use here, being the technological basis behind the C2PA, a new standards alliance of companies including Adobe, Microsoft, Arm and Intel. These new industry standards aim to improve image and video provenance with a recorded supply chain going all the way back to the camera which took the picture. Over time, images without a C2PA credential will likely be far less trusted, as their provenance will be missing. Currently, the C2PA standards define ‘what’ should be included with any picture to ensure trusted provenance, but the ‘how’ is still being worked out. VCs are the perfect tool for this due to their support for selective disclosure and privacy, self-storage of data and tamper-protection.

In practice:

  1. A camera takes a picture. The camera itself has tamper-proof hardware on it that asserts that the picture was taken by this specific piece of hardware, at a specific location and time, along with other important information
  2. The hardware ‘signs’ a C2PA credential using its DID, attesting to the submitted data
  3. As the picture is uploaded to editing software, such as Adobe Photoshop, any changes made to it are recorded as additional credentials
  4. The use of AI generation can also be recorded here, so it becomes possible to tell what percentage of an image is real vs. generated – especially important at its point of origin
  5. Images or videos then uploaded to the internet would come with a content credential which consumers can check to see their known provenance
  6. Images without these content credentials could then be viewed with greater scepticism as they are missing the ‘chain of custody’ showing the image’s journey from creation to dissemination.
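The ‘chain of custody’ above can be sketched as a hash-linked list of provenance records, where each editing step references the hash of the previous one, so any tampering breaks the chain. The field names are illustrative only and do not follow the actual C2PA manifest format.

```python
import hashlib
import json

def record(actor: str, action: str, prev_hash: str) -> dict:
    """One provenance entry: who did what, linked to the previous entry."""
    entry = {"actor": actor, "action": action, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

def verify_chain(chain: list) -> bool:
    """Walk the chain, re-hashing each entry and checking the links."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

# Steps 1-3: capture, then an edit, each appending a linked record
chain = [record("did:example:camera", "capture", "genesis")]
chain.append(record("did:example:photoshop", "crop", chain[-1]["hash"]))
assert verify_chain(chain)

# Step 6: altering any earlier step is detectable
chain[0]["action"] = "ai-generate"
assert not verify_chain(chain)
```

In the real standard each entry would additionally be signed with the actor’s DID key, so a consumer can check not only that the chain is intact but also who vouched for each step.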

Challenge Six: On the Internet, no-one knows you’re a dog (or a bot)

As more of our lives happens online, it becomes important to know that we are speaking to a human being. The scale of potential misinformation and fraud grows exponentially if a person’s face, voice or writing style can easily be imitated with just a few images or recordings. Moreover, as we take part in the cultural conversation on X/Twitter or other platforms, how do we know that the deluge of opinions we see are from real people? A blue tick costing $8 a month is not enough to trust that you are speaking to a human – it just proves there is an attached credit/debit card, which could be stolen! Within the Web3 space, airdrop farmers often create botnets or use multiple wallets to game the system in Sybil attacks, allowing those with the time and know-how to take outsized rewards while locking out potential users and community members from the start. Distributed Denial of Service (DDoS) attacks make it increasingly important for any website to ensure those entering their domain are real humans, condemning us all to a life spent identifying bridges and fire hydrants in CAPTCHAs (something many AIs can now do as well as human beings).

Opportunity: Proof-of-personhood through Verifiable Credentials

Proving you are a real person is a huge use case in a world where CAPTCHAs are no longer effective and telling who is a bot is increasingly difficult. A ‘weak’ approach to Proof of Personhood might involve sharing credentials signed by multiple people attesting that they have met someone in real life, connecting a long-running social account such as Spotify (how many AIs are listening to music and podcasts?), or tracking the way a mouse moves across a screen. These are, of course, gameable, and therefore only useful where a low level of confidence in personhood, paired with an easy-to-use UX, is acceptable.

For a higher degree of confidence, a ‘strong’ approach to proof-of-personhood is needed. Here Verifiable Credentials really come to the fore, enabling easy-to-use Reusable KYC that quickly allows users to prove they have been confirmed as a human being. Although the user experience of first receiving this credential can be a little tiresome, once a user holds it, it can be used repeatedly with very little work on the user’s end. To ensure ongoing Proof of Humanity each time the credential is used, a biometric template could be placed in the credential itself and checked locally, as is currently possible with banking apps on our phones.

In practice:

  1. User or ‘Holder’ submits their KYC information to an ‘Issuer’.
  2. The Issuer conducts a proper KYC check, and issues verifiable credentials attesting to the information in this KYC (e.g. “This is a real person”) and ‘signs’ with their Decentralised Identifier.
  3. A ‘Verifier’ requests that the Holder prove their personhood.
  4. The Holder shares their ‘Proof of Personhood’, for example, a reusable KYC attestation, with the Verifier. This would be a zero-knowledge proof, with no need for the gatekeeper to see a full KYC, ensuring trust within the system without compromising privacy.
  5. The Verifier checks the Issuer’s Decentralised Identifier against the publicly available record, ensuring they are of good reputation – and once confirmed, lets the holder through.
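The selective-disclosure idea in step 4 can be sketched as follows: the issuer signs salted hash commitments of each KYC attribute, so the holder can reveal “this is a real person” alone while name and date of birth stay in the wallet. This is only the shape of the flow – real deployments would use a scheme such as BBS+ or SD-JWT, and asymmetric DID keys rather than the mock HMAC key here; all names are hypothetical.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = b"issuer-key"  # assumption: mock for the Issuer's DID key

def commit(value: str, salt: str) -> str:
    """Salted hash commitment to one attribute value."""
    return hashlib.sha256((salt + value).encode()).hexdigest()

def issue(kyc: dict) -> dict:
    # Step 2: the Issuer signs commitments to every KYC attribute
    salts = {k: secrets.token_hex(8) for k in kyc}
    commitments = {k: commit(str(v), salts[k]) for k, v in kyc.items()}
    proof = hmac.new(ISSUER_KEY,
                     json.dumps(commitments, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest()
    return {"commitments": commitments, "salts": salts, "proof": proof}

def present(credential: dict, attribute: str, value: str) -> dict:
    # Step 4: the Holder opens exactly one commitment
    return {"attribute": attribute, "value": value,
            "salt": credential["salts"][attribute],
            "commitments": credential["commitments"],
            "proof": credential["proof"]}

def verify(p: dict) -> bool:
    # Step 5: check the Issuer's signature, then the opened commitment
    expected = hmac.new(ISSUER_KEY,
                        json.dumps(p["commitments"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, p["proof"]):
        return False
    return p["commitments"][p["attribute"]] == commit(p["value"], p["salt"])

vc = issue({"is_person": "true", "name": "Alice", "dob": "1990-01-01"})
assert verify(present(vc, "is_person", "true"))   # name and dob never leave the wallet
assert not verify(present(vc, "is_person", "false"))
```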

Challenge Seven: No Bots Allowed

AI agents are one of the most powerful potential tools that consumers and enterprises may soon have access to. AI agent systems such as AutoGPT will be increasingly able to complete complex tasks for users, such as researching and writing essays, paying bills, negotiating deals on users’ behalf, or trading stocks and cryptocurrencies. This could be a wonder for productivity, as agents may be able to complete tasks with minimal supervision at a fraction of the cost of an employee, but it quickly rubs up against the Proof of Personhood above – how can an AI agent prove that it is working on behalf of someone and should be allowed through despite not having a Proof of Personhood? We can already see this working somewhat with trading bots using exchange APIs, but this is still very time-consuming for the users running the agents – each exchange or website someone uses would require a new API to be set up and maintained – not ideal when the entire idea of an agent is to save its user time by working on their behalf.

Opportunity: Proof of Permission

To gain access through the various gates set up to ensure only humans and their agents can pass, Verifiable Credentials could be used to show that an agent has permission to work on its user’s behalf.

In practice:

  1. The user of the AI agent creates their own Decentralised Identifier which is written onto a public network for reference
  2. The user connects their relevant account with their DID for later checking by gatekeepers 
  3. They then act as the Issuer, issuing verifiable credentials to their AI agent, stating the actions the agent has permission to perform, for example, trading on a user’s exchange accounts, negotiating on behalf of a user, or hiring other agents to perform complex tasks
  4. Gatekeepers to these services act as Verifiers, requesting the appropriate Verifiable Credentials from the agent
  5. They then check these against the provided Decentralised Identifier connected to the user’s account and the publicly available DID written on the public network to ensure the agent does, in fact, have permission to act on the user’s behalf.
  6. The agent can use this Verifiable Credential to gain access to the user’s permissions on connected accounts
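The delegation above can be sketched as a scoped permission credential: the user signs a list of allowed actions for their agent, and the gatekeeper checks both the signature and the requested action against that scope. Key handling is mocked with HMAC and every identifier is illustrative.

```python
import hashlib
import hmac
import json

USER_KEY = b"user-key"  # assumption: mock for the user's DID key

def issue_permission(agent_did: str, scopes: list) -> dict:
    # Step 3: the user issues a credential listing the agent's permissions
    claims = {"issuer": "did:example:user", "subject": agent_did,
              "scopes": scopes}
    proof = hmac.new(USER_KEY, json.dumps(claims, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def gatekeeper_allows(credential: dict, action: str) -> bool:
    # Steps 4-5: verify the signature, then check the requested action
    expected = hmac.new(USER_KEY,
                        json.dumps(credential["claims"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["proof"]):
        return False
    return action in credential["claims"]["scopes"]

vc = issue_permission("did:example:agent", ["trade:spot", "negotiate"])
assert gatekeeper_allows(vc, "trade:spot")
assert not gatekeeper_allows(vc, "withdraw")  # never delegated
```

The key design point is that the scope travels inside the signed credential, so the gatekeeper never needs a per-site API key – it only needs to resolve the user’s public DID.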

Challenge Eight: Not all agents are created equally

As AI agents become more ubiquitous, it is likely that not only will agents need to interact with gatekeepers, but also with each other. One agent may be good at research, whereas another may be better at data analysis. Just as humans outsource their work to other companies and people, so too will AI agents in pursuit of a goal set by their users, leading to a rich ecosystem of agents all interacting, negotiating and working with each other. This will increase the importance of ‘reputation’ and the ‘brand’ of an AI agent, just as a company today must maintain a good brand with satisfied customers, as users will want to keep counterparty-risk to a minimum – a poor-performing or malicious agent could create issues further down the Information Supply Chain. 

Opportunity: Know Your AI

Just as a human or company can gain a reputation through ongoing business relationships and reviews, so too could AI agents issue, or be issued, Verifiable Credentials attesting to positive interactions with other bots. Rather than needing to cross-reference with third-party websites (probably then requiring their own AI agent to scrape for data, analyse and evaluate), Verifiable Credentials could be held by agents and shown before any interaction to prove a good upstanding reputation. 

In practice:

  1. AI Agent A acts as an ‘Issuer’, first writing their DID to a public network for reference
  2. After a positive interaction with AI Agent B, Agent A issues B with a Verifiable Credential, ‘signed’ with their DID, stating that the work provided was satisfactory.
  3. AI Agent C then wishes to contract Agent B for some work. Before doing so, they request multiple attestations to ensure Agent B is trustworthy.
  4. Agent B sends Verifiable Credentials from other Agents, including Agent A, attesting to their trustworthiness.
  5. Agent C checks Agent A’s DID on the public network, and decides that A is a reputable issuer, and thus that B is safe to do business with.
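The reputation check above can be sketched by counting attestations from issuers the verifying agent already trusts. Keys are mocked per agent; a real system would resolve each issuer DID on a public network. All identifiers are hypothetical.

```python
import hashlib
import hmac
import json

# Assumption: mock key material that a real system would resolve via DIDs
KEYS = {"did:example:agent-a": b"key-a", "did:example:agent-x": b"key-x"}

def attest(issuer_did: str, subject_did: str) -> dict:
    # Step 2: one agent attests to a satisfactory interaction with another
    claims = {"issuer": issuer_did, "subject": subject_did,
              "rating": "satisfactory"}
    proof = hmac.new(KEYS[issuer_did],
                     json.dumps(claims, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def trusted(attestation: dict, trusted_issuers: set) -> bool:
    # Step 5: only count attestations from reputable, verified issuers
    issuer = attestation["claims"]["issuer"]
    if issuer not in trusted_issuers:
        return False
    expected = hmac.new(KEYS[issuer],
                        json.dumps(attestation["claims"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["proof"])

attestations = [attest("did:example:agent-a", "did:example:agent-b"),
                attest("did:example:agent-x", "did:example:agent-b")]
# Agent C trusts only agent A as an issuer, so one attestation counts:
score = sum(trusted(a, {"did:example:agent-a"}) for a in attestations)
assert score == 1
```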

Challenges and Opportunities at the Deployment Level

Challenge Nine: Deployment Decisions

It is likely over the coming years that we will see a ‘Cambrian Explosion’ of different AI models with a multitude of different use cases, training methods and datasets. As mentioned in many of the solutions above, Verifiable Credentials will be needed all along the Information Supply Chain – from data creation, collection and collation, to training and inference – to create verifiable, easy-to-check ways of managing reputations and building trust. Procurement officers for companies, and individuals looking to use AI models for whatever purpose, will need to ensure that they are using good-quality, compliant models which answer their needs correctly.

Opportunity: It’s Verifiable Credentials all the way down…

As each step in the Information Supply chain can have its own Verifiable Credentials, the final model can also hold those credentials before being deployed. This means that before purchasing or using a model, any decision-makers looking to choose a specific model can look through all the relevant credentials which may affect the final product. E.g. Is the product compliant with local regulations? Is it legally using the intellectual property of a specific company or person? Is it trained on high-quality, well-trusted data sets? Was it trained using high-quality GPUs? Do other users/AI agents review the model positively? 

All of these are crucial to the creation of a high-quality model and therefore are also of great importance to anyone choosing which model to deploy. 

In practice:

  1. An AI model is trained on EU ESG-compliant data from licensed sources (such as the New York Times)
  2. Before deployment, the model creator requests the issuance of verifiable credentials from the ESG standards body and the New York Times
  3. The issuers check that the model creator has in fact received a license to use the New York Times data, and that all the datasets on which it is trained are compliant and already signed by them
  4. They then issue these credentials to the model owner, who can display them or allow the model to share them when requested
  5. Any user wishing to verify that the model is in compliance can then request these verifiable credentials from the model, cross-referencing on an open network that the Decentralised Identifiers match what is on the verifiable credentials  
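The final check in step 5 can be sketched as verifying each of a model’s credentials against a public DID registry. The registry, issuers and claims below are illustrative stand-ins, and the HMAC key mocks what would really be an asymmetric DID key resolved on an open network.

```python
import hashlib
import hmac
import json

# Assumption: mock registry mapping issuer DIDs to verification key material
REGISTRY = {"did:example:esg-body": b"esg-key",
            "did:example:nyt": b"nyt-key"}

def issue(issuer_did: str, claim: str, model_did: str) -> dict:
    # Steps 2-4: a standards body or licensor issues a credential to the model
    claims = {"issuer": issuer_did, "subject": model_did, "claim": claim}
    proof = hmac.new(REGISTRY[issuer_did],
                     json.dumps(claims, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_all(credentials: list) -> bool:
    # Step 5: cross-reference every credential against the registry
    for vc in credentials:
        key = REGISTRY.get(vc["claims"]["issuer"])
        if key is None:
            return False  # unknown issuer DID
        expected = hmac.new(key,
                            json.dumps(vc["claims"], sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, vc["proof"]):
            return False
    return True

model = "did:example:model-1"
creds = [issue("did:example:esg-body", "eu-esg-compliant-data", model),
         issue("did:example:nyt", "licensed-training-data", model)]
assert verify_all(creds)
```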


The exploration of Verifiable AI in action uncovers the intertwined challenges and opportunities at the heart of AI development, from data collection and model training to deployment and inference. With an emphasis on the critical need for data integrity, regulatory compliance and the ethical use of intellectual property, the article underscores the potential of Verifiable Credentials and Decentralised Identifiers to embed trust and transparency across the AI lifecycle. Highlighting solutions for ensuring the authenticity of AI-generated content and distinguishing human from automated interactions, the discussion points towards a future where AI systems are both powerful and principled, offering a blueprint for builders to create trustworthy and transparent AI technologies.

Contact Us

Are you a team member or community member of an AI project that you think could use Verifiable Credentials and Decentralised Identifiers? We are always very happy to have a conversation, learn about your pain points and see how we can work together to create a more trust-filled world. Contact us or get your favourite team to message us at [email protected]!