
Verifiable AI in Action: Challenges and Opportunities

This is the second article in a series of five.

In our previous article, we explored the critical need for Verifiable AI in a world where artificial intelligence use is ubiquitous. We introduced the concept of the Information Supply Chain, in which data is transformed into working AI models, and showed how Verifiable Credentials (VCs) can embed trust at almost every stage within this chain, ensuring data provenance and integrity, and creating transparent frameworks across the entire AI lifecycle.

In this second article, we delve further into the challenges previously touched upon and the opportunities they present for builders looking to be first movers in creating trustworthy, transparent systems that can be relied upon both to hold and to produce Trusted Data.

Recap: The Information Supply Chain

The production of any useful artificial intelligence model output is a complex, multi-step process involving a large number of dependencies across what can broadly be divided into four stages.

The first stage is collecting and collating the data used to develop the model, as an AI model is only as good as the data it is trained on. AI models require astronomical amounts of data to work successfully, but that data must be of good quality: AI-generated content can introduce hallucinations, and many datasets are inherently biased. Ensuring that a model is trained on high-quality data is essential to producing a good model.

The second stage is the actual training process, in which models use this data to discover patterns and begin ‘learning’ through various machine learning algorithms. Although these fall broadly into three categories – supervised learning, unsupervised learning and reinforcement learning – the specific algorithms used may be proprietary information that builders do not wish to share. This stage is the most computationally heavy, with large clusters of GPUs or TPUs running countless hours of training before the model can produce any kind of useful output.

The third stage is known as Inference: the actual use of the model for its intended purpose. The output from a ChatGPT prompt is an example of inference. At this stage, trust becomes incredibly relevant, as this is where models begin to interact directly with humans – whether through AI agents performing tasks previously done by people, or the creation and consumption of AI-generated images and text.

The fourth stage is actual deployment. Ensuring that a model is high quality, built on good datasets, regulatorily compliant and well reputed is key for anyone looking to deploy it – making the aggregated reputation of a model, built up through its datasets, training methods and ongoing reviews, incredibly important to any decision-maker.

Each stage along the Information Supply Chain requires verification in multiple ways, almost all of which can leverage Verifiable Credentials and Decentralised Identifiers to build Trusted Data packages, baking trust into the process in a chain-agnostic, privacy-preserving, and fast-moving way.

Challenges and Opportunities at the Data Level

Challenge One: Size matters, but so does quality

Known as the ‘Bitter Lesson’ within Artificial Intelligence circles, the observation is that the best way to get more out of an AI model is simply to use more computing power and more data. Hand-crafting an AI to think the way a human does is not enough to create an effective tool. With more data and more computing power, AI models are far better able to learn on their own, often discovering patterns unseen by humans. The more data, computing power and time an AI has to learn, the better it will be. However, that data needs to be quality data: incomplete datasets may reflect societal biases or miss important nuances, rendering an AI model useless.

The implications at the data level are clear – creating an effective model requires incredibly large, high-quality datasets, and these are not easy to come by: there are only so many available. Wikipedia, for example, amounts to only around 20 GB of data – not enough on its own to train an effective AI model. This makes it likely that, in future, a sign of a quality AI model will be the datasets on which it has been trained; without them, the model may not be considered of any value. Verifying which datasets went into an AI model may therefore become key to ensuring that any model used by a person or enterprise is useful at all.

Opportunity: Verified Datasets

Verifiable Credentials have a perfect use-case here in ensuring that any model can clearly be labelled with the datasets used, without having to trust the producers of the model themselves, or waste time cross-referencing directly with the dataset producers. In practical terms, this would work as follows:

  1. The AI model producer would download or license and use a dataset in the training of their AI model
  2. Along with the dataset itself, its owners would also ‘issue’ a Verifiable Credential, ‘signed’ with their Decentralised Identifier
  3. The AI model itself would essentially be the ‘holder’ of the Verifiable Credential within the Trust Triangle, able to showcase when requested by a ‘verifier’ which datasets have been used to train it.

This allows for a quick, trusted way to prove that a model has been trained on the required datasets.
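The steps above can be sketched in code. Everything here is illustrative: the DIDs and claims are invented, and an HMAC over a shared key stands in for the asymmetric, DID-linked signature (e.g. Ed25519) a real Verifiable Credential would carry.

```python
import hashlib
import hmac
import json

# Toy key registry: in a real system the verifier would resolve the
# issuer's DID document to find a public verification key, rather than
# sharing a secret.
ISSUER_KEYS = {"did:example:dataset-owner": b"issuer-secret-key"}

def issue_credential(issuer_did, subject, claims):
    # The issuer 'signs' a canonical serialisation of the credential payload.
    payload = {"issuer": issuer_did, "subject": subject, "claims": claims}
    message = json.dumps(payload, sort_keys=True).encode()
    proof = hmac.new(ISSUER_KEYS[issuer_did], message, hashlib.sha256).hexdigest()
    return {**payload, "proof": proof}

def verify_credential(credential):
    # A verifier recomputes the proof; any tampering with the payload breaks it.
    payload = {k: v for k, v in credential.items() if k != "proof"}
    message = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEYS[credential["issuer"]], message,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

# The dataset owner issues a credential naming the model as subject;
# the model then 'holds' it and can present it to any verifier.
vc = issue_credential(
    "did:example:dataset-owner",
    subject="did:example:model-v1",
    claims={"trainedOn": "example-dataset-2024", "licence": "research-only"},
)
```

The key property is that the verifier never has to trust the model producer's word: the proof only checks out against the dataset owner's key.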

Challenge Two: You must comply

As AI models become more prevalent, the scrutiny their creators and users face will increase. Compliance with existing rules around Environmental, Social and Governance (ESG) standards and regulations will become an important label for any procurement officer to check before choosing a model for their company. For example, many early AI predictive policing models suffered from large biases due to incomplete datasets, shaped by discriminatory policing practices that were then fed into the data on which the models were trained. This may be an extreme example, but the same dynamic can play out in other ways – for example, when an AI model is used to screen job candidates or decide who is eligible for a government benefits scheme. Even data privacy must be considered by anyone using an AI model.

AI models are often black boxes, where the exact ‘thinking process’ that leads to a decision is not clear, making it all the more important that the data on which a model is trained is tracked and verified as far as possible. Although the sector may currently be in a ‘wild west’ period, incoming EU regulation will change this situation very quickly.

Opportunity: Verifiably Compliant Datasets

The EU’s incoming ‘AI Act’ specifically refers to the importance of the “high quality of the datasets feeding the system to minimise risks and discriminatory outcomes” when it comes to ‘High Risk’ uses of AI – areas which can have a significant impact on someone’s life such as education, public services or law enforcement. Verifiable Credentials can be used here by standards bodies such as governmental bodies, companies and even an open source group, such as a DAO, to quickly showcase whether a training dataset is compliant.

In practice:

  1. A dataset is inspected by a regulatory, or industry standards body, to verify that it is following all required regulations and/or standards.
  2. The regulatory or standards body issues this dataset with a Verifiable Credential signed with the body’s Decentralised Identifier
  3. This Verifiable Credential is then ‘held’ by the dataset
  4. AI model trainers can check that the dataset they are looking to train their model on is fully compliant for specific jurisdictions or industries.
  5. When this dataset is then used by a model, a Verifiable Credential can then be issued by the standards body to the model, showing that it is trained with a regulatory-compliant dataset
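One way to picture the linkage in steps 3–5 is as credentials chained by hash: the model's credential commits to the exact compliance credential held by the dataset. All names here are hypothetical, and signature checks are elided for brevity.

```python
import hashlib
import json

def credential_hash(credential):
    # Canonical hash of a credential, used to link credentials together.
    return hashlib.sha256(json.dumps(credential, sort_keys=True).encode()).hexdigest()

# Steps 1-3: the standards body issues a compliance credential to the dataset.
dataset_vc = {
    "issuer": "did:example:eu-standards-body",
    "subject": "dataset:example-corpus",
    "claims": {"compliantWith": ["EU AI Act, Art. 10"], "jurisdiction": "EU"},
}

# Step 5: when the dataset is used, the credential issued to the model
# links back to the dataset's compliance credential by hash.
model_vc = {
    "issuer": "did:example:eu-standards-body",
    "subject": "did:example:model-v1",
    "claims": {"trainedOnCompliantDataset": credential_hash(dataset_vc)},
}

def dataset_link_valid(model_cred, dataset_cred):
    # A verifier recomputes the dataset credential's hash and checks the link,
    # so a compliant-looking model cannot quietly swap in a different dataset.
    return (model_cred["claims"]["trainedOnCompliantDataset"]
            == credential_hash(dataset_cred))
```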

Challenge Three: Where did you get that idea?

The use of enormous datasets for the production of Large Language Models (LLMs) is, as mentioned in the previous challenge, necessary for the production of good-quality models. ChatGPT and other state-of-the-art Large Language Models, for example, were trained on Common Crawl, which contains 450 TB of data – essentially the entirety of the internet – to get to the level of quality they have currently achieved. However, this has come at a cost to holders of intellectual property, as OpenAI essentially crawled through every newspaper and paywalled media organisation, scraping their sites for information and insights to feed into its training process without paying any of these organisations for the privilege. This means that, potentially, every time ChatGPT provides an answer, it may be plagiarising, or using analysis taken from another entity’s Intellectual Property (IP). The New York Times, for example, is currently suing OpenAI for training its LLM on millions of its articles, enabling it to compete with the NYT as a source of reliable information.

This is likely to grow as an issue, as artists, journalists and organisations find their Intellectual Property used to create imitated versions of their own work. It is not necessarily always a bad thing that others’ IP is used, but what is important is that, if it is used, the IP holders are correctly compensated and appropriate licensing is acquired.

Opportunity: IP-compliant credentials

Verifiable Credentials which showcase legitimate use of data from IP holders may present a great opportunity for both holders of intellectual property and AI model developers. Just as websites can decide if they are happy for a search engine to scrape their site for information, it should be possible for website owners to provide permission for their IP data to be used for training LLMs. This would allow AI models to showcase that they have a legal right to produce inferences that make use of high-quality sources, as well as produce a way to monetize the use of IP in services which ‘generalize’ information without citation. If verifiable credentials are used across the information supply chain, it would also allow IP holders to check that an AI-produced inference was produced by a compliant LLM which had permission to use intellectual property.

In practice:

  1. AI model trainers purchase the right to use an entity’s IP 
  2. The entity supplies the model trainer with the required dataset and issues a Verifiable Credential confirming that the organisation has the right to use their data
  3. The AI model would then hold this credential, which could be showcased both at the deployment level and potentially at the inference level
  4. Users or IP holders could then double-check that a model’s output or another entity’s output has used IP-compliant data
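What the check in step 4 might look like: a hypothetical licence credential carries the purposes and validity window the IP holder granted, and a verifier tests an intended use against them. Signature verification is omitted here, and all names and dates are invented.

```python
from datetime import date

# Hypothetical licence credential issued by an IP holder to a model trainer.
licence_vc = {
    "issuer": "did:example:ip-holder",
    "subject": "did:example:model-trainer",
    "claims": {
        "licensedPurposes": ["llm-training"],
        "validFrom": "2024-01-01",
        "validUntil": "2025-01-01",
    },
}

def licence_permits(credential, purpose, on_date):
    # The verifier checks both the validity window and the granted purposes.
    claims = credential["claims"]
    in_window = (date.fromisoformat(claims["validFrom"])
                 <= on_date
                 <= date.fromisoformat(claims["validUntil"]))
    return in_window and purpose in claims["licensedPurposes"]
```

A purpose outside the licence (say, reselling inferences) or a date outside the window would fail the check, giving IP holders a machine-checkable way to audit usage.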

Challenges and Opportunities at the Training Level

Challenge Four: Coordinating the Compute

Within the emerging world of Decentralised AI, Decentralised Computing and Decentralised Physical Infrastructure Networks (DePIN), in which different devices must interact with each other to form a network, coordination is extremely important. The ‘Bitter Lesson’, as mentioned above, shows that an AI model is only as good as the amount and quality of its data, and of the GPUs it uses for training. In the context of networks training AI, such as Bittensor, mistakes made on one machine may affect the entire LLM, massively increase training time, or reduce the quality of inference. This means that before setting up any training cluster capable of competing with a centralised service, picking your GPUs is key. For example, machine learning training is much faster when the hardware doing the computing is geographically close together – this reduces the communication and latency overhead that can massively bottleneck training.

Reputation and information here matter, and given the large number of different protocols, subnets and compute hardware which can potentially get involved in the process of training artificial intelligence (or forming a network of decentralised compute), a way of identifying different players in an interoperable way becomes necessary to keep things coordinated. 

Opportunity: Know Your GPU

Ensuring a verifiable reputation for network participants in computing clusters is an excellent opportunity to increase the efficacy of any Decentralised Compute network that requires a large degree of coordination. Verifiable Credentials can be issued by actors within the network for specifications such as geographical location, RAM and memory bandwidth. Additionally, credentials could be issued for good performance – such as excellent uptime – or to label compliance with standards such as SOC 2, ISO standards and HIPAA.

An advantage of this over a more protocol-specific approach is that once verified, a GPU can hold this credential indefinitely and can be used across multiple platforms, allowing its reputation to transfer over to other protocols, creating a more efficient marketplace for owners of valuable GPU processors.

In practice:

  1. A GPU achieves 100% uptime for 6 months
  2. It receives a verifiable credential signed by the DAO running the Decentralized Computing network it is part of
  3. The DAO has an important training project to run and requires reliable GPUs with no downtime
  4. The GPU owner can showcase their 100% uptime verifiable credential in order to access more lucrative training projects
  5. If the GPU is no longer achieving 100% uptime, its verifiable credential can be revoked by the original issuer, preventing the owner from showing a false reputation
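The revocation in step 5 can be sketched as a status registry the issuer controls: verifiers check a credential's id against it before trusting the claim. The registry here is a plain set purely for illustration; real deployments would publish a status list that verifiers can resolve.

```python
# Toy revocation registry: the issuing DAO publishes the ids of revoked
# credentials; verifiers check membership before trusting an uptime claim.
revoked_ids = set()  # stand-in for a published, resolvable status list

uptime_vc = {
    "id": "urn:uuid:gpu-uptime-0001",          # invented credential id
    "issuer": "did:example:compute-dao",
    "subject": "did:example:gpu-42",
    "claims": {"uptime": "100%", "period": "2024-H1"},
}

def is_currently_valid(credential, revocation_registry):
    # Signature checks aside, a credential is only trustworthy while
    # its id is absent from the issuer's revocation registry.
    return credential["id"] not in revocation_registry

def revoke(credential, revocation_registry):
    # Step 5: if uptime drops, the DAO revokes the credential by id.
    revocation_registry.add(credential["id"])
```

Because revocation is keyed on the credential id, the GPU owner cannot keep presenting a stale reputation once the issuer withdraws it.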

Challenges and Opportunities at the Inference Level

Challenge Five: Is seeing still believing?

The quality of generative AI image and video generation has improved exponentially in the past few years. Software such as Midjourney is now capable of creating images almost indistinguishable from reality, and OpenAI’s Sora can create video clips with realistic physics and photorealism. As the technology continues to improve, it will become increasingly difficult to distinguish fake from real. This can, in some regards, be considered old news – Photoshop has allowed this kind of misinformation to spread for some time – but what is different with the advent of generative AI is the sheer scale of the images that can be generated, and the potential to create misinformation tailored to any one person.

With a small amount of data, people’s voices or faces can be replicated, and soon the tell-tale signs of AI content will no longer be visible to the human eye. This has huge implications for society and the future of democracy, as well as creating new attack vectors for fraud. In this ‘Year of Elections’, a year in which more people are going to the polling booth than in any other in history, this is a problem that needs solving as soon as possible. The recent elections in Slovakia, for example, may have been swayed by an audio deepfake of the then prime minister plotting to rig the election, circulated just two days before the country went to the polls.

Image and video provenance needs to improve if these media forms are to retain any real societal trust. Just as a blockchain records every step in a Bitcoin’s journey, so too should a record of edits exist for anyone to inspect. Blockchains, however, do not usually have the privacy features necessary in a world where journalists are killed for reporting the truth, nor the storage capacity to hold the provenance of every digital image created.

Opportunity: Content Credentials

Verifiable Credentials have already found a use here as the technological basis behind the C2PA, a new standards alliance of companies including Adobe, Microsoft, Arm and Intel. These industry standards aim to improve image and video provenance with a recorded supply chain going all the way back to the camera that took the picture. Over time, images without a C2PA credential will likely be far less trusted, as their provenance will be missing. Currently, the C2PA standards define ‘what’ should be included with any picture to ensure trusted provenance, but the ‘how’ is still being worked out. VCs are the perfect tool for this thanks to selective disclosure and privacy, self-storage of data, and tamper-protection.

In practice:

  1. A camera takes a picture. The camera itself has tamper-proof hardware on it that asserts that the picture was taken by this specific piece of hardware, at a specific location and time along with other important information
  2. The hardware ‘signs’ a C2PA credential using its DID attesting to the submitted data
  3. As the picture is uploaded to editing software, such as Adobe Photoshop, any changes made to it are recorded as additional credentials
  4. The use of AI generation can also be recorded here, so it becomes possible to tell what percentage of an image is real, vs generated – especially important at its point of origin
  5. Images  or videos then uploaded to the internet would come with a content credential which consumers can check to see its known provenance
  6. Images without these content credentials could then be viewed with greater scepticism as they are missing the ‘chain of custody’ showing the image’s journey from creation to dissemination.
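The edit history in steps 1–4 behaves like a hash chain: each provenance record commits to the previous one, so any retroactive tampering breaks every later link. A toy sketch follows; the actors and actions are invented, and real C2PA manifests are considerably richer than this.

```python
import hashlib
import json

def record_hash(record):
    # Canonical hash of one provenance record.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_edit(chain, actor, action):
    # Each new record commits to the previous one by hash, so rewriting
    # history invalidates everything that came after.
    previous = record_hash(chain[-1]) if chain else None
    chain.append({"actor": actor, "action": action, "previous": previous})
    return chain

def chain_intact(chain):
    # A consumer can walk the chain and check every link.
    for earlier, later in zip(chain, chain[1:]):
        if later["previous"] != record_hash(earlier):
            return False
    return True

history = []
append_edit(history, "did:example:camera-sensor", "capture")
append_edit(history, "did:example:photoshop", "crop")
append_edit(history, "did:example:photoshop", "ai-generative-fill: 12%")
```

Step 4's "what percentage is generated" question falls out naturally: the generative-fill record is part of the chain and cannot be quietly removed later.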

Challenge Six: On the Internet, no-one knows you’re a dog (or a bot)

As more of our lives happen online, it becomes important to know that we are speaking to a human being. The scale of potential misinformation and fraud grows exponentially if a person’s face, voice or writing style can easily be imitated with just a few images or recordings. Moreover, as we take part in the cultural conversation on X/Twitter or other platforms, how do we know that the deluge of opinions we see are from real people? A blue tick costing $8 a month is not enough to trust that you are speaking to a human – it just proves there is an attached credit or debit card, which could be stolen! Within the Web3 space, airdrop farmers often create botnets or use multiple wallets to game the system in Sybil attacks, taking outsized rewards for those with the time and know-how while locking out potential users and community members from the start. Distributed Denial of Service (DDoS) attacks make it ever more important for websites to ensure that those entering their domain are real humans, condemning us all to a life spent identifying bridges and fire hydrants in CAPTCHAs (something many AIs can now do as well as human beings).

Opportunity: Proof of Personhood through Verifiable Credentials

Proving you are a real person is a huge use-case in a world where CAPTCHAs are no longer effective and telling bot from human is increasingly difficult. At the ‘weak’ end of the spectrum, Proof of Personhood might mean sharing credentials signed by multiple people attesting that they have met you in real life, connecting a long-running social account – Spotify, for example (how many AIs are listening to music and podcasts?) – or tracking the way a mouse moves across a screen. These approaches are, of course, gameable, and therefore only useful where a low level of confidence in personhood, combined with an easy-to-use UX, is acceptable.

For a higher degree of confidence, a ‘strong’ approach to proof-of-personhood is needed. Here Verifiable Credentials really come to the fore, enabling easy-to-use, reusable KYC that quickly allows users to prove they have been confirmed as human beings. Although the user experience of first receiving this credential can be a little tiresome, once a user holds it, it can be reused repeatedly with very little work on the user’s end. To ensure ongoing Proof of Humanity each time the credential is used, a biometric template could be placed in the credential itself and checked locally, as is currently possible with banking apps on our phones.

In practice:

  1. User or ‘Holder’ submits their KYC information to an ‘Issuer’.
  2. The Issuer conducts a proper KYC check, and issues verifiable credentials attesting to the information in this KYC (e.g. “This is a real person”) and ‘signs’ with their Decentralised Identifier.
  3. A ‘Verifier’ requests that the Holder prove their personhood.
  4. The Holder shares their ‘Proof of Personhood’, for example, a reusable KYC attestation, with the Verifier. This would be a zero-knowledge proof, with no need for the gatekeeper to see a full KYC, ensuring trust within the system without compromising privacy.
  5. The Verifier checks the Issuer’s Decentralised Identifier against the publicly available record, ensuring they are of good reputation – and once confirmed, lets the holder through.
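The selective disclosure in step 4 can be sketched with salted claim hashes, loosely in the style of SD-JWT: the issuer signs only digests of the KYC claims, and the holder later reveals just the claim and salt they choose. Signature handling is omitted; the sketch shows only the disclosure mechanics, and all claim values are invented.

```python
import hashlib
import secrets

def blind_claims(claims):
    # The issuer never signs raw claims, only salted digests of them.
    salted = {k: (secrets.token_hex(8), v) for k, v in claims.items()}
    digests = {k: hashlib.sha256(f"{salt}:{v}".encode()).hexdigest()
               for k, (salt, v) in salted.items()}
    return salted, digests  # holder keeps `salted`; issuer signs `digests`

def verify_disclosure(digests, claim, salt, value):
    # The verifier recomputes the digest from the disclosed salt and value
    # and checks it matches what the issuer committed to.
    return digests[claim] == hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()

kyc_claims = {"is_person": "true", "name": "Alice", "passport_no": "X123"}
salted, signed_digests = blind_claims(kyc_claims)

# The holder discloses only personhood, keeping name and passport private.
salt, value = salted["is_person"]
```

Note this is a hash-based simplification of "zero-knowledge" style disclosure: the verifier learns the `is_person` claim and nothing else, without ever seeing the full KYC record.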

Challenge Seven: No Bots Allowed

AI agents are one of the most powerful potential tools that consumers and enterprises may soon have access to. AI agent systems such as AutoGPT will be increasingly able to complete complex tasks for users – researching and writing essays, paying bills, negotiating deals on users’ behalf, or trading stocks and cryptocurrencies. This could be a wonder for productivity, as agents may be able to complete tasks with minimal supervision at a fraction of the cost of an employee, but it quickly rubs up against the Proof of Personhood above – how can an AI agent prove that it is working on behalf of someone and should be allowed through, despite not having a Proof of Personhood of its own? We can already see this working somewhat with trading bots using exchange APIs, but this is still very time-consuming for the users running the agents – every exchange or website someone uses requires a new API key to be set up and maintained – not ideal when the entire point of an agent is to save its user time by working on their behalf.

Opportunity: Proof of Permission

To pass through the various gates set up to ensure only humans (and their authorised agents) get through, Verifiable Credentials could be used to show that an agent has permission to act on its user’s behalf.

In practice:

  1. The user of the AI agent creates their own Decentralised Identifier which is written onto a public network for reference
  2. The user connects their relevant account with their DID for later checking by gatekeepers 
  3. They then act as the Issuer, issuing verifiable credentials to their AI agent, stating the actions the agent has permission to perform, for example, trading on a user’s exchange accounts, negotiating on behalf of a user, or hiring other agents to perform complex tasks
  4. Gatekeepers to these services act as Verifiers, requesting the appropriate Verifiable Credentials from the agent
  5. They then check these against the provided Decentralised Identifier connected to the user’s account and the publicly available DID written on the public network to ensure the agent does, in fact, have permission to act on the user’s behalf.
  6. The agent can use this Verifiable Credential to gain access to the user’s permissions on connected accounts
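Steps 3–5 amount to a scope check by the gatekeeper: is this credential about this agent, was it issued by the DID linked to the account, and does it cover the requested action? A minimal sketch with invented DIDs and action names; the DID-resolution and signature checks are elided.

```python
# Hypothetical agent-permission credential: the user, acting as issuer,
# scopes exactly what the agent may do.
agent_vc = {
    "issuer": "did:example:alice",          # the user's DID, on a public network
    "subject": "did:example:alices-agent",  # the AI agent's DID
    "claims": {
        "allowedActions": ["trade:spot", "negotiate"],
        "account": "exchange:alice",
    },
}

def gatekeeper_allows(credential, agent_did, linked_user_did, action):
    # The gatekeeper (Verifier) checks three things:
    return (credential["subject"] == agent_did              # about this agent
            and credential["issuer"] == linked_user_did     # issued by the account owner
            and action in credential["claims"]["allowedActions"])  # in scope
```

Unlike per-site API keys, the same credential works at any gatekeeper that can resolve the user's DID, which is what saves the user the setup-and-maintenance burden described above.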

Challenge Eight: Not all agents are created equally

As AI agents become more ubiquitous, it is likely that not only will agents need to interact with gatekeepers, but also with each other. One agent may be good at research, whereas another may be better at data analysis. Just as humans outsource their work to other companies and people, so too will AI agents in pursuit of a goal set by their users, leading to a rich ecosystem of agents all interacting, negotiating and working with each other. This will increase the importance of ‘reputation’ and the ‘brand’ of an AI agent, just as a company today must maintain a good brand with satisfied customers, as users will want to keep counterparty-risk to a minimum – a poor-performing or malicious agent could create issues further down the Information Supply Chain. 

Opportunity: Know Your AI

Just as a human or company can gain a reputation through ongoing business relationships and reviews, so too could AI agents issue, or be issued, Verifiable Credentials attesting to positive interactions with other bots. Rather than needing to cross-reference with third-party websites (probably then requiring their own AI agent to scrape for data, analyse and evaluate), Verifiable Credentials could be held by agents and shown before any interaction to prove a good upstanding reputation. 

In practice:

  1. AI Agent A acts as an ‘Issuer’, first writing their DID to a public network for reference
  2. After a positive interaction with AI Agent B, Agent A issues B with a Verifiable Credential, ‘signed’ with their DID, stating that the work provided was satisfactory.
  3. AI Agent C then wishes to contract Agent B for some work. Before doing so, they request multiple attestations to ensure Agent B is trustworthy.
  4. Agent B sends Verifiable Credentials from other Agents, including Agent A, attesting to their trustworthiness.
  5. Agent C checks Agent A’s DID on the public network, and decides that A is a reputable issuer, and thus that B is safe to do business with.
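Agent C's decision in steps 3–5 can be sketched as a threshold over attestations from issuers it already considers reputable. Everything here is illustrative, including the trust set and the threshold of two.

```python
# Issuers Agent C already trusts (checked against the public DID network).
trusted_issuers = {"did:example:agent-a", "did:example:agent-d"}

# Attestations Agent B presents; the one from an unknown issuer carries no weight.
attestations = [
    {"issuer": "did:example:agent-a", "subject": "did:example:agent-b", "satisfactory": True},
    {"issuer": "did:example:agent-d", "subject": "did:example:agent-b", "satisfactory": True},
    {"issuer": "did:example:unknown", "subject": "did:example:agent-b", "satisfactory": True},
]

def reputable(subject, attestations, trusted, threshold=2):
    # Count only satisfactory attestations about this subject
    # from issuers in the trusted set.
    count = sum(1 for a in attestations
                if a["subject"] == subject
                and a["issuer"] in trusted
                and a["satisfactory"])
    return count >= threshold
```

The design point is that reputation travels with the agent: no third-party review site needs scraping, and each verifier sets its own trust set and threshold.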

Challenges and Opportunities at the Deployment Level

Challenge Nine: Deployment Decisions

It is likely that over the coming years we will see a ‘Cambrian explosion’ of AI models with a multitude of use cases, training methods and datasets. As mentioned in many of the solutions above, Verifiable Credentials will be needed all along the Information Supply Chain – from data creation, collection and collation, through training, to inference – to create verifiable, easy-to-check ways of managing reputations and building trust. Procurement officers and individuals looking to use AI models, for whatever purpose, will need to ensure they are using good-quality, compliant models that correctly answer their needs.

Opportunity: It’s Verifiable Credentials all the way down…

As each step in the Information Supply Chain can have its own Verifiable Credentials, the final model can hold all of those credentials before being deployed. This means that before purchasing or using a model, decision-makers can look through every relevant credential that may affect the final product. For example: Is the product compliant with local regulations? Is it legally using the intellectual property of a specific company or person? Is it trained on high-quality, well-trusted datasets? Was it trained using high-quality GPUs? Do other users and AI agents review the model positively?

All of these are crucial to the creation of a high-quality model and therefore are also of great importance to anyone choosing which model to deploy. 

In practice:

  1. An AI model is trained on ESG EU-compliant data, and is trained on data from licensed sources (such as the New York Times)
  2. Before deployment, the model creator requests the issuance of verifiable credentials from the ESG standards body and the New York Times
  3. The issuers check that the model creator has in fact received a license to use the New York Times data, and that all the datasets on which it is trained are compliant and already signed by them
  4. They then issue these credentials to the model owner, who can display them or allow the model to share them when requested
  5. Any user wishing to verify that the model is in compliance can then request these verifiable credentials from the model, cross-referencing on an open network that the Decentralised Identifiers match what is on the verifiable credentials  
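The due-diligence questions above reduce to a checklist over the model's credential wallet: for each required credential type, is one present from an issuer the deployer trusts? A sketch with invented credential types and issuer DIDs; per-credential signature verification is elided.

```python
# The deployer's own policy: which credential types must be present,
# and which issuers they will accept for each.
REQUIRED = {
    "RegulatoryCompliance": {"did:example:eu-standards-body"},
    "IPLicence": {"did:example:nyt"},
    "DatasetQuality": {"did:example:eu-standards-body", "did:example:audit-dao"},
}

def missing_credentials(wallet, required=REQUIRED):
    # Return the credential types the model cannot yet demonstrate.
    missing = []
    for cred_type, accepted_issuers in required.items():
        held = any(c["type"] == cred_type and c["issuer"] in accepted_issuers
                   for c in wallet)
        if not held:
            missing.append(cred_type)
    return missing

# The model's wallet as presented to the deployer.
wallet = [
    {"type": "RegulatoryCompliance", "issuer": "did:example:eu-standards-body"},
    {"type": "IPLicence", "issuer": "did:example:nyt"},
]
```

A deployer simply refuses any model whose checklist comes back non-empty, turning "is this model trustworthy?" into a mechanical, auditable query.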


The exploration of Verifiable AI in action uncovers the intertwined challenges and opportunities at the heart of AI development, from data collection and model training to inference and deployment. Emphasising the critical need for data integrity, regulatory compliance and the ethical use of intellectual property, this article underscores the potential of Verifiable Credentials and Decentralised Identifiers to embed trust and transparency across the AI lifecycle. By highlighting solutions for ensuring the authenticity of AI-generated content and for distinguishing human from automated interactions, it points towards a future where AI systems are both powerful and principled – offering a blueprint for builders creating trustworthy, transparent AI technologies.

Contact Us

Are you a team member or community member of an AI project that you think could use Verifiable Credentials and Decentralised Identifiers? We are always happy to have a conversation, learn about your pain points and see how we can work together to create a more trust-filled world. Contact us or get your favourite team to message us at [email protected]!

