This is the fourth article in a series of five.
In an era where Artificial Intelligence (AI) is increasingly embedded in our daily lives, ensuring the authenticity and trustworthiness of data is crucial. The implementation of Verifiable Credentials (VCs) across various industries offers significant potential, from protecting intellectual property to ensuring data integrity and enabling secure AI interactions. As AI continues to transform how we live and work, the adoption of Verifiable AI (vAI) strategies becomes essential. This article explores the diverse applications of vAI across multiple sectors, providing detailed use cases and examining broader industry impacts.
Recap: What is Verifiable AI?
Introduced in the first article in this series, Verifiable Credentials are portable packets of data held by the user (which could be a person, an AI agent or simply a dataset) in a digital wallet and ‘signed’ or attested to by a ‘Trust Anchor’ – a trusted institution, company or individual that vouches for the information held in that Verifiable Credential. This means users can carry attested data in their digital wallets, just as we carry attested identity documents in our physical wallets, such as government-signed driver’s licences. The use of verifiable credentials to create trust in AI-related interactions is known as ‘Verifiable AI’ (vAI), and it could be applied across many industries – from labelling the huge amounts of data needed to train AI models, to improving the rights of IP holders, to allowing trusted interactions between autonomous AI agents.
Although this may seem like navel-gazing, the truth is that all of this technology already exists, and countless companies are working on improving its application to our public and private lives. Just as every forward-looking company needs an AI strategy, it also needs a Verifiable AI strategy to avoid many of the pitfalls around data quality, deepfakes and IP protection that it may run into in our changing world.
vAI Use Cases Across Industries
In this article, we will be looking at multiple use cases for Verifiable AI across various industries, including Media & Entertainment, Social Media, Search Engine Optimisation, Healthcare, Customer Service, Manufacturing & Supply Chain, Cybersecurity, and the Financial & Legal industries. However, this list is by no means exhaustive. Any industry which is affected by AI will be affected by Verifiable AI, which presents multiple opportunities as well as necessary features for any AI use case.
If you think your industry or organisation presents Verifiable AI opportunities, please contact us at [email protected].
1. New Revenue Models
One area in which Verifiable AI has been lacking is a way to monetise the act of verification. Complying with Anti-Money Laundering regulations, creating Decentralised Identifiers and transacting on any SSI identity system is a costly endeavour for those using it. One reason we have not yet begun using verifiable credentials ubiquitously is that there has been little economic incentive to make the switch. Contemporary online identity models make money for the current trust anchors in the system, and they are therefore unlikely to change models until they are financially incentivised to do so.
Services such as cheqd’s Credential Payments enable those participating in the trust triangle to commercialise these relationships. By allowing microtransactions for Verifiable Credential verifications to occur on chain, yet abstracting them away so users can still pay and be paid in fiat, cheqd enables any network participant to develop a new revenue model. This is a thread that runs through the rest of this article: using cheqd’s credential services for payments turns many situations in which verifiable credentials can be used into a potential economic transaction – as AI grows in use, the value of trust also rises.
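As a rough illustration of this pattern – and emphatically not cheqd’s actual API; every type, DID and function name below is hypothetical – a pay-to-verify flow might look something like this:
```typescript
// Minimal sketch of a pay-to-verify flow. All types and functions here are
// hypothetical illustrations of the pattern, not any real SDK.

interface VerifiableCredential {
  issuerDid: string;                // DID of the trust anchor that signed the credential
  subjectDid: string;               // DID of the subject the claims are about
  claims: Record<string, string>;
  proof: string;                    // cryptographic signature over the claims
}

interface PaymentReceipt {
  payerDid: string;
  payeeDid: string;
  amount: number;                   // charged in fiat, settled as an on-chain micropayment
  currency: string;
}

const VERIFICATION_FEE = 0.10;      // e.g. 10 cents per verification (illustrative)

function payVerificationFee(verifierDid: string, issuerDid: string): PaymentReceipt {
  // In a real system this would trigger an on-chain micropayment abstracted
  // away behind a fiat payment rail.
  return { payerDid: verifierDid, payeeDid: issuerDid, amount: VERIFICATION_FEE, currency: "USD" };
}

function verifyCredential(vc: VerifiableCredential, receipt: PaymentReceipt): boolean {
  // Only proceed if the verifier has paid the issuing trust anchor's fee.
  if (receipt.payeeDid !== vc.issuerDid) return false;
  // Placeholder signature check: a real verifier would resolve the issuer's
  // DID document and verify the proof cryptographically.
  return vc.proof.length > 0;
}

// Usage: a verifier pays the trust anchor, then checks the credential.
const credential: VerifiableCredential = {
  issuerDid: "did:example:news-org",
  subjectDid: "did:example:video-123",
  claims: { factChecked: "true" },
  proof: "z3FakeSignatureValue",
};
const receipt = payVerificationFee("did:example:verifier", credential.issuerDid);
console.log("Credential accepted:", verifyCredential(credential, receipt));
```
The design point is simply that verification, not just issuance, becomes a billable event: every time a credential is checked, the trust anchor that signed it can be paid.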
How might Verifiable AI and cheqd help create new revenue models for the media and entertainment industry?
Content credentials enable trusted organisations to create a new commercial role for themselves as ‘Trust Anchors’ who verify the content of material and can then get paid for offering this service. A video is more trustworthy, and therefore more valuable, if fact-checked by the BBC and/or Fox News than if it has no content credentials confirming its authenticity. This could create a new business model for trusted news organisations, which could earn revenue by using their reputation to verify content as well as publish it.
How might Verifiable AI and cheqd help create new revenue models for Social Media platforms?
Social Media platforms have some of the largest, most valuable datasets available for the training of AI about countless different individuals and their browsing patterns. Selling this as verified data (with the correct verifiable credentials regarding data privacy permissions etc.) could become a huge money-maker for these organisations.
Additionally, having better information regarding the status of content (e.g. whether or not it is synthetic data) will increase the value of their data to others. Features already built into some social media platforms, such as Community Notes on X, could also be disrupted and commercialised using content credentials, helping to create a more reputable system for combating misinformation.
How might Verifiable AI and cheqd help create new revenue models for verifying the provenance of data?
Verifying the provenance of data is going to grow in importance as the value of quality datasets rises to meet the training needs of AI models. Being able to prove that a dataset is of high quality – with the correct licensing and certificates, free of synthetic data, and with a high level of accuracy – offers value to those looking to build models, and it therefore makes sense that those offering these verification services get paid.
How can Verifiable AI technology help create new business opportunities for the Legal industry?
As contracts are increasingly signed digitally, new opportunities will open up for notaries to build their businesses in a more digital fashion, with digitised versions of notarised contracts beginning to appear. Given the scale possible in a digital setting, this could be a huge opportunity to grow the number of notarised digital contracts – allowing notaries to work with new business models.
In Summary
Once microtransactions are possible for verifiable credential verifications, many commercial opportunities are unlocked for creators, trust anchors and fact-checkers to be paid for their services, such as:
- Micropayments to content creators who are officially recognised as the creator of an image
- Payments for trusted organisations such as academic institutions or healthcare companies confirming the accuracy of data
- Payments to an auditing firm that can confirm an AI Agent is working on someone’s behalf
2. Content Credentials
Content credentials were the focus of the third piece in this series, but are worth bringing up here due to the multiple important industries that they impact.
As AI-produced content becomes more ubiquitous, knowing where an image comes from is increasingly important – not only to avoid the spread of misinformation, but also because training AI models on synthetic (AI-produced) data can lead to total model collapse.
Championed by the C2PA and other emerging standards organisations, content credentials aim to create a ‘chain of custody’ from the moment a picture is taken to the moment it is viewed and checked by someone online.
This has huge implications for combating misinformation, protecting Intellectual Property and creating new value-creation opportunities for potential Trust Anchors, be they organisations or specific individuals.
In the use case below, we show how content credentials, which are already being used in many places, can create new methods of IP protection and reward for content creators.
How do content credentials work in practice?
- A content creator takes a career-ruining image of a British politician eating a bacon sandwich. His camera has tamper-proof hardware that records important metadata about the image, such as the location, time and specific camera used (this helps prove that the picture was not AI generated). The hardware ‘signs’ a content credential attesting to the recorded metadata.
- As the picture is uploaded to editing software such as Adobe Photoshop, any changes made to it, including AI generation or the removal of metadata, are recorded as additional credentials.
- Publishers looking to use the image for a news article check its content credentials to ensure it is genuine before purchasing the rights to use it (a rough sketch of this check follows the list).
- Viewers of images or videos published on websites and social media would be able to check the content credentials of what they see.
- Republishers looking to use this content then make payments to the publisher, and others with IP rights, such as the photographer.
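To make the chain of custody above more concrete, here is a minimal sketch in TypeScript. The data structures are hypothetical illustrations, loosely inspired by the C2PA manifest model rather than implementing it:
```typescript
// Minimal sketch of a content-credential chain of custody.
// Structures are hypothetical and only loosely inspired by C2PA manifests.

interface ProvenanceAssertion {
  actorDid: string;                 // camera hardware, editor, publisher, etc.
  action: "captured" | "edited" | "published";
  metadata: Record<string, string>; // location, time, device, tools used...
  signature: string;                // signature by the actor over this assertion
}

interface ContentCredential {
  contentHash: string;              // hash of the current version of the image
  assertions: ProvenanceAssertion[];
}

// Each step in the image's life appends a signed assertion.
function appendAssertion(cc: ContentCredential, assertion: ProvenanceAssertion): ContentCredential {
  return { ...cc, assertions: [...cc.assertions, assertion] };
}

// A publisher checks that the chain starts with a hardware capture and that
// no step reports AI generation before paying for the rights.
function looksGenuine(cc: ContentCredential): boolean {
  const [first] = cc.assertions;
  if (!first || first.action !== "captured") return false;
  return cc.assertions.every(a => a.metadata["aiGenerated"] !== "true");
}

// Usage
let credential: ContentCredential = { contentHash: "sha256:abc123", assertions: [] };
credential = appendAssertion(credential, {
  actorDid: "did:example:camera-serial-42",
  action: "captured",
  metadata: { location: "London", time: "2024-05-01T12:00Z", device: "SecureCam X" },
  signature: "sig-by-camera",
});
credential = appendAssertion(credential, {
  actorDid: "did:example:photo-editor",
  action: "edited",
  metadata: { tool: "Photoshop", aiGenerated: "false" },
  signature: "sig-by-editor",
});
console.log("Safe to license:", looksGenuine(credential)); // true
```
The key design choice is that each actor appends a signed assertion rather than overwriting history, so any gap in the chain, or an ‘AI generated’ flag, is visible to the final verifier.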
How might content credentials benefit the media & entertainment industries?
The world of media and entertainment will benefit hugely from the introduction of Verifiable Credentials. By providing a verifiable record of how media content was generated and manipulated, organisations can combat misinformation and ensure their brand is not brought into disrepute by nefarious actors impersonating them. The fact that creators and other entities can digitally sign their intellectual property will also help many organisations protect their IP rights.
How might content credentials benefit content creators?
Smaller, independent creatives – be they artists, freelance journalists or even TikTok influencers – can also benefit from content credentials by using them to protect their intellectual property. They may also help freelance journalists build their own self-publishing networks more independently whilst still receiving attestations from respected trust anchors for their work.
How might content credentials benefit the social media industry?
Social Media companies also have a vested interest in ensuring their platforms are not used to spread misinformation, as doing so can cause huge reputational damage and even political problems, as seen in the scrutiny Mark Zuckerberg faced in the wake of the 2016 election and TikTok’s recent woes in the US. Arming users on the ‘frontline’ of the misinformation war with the ability to check an image’s content credentials could help reduce reliance on moderation teams to ‘fight misinformation’ and improve brand reputation.
How might content credentials benefit Search Engines?
Synthetic data is a huge danger to AI models: consuming too much of it can lead to total model collapse, as the AI ‘mutates’ by copying more and more of itself. Using content credentials to determine whether content is AI generated will help organisations avoid collecting synthetic data for their datasets, whether they sell these on or use them for their own proprietary models.
In Summary
Content credentials unlock a huge amount of value for multiple industries. They can:
- Enable clearer knowledge of content’s provenance, creating a fingerprint on an image or video which can help showcase ownership.
- Reduce the spread of misinformation and help protect brands
- Create new commercial opportunities for multiple industries involved in the production and consumption of content
- Improve the quality of data being collected by search engines and other crawlers
3. Data Provenance
“You are what you eat” applies to machines as well as humans – data is the most important input there is for AI models.
A model trained on IP-protected data may risk the user getting sued by the rights-holders; a model trained on a small sample of inaccurate health data could lead to incorrect diagnoses and patient deaths; a model trained on synthetic data may completely collapse as it begins replicating AI-generated data, leading to a ‘Habsburg Chin’ in the data. It is therefore crucial that all of the datasets used to train an AI model are clearly labelled.
Verifiable Credentials are highly useful tools here, enabling datasets to carry verified information about themselves along with them. Rather than an AI model developer having to cross-reference reviews on different websites and consider whether the reviews or ratings have been gamed, they can check the Verifiable Credentials of a given dataset, verify that its accuracy has been attested by an organisation they trust, and decide whether the data is of good enough quality for their model.
How does data provenance verification work in practice?
- A healthcare provider publishes anonymised data records on a data marketplace, having obtained consent from patients or their doctors.
- The provider signs a Verifiable Credential with its organisational DID attesting that the data is accurate, whilst the patients sign Verifiable Credentials confirming they consent to the sharing of their data. The provider also receives a Verifiable Credential from a standards body showing that the data complies with data sharing and privacy laws.
- Before purchasing a dataset, an AI model developer requests verification that the dataset is compliant with local data privacy laws and that the data is accurate and from a trusted source. These verifications can be supplied by the marketplace in the form of attached verifiable credentials (a rough sketch of these checks follows this list).
- Once the verifiable credentials confirm that everything is in order, the model developer purchases the dataset.
- A payment is made to the organisations which have verified the data’s accuracy and compliance.
- The originally issued Verifiable Credentials can be reused repeatedly, with attesters continually receiving payments.
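A minimal sketch of the checks a model developer might run before purchasing is shown below. The credential types, issuer DIDs and marketplace shapes are all hypothetical, illustrating the pattern rather than any specific marketplace API:
```typescript
// Minimal sketch of pre-purchase dataset checks. Credential shapes are hypothetical.

interface DatasetCredential {
  type: "AccuracyAttestation" | "PatientConsent" | "RegulatoryCompliance";
  issuerDid: string;
  datasetId: string;
  proof: string;
}

// The developer's own policy: which trust anchors they accept for each check.
const TRUSTED_ISSUERS: Record<DatasetCredential["type"], string[]> = {
  AccuracyAttestation: ["did:example:university-hospital"],
  PatientConsent: ["did:example:consent-registry"],
  RegulatoryCompliance: ["did:example:data-standards-body"],
};

function verifySignature(vc: DatasetCredential): boolean {
  // Placeholder: a real verifier resolves the issuer DID and checks the proof.
  return vc.proof.length > 0;
}

function datasetIsAcceptable(datasetId: string, credentials: DatasetCredential[]): boolean {
  // Require at least one valid credential of every type, from an issuer we trust.
  return (Object.keys(TRUSTED_ISSUERS) as DatasetCredential["type"][]).every(type =>
    credentials.some(vc =>
      vc.datasetId === datasetId &&
      vc.type === type &&
      TRUSTED_ISSUERS[type].includes(vc.issuerDid) &&
      verifySignature(vc)
    )
  );
}

// Usage: the marketplace supplies the attached credentials; the developer checks them.
const attached: DatasetCredential[] = [
  { type: "AccuracyAttestation", issuerDid: "did:example:university-hospital", datasetId: "ds-001", proof: "sig1" },
  { type: "PatientConsent", issuerDid: "did:example:consent-registry", datasetId: "ds-001", proof: "sig2" },
  { type: "RegulatoryCompliance", issuerDid: "did:example:data-standards-body", datasetId: "ds-001", proof: "sig3" },
];
console.log("Purchase dataset:", datasetIsAcceptable("ds-001", attached)); // true
```
Note that the developer only has to decide which issuers it trusts for each kind of attestation; the credentials themselves carry the evidence.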
How might verifiable credentials enable better provenance of data in the healthcare industry?
Data provenance is critically important in the healthcare industry, where the integrity and reliability of data can have significant legal, ethical, and reputational consequences. Ensuring data accuracy and avoiding biases are essential to prevent incorrect diagnoses, ineffective treatments, and potential harm to patients. Given the sensitivity of healthcare data, compliance is paramount, and the use of data in models must always rely on sources verified as compliant with health and data protection regulations.
Through the use of Verifiable Credentials, any dataset could come with a set of attestations attached, enabling anyone planning to train a compliant model to quickly confirm whether a dataset comes from a trusted source with high-quality data, without having to carry out lengthy due diligence on the underlying data themselves.
How might verifiable credentials enable better provenance of data in the financial and legal industries?
In the financial and legal industries, accurate data provenance is critical due to their significant impact on our economies. It is essential that datasets used for training are correctly labelled, as bankers and lawyers handle other people’s money and business contracts. Any inaccuracies or biases in these datasets can lead to severe financial losses, legal liabilities, and reputational damage. These organisations, often holding substantial social capital, can also serve as ‘verifiers’ of data, ensuring that specific datasets are legally compliant.
How might verifiable credentials enable better provenance of data in the cybersecurity industry?
As with any AI model, the quality of the training data is key, meaning those buying data for their own models will want the utmost certainty that the data is genuine and of high quality – especially given how critical cybersecurity is to the functioning of any business. In the future, cybersecurity departments may demand that only certain verified datasets can be used, to ensure poor-quality data does not infect their models.
In Summary
The use of dataset labelling has many benefits for countless industries as they develop their AI capabilities:
- Ensure high-quality, non-synthetic data for industries with high compliance requirements
- Create new ways to monetise data and new revenue models for Trust Anchors (e.g. universities, auditing companies)
- Help create safer, more trustworthy datasets
4. Know Your AI Agent
OpenAI’s recent offering, GPT-4o, showcases a newer kind of AI model – one capable not just of breaking tasks down into smaller, more manageable ones, but also of going out and performing actions.
The fact that even the most widely used LLM is becoming capable of this means that any business able to implement the technology should be thinking about it. Very soon, AI agents will be able to automate many complex tasks previously performed by humans – an agent could make a restaurant reservation, implement a trading plan, or design and run a social media advertising campaign.
Despite this exponential growth in capabilities, AI agents will remain hugely limited until the range of permissions available to them increases and ‘trusting’ the AI on each side of an interaction becomes possible. Verifiable Credentials can be signed by humans to give AI agents permission to act on their behalf, and can also be used to give these agents ‘reviews’ and attestations about their effectiveness (on top of important information regarding how an agent was trained).
How would AI agent verification work in practice?
- A customer wishes to book a business dinner at an uptown restaurant. He signs verifiable credentials (VCs) giving his AI agent permission to book calendar events for him and to represent him in certain interactions, as well as permission to share his dietary restrictions with other agents.
- His AI agent exchanges VCs with his colleagues’ agents to verify that each agent is representing its user, then they all agree on the most suitable time and date. They also confirm the dietary restrictions of their users so these can be communicated to the restaurant.
- His AI agent then interacts with the restaurant’s AI customer service agent. Both share their representation credentials, the preferred time is agreed on, and dietary restrictions are shared to ensure the restaurant is able to accommodate them (a rough sketch of the credential checks follows this list).
- Both agents use their calendar-edit permission VCs to add the booking to the diners’ calendars.
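As a rough sketch of the delegation check in that exchange – the credential shape and helper functions are hypothetical, not tied to any specific agent framework – each agent only acts on a request if the other side can present a user-signed permission credential:
```typescript
// Minimal sketch of agent-to-agent delegation checks. All shapes are hypothetical.

interface DelegationCredential {
  userDid: string;        // the human granting the permission
  agentDid: string;       // the AI agent acting on their behalf
  permissions: string[];  // e.g. "book-restaurant", "edit-calendar", "share-dietary-info"
  signature: string;      // signed by the user's key
}

function signatureIsValid(vc: DelegationCredential): boolean {
  // Placeholder: a real check resolves the user's DID and verifies the signature.
  return vc.signature.length > 0;
}

// Before acting on a request, each agent checks the other agent's credential.
function mayActFor(vc: DelegationCredential, expectedUser: string, neededPermission: string): boolean {
  return (
    vc.userDid === expectedUser &&
    vc.permissions.includes(neededPermission) &&
    signatureIsValid(vc)
  );
}

// Usage: the customer's agent presents its credential to the restaurant's agent.
const customerDelegation: DelegationCredential = {
  userDid: "did:example:alice",
  agentDid: "did:example:alice-assistant",
  permissions: ["edit-calendar", "book-restaurant", "share-dietary-info"],
  signature: "sig-by-alice",
};
console.log(
  "Restaurant agent accepts booking request:",
  mayActFor(customerDelegation, "did:example:alice", "book-restaurant") // true
);
```
The same check runs in both directions: the customer’s agent would equally refuse to share dietary information unless the restaurant’s agent presents a credential proving it represents the restaurant.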
How might ‘Know Your AI’ improve the customer service industry?
The customer service industry is poised for significant disruption by AI and AI agents. Ensuring that these AI agents have verifiable credentials is crucial. Users can assign these credentials to agents, granting them the correct permissions to perform tasks on their behalf.
Verifiable credentials are essential for maintaining high standards and ensuring that only specific, high-quality verified agents represent individuals or companies. Additionally, as AI agents become more prevalent and competitive, having verified reviews of different agent models will be invaluable. This will contribute to a more seamless user experience for potential customers, reducing waiting times and improving overall customer satisfaction.
On the other hand, AI agents should verify their answers with verifiable credentials to assure people of the quality of customer service and the accuracy of the information provided. This added layer of verification builds trust and confidence in the AI agents’ capabilities, leading to more effective and reliable customer interactions.
In short, ‘Know Your AI’ can enhance the customer service industry by ensuring AI agents are properly credentialed, providing high-quality service, and verifying their responses to assure customers of their reliability and accuracy.
How might ‘Know your AI’ improve the Manufacturing and Supply Chain industries?
For many years, people have predicted that logistics and manufacturing would be revolutionised through the use of blockchain, but the rigid, no-edit nature of traditional public blockchains has limited their use. Verifiable credentials, however, can have a strong impact, allowing each node in a logistics network to give its stamp of approval at each step in an item’s journey. As all of this becomes more automated, giving the actors at each step – be they human or bot – their own identity will make an improved record of custody from beginning to end much more achievable.
How might ‘Know Your AI’ improve the healthcare industry?
There are already huge moves in healthcare to use artificial intelligence for diagnoses – and this trend will likely continue as AI becomes more powerful and, in some tasks, more effective than human healthcare workers. Keeping an ongoing record of which diagnoses were made and which medicines were prescribed, by whom and by what, will therefore be incredibly important to ensure decision-making processes are documented.
How might ‘Know Your AI’ improve the cybersecurity industry?
Being able to recognise approved agents and what access and permissions they have will likely become an important capability for the industry, providing another method for avoiding Sybil attacks and sophisticated bot attacks. Additionally, the audit trails that approved agents would leave, signed with their own DIDs, would enable a better understanding of how security decisions were made within a company where decision-making may be done by AI agents.
How might ‘Know your AI’ improve the Finance and Legal industries?
As AI agents take on more responsibility for decision-making, the ways in which decisions are made will need to be better understood. Agents with their own Decentralised Identifier (DID) will be able to sign off on the decisions they make, so people and organisations can better understand why certain decisions were made and by which agents. Having a record of what was decided, by which programmes and how, will be very important, especially when it comes to significant financial transactions.
In Summary
Verifiable Credentials remove a lot of friction when used in AI agent interactions. They enable us to:
- Verify that agents have the correct permissions to complete their task on behalf of someone
- Review and attest to the effectiveness of different agents and different agent models
- Create a record for the decisions made by AI agents
5. Proof of Personhood
Being able to verify the AI agents you are using is important, but being able to verify the humans you are working with is equally important. Millions have been lost to fraud in recent years due to deepfake impersonations, and Sybil attacks and Distributed Denial-of-Service attacks can cost thousands every hour if they successfully target the right organisation’s applications. Although captchas are somewhat effective, they are time-consuming and annoying for users and no longer filter out every bot that attempts them, meaning new methods of proving personhood will soon be needed.
It is therefore likely that just as proving one is a good bot will be important, so too will proving that one is a real person.
How can proof-of-personhood credentials help improve outcomes for cybersecurity departments and companies?
As AI makes cybersecurity attacks more sophisticated, tighter proof-of-personhood protocols are clearly needed more than ever. Proving that you are truly talking to the right person through the use of verifiable credentials could become hugely valuable as deepfakes become easier and cheaper to produce, as could replacing low-security bot checks such as CAPTCHA with proof-of-personhood credentials that a person holds in their identity wallet.
How can proof-of-personhood credentials help prevent Customer Services being overrun by bots?
Distributed Denial-of-Service and other types of Sybil attack can completely jam up customer service queues. Now that captchas can be solved by AI, Proof of Personhood credentials can be used by a person to prove that they are in fact human and not a bot. This could range from supplying several social media accounts to sharing a reusable KYC credential, with the level of certainty up to the parties involved.
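As a minimal sketch of how a support queue might gate requests on a proof-of-personhood credential rather than a CAPTCHA – the credential shape, issuers and assurance levels below are hypothetical:
```typescript
// Minimal sketch of gating a support queue on proof-of-personhood. Hypothetical shapes.

interface PersonhoodCredential {
  holderDid: string;
  issuerDid: string;          // e.g. a reusable KYC provider or a social platform
  assuranceLevel: "low" | "medium" | "high";
  signature: string;
}

// The service decides which issuers and assurance levels it will accept.
const ACCEPTED_ISSUERS = ["did:example:kyc-provider", "did:example:social-platform"];
const MINIMUM_LEVEL: PersonhoodCredential["assuranceLevel"] = "medium";
const LEVEL_RANK = { low: 0, medium: 1, high: 2 } as const;

function isLikelyHuman(vc: PersonhoodCredential): boolean {
  // Placeholder signature check; a real verifier resolves the issuer DID.
  const signed = vc.signature.length > 0;
  return (
    signed &&
    ACCEPTED_ISSUERS.includes(vc.issuerDid) &&
    LEVEL_RANK[vc.assuranceLevel] >= LEVEL_RANK[MINIMUM_LEVEL]
  );
}

// Usage: admit a caller into the queue only if their credential passes.
const callerCredential: PersonhoodCredential = {
  holderDid: "did:example:bob",
  issuerDid: "did:example:kyc-provider",
  assuranceLevel: "high",
  signature: "sig-by-kyc-provider",
};
console.log("Admit to support queue:", isLikelyHuman(callerCredential)); // true
```
The level of certainty is configurable by the relying party, which matches the point above: a low-stakes query might accept a social-media-backed credential, while a high-stakes one might require a reusable KYC credential.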
In Summary
Verifiable credentials offer a simple way to prove that one is interacting with a real person and not a bot. This can:
- Prevent DDoS and Sybil attacks
- Mitigate fraudulent deepfakes aiming to socially engineer companies
6. Additional Opportunities Enabled by Verifiable AI
How can Verifiable AI help keep patients’ records available for diagnoses and training while still remaining secure?
As healthcare diagnostics are increasingly performed by disjointed AI services rather than coordinated by a single professional (e.g. your family GP), more and more details about a person’s medical history are likely to be recorded purely out of the need to make better diagnoses. Storing this centrally creates a huge honeypot of data. Enabling users to hold their data themselves as verifiable credentials (and decide for themselves whether they wish to sell or share it) would make the reviewing and recording of a patient’s medical records more decentralised, privacy-preserving and secure.
How can Verifiable AI create new business niches for the legal industry?
The ability to inspect Verifiable Credentials and Content Credentials will open up new legal avenues for litigation and compliance – especially around issues related to intellectual property. It is likely that verifiable credentials will become important parts of legal cases in this area in future, as they create strong proofs around Intellectual Property and truthfulness that are hard to refute.
How can Verifiable AI create new business niches for the cybersecurity industry?
As the use of verifiable credentials as a safer, more convenient and privacy-preserving method of digital identity grows – likely fuelled by regulations such as the EU’s eIDAS2 coming into force – there is likely to be a growing niche for cybersecurity experts who focus on verifiable credentials.
In Summary
Verifiable credentials as a technology are likely to create new ways of working and new roles for the workforce. Using verifiable credentials will in all likelihood:
- Make personal, sensitive data such as health records more secure and self-sovereign
- Create new legal niches for IP-protection
- Create or scale new identity management systems within cybersecurity
Final Words
Verifiable AI (vAI) presents transformative opportunities across various industries, enhancing transparency, efficiency, and trust. From improving content creation and media integrity to streamlining customer service interactions and securing financial transactions, the implementation of verifiable credentials is set to revolutionise the digital landscape. As AI continues to integrate into everyday operations, the need for verifiable credentials will become increasingly critical, ensuring data authenticity, protecting intellectual property, and fostering trust in AI-driven processes. Forward-looking companies must adopt vAI strategies to navigate the complexities of our evolving technological world effectively.
How can the cheqd network help?
The cheqd network has spent three years building Decentralised Identity infrastructure. As well as being thought leaders in the SSI space, we have been increasingly involved in the steering committees of the Decentralized Identity Foundation (DIF). Much of our work over the past few years has gone into making the world of verifiable credentials much more interoperable. Recently, we joined the Content Authenticity Initiative and the C2PA, the major bodies setting standards for content credentials. Our major innovation in the last few years has been the introduction of Credential Payments, which enables network users to charge for verifying credentials – unlocking new business models whose absence has been holding back the SSI community for years.
Are you interested in helping your organisation or industry capitalise on the benefits of Verifiable AI? Please don’t hesitate to contact us at [email protected]!