How Verifiable AI Enables Trust for AI Agent Adoption

Artificial intelligence is sweeping across the internet and dominating news headlines. The promise of change is colossal. Yet this promise is tempered by a deeply rooted challenge: trust. As AI agents take on roles ranging from junior assistants to critical decision-makers, an uncomfortable question hangs over society: should we trust these agents? How can we verify that they are acting in our best interest, without hidden agendas or biases? We need a way to build justified confidence in AI’s actions. This is where the concept of Verifiable AI comes into play, bridging the gap between innovation and trust.

The Trust Issue in AI Agents

The mechanisms behind AI decisions often remain a black box. Without clarity on how these agents interpret information and arrive at their outputs, we are left navigating a maze of uncertainty. Verification is crucial.

1. Lack of Transparency and Accountability in AI Decision-Making

One of the most distressing concerns with AI agents lies in the lack of transparency and accountability in their decision-making processes. In some cases, even the system creators struggle to explain how their algorithms reach certain conclusions. Take Amazon’s experimental recruiting AI, which was supposed to help with hiring by learning patterns from past hiring decisions. It turned out that the AI was unintentionally favoring male candidates over equally qualified women, because the data it was trained on was biased. Similarly, Uber’s self-driving cars, hailed as the future of transportation, were involved in a fatal accident in 2018, exposing the potentially deadly consequences of unaccountable AI systems. These instances show the real problem: we lack reliable ways to trace or correct AI decisions, making it hard to hold these systems responsible when they fail.

2. Data Privacy, Security, and Ethical Concerns

The digital landscape is flooded with concerns surrounding the security and privacy of personal data handled by AI agents. As we laid out in previous blogs, the AI information supply chain starts with massive datasets used for training, often involving sensitive personal information. If that data gets leaked, the fallout could be huge, given how much data is involved. Let’s look at some real-life examples. Meta (Facebook) ran into legal trouble for using people’s personal data to train its AI. A European privacy group, NOYB, filed complaints saying Meta violated the GDPR by using people’s data without their consent. This created a snowball effect, leading to 11 more complaints filed with national authorities, all demanding that Meta halt the practice. Similarly, while Boston Dynamics’ robots were breaking new technological ground, there were concerns that these robots could be used for surveillance or other malicious purposes. In both cases, we’re left wondering how much control we really have over the data we share, and whether AI agents are acting ethically or pushing other agendas.

3. Regulatory and Legal Uncertainty

As AI keeps growing, it’s running into a lot of legal and regulatory challenges. Take Clearview AI, for example — a facial recognition system that’s been used without people’s permission. It’s stirred up a lot of debate about privacy and surveillance. Then there’s deepfake technology, which can make videos that look incredibly real, showing people doing or saying things they never actually did. This has raised big concerns about fake news and manipulation. These technologies are in a sort of legal gray area, with laws struggling to keep up with how fast things are changing. And all this uncertainty makes it even harder to trust AI, because people are left wondering: who’s responsible when AI does something wrong or invades privacy?

4. Limited Verification of AI Claims

Another striking challenge with AI is how hard it is to verify the claims these systems make. In many cases, AI solutions are sold with big promises, but when the underlying systems are examined, those promises don’t always hold up. Take Babylon Health, for instance. This digital health company claimed that its AI-powered chatbot could diagnose medical conditions as accurately as a doctor. However, when independent experts evaluated it, they found that the system often gave wrong or misleading diagnoses, raising serious concerns about patient safety. This illustrates how risky unverified claims can be, especially in high-stakes fields like healthcare. As more AI companies emerge, it’s crucial to establish solid ways to test these systems, to make sure they actually work as promised and aren’t pushing dangerous, unproven tech.

All these challenges add up to a real mess of trust issues when it comes to AI agents. Without mechanisms for transparency and accountability, it is hard to keep users loyal to a solution; they will remain hesitant to place their faith in systems they cannot fully rely on. It is therefore imperative to make AI actions traceable and verifiable. We need systems that demonstrably act in our best interests rather than against us. That’s where Verifiable AI comes in.

What is Verifiable AI?

Verifiable AI (vAI) ensures that AI agents can prove the authenticity, integrity, and source of their actions, decisions, and outputs. Unlike traditional AI systems, which operate as “black boxes” with little transparency, Verifiable AI leverages technologies such as verifiable credentials, zero-knowledge proofs, trust registries, and cryptographic attestations to create audit trails that can be independently verified.

In essence, Verifiable AI allows users to answer critical questions: Who created this AI? What data was it trained on? Can its decisions be traced and verified? By embedding mechanisms for proof and validation, Verifiable AI transforms AI agents from opaque, unaccountable entities into transparent and trustworthy participants in the digital ecosystem.

How Verifiable AI Enables Trust for AI Agent Adoption

Verifiable AI leverages a combination of Decentralised Identifiers (DIDs), Verifiable Credentials (VCs), Trust Registries, and Zero Knowledge Proofs (ZKPs) to provide immutable records of AI agent behavior. These technologies enable AI agents to prove the authenticity of their decisions without revealing sensitive data.

1. Decentralised Identity

  • Each AI agent is issued a unique Decentralised Identifier (DID), enabling it to authenticate and interact with other entities in a trustless manner.
  • The DID is anchored on a blockchain or other distributed ledger, ensuring that identity claims cannot be tampered with.
  • Example: An AI trading bot in DeFi can use a DID to sign transactions, proving it was the authorised entity executing trades.
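The identity mechanism above can be sketched in a few lines. This is a minimal illustration, not a real DID method: `did:example` is a placeholder, and the identifier is simply derived from a hash of the agent’s public key (conceptually similar to how `did:key` binds an identifier to key material; a real method would also anchor the DID document on a ledger).

```python
import hashlib

def make_did(public_key_bytes: bytes) -> str:
    # Derive a DID from a hash of the public key, so the identifier
    # is bound to the key material and cannot be claimed by others.
    digest = hashlib.sha256(public_key_bytes).hexdigest()[:32]
    return f"did:example:{digest}"

# A hypothetical trading bot's key (stand-in bytes; a real DID method
# would use e.g. an Ed25519 key pair).
bot_public_key = b"bot-public-key-bytes"
bot_did = make_did(bot_public_key)

# Anyone holding the public key can recompute the DID and confirm that
# a message claiming to come from this DID is consistent with that key.
assert make_did(bot_public_key) == bot_did
print(bot_did)
```

Because the identifier is a function of the key, swapping in a different key produces a different DID, which is what makes identity claims tamper-evident.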

2. Verifiable Credentials

  • AI agents can issue and verify cryptographically signed credentials, proving they are trained on specific datasets or adhere to ethical AI guidelines.
  • These credentials can be issued by trusted authorities, such as regulators, AI research institutions, or independent auditors.
  • Example: A healthcare AI diagnosing patients can present a VC signed by a regulatory body, confirming that its training data meets compliance standards.
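A minimal sketch of issuing and verifying such a credential. Real verifiable credentials (per the W3C VC Data Model) use asymmetric signatures and standard proof formats; here a symmetric HMAC secret (`ISSUER_SECRET`, a made-up key) stands in so the sketch needs only the Python standard library.

```python
import hashlib
import hmac
import json

# Hypothetical issuer signing key (a regulator, in the example above).
ISSUER_SECRET = b"regulator-signing-key"

def sign_credential(claims: dict) -> dict:
    # Sign a canonical serialisation of the claims.
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict) -> bool:
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

vc = sign_credential({
    "subject": "did:example:healthcare-ai",
    "claim": "training data meets compliance standard",
})
assert verify_credential(vc)

# Any tampering with the claims invalidates the proof.
tampered = {"claims": {**vc["claims"], "claim": "uncertified"},
            "proof": vc["proof"]}
assert not verify_credential(tampered)
```

The key property is that the proof covers the whole claim set, so a verifier detects any edit without contacting the issuer.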

3. Trust Registries

  • A Trust Registry is a database that maintains lists of verified AI agents, credential issuers, and auditors, referencing each entity by its DID and representing the trusted relationships between entities with verifiable credentials.
  • AI agents can query Trust Registries to verify whether another AI, organisation, or individual is trustworthy before engaging in transactions.
  • Example: A financial institution wants to integrate an AI risk assessment tool. Before deploying it, the institution queries a Trust Registry to confirm that the AI has received compliance credentials and permission from a certified regulatory body.
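The registry lookup in the example above can be sketched as a simple in-memory structure. All DIDs and credential types here are hypothetical, and a production Trust Registry would be a governed, queryable service rather than a dict, but the check is the same: trust requires both the right credential and a recognised issuer.

```python
# Toy Trust Registry: maps agent DIDs to attested credentials, plus the
# set of issuers recognised by the registry's governance framework.
TRUST_REGISTRY = {
    "issuers": {"did:example:regulator"},
    "agents": {
        "did:example:risk-ai": [
            {"type": "ComplianceCredential",
             "issuer": "did:example:regulator"},
        ],
    },
}

def is_trusted(agent_did: str, required_credential: str) -> bool:
    # Trusted only if the credential type is present AND it was issued
    # by an issuer the registry itself recognises.
    for cred in TRUST_REGISTRY["agents"].get(agent_did, []):
        if (cred["type"] == required_credential
                and cred["issuer"] in TRUST_REGISTRY["issuers"]):
            return True
    return False

assert is_trusted("did:example:risk-ai", "ComplianceCredential")
assert not is_trusted("did:example:unknown-ai", "ComplianceCredential")
```

This mirrors the financial-institution example: the query happens before deployment, and an unknown agent, or one certified by an unrecognised issuer, fails the check.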

4. Zero-Knowledge Proofs

  • AI agents can prove they have followed predefined rules or used certified data without exposing the data itself.
  • This allows for privacy-preserving verification, critical in industries like finance, law, or medicine.
  • Example: A compliance AI in banking can prove it did not use blacklisted customer data without revealing the full list of transactions it processed.
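A full zero-knowledge proof is beyond a short sketch, but a Merkle inclusion proof illustrates the shape of the idea: an agent publishes a single hash committing to its whole audit log, then proves that one record belongs to that log without revealing any of the others. This is selective disclosure rather than a true ZKP, and the transaction records below are invented for illustration.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Collect the sibling hashes needed to recompute the root from one leaf.
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling, leaf-is-right?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    node = h(leaf)
    for sibling, leaf_is_right in proof:
        node = h(sibling + node) if leaf_is_right else h(node + sibling)
    return node == root

log = [b"txn-1:ok", b"txn-2:ok", b"txn-3:ok", b"txn-4:ok"]
root = merkle_root(log)          # published commitment to the full log
proof = merkle_proof(log, 2)     # proof for txn-3 only
assert verify_proof(b"txn-3:ok", proof, root)
```

The verifier learns that `txn-3:ok` is in the committed log, and nothing about the other transactions beyond their hashes.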

Audit Trails: Tracking AI Actions, Decisions, and Outcomes

A core aspect of Verifiable AI is auditability, providing a transparent history of AI actions, from data ingestion to final decisions. This ensures that AI models operate accountably and fairly.

Example: AI in Content Creation

  • AI generated content, such as deepfake videos or AI articles, raises trust concerns.
  • Verifiable AI ensures:
    • Authenticated source tracking: AI generated content carries a cryptographic proof linking it to its original dataset and creation process.
    • Tamper-proof content provenance: Content is timestamped and stored on decentralised ledgers, ensuring it wasn’t altered post publication.
  • A journalist uses an AI assistant to generate news articles. The AI attaches a cryptographic signature, allowing readers to verify the source materials used and prove that the content hasn’t been manipulated.
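The provenance flow above can be sketched as a content-hash record. This is a simplification: a real deployment would sign the record with the creator’s DID key and anchor the hash on a decentralised ledger, and the source identifiers here are hypothetical.

```python
import hashlib
import json
import time

def provenance_record(content: str, source_ids, created_at: float) -> dict:
    # Bind the article text, its source materials, and a timestamp into
    # one hash; anchoring this hash on a ledger makes later edits detectable.
    body = {
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "sources": sorted(source_ids),
        "created_at": created_at,
    }
    body["record_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

article = "AI-assisted market report..."
record = provenance_record(
    article, ["dataset:prices-2024", "wire:source-feed"], time.time()
)

# A reader re-hashes the article they received and compares.
assert hashlib.sha256(article.encode()).hexdigest() == record["content_hash"]
assert hashlib.sha256(b"edited text").hexdigest() != record["content_hash"]
```

If the published article is altered after the record is anchored, the recomputed hash no longer matches, so tampering is immediately visible.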

Verifiable AI: Proof of Trust

Trust is the currency of progress. Without it, AI adoption will be hindered. As AI agents quickly take on roles like managing finances and giving medical advice, it is not enough for them to be impressive: they need to prove their actions and show that they are acting transparently. Verifiable AI is that proof. It shifts AI from an era of blind trust and black-box decisions to one where every claim can be validated, every action can be traced, and every agent can be held accountable.

Contact cheqd to build trust into your AI solution.

Credential Businesses: Embrace Embedded Finance

Co-authored by Fraser Edwards and Teresa Chan

In previous sections / blogs (“Breaking and forging the value chain anew with SSI” and “The Value of Verifiable Credentials”) we have illustrated that value is embodied into credentials according to a range of factors:

  • What insight can be derived from said data (either standalone or in conjunction with other data)
  • How much effort has been expended to derive said data?
  • Is it accurate?
  • Is it precise?
  • Is it current?
  • Is it in a usable format?
  • Is it from a trustworthy source?
  • Is the data reusable?

This mirrors the existing paradigm where companies often collect, aggregate, analyse and sell data. However, rather than data largely being valuable in the aggregate, in this new paradigm of SSI / DID, value is much more granular and will be quantified per credential, varying according to the factors above. This requires a mindset shift by organisations who currently do not monetise credentials.

Monetising said credentials or data will establish new revenue streams. It will also reshape the commercial approaches of vendors providing technology and potentially provide routes for consortia to successfully sustain themselves. 

Viewing credentials as stores of value, rather than simply provable and traceable statements of fact fits into the trend of industries transforming themselves by embracing embedded finance, e.g. Airlines (loyalty schemes), e-commerce (buy-now-pay-later) and financialisation in general.

What is embedded finance and financialisation?

Embedded finance is the term for integrating banking and other financial services into nonfinancial apps and services. Companies are merging banking, lending, insurance, and investment services with their customer offerings through application programming interfaces (APIs) linked to financial partners. (Source: Investopedia)

A well-known example in recent years is buy-now-pay-later (BNPL), where third-party loans or credit are offered at the point of sale, typically in e-commerce. This takes a typical sale transaction and “embeds” a loan or credit agreement. Under this arrangement, consumers typically:

  1. Are less likely to abandon their basket
  2. Will, on average, checkout baskets with higher value than without BNPL
  3. Are more likely to remain brand-loyal

All of these present direct benefits to the merchant and are an excellent example of the overall financialisation of the economy over time.

Financialization refers to the increase in size and importance of a country’s financial sector relative to its overall economy. The term also describes the increasing diversity of transactions and market players as well as their intersection with all parts of the economy and society. (Source: Investopedia)

A great example of financialisation’s impact is the airline industry. As highlighted in “Airlines Are Just Banks Now“, airlines such as Delta have transformed their frequent-flyer programs into sophisticated financial instruments. These loyalty programs, originally designed to reward frequent travellers, have become major revenue streams. Airlines now derive substantial income from selling miles to banks, which in turn offer them to consumers through co-branded credit cards. This shift reflects a broader trend of financialisation, where the focus has moved from merely providing transportation to leveraging financial products and services to drive profitability.

The same should be expected with credentials / trusted data.

New revenue streams

For organisations

It is obvious and easy to see that any organisation charging for the issuance, or re-use, of credentials (in the verifier-pays-issuer model) will begin generating revenue. Whilst the average per-credential value may be low, this will be countered by the sheer volume of credentials being issued across all use-cases and industries. A prime example of this is receipts, as explained in “The Value of Verifiable Credentials”, where the vast majority are low value (e.g. groceries) but some (e.g. white goods purchases) are valuable due to the information they would provide insurers. An illustrative example of the value of receipts and loyalty data is the loyalty schemes administered by the UK supermarket chains Tesco, Sainsbury’s, Asda, etc. Whilst all make frequent reference to their loyalty schemes in their annual financial statements, none provides a definitive figure, maintaining some secrecy. However, Sainsbury’s purchased Nectar from Aimia for £60m in 2018, providing some reference point to the value of this data.

Sainsbury’s, which had been part of the Nectar scheme since its launch in 2002, acquired all the assets, staff, systems, and licenses necessary to operate the Nectar loyalty program independently in the UK. Loyalty cards, first popularised by Tesco’s Clubcard over two decades ago, have become a staple in British retail, enabling supermarkets to gather detailed insights into customer preferences and behaviours. Sainsbury’s noted that the acquisition would be immediately cash positive and earnings accretive, reflecting the strategic importance of controlling such data. This example highlights how even seemingly low-value transactions, like grocery receipts, can collectively generate significant value when aggregated and analysed. Conversely, higher value credentials, e.g. university degrees, will require less volume to achieve noticeable revenues. 

At high volumes, revenue from credentials in aggregate will become somewhat stable and, crucially, predictable. This opens up the opportunity to establish secondary markets using financial products for these revenue streams. Similar to mortgage-backed-securities (MBS), we could, in future, see data-backed-securities, or similarly named financial products. For companies issuing credentials / data this establishes not just new revenue streams but alternative financing options.

Identifying opportunities

Recognising value per credential or credential type creates opportunities beyond immediate revenue. The value a credential can command, and the revenue it generates, provide insight into what is in demand, what isn’t, and where untapped opportunities lie. By leveraging data-driven insights, identity companies can identify key trends and patterns in customer behaviour and preferences. This knowledge enables businesses to refine their services, tailoring offerings to better meet the evolving needs and demands of their clients.

Companies can also uncover valuable data that users, or other companies might be willing to pay for. For instance, if a particular credential is already being monetised, similar credentials could be identified and offered. As before, this creates a new revenue stream, benefiting both the users and the company.

For vendors

The commoditisation of Self-Sovereign Identity, Decentralised Identifiers, and credential software is an inevitable trend as the market matures and adoption scales. Currently, most vendors in this space charge based on volume or support levels, but as credential issuance grows exponentially, particularly in high-volume use cases like digital receipts, the cost per credential will need to decrease significantly to remain competitive. For instance, the cost of paper for a physical receipt is estimated at €0.0081 ($0.0083), setting a benchmark for how low the cost of issuing a generic digital credential could reasonably go. This downward pressure on pricing reflects the natural progression of commoditisation, where standardised products or services become widely available, driving prices down as competition intensifies.

However, commoditisation doesn’t mean that all credentials will be low-value. While high-volume, low-value credentials (like receipts) will likely trend toward minimal costs, there remains significant revenue potential in higher-value credentials. For example, charging a small percentage (e.g., 1%) of the value of a credential like a university degree, where the cost to verify it online might be £14 ($17.82), could yield £0.14 ($0.1782) per transaction. This approach, though simplified, demonstrates how a percentage-based model for higher-value credentials could generate roughly 20 times more revenue per credential than low-value, high-volume use cases.
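The arithmetic behind that comparison, using the figures quoted above (and treating euros and dollars as near-parity purely to get a rough ratio):

```python
# Figures from the text: a paper receipt costs about €0.0081, taken as a
# floor for a generic digital credential; verifying a degree online might
# cost £14 (~$17.82), of which a vendor charges an assumed 1%.
receipt_floor_eur = 0.0081
degree_fee_usd = 17.82
vendor_cut = 0.01

per_degree_revenue_usd = degree_fee_usd * vendor_cut   # ~ $0.178
# Rough cross-currency comparison, for illustration only:
ratio = per_degree_revenue_usd / receipt_floor_eur
print(round(per_degree_revenue_usd, 4), round(ratio))  # ~ 0.1782, ~ 22
```

So a single degree verification at a 1% take is worth on the order of twenty receipt-priced credentials, which is the intuition behind the dual low-volume/high-value pricing model.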

This dual dynamic, commoditisation of low-value credentials and premium pricing for high-value ones, creates a balanced revenue model for vendors. As the market evolves, vendors will need to optimize their pricing strategies to cater to both ends of the spectrum, ensuring scalability for mass adoption while capitalising on the higher margins offered by premium credentials. This shift will also encourage innovation, as vendors differentiate themselves through value-added services, enhanced security, or specialised use cases, rather than competing solely on cost. Ultimately, commoditisation will drive efficiency and accessibility, making SSI and credentialing technologies more widely adopted across industries.

For consortia

Consortia are frequently formed to help kickstart and manage ecosystems, often with an umbrella organisation established to provide neutral governance and management. However, there have been frequent examples of these organisations, and hence the ecosystems themselves, failing due to a lack of commercial sustainability. For SSI / DID consortia, the model has typically been to charge membership fees, usually tiered by organisation size.

The emergence of payments for credentials, whether individual (transactional commercial model) or in aggregate (other commercial model) provides the opportunity for consortia to establish sustainable models.

A lesson can be taken from Rivopharm, a Swiss company covered in the book Factfulness, which combined advanced manufacturing automation with intelligent financial optimisation to sell anti-malaria tablets to UNICEF at a “price… …per pill lower than the cost of the raw materials”.

He smiled. “It works like this. A few years ago we saw that robotics was going to change this industry. We built this little factory, with the world’s fastest pill-making machine, which we invented. All our other processes are highly automated too. The big companies’ factories look like craftsmen’s workshops compared with us. So, we order supplies from Budapest. On Monday at six a.m. the active ingredient chloroquine arrives here on the train. By noon Wednesday afternoon, a year’s supply of malaria pills for Angola are packed in boxes ready to ship. By Thursday morning they are at the port in Genoa. UNICEF’s buyer inspects the pills and signs that he received them, and the money is paid that day into our Zurich bank account.”

“But come on. You are selling it for less than you bought it for.”

“That’s right. The Hungarians give us 30 days’ credit and UNICEF pays us after only four of those days. That gives us 26 days left to earn interest while the money is sitting in our account.” — Factfulness

A similar approach taken by Amazon is explained by Eugene Wei:

Almost all customers paid by credit card, so Amazon would receive payment in a day. But they didn’t pay the average distributor or publisher for 90 days for books they purchased. This gave Amazon a magical financial quality called a negative operating cycle. With every book sale, Amazon got cash it could hang on to for weeks on end (in practice it wasn’t actually 89 days of float since Amazon did purchase some high velocity selling books ahead of time).

In both of these scenarios, working capital was optimised to either minimise costs to clients, or to generate additional revenue.
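The working-capital mechanics in both stories reduce to a simple float calculation. The numbers below are illustrative assumptions (a $1m order, a 5% annual rate), not figures from either example:

```python
# Negative operating cycle sketch: the supplier extends credit for
# supplier_days, the customer pays after paid_after_days, and the cash
# earns interest for the difference.
def float_interest(cash: float, supplier_days: int, paid_after_days: int,
                   annual_rate: float) -> float:
    float_days = supplier_days - paid_after_days
    return cash * annual_rate * float_days / 365

# Rivopharm-style terms: 30 days' supplier credit, paid on day 4,
# leaving 26 days of float on a hypothetical $1,000,000 order at 5% p.a.
interest = float_interest(1_000_000, 30, 4, 0.05)
print(round(interest, 2))  # ~ 3561.64
```

Even a modest rate on a large, predictable payment flow yields meaningful revenue, which is exactly the opportunity a consortium acting as a clearing house could capture.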

In the case of DID / credential consortia, the introduction of payment flows via the consortium (acting as a clearing house to minimise the payment graph, assuming appropriate levels of privacy) provides the opportunity to generate revenue for said consortia without needing to default to charging membership fees.

As above, these examples may be oversimplified. However, they illustrate the opportunities created by the monetisation of credentials.

Conclusion

As DID / credential ecosystems grow and mature, monetising credentials directly and embedding financial products into the credential flow presents a significant opportunity for companies to reshape and / or expand their revenue streams:

  • For issuing organisations: new revenue stream
  • For DID / credential vendors: alternate commercial structure for SaaS, charging against value of the credentials
  • For consortia: alternate funding option, using yield on capital

Furthermore, monetising credentials requires calculating or assigning value to these credentials. As demand grows, firms will be able to use analytics to uncover hidden patterns and trends that reveal other monetisable credentials or data. 

Ultimately, by embracing data monetisation, innovative financial products and analytics, companies can generate new or increased revenue whilst positioning themselves in a new and dynamic market.

cheqd and YourD Announce Strategic Partnership to Drive Web 3 Data Ownership

The path to Web3 should be built on trust and made as seamless as possible. cheqd is excited to team up with YourD (Your Data), a Korean Web3 RegTech infrastructure provider specialising in decentralised identity authentication and data management, to bring this vision to life. This partnership simplifies how individuals take control of their identity in Web3, fostering a more user-centric and privacy-first digital ecosystem.

By combining YourD’s expertise in decentralised authentication with cheqd’s robust decentralised identity (DID) and credential payment infrastructure, we empower businesses and developers to build trusted Web3 applications that offer seamless user authentication, secure payments, and data ownership.

A Strategic Alignment to Bridge DID and Payment

This collaboration is built on a shared vision and technological synergy. YourD provides a suite of Web3 solutions, including passwordless authentication and privacy-preserving analytics. Its dedication to enabling true data ownership aligns with cheqd’s mission to create trusted, decentralised identity solutions.

cheqd, on the other hand, leverages decentralised identity, verifiable credentials, and trust registries to provide a first-of-its-kind credential payment model that allows issuers to monetise credentials sustainably. Through the integration of zero-knowledge proofs, cheqd ensures privacy by enabling selective disclosure, offering a trusted, decentralised, scalable, and privacy-first framework for identity verification and data management.

By integrating with cheqd, YourD strengthens its ability to deliver user-controlled authentication and transaction models that comply with global privacy standards.

Together, cheqd and YourD are setting a new standard for identity and payments in Web3, equipping businesses and developers with the necessary tools to build trust-driven digital ecosystems.

What’s Involved in the Partnership?

The collaboration focuses on integrating YourD’s authentication and data services with cheqd’s DID and credential payment infrastructure to enhance interoperability, monetisation, and compliance in Web3 identity and payments.

  • DID-Based Authentication
    • YourD will integrate cheqd’s DID and verifiable credential (VC) issuance capabilities as a VDR (Verifiable Data Registry) option within the YourD platform, enabling seamless authentication for businesses and users.
    • Enable privacy-preserving authentication by supporting selective disclosure and zero-knowledge proof (ZKP) verification using cheqd’s infrastructure.

  • Decentralised Data Management

    • Expand the adoption of cheqd’s DID network in APAC markets through YourD, enhancing interoperability and user-controlled data ownership.
    • Develop a joint data governance and interoperability framework to support secure data verification and credential management.
    • Leverage cheqd’s DID-based trust registries to manage identity verification and credential revocation within the YourD ecosystem.

  • Web3 Payment Models

    • Integrate cheqd’s credential payment model into YourD’s authentication and verification services, allowing issuers to monetise credentials sustainably.
    • Explore DID-based Web3 financial solutions, including cross-border transactions, rewards, and voucher monetisation via cheqd’s payment infrastructure.
  • Adherence to Global Data Privacy Regulations

    • Ensure compliance with international standards such as GDPR, eIDAS 2.0, and APAC-specific data privacy laws through cheqd’s compliance-focused infrastructure.
    • Collaborate on the development of privacy-preserving solutions for businesses and governments by leveraging YourD’s decentralised authentication and cheqd’s selective disclosure mechanisms.

Shaping the Future of Digital Identity and Payments

Maintaining control over identity and transactions should be a right, not a privilege. cheqd and YourD are committed to realising this vision and are excited about the potential this partnership holds to push the boundaries of innovation in decentralised identity and payment solutions.

Businesses and developers seeking to build trust-driven Web3 applications can now leverage our integrated solutions to offer seamless authentication, secure payments, and verifiable data ownership.

About YourD

YourD (YourData) is a Korean Web3 RegTech infrastructure provider specialising in decentralised identity authentication and data management. Through its interoperable, privacy-first authentication solutions, YourD empowers users to control their digital identities while enabling businesses to adopt decentralised identity technology with ease.

YourD facilitates passwordless authentication, privacy-preserving analytics, and verifiable credential-based voucher systems, supporting diverse use cases across finance, retail, and enterprise sectors. Businesses leveraging YourD’s infrastructure can seamlessly implement next-generation identity management and trust-based authentication models in Web3 applications.

About cheqd

cheqd is the trust and payment infrastructure enabling the creation of eID, digital credential businesses, trust ecosystems, and personalised AI. We provide privacy-preserving payments for data to incentivise its release from data silos, enabling previously impossible data combinations and unlocking new user experiences and personalised AI.

We provide bespoke network offerings and support multiple credential formats to underpin identity frameworks such as eIDAS 2.0 in Europe, and beyond. Our industry leading Trust Registries allow ecosystems to gate and govern themselves, creating permissioned, permeable, or permissionless trusted data ecosystems and trusted verified AI agents.