Imagine a world humming with AI agents – digital helpers managing your schedule, booking travel, executing transactions, even collaborating with each other. This future is rapidly approaching. But alongside the excitement comes a fundamental challenge: in a world increasingly reliant on autonomous AI, how do we establish trust? How do we know an agent is what it claims to be, that it’s operating securely, or that it has the right permissions for the tasks it performs?
To meet this challenge head-on, cheqd has developed a pioneering way to build verifiable trust directly into AI interactions. We are introducing cheqd’s Agentic Trust Solution, powered by the emerging Model Context Protocol (MCP). This new system provides AI agents with verifiable digital credentials, allowing their identities and permissions to be cryptographically proven. By ensuring AI agents operate securely and accountably, we aim to foster an AI ecosystem that is both powerful and trustworthy.
Without clear answers, we risk building our AI future on shaky ground. At cheqd, we have been focused on building Verifiable AI (vAI) solutions for these emerging issues for over a year. From our market research and partnerships with AI companies (some existing, some prospective), we understand that one of the earliest problems to tackle is giving users and AI agents a way to know which AI agents can be trusted. This challenge takes several forms, which we explore below.
Why has this been hard to bring to life until now?
Beyond just the questions of agentic trust, we’ve also been exploring scoped, granular permissions for AI agents (e.g., for an AI agent to “use my credit card to book a restaurant reservation, but NOT go on a spending spree on anything else”) and auditing. The hard part has been to translate this into enforceable and repeatable instructions that can be followed by a wide variety of AI apps and agents, without building a proprietary protocol to interact with any kind of decentralised identity network.
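To make the idea of scoped, granular permissions concrete, here is a minimal sketch of what enforcing such a grant could look like. None of these class or function names come from cheqd's tooling; they are invented purely to illustrate the "book a restaurant, but nothing else" constraint.

```python
from dataclasses import dataclass, field

# Hypothetical model of a scoped permission grant. The field names and
# structure are illustrative only, not part of any real cheqd API.
@dataclass
class PermissionGrant:
    action: str                                       # e.g. "book_restaurant"
    max_spend: float                                  # spending cap
    merchants: set[str] = field(default_factory=set)  # allowed merchant types

def is_allowed(grant: PermissionGrant, action: str,
               amount: float, merchant: str) -> bool:
    """Return True only if the requested action fits entirely inside the grant."""
    return (
        action == grant.action
        and amount <= grant.max_spend
        and merchant in grant.merchants
    )

grant = PermissionGrant(action="book_restaurant", max_spend=150.0,
                        merchants={"restaurant"})

# A modest restaurant booking fits the grant; a shopping spree does not.
print(is_allowed(grant, "book_restaurant", 80.0, "restaurant"))  # True
print(is_allowed(grant, "purchase", 80.0, "electronics"))        # False
```

The point of the sketch is that a grant becomes a machine-checkable object rather than a prompt the agent may or may not follow.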
Recent advancements in this field have made it practically feasible for AI agents to understand and process digital identity. That’s why cheqd is taking a pioneering step forward, extending our vision of Verifiable AI into this new frontier. We’re thrilled to introduce cheqd’s Agentic Trust Solution, one of the world’s first digital identity toolkits to weave verifiable trust directly into the fabric of AI agent interactions.
The breakthrough: Model Context Protocol (MCP) gives AI agents access to tools
A key catalyst for this shift is the emergence of the Model Context Protocol (MCP), which was announced by Anthropic (the creators of Claude AI) in November 2024. Understanding a bit about MCP clarifies how verifiable trust can be technically achieved in AI interactions. It provides a much-needed common language for AI models, agents, and external tools to communicate context and capabilities securely. It’s the interoperability layer the ecosystem desperately needed.
AI agents and apps need access to tools and data sources to do their work. Before MCP was released, developers typically handled such integrations by writing complex prompts for AI agents/LLMs and hoping the instructions were followed rather than misinterpreted. Such prompts are complicated to write and hard to repeat consistently. The alternative was to build custom integrations for a specific AI app, ChatGPT plugins, or frameworks such as LangChain; MCP is simpler than both approaches, and the result is portable across platforms.
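The pattern MCP standardises can be sketched in a few lines: a tool is declared with a machine-readable name and input schema, so a model calls it by contract instead of following free-form prompt instructions. This is not the real MCP SDK; the registry, decorator, and stub resolver below are invented for illustration.

```python
# Simplified sketch of schema-declared tools, the pattern MCP standardises.
# This is NOT the actual MCP SDK; all names here are illustrative.
TOOLS: dict = {}

def tool(name: str, description: str, input_schema: dict):
    """Register a handler under a declared name and JSON-style input schema."""
    def register(fn):
        TOOLS[name] = {"description": description,
                       "input_schema": input_schema,
                       "handler": fn}
        return fn
    return register

@tool(
    name="resolve_did",
    description="Resolve a Decentralized Identifier to its DID document",
    input_schema={"type": "object",
                  "properties": {"did": {"type": "string"}},
                  "required": ["did"]},
)
def resolve_did(did: str) -> dict:
    # A real server would query a network; here we return a stub document.
    return {"id": did, "verificationMethod": []}

def call_tool(name: str, arguments: dict) -> dict:
    """Dispatch a tool call the way a client routes requests to a server."""
    return TOOLS[name]["handler"](**arguments)

doc = call_tool("resolve_did", {"did": "did:cheqd:mainnet:example"})
print(doc["id"])  # did:cheqd:mainnet:example
```

Because the capability is declared once as data, any client that understands the protocol can discover and invoke it, which is exactly the interoperability prompts could not deliver.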
MCP had been gaining a lot of traction amongst early adopters and developers, but was so far restricted to “MCP Clients” (user software) such as Claude Desktop that supported the protocol. Within the last week, however, OpenAI announced it would support MCP in the OpenAI SDK and the ChatGPT Desktop app, and Sundar Pichai, Alphabet’s CEO, hinted that Google would support the specification as well.
Enter cheqd's Agentic Trust Solution for AI agents and developers
Within the past few months, many MCP “Servers” (tools that AI apps can use to connect to particular sources, such as Google Search, GitHub, etc.) have been created. While MCP itself provides the communication rails, it doesn’t inherently solve the trust problem — that’s where cheqd’s Agentic Trust Solution comes in.
Our approach uses MCP as the communication backbone to enable Decentralized Trust Registries for AI agents. Think of it like a verifiable digital breadcrumb trail for AI identity and authorisation:
- Verifiable Credentials (VCs) for Agents: We empower AI agents to hold tamper-proof digital credentials. Think of these like highly secure digital ID cards or passports specifically for AI agents. Unlike a simple label, these credentials contain rich, verifiable claims — like a digital badge proving the agent’s developer, its safety audits, or specific permissions it has been granted (e.g., ‘authorized to access booking systems’) — issued by trusted authorities like developers, companies, or industry bodies like the Decentralized AI Agent Alliance (DAIAA), which cheqd recently joined.
- Trust Registries: These credentials and their issuers can be anchored and verified against Trust Registries. Imagine these as highly secure, public directories acting as sources of truth — instead of listing phone numbers, they might list trustworthy credential issuers (like the official body that certifies an agent) or the accredited agents themselves. Because these registries are stored in a decentralized way on the cheqd network, they aren’t controlled by any single company, making them resistant to censorship or gatekeeping and highly available.
- Verification Tools: cheqd has worked on software such as the TRAIN Universal Resolver from Fraunhofer (funded by the cheqd community’s decentralised governance), which will allow any user to verify an AI agent’s credentials by tracing them back to a trusted root issuer through this decentralised chain. Crucially, TRAIN allows blending decentralised as well as centralised trust chains (using DNS/X.509 certificates), providing a pragmatic model for AI developers to manage digital trust.
This creates a powerful, flexible system — all backed with innovations that cheqd has been building over the past few years, such as a highly scalable decentralised identity network and DID-Linked Resources, which allow these decentralised trust chains to be published. An agent might carry multiple credentials, proving different things to different parties, all verifiable through this trust chain.
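Conceptually, verifying an agent through such a trust chain means walking its accreditations upward until a trusted root is reached, as TRAIN-style resolution does. The registry contents, DIDs, and function below are invented to illustrate that walk, not taken from cheqd's implementation.

```python
# Illustrative trust registry: each subject DID maps to the DID of the
# authority that accredited it. All identifiers here are made up.
TRUST_REGISTRY = {
    "did:cheqd:agent-123": "did:cheqd:acme-ai-corp",
    "did:cheqd:acme-ai-corp": "did:cheqd:daiaa-root",
}
TRUSTED_ROOTS = {"did:cheqd:daiaa-root"}

def verify_chain(subject: str, max_hops: int = 10) -> bool:
    """Walk accreditations upward until a trusted root, or fail at a dead end."""
    current = subject
    for _ in range(max_hops):  # bound the walk to avoid cycles
        if current in TRUSTED_ROOTS:
            return True
        issuer = TRUST_REGISTRY.get(current)
        if issuer is None:
            return False
        current = issuer
    return False

print(verify_chain("did:cheqd:agent-123"))      # True: agent -> company -> root
print(verify_chain("did:cheqd:unknown-agent"))  # False: no accreditation found
```

A real resolver would also check signatures, revocation status, and (in TRAIN's case) DNS/X.509 anchors at each hop; the walk itself is the core idea.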
Leveraging cheqd's MCP Server tooling for AI apps and agents
Our MCP Server is one of the world’s first to enable AI agents to read and write Decentralized Identifiers (DIDs) — which are like permanent, unique digital addresses that the agents control themselves — and issue digital credentials. Developers can use cheqd’s MCP toolkit today to start building applications where AI agents natively manage and present their own DIDs and Verifiable Credentials.
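A DID of the kind these agents manage follows the W3C DID syntax, `did:<method>:<namespace>:<identifier>`; a `did:cheqd` identifier names the network namespace (such as `mainnet` or `testnet`) followed by a unique identifier. The parser below is a small illustrative sketch of that structure; the UUID in the example is made up.

```python
# Minimal parser for the did:cheqd identifier shape
# ("did:cheqd:<namespace>:<unique-id>"). Illustrative sketch only.
def parse_cheqd_did(did: str) -> dict:
    parts = did.split(":")
    if len(parts) != 4 or parts[0] != "did" or parts[1] != "cheqd":
        raise ValueError(f"not a did:cheqd identifier: {did}")
    return {"method": parts[1], "namespace": parts[2], "identifier": parts[3]}

parsed = parse_cheqd_did("did:cheqd:mainnet:6ff2e8a6-7d0e-4b1a-9e3a-0a1b2c3d4e5f")
print(parsed["namespace"])  # mainnet
```

Because the agent controls the keys behind its DID, it can prove ownership of this address and present credentials issued to it without relying on any central account provider.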
While this early release is developer-focused, MCP as a specification itself has been rapidly evolving. Currently, it requires developer skills to run a Docker container on your desktop. As recently as a week ago, core contributors announced that MCP would support remote MCP servers — which would allow any user, without developer experience or knowledge, to authenticate their account with services that add a data source. We plan to fast-follow with the MCP community to add support for a remote MCP Server that allows any AI app or agent to easily interact with decentralised identity networks such as cheqd.
What comes next: Building the future of Verifiable AI
Our vision for the Agentic Trust Solution doesn’t just end with Trust Registries for AI agents; it is about enabling responsible innovation for the field of AI. By providing verifiable answers to questions of identity, capability, and authorisation, we plan to expand our MCP tooling to support:
- Trusted AI agent/app trust chains: Engage with AI agents with greater confidence, knowing their claims can be verified, and enable AI agents to perform the same verification themselves. We will work with industry and alliance partners to establish these trust chains.
- Enhanced accountability: Establish clearer lines of responsibility with granular permissions for what you have and have not allowed an agent to do. We plan to implement these by expanding the types of credentials an AI agent holds, from credentials issued to the agent itself (e.g., “Claude AI was created by Anthropic, Inc.”) to credentials describing specific relationships between AI agents and their users (e.g., “this instance of Claude Desktop on Ankur’s laptop has been allowed to complete these tasks”). Every AI agent will come with its own identity wallet.
- Trust in content: AI agents will not only produce generated images, video, and audio but also consume AI-generated media. Enabling AI agents to understand which content they can trust, through Content Credentials, will become increasingly important as we rely on them more.
- True interoperability: Thanks to specifications like MCP, enable any AI agent ecosystem to consume and write to decentralised identity networks.
- New possibilities: Enable more complex and sensitive tasks to be delegated to AI agents, knowing robust trust mechanisms are in place. Our vision is that this will expand well beyond simple proof-of-humanity.
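The two credential types described above (one issued to the agent itself, one binding an agent instance to a user's delegation) could loosely take the following shape, following the general structure of the W3C Verifiable Credentials data model. All DIDs, type names, and claim fields here are invented for illustration.

```python
# Sketch of the two credential flavours, loosely shaped like W3C Verifiable
# Credentials. Every identifier and claim below is a made-up example.
identity_credential = {
    "type": ["VerifiableCredential", "AgentIdentityCredential"],
    "issuer": "did:cheqd:anthropic-example",          # the agent's creator
    "credentialSubject": {
        "id": "did:cheqd:claude-agent-example",
        "createdBy": "Anthropic, Inc.",
    },
}

delegation_credential = {
    "type": ["VerifiableCredential", "AgentDelegationCredential"],
    "issuer": "did:cheqd:user-ankur-example",         # the delegating user
    "credentialSubject": {
        "id": "did:cheqd:claude-desktop-instance-example",
        "allowedTasks": ["book_restaurant", "manage_calendar"],
    },
}

def permits(credential: dict, task: str) -> bool:
    """Check whether a delegation credential allows a specific task."""
    return task in credential["credentialSubject"].get("allowedTasks", [])

print(permits(delegation_credential, "book_restaurant"))  # True
print(permits(delegation_credential, "transfer_funds"))   # False
```

In practice both credentials would be cryptographically signed by their issuers and checked against a trust registry; the structures above only show how identity claims and delegation claims stay separate.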
We are actively developing and refining the Agentic Trust Solution, defining credential schemas, enhancing our MCP tools, and collaborating with partners across the ecosystem. The latest developments can be tracked on our roadmap, and the Verifiable AI use case is discussed in a dedicated section. The rise of AI agents is undeniable. By embedding verifiable trust from the outset, using open standards and decentralised principles, we can ensure this powerful technology evolves in a way that is safe, accountable, and ultimately beneficial for everyone.
Ready to build trust for your AI Agents?
- Explore the concepts behind Decentralised Trust Chains for AI agents
- Dive into our Github repo for cheqd MCP Toolkit and learn how to set up a cheqd MCP Server for development
- Contact us at product@cheqd.io if you’re an AI developer or company that wants to learn more about using the Agentic Trust Solution