Why cheqd is Changing from Trust Registry to Trust Graph

In the early days of digital identity and SSI, Trust Registries were a great starting point to prove who we are online. They acted like verified lists, showing which organisations or issuers could be trusted to issue credentials. Simple and effective for where the ecosystem was back then.

But the world has changed. We’re now entering an era of AI agents, decentralised systems, and constantly shifting digital relationships. In this environment, static lists just don’t cut it anymore. Trust isn’t something you tick off once; it’s something that needs to grow, adapt, and be reverified as people, organisations, and agents interact.

That’s why cheqd is moving from Trust Registries to Trust Graphs. Instead of a simple list, a Trust Graph maps the rich, living connections between people, organisations, and AI agents, showing not just who is trusted, but how, why, and to what degree. It’s a more dynamic and connected way to represent trust in the digital age.

The Limitations of Traditional Registries

Most of the trust systems on the market today are built around the idea of registries: structured lists that record which entities or issuers can be trusted. Registries are static by nature. Once someone or something is added, they stay there until a human updates the entry. There’s no automatic way to show how trust changes, how strong that trust is, or how it connects to others. They’re also isolated: each registry typically sits in its own silo, without the ability to connect to others or share context across ecosystems.

This creates a binary trust model, meaning you’re either trusted or you’re not. There’s no room for reputation or situational context. That’s a serious limitation, especially for AI ecosystems that rely on continuous verification and fluid collaboration.

How cheqd’s Trust Graph Goes Beyond the Market

cheqd’s Trust Graph takes things to a completely different level.

A Trust Graph is a living, interconnected network that shows how trust flows between people, organisations, and AI agents. It maps the relationships, hierarchies, permissions, and reputation signals between them. Think of it as a constantly evolving web of trust rather than a static spreadsheet.

Here’s what makes it stand out:

  • Dynamic by design: Trust Graphs adapt automatically as credentials are issued, verified, or revoked. This means trust is always up to date, without relying on manual updates or static approvals.
  • Hierarchical and contextual: They reflect real-world structures, like organisation → team → AI agent, showing who delegates authority to whom. Trust can cascade or be limited based on context, whether for compliance or access control.
  • Cross-connected: Multiple Trust Graphs can link up, meaning one company’s graph can connect with another’s, forming a broader network of verified relationships. This creates federated or decentralised trust across industries and ecosystems.
  • Trust and reputation scores: Because the graph keeps track of verification history and interactions, it can generate trust scores and reputation indicators. These can even be embedded directly into digital identifiers, helping systems instantly assess credibility.
  • Fluid and scalable: In AI environments where agents are constantly being created, updated, or retired, the graph structure naturally scales. It’s flexible enough to grow as ecosystems expand, without losing accuracy or control.
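To make the contrast with a flat registry concrete, here is a minimal sketch of how a trust graph’s nodes and edges might be modelled. The types, field names, and traversal logic below are illustrative assumptions for this article, not cheqd’s actual schema.

```typescript
// Hypothetical trust graph data model (illustrative only, not cheqd's schema).

type EntityKind = "organisation" | "team" | "ai-agent" | "person";

interface TrustNode {
  did: string;  // decentralised identifier, e.g. "did:cheqd:mainnet:..."
  kind: EntityKind;
}

// An edge is a directed, contextual trust relationship: who trusts whom,
// for what, how strongly, and on what evidence.
interface TrustEdge {
  from: string;      // DID of the trusting party
  to: string;        // DID of the trusted party
  scope: string[];   // what the trust covers, e.g. ["issue:KYC"]
  weight: number;    // 0..1 reputation/strength signal
  evidence: string[]; // credential IDs backing this edge
  expires?: string;  // ISO date; trust can lapse and be reverified
}

interface TrustGraph {
  nodes: Map<string, TrustNode>;
  edges: TrustEdge[];
}

// Unlike a registry lookup ("is X on the list?"), a graph query asks whether
// a trust path exists for a given scope, e.g. organisation -> team -> agent.
function isTrusted(g: TrustGraph, from: string, to: string, scope: string): boolean {
  const seen = new Set<string>([from]);
  let frontier = [from];
  while (frontier.length > 0) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const e of g.edges) {
        if (e.from === node && e.scope.includes(scope) && !seen.has(e.to)) {
          if (e.to === to) return true;
          seen.add(e.to);
          next.push(e.to);
        }
      }
    }
    frontier = next;
  }
  return false;
}
```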

Where current registries provide a snapshot of trust, cheqd’s Trust Graph delivers a living map of it. It brings nuance, adaptability, and intelligence to trust infrastructure and AI ecosystems.

The Competitive Edge of cheqd’s Trust Graph

What makes cheqd’s Trust Graph shine is how it ties everything together. It combines the best parts of decentralised identity, verifiable credentials, and trust infrastructure into one flexible, future-ready framework.

On the infrastructural level, the Trust Graph integrates Verifiable Credentials, Trust Registries, and DID-linked Resources into a single network. Every relationship, permission, and credential is anchored by cryptographic proofs, ensuring integrity and verifiability across systems. And because it’s built on open standards like W3C and Trust over IP, it’s interoperable by design, meaning it can plug seamlessly into other ecosystems rather than locking you into one.

Here’s what gives cheqd’s Trust Graph a real edge:

  • AI-Ready
It’s built for the world of AI agents, where digital entities act, transact, and make decisions on behalf of humans or organisations. Each agent can hold its own wallet, receive credentials, and prove identity or capabilities securely through the graph. It’s the trust layer for the agentic web.
  • Hierarchical and Cross-Linked
    Trust Graphs represent complex, real-world relationships, like an organisation delegating authority to departments, partners, or AI agents. And since graphs can link across ecosystems, they support federated trust networks that scale globally.
  • Composable Trust
    Every organisation can define its own trust rules, frameworks, and policies, then connect them with others. This creates a flexible, composable system of trust that mirrors how collaboration works in real life — decentralised but connected.
  • Reputation Embedded in Identity
Trust and reputation are no longer abstract concepts. With cheqd, these attributes can be embedded directly into digital identifiers, powered by verifiable credentials and interaction history. This lets systems instantly gauge credibility without relying on static approvals.
  • Trust You Can Monetise
    Trusted data itself becomes an asset. Credentials and verification events can be packaged, shared, and monetised securely, opening up new streams to create business value.
  • Future-Proof and Interoperable
As digital identity, AI, and reputation systems develop, cheqd’s Trust Graph is built to adapt and scale alongside them. Based on open standards, it’s fully interoperable and ready to evolve with whatever comes next.

cheqd’s Trust Graph doesn’t just record who’s trusted. It enables ecosystems to build, prove, and monetise trust at scale. It’s the missing infrastructure that makes the next generation of AI and identity truly verifiable.

Redefining How Trust Works

cheqd’s decision to shift from Trust Registries to Trust Graphs marks a major change from static lists of verified entities to living, breathing networks of trust that adapt in real time.

With our offerings, organisations can move beyond simple verification to build trust networks that grow smarter with every interaction, linking humans, agents, and organisations through verifiable, monetisable trust data.

Get in touch at [email protected] to learn more about how to build your own Trust Graph with cheqd. 

How Organisations Can Create Responsible AI Agents with cheqd

AI agents are doing more on their own these days, and that freedom comes with a catch: someone has to make sure they’re acting responsibly. If there’s no clear way to define what an agent can do or to check what it actually does, things can go wrong fast, from mistakes to misuse or even breaches of trust.

Giving agents built-in responsibilities changes the game. With verifiable credentials, identification, permissions, and accreditations, organisations can set clear rules for each agent and check that those rules are being followed. It’s not simply about control; it’s about making their actions transparent and verifiable.

At cheqd, we’ve built the foundation that makes this possible. Our technology lets organisations assign identities, permissions, and accreditations to agents and verify them through trusted registries. That foundation means AI agents can operate in a way that organisations and their partners can trust, with accountability baked in from the ground up.

The Building Blocks of Responsible AI Agents

Building accountable AI agents requires clear definitions of what an agent can do and a way to verify that it’s doing it right. There are a few components that enable this level of responsibility:

First comes agent identification. This is the anchor point for everything that follows. It establishes a unique, verifiable identity for each agent. No duplicates, no fakes, no anonymous actors slipping through the system. With a secure identity in place, every action the agent takes can be linked back to a real, traceable entity.

Question it answers: Who is this agent?
Example: This is SupportBot #92 created by Company X.
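As a tangible (and entirely hypothetical) illustration, an agent identity could be represented by a DID and a minimal DID-document shape like the one below; the identifier, keys, and field values are made up for the example.

```typescript
// Illustrative only: a made-up DID and a minimal DID-document shape for an
// agent identity (not an actual on-ledger record).

interface AgentIdentity {
  did: string;         // unique, verifiable identifier for the agent
  controller: string;  // DID of the organisation that created it
  verificationMethod: {
    id: string;
    type: string;      // e.g. "Ed25519VerificationKey2020"
    publicKeyMultibase: string;
  }[];
}

const supportBot92: AgentIdentity = {
  did: "did:cheqd:mainnet:7f39e2a1-example",        // hypothetical identifier
  controller: "did:cheqd:mainnet:company-x-example", // hypothetical org DID
  verificationMethod: [{
    id: "did:cheqd:mainnet:7f39e2a1-example#key-1",
    type: "Ed25519VerificationKey2020",
    publicKeyMultibase: "z6Mk-example",
  }],
};
```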

Then come agent accreditations. Think of these as the agent’s credentials or badge of authority. They show that the agent is qualified to perform certain tasks and, in some cases, even to issue credentials itself. Giving an agent the right accreditation is a way to make sure only trusted, capable agents are handling important functions, which helps prevent errors or misuse.

Question it answers: What is this agent trusted or qualified to do?
Example: This agent is accredited as an identity verifier.
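Here is a hedged sketch of what such an accreditation might look like as a verifiable credential, reusing the hypothetical DIDs from the identification example above; the schema is illustrative, not a formal cheqd credential type.

```typescript
// Hypothetical accreditation credential (field names are illustrative).

interface AccreditationCredential {
  type: string[];
  issuer: string;  // DID of the accrediting authority
  credentialSubject: {
    id: string;               // DID of the accredited agent
    accreditedFor: string[];  // tasks the agent is trusted to perform
  };
  proof: { type: string; jws: string };  // cryptographic signature
}

const identityVerifierBadge: AccreditationCredential = {
  type: ["VerifiableCredential", "AgentAccreditation"],
  issuer: "did:cheqd:mainnet:company-x-example",
  credentialSubject: {
    id: "did:cheqd:mainnet:7f39e2a1-example",
    accreditedFor: ["identity-verification"],
  },
  proof: { type: "Ed25519Signature2020", jws: "eyJhbGci-example" },
};
```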

Next come permissions. These are essentially the guardrails that keep an agent on track. They define what an agent is allowed and not allowed to do. With clear permissions in place, agents can operate safely within their boundaries, while organisations maintain oversight without needing to monitor every move.

Question it answers: What is this agent allowed to do right now in this context/system?
Example: This agent can verify users but cannot delete accounts.
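Continuing the same hypothetical example, a permission credential might scope an agent like this; again, the field names are assumptions for illustration.

```typescript
// Hypothetical permission credential: allow/deny guardrails for one agent
// in one context (illustrative field names).

interface PermissionCredential {
  issuer: string;  // DID of the organisation granting permission
  credentialSubject: {
    id: string;        // DID of the agent
    allowed: string[]; // e.g. ["verify:user"]
    denied: string[];  // e.g. ["delete:account"]
    context: string;   // the system these permissions apply to
  };
}

const supportBotPermissions: PermissionCredential = {
  issuer: "did:cheqd:mainnet:company-x-example",
  credentialSubject: {
    id: "did:cheqd:mainnet:7f39e2a1-example",
    allowed: ["verify:user"],
    denied: ["delete:account"],
    context: "support-portal",
  },
};
```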

All of these are anchored by trust registries. Think of them as an official record where all identifications, accreditations, and permissions are stored and verified. Whenever there’s a question about whether an agent is authorised to take an action, the trust registry provides the answer in real time. It keeps the entire system accountable and verifiable.
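Conceptually, the registry’s job can be reduced to one question, sketched below with a hypothetical interface (not cheqd’s actual API): is this agent authorised for this action right now?

```typescript
// Sketch of the registry's role as a real-time lookup and verification layer.
// The interface and record shape are hypothetical.

interface PermissionRecord {
  agentDid: string;
  allowed: string[];
  denied: string[];
  revoked: boolean;
}

interface TrustRegistry {
  // Authoritative records currently on file for an agent's DID.
  recordsFor(agentDid: string): PermissionRecord[];
}

function isAuthorised(registry: TrustRegistry, agentDid: string, action: string): boolean {
  return registry.recordsFor(agentDid).some(
    r => !r.revoked && r.allowed.includes(action) && !r.denied.includes(action),
  );
}
```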

Put together, these building blocks create AI agents that are more than smart: they’re responsible, accountable, and trustworthy. Organisations can let agents operate in complex environments while knowing there’s a system in place to verify every step they take.

Current Capabilities of cheqd

At cheqd, we provide the foundational infrastructure for accountable AI agents, enabling organisations to define and verify what their agents are allowed to do. Everything begins with identification. Before permissions or accreditations can be assigned, an AI agent must be uniquely identified so its actions can be linked back to a verifiable entity. Using Decentralized Identifiers (DIDs), cheqd ensures every agent has a secure, tamper-proof identity that can be proved on demand. This prevents anonymous or spoofed agents from operating in the system and establishes a root of trust for everything that follows.

Organisations can then assign accreditations to agents using Verifiable Credentials, formally recognising their capabilities and authority. These cryptographically signed credentials allow agents to prove they are trusted to perform specific tasks or issue credentials themselves. Accreditations can be chained and traversed back to a Root of Trust, such as a governing authority for a particular ecosystem. This creates a verifiable baseline of trust.
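A minimal sketch of that chain traversal, assuming a hypothetical resolver for fetching an issuer’s own accreditation (this is not a real cheqd SDK call):

```typescript
// Hedged sketch: walking an accreditation chain back to a root of trust.
// "fetchAccreditation" is a hypothetical resolver.

interface Accreditation {
  subject: string;  // DID that holds this accreditation
  issuer: string;   // DID that issued it
  parent?: string;  // reference to the issuer's own accreditation
}

async function chainsToRoot(
  leaf: Accreditation,
  rootDid: string,
  fetchAccreditation: (ref: string) => Promise<Accreditation | null>,
): Promise<boolean> {
  let current: Accreditation | null = leaf;
  const seen = new Set<string>();  // guard against cycles
  while (current) {
    if (current.issuer === rootDid) return true;  // reached the governing authority
    if (seen.has(current.issuer) || !current.parent) return false;
    seen.add(current.issuer);
    current = await fetchAccreditation(current.parent);
  }
  return false;
}
```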

In addition, cheqd enables organisations to assign permissions to agents using the same verifiable credential model, specifying the scope of actions they are allowed to perform and under what conditions. Through cheqd’s Model Context Protocol (MCP), these accreditations and permissions are queryable. An agent can be challenged with a “/whois” query in real time: “Identify yourself and prove what you’re allowed to do.” The agent must respond with cryptographic proof tied to its DID-based identity and permission credentials, giving organisations a way to validate authorisation before execution rather than analysing actions after the damage is done.
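The shape of that exchange might look roughly like the sketch below. The message structures and helper functions are assumptions for illustration; the actual protocol payloads may differ.

```typescript
// Illustrative "/whois" challenge-response. Message shapes and helpers here
// are assumptions, not the actual protocol payloads.

interface WhoisChallenge {
  query: "/whois";
  nonce: string;  // random value the agent must sign
}

interface WhoisResponse {
  did: string;                      // the agent's DID-based identity
  permissionCredentials: object[];  // verifiable permission credentials
  signature: string;                // proof over the nonce, tied to the DID's keys
}

// Validate authorisation *before* the agent executes anything,
// rather than auditing after the fact.
async function challengeAgent(
  send: (c: WhoisChallenge) => Promise<WhoisResponse>,
  verifySignature: (r: WhoisResponse, nonce: string) => Promise<boolean>,
): Promise<WhoisResponse | null> {
  const nonce = crypto.randomUUID();
  const response = await send({ query: "/whois", nonce });
  return (await verifySignature(response, nonce)) ? response : null;
}
```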

Our trust registry allows organisations to query and verify agent identities, accreditations, and permissions, ensuring that any claim made by an agent can be checked against an authoritative record in real time. The trust registry acts as a lookup and verification layer, ensuring that issued credentials are valid, not revoked, and assigned by a trusted issuer.

By focusing on this foundational layer, cheqd lays the groundwork for organisations and partners to build responsible AI systems. While we provide the core infrastructure for accountability, partners can expand on this foundation to implement guardrails, logging, agent-to-agent communication, behaviour monitoring, and regulatory reporting, creating fully auditable and responsible AI ecosystems.

What Our Partners Could Enable with cheqd

While cheqd provides the foundation for embedding trust and accountability into AI agents, the broader ecosystem will evolve as additional capabilities are built on top. Our partners are exploring how they could enable advanced responsibility and control mechanisms by leveraging the permissions and accreditations anchored on cheqd’s trust infrastructure. These capabilities are not yet supported today, but cheqd enables the pathway for them to emerge.

By taking the verified permissions attached to an AI agent, partners could build guardrails around agent behaviour, ensuring that an agent operates strictly within the scope of what it has been authorised to do. This would move accountability from theory to enforcement. In addition, partners could enable agent-to-agent communication, where agents not only interact but also verify one another’s permissions before collaborating. This would introduce trusted delegation between autonomous systems.
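As a speculative sketch of what such a partner-built guardrail could look like, here is a check that blocks any action outside an agent’s verified permissions; the interfaces are hypothetical.

```typescript
// Speculative sketch of a partner-built guardrail: refuse any action
// outside the agent's verified permissions (hypothetical interfaces).

type Action = { name: string; context: string };

function guardrail(
  action: Action,
  permissions: { allowed: string[]; denied: string[]; context: string },
): "execute" | "block" {
  const inScope =
    permissions.context === action.context &&
    permissions.allowed.includes(action.name) &&
    !permissions.denied.includes(action.name);
  return inScope ? "execute" : "block";  // enforcement, not just policy on paper
}
```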

Another important future capability is logging and auditing of agent actions. Partners could build services that capture a transparent history of what an agent has done, providing organisations with traceability for compliance, troubleshooting, and accountability. Alongside this, partners could compare an agent’s actions against its assigned permissions, detecting unauthorised behaviour and preventing misuse or escalation risks.

cheqd’s infrastructure sets the groundwork for agents to prove who they are and operate with clear rules and accountability. While we don’t support these capabilities just yet, our ecosystem is moving in that direction, and we’re actively working with partners to bring this vision to life.

What Our Partners Are Working On

Across the ecosystem, many potential partners are already exploring how cheqd’s infrastructure can add verifiable trust and accountability to AI agents. Some of these capabilities are still in the early stages and aren’t fully supported yet, but they show just how much demand there is for trust infrastructure in AI. cheqd is evolving to meet these needs and working closely with partners to shape the next generation of features and functionality.

Some are exploring how trust graphs on cheqd can be used to assign permissions and accreditations to AI agents and build a framework for measurable accountability. Their vision is to derive a dynamic trust score for each agent based on these credentials, allowing organisations to instantly evaluate how reliable and compliant an agent is before interacting with it. This aligns strongly with cheqd’s roadmap, and we are collaborating with partners to ensure our infrastructure enables the scoring and policy logic needed in future releases.
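Purely as an illustration of the idea (no cheqd or partner formula is implied), a dynamic trust score could weight credential signals something like this:

```typescript
// Illustrative scoring idea only: weight an agent's credential signals
// into a 0..1 trust score. The weights are arbitrary assumptions.

interface AgentSignals {
  validAccreditations: number;   // count of unexpired, unrevoked accreditations
  revocations: number;           // credentials revoked against the agent
  verifiedInteractions: number;  // successful verification events
}

function trustScore(s: AgentSignals): number {
  const positive =
    s.validAccreditations * 0.4 + Math.min(s.verifiedInteractions, 100) * 0.005;
  const penalty = s.revocations * 0.3;
  return Math.max(0, Math.min(1, positive - penalty));  // clamp to [0, 1]
}
```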

Others are taking a different approach, focused on trust at the model level. Their goal is to embed trusted identifiers from cheqd directly into AI agent models, effectively fingerprinting them so that each model instance can be uniquely and verifiably traced. They plan to combine this with a marketplace for trusted AI agents, where provenance and accountability become discoverable properties. We are actively engaging with potential partners to understand how decentralised identity primitives can support verifiable AI provenance at scale.

These discussions demonstrate what becomes possible with trusted agent infrastructure and directly inform cheqd’s product evolution.

Building Fully Responsible AI Ecosystems

Trust and accountability need to be built into the very DNA of autonomous systems to make agents reliable and responsible. cheqd provides the foundational infrastructure to make this possible, allowing organisations to assign verifiable identities, permissions, and accreditations, creating a trusted baseline for how agents behave.

The true potential emerges when this foundation is combined with partner-driven innovations. cheqd and its partners are collaborating closely to shape fully auditable, responsible AI ecosystems, where every agent and action can be held accountable.

Reach out to cheqd at [email protected] to enable trust in your AI agents.