
AI Agent Frameworks: How to Plug Them into the cheqd Trust Framework

AI agents are developing from simple chatbots into capable assistants that can carry out valuable tasks and automate decision-making. As adoption accelerates, so does the need for a well-defined framework that governs how these agents operate.

The challenge isn’t just about capability; it’s about trust. How can we make sure AI agents are safe to use, compliant with legal requirements, and faithful to user intent?

cheqd’s Trust Framework offers a solution to this issue. It is a modular infrastructure for embedding trust directly into AI systems. Using Trust Registries, Accreditations, and Verifiable Credentials, it protects organisations, individuals, and other agents from scams while monetising trust as a new revenue stream.

This blog explores how you can integrate cheqd’s Agentic Trust solutions into your AI agent framework to create safer, more accountable systems at scale.

The Identity Crisis: Know Your Agent (KYA)

Despite their growing role in digital workflows, AI agents today have one fundamental flaw: they don’t have a verifiable identity. Anyone can deploy an agent. No one can prove who made it, what it represents, or whether it’s authorised to act on someone’s behalf.

The implications are already playing out in the wild. Fake AI bots mimicking banks or their customers, impersonating support teams, and harvesting sensitive data are surfacing in phishing campaigns and scam networks. So far, these incidents are relatively low value, but their sophistication and scale are accelerating fast.

This is why the concept of Know Your Agent (KYA) emerged. Just as financial services use Know Your Customer (KYC) and Know Your Business (KYB) mechanisms to prevent fraud and verify identity, KYA brings identity and accountability to the world of AI agents.

With cheqd’s trust infrastructure, every agent can be issued with verifiable credentials that answer key questions:

  • Who developed, trained, and deployed the agent?
  • Who does it represent, and is that relationship current?
  • Can it be trusted with sensitive information or tasks?

In short, KYA transforms agents from anonymous executors into auditable, accountable actors in your digital ecosystem.
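To make the KYA questions above concrete, here is a minimal sketch of what an agent credential might look like and how a relying party could check that the representation relationship is still current. The field names are illustrative assumptions, not the actual cheqd credential schema.

```python
from datetime import datetime, timezone

# Hypothetical KYA credential shape; field names are illustrative,
# not the documented cheqd schema.
agent_credential = {
    "issuer": "did:cheqd:mainnet:acme-corp",          # who deployed the agent
    "subject": "did:cheqd:mainnet:support-agent-42",  # the agent itself
    "claims": {
        "developedBy": "Acme Corp AI Lab",
        "representedParty": "did:cheqd:mainnet:acme-corp",
        "authorisedScopes": ["customer-support", "order-lookup"],
    },
    "expirationDate": "2026-01-01T00:00:00+00:00",
}

def is_relationship_current(credential: dict, now: datetime) -> bool:
    """Check whether the agent's representation claim is still valid."""
    expires = datetime.fromisoformat(credential["expirationDate"])
    return now < expires
```

A real verification would also check the issuer’s signature and revocation status; this sketch only covers the validity window.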

Introducing cheqd’s Agentic Trust Solution

cheqd’s Agentic Trust solution brings a flexible, standards-based approach to building trust into the AI agent ecosystem. Built on tried and tested decentralised identity technologies — already in use across financial services sectors — it enables AI agents to operate with verifiable identity, clear permissions, and transparent provenance.


At its core, the solution is made up of the following key components:

  • Agentic Trust Registries (Know Your Agent)
    Verify who built, trained, and deployed an agent. Understand who it represents, and whether that relationship is valid. These registries provide a discoverable, cryptographically-verifiable source of truth about agents and their associated trust frameworks.
  • Agent Credentials (Agent Permissions)
    Assign granular, role-based credentials to agents, defining what actions they are authorised to take and in what context. These can include compliance certificates, scopes of authority, or operational permissions.
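A permission check against such a credential can be sketched in a few lines. The claim name (`authorisedScopes`) and its shape are assumptions for illustration, not a cheqd-defined schema.

```python
# Minimal sketch of enforcing role-based agent permissions.
# The "authorisedScopes" claim name and structure are illustrative assumptions.
def is_action_authorised(credential_claims: dict, action: str, context: str) -> bool:
    """Allow an action only if the credential grants that scope in that context."""
    scopes = credential_claims.get("authorisedScopes", {})
    return action in scopes.get(context, [])

claims = {"authorisedScopes": {"customer-support": ["read-orders", "issue-refund"]}}
```

In practice the claims would come from a verified credential, not a local dict, but the gating logic is the same: deny by default, allow only what the credential explicitly grants.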

These capabilities are delivered via:

  • cheqd Studio: A no-code SaaS platform for creating trust registries for your AI agents.
  • SDKs: Integrate SDKs such as Credo to issue credentials to your agents.
  • TRAIN: A trust engine for verifying and validating trust registries for your agents.
  • Model Context Protocol (MCP): A tool to integrate decentralised identity seamlessly with existing AI/ML tooling, such as commonly used AI agents (Claude, ChatGPT, etc.).

By offering both SaaS (cheqd Studio) and SDK-based options, cheqd enables organisations to choose the level of control or simplicity that best fits their needs. The solution is also built to integrate smoothly with popular AI/ML tooling, making it easy for agents to interface with decentralised identity frameworks without significant changes to existing stacks.

Trust Registries for Agent Verification

As the backbone of our Agentic Trust solution, cheqd’s Trust Registries provide the infrastructure to anchor trust at any level in an AI agent’s lifecycle. These registries support multi-directional trust models:

  • Top-down: Start from a governance authority (e.g. a regulator or industry group) and discover all agents accredited under their framework.
  • Bottom-up: Begin with an individual agent and trace back to the organisations or frameworks that have accredited or endorsed them.
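The bottom-up model above can be sketched as a simple chain walk over an in-memory registry. In a real deployment each hop would resolve a DID’s accreditations on the cheqd network rather than from a local dict; the DIDs here are made up for illustration.

```python
# Illustrative in-memory accreditation graph: each entry maps an entity
# to the entity that accredited it. Real registries are resolved on-ledger.
accredited_by = {
    "did:cheqd:agent-1": "did:cheqd:acme-corp",
    "did:cheqd:acme-corp": "did:cheqd:auditor-x",
    "did:cheqd:auditor-x": "did:cheqd:governance-authority",
}

def trace_to_root(did: str) -> list[str]:
    """Follow accreditations upward until an unaccredited root is reached."""
    chain = [did]
    while did in accredited_by:
        did = accredited_by[did]
        chain.append(did)
    return chain
```

Starting from the agent, the walk surfaces every organisation that vouches for it, ending at the Root of Trust; the top-down model simply traverses the same graph in the opposite direction.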

cheqd Trust Registries are designed to accommodate a wide range of actors:

  • Governance Authorities: Governments, industry bodies, or consortiums.
  • Accredited Organisations: Auditors, certifiers, and security firms.
  • Trusted Issuers: Companies and entities vouching for their own agents.
  • Agents: Providing any type of service.


This layered structure offers various features and benefits:

  1. Impersonation-proof: All identifiers (e.g. DIDs for agents), Registries, Accreditations, and Credentials are cryptographically verifiable back to their authors.
  2. Flexible Attestations: AI agents can hold multiple attestations from different issuers and trust registries.
  3. Suspension, Revocation & Auditability: Attestations (held by agents) and Accreditations (held by organisations in the trust registry) can be temporarily or permanently revoked, all with an auditable history providing clear time periods and validity checks.
  4. Flexible Hierarchies: Trust registries can be flat or hierarchical as desired.
  5. Monetisable: Trust registries can be payment-gated to generate revenue for Governance Authorities or any other organisation enabling trust.
  6. Standards Compliant: Built using W3C and Trust over IP standards and specifications, in alignment with the European Blockchain Services Infrastructure. The DID-based architecture is being passed through CEN and ISO for standardisation.

cheqd’s Trust Registries ensure you don’t just take an agent’s word for it. You verify who stands behind it. Whether you’re a regulator, enterprise, or developer, this infrastructure lays the foundation for trustworthy, scalable agent ecosystems.

Plugging Agentic Trust into Your AI Framework

Integrating cheqd’s Agentic Trust infrastructure into your AI agent framework doesn’t require a complete overhaul. It’s modular, so you can adopt components progressively based on your needs and maturity stage.

Step 1: Set up your cheqd MCP server to create AI agent DIDs and issue Verifiable Credentials to AI agents. Follow the tutorial here.

Step 2: Assign Verifiable Identity – Start by giving your AI agents a Decentralized Identifier (DID). You can create these identifiers using cheqd’s DID method via the cheqd MCP server and its integration with Credo. Follow the tutorial here.
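For orientation, a did:cheqd identifier follows the pattern `did:cheqd:<network>:<unique-id>`. The sketch below only constructs an identifier string so you can see the shape; actually creating a DID means anchoring a DID Document with verification keys on the cheqd network via Credo or cheqd Studio, which this does not do.

```python
import uuid

# Illustrative only: this builds a did:cheqd-style identifier string.
# Real DID creation anchors a DID Document (with keys) on-ledger via
# Credo or cheqd Studio; nothing is registered here.
def sketch_agent_did(network: str = "testnet") -> str:
    """Build a did:cheqd-style identifier with a random UUID as the unique id."""
    return f"did:cheqd:{network}:{uuid.uuid4()}"

did = sketch_agent_did()
```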

Step 3: Build your Agentic Trust Registries – Design and build the trust registry for your AI agent. Use the cheqd Studio APIs to issue verifiable accreditations down the trust chain – starting from a Root of Trust (Governance Authority), to accredited organisations, all the way to the AI agent. Follow the tutorial here.
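A request to issue such an accreditation could be assembled as below. The field names and payload shape are assumptions for illustration only; consult the cheqd Studio API documentation for the actual endpoint and schema.

```python
import json

# Hypothetical payload for issuing a verifiable accreditation down the
# trust chain. Field names are illustrative assumptions, not the
# documented cheqd Studio API schema.
def build_accreditation_request(issuer_did: str, subject_did: str, schema_url: str) -> dict:
    return {
        "issuerDid": issuer_did,      # e.g. the Governance Authority's DID
        "subjectDid": subject_did,    # the organisation or agent being accredited
        "schemaUrl": schema_url,      # schema the accreditation conforms to
        "type": "VerifiableAccreditation",
    }

req = build_accreditation_request(
    "did:cheqd:testnet:root-authority",
    "did:cheqd:testnet:accredited-org",
    "https://example.org/schemas/accreditation",
)
payload = json.dumps(req)
```

The same call shape would be repeated at each layer of the chain: Governance Authority → accredited organisation → AI agent.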

Step 4: Import Agentic Credentials – Import a credential that references the trust registry into your AI agent using the cheqd MCP server and the Credo integration. Follow the tutorial here.

Step 5: Integrate into Agentic Workflows – Integrate identity and credential checks straight into agent workflows by utilising the Model Context Protocol (MCP) and the TRAIN framework. Use functionality such as the “whois” function to check an AI agent’s trust status; it returns a clear explanation of why the agent is trusted. Follow the tutorial here.
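A gate of this kind might look like the sketch below: before the workflow proceeds, the agent’s DID is looked up and a human-readable trust summary is returned. The response fields and registry shape are illustrative assumptions, not the actual TRAIN/MCP “whois” output.

```python
# Sketch of a "whois"-style trust check gating an agent workflow.
# Registry shape and response fields are illustrative assumptions,
# not the actual TRAIN/MCP output format.
def whois(agent_did: str, registry: dict) -> dict:
    """Summarise whether and why an agent is trusted."""
    entry = registry.get(agent_did)
    if entry is None:
        return {"trusted": False, "reason": "agent not found in any trust registry"}
    return {
        "trusted": entry["status"] == "active",
        "reason": f"accredited by {entry['accreditor']} ({entry['status']})",
    }

registry = {
    "did:cheqd:agent-1": {"accreditor": "did:cheqd:acme-corp", "status": "active"},
}
```

A workflow would call this before delegating a sensitive task and refuse to proceed when `trusted` is false.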

Step 6: Monetise Trust – If you’re a governance body, auditor, or trust issuer, you can monetise your position in the trust ecosystem. cheqd enables payment-gated registries and credentials issuance, giving you a new revenue stream based on the value of verified trust.

By following these steps, you gain a clear roadmap for transforming agents from black boxes into transparent, verifiable, and monetisable digital actors.

Building Trustworthy Agent Ecosystems

cheqd’s Agentic Trust framework offers a scalable, standards-based approach to embedding trust directly into the identity and actions of AI agents.

Our approach is built on recognised global standards, including W3C Verifiable Credentials, and is aligned with initiatives like the European Blockchain Services Infrastructure (EBSI). cheqd is also an active member of leading digital identity and AI governance communities, including the Decentralized Identity Foundation (DIF), Trust over IP, the INATBA Verifiable AI Working Group, the Coalition for Content Provenance and Authenticity, the Sovereign AI Alliance, the Decentralised AI Agent Alliance, and more.

Whether you’re building AI agents, governing their use, or integrating them into your operations, now is the time to lay a solid foundation. With cheqd, trust is something you can prove. Contact us at [email protected]
