Design AI Agents that Humans and the World Can Trust

Two practical guides for developers, designers, and policymakers building ethical, trustworthy, and relationship-aware AI systems.

Designing Human-Compatible AI
Relationship Guides for Trust and Accountability

AI agents are no longer just tools. They’re becoming decision makers, collaborators, and even companions in our digital and physical worlds.

Created through an interdisciplinary partnership between cheqd and SPRITE+, these two Relationship Guides help you design AI agents that balance trust, safety, accountability, and ethical responsibility.

Whether you’re building personalised AI assistants, IoT ecosystems, or environmentally aware autonomous systems, these resources give you the principles, checklists, and UX patterns to make your AI systems more trustworthy and human-compatible.

Please also read our experience report on co-creative socio-technical design for verifiable AI.

Are Today’s AI Agents Designed for Trust or Just for Output?

Too often, AI systems are optimised for speed and scale without considering the human and environmental relationships they affect.

These guides tackle that gap by offering:

  • Practical frameworks for trust, control, care, and lifecycle management
  • Design patterns for transparent communication, consent, and accountability (see the sketch after this list)
  • Developer checklists for embedding verifiability and ethical decision-making
  • UX guidance for making AI behaviour legible and user-controllable
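
To make the accountability pattern concrete, here is a minimal, hypothetical TypeScript sketch (not taken from the guides) of a decision record that keeps each agent action, its rationale, and the consent that authorised it in one place, so behaviour stays legible and auditable. The type and field names (ConsentRecord, AgentDecisionRecord, isConsentValid) are illustrative assumptions, not a published API.

```typescript
// Hypothetical data shapes (illustrative only) for recording an agent
// decision together with the consent that authorised it.
interface ConsentRecord {
  grantedBy: string;       // user or system that granted consent
  scope: string;           // what the consent covers, e.g. "calendar:read"
  grantedAt: Date;
  expiresAt?: Date;        // consent should be time-bound and revocable
}

interface AgentDecisionRecord {
  decisionId: string;
  action: string;          // human-readable description of what the agent did
  rationale: string;       // why the agent chose this action
  consent: ConsentRecord;  // the consent that authorised the action
  reversible: boolean;     // whether a human can undo the action
  loggedAt: Date;
}

// Guard: treat consent as invalid once it has expired.
function isConsentValid(consent: ConsentRecord, now: Date = new Date()): boolean {
  return consent.expiresAt === undefined || consent.expiresAt > now;
}
```

Keeping the consent reference and the rationale on the same record is one way to make an agent’s behaviour inspectable after the fact, rather than reconstructing it from scattered logs.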

What You’ll Learn from These Guides

Core relationship facets for ethical AI–human or AI–ecosystem interaction

How to design for consent, control, and accountability at both user and system level

Real-world examples from the Verifiable AI Hackathon to inspire your next build

Developer checklists to operationalise trust, safety, and responsibility

UX frameworks for making AI decision-making understandable and controllable

How to integrate Verifiable Credentials and Decentralised Identity as trust primitives (a minimal sketch follows)
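
To illustrate that last point, here is a hedged TypeScript sketch of using a Verifiable Credential as a trust primitive before an agent accepts instructions from a counterpart. The credential shape loosely follows the W3C data model; the names (VerifiableCredential, isTrustedCounterpart, the example cheqd DID) and the checks are illustrative assumptions, not the cheqd SDK’s API.

```typescript
// Simplified W3C-style Verifiable Credential shape (illustrative only;
// real credentials carry more fields and a cryptographic proof to verify).
interface VerifiableCredential {
  issuer: string;                      // DID of the issuer, e.g. "did:cheqd:..."
  credentialSubject: { id: string };   // DID of the holder
  expirationDate?: string;             // ISO 8601 timestamp
  proof?: { type: string };            // proof object; verification omitted here
}

// Issuers this agent is configured to trust (assumed to be set by the deployer).
const trustedIssuers = new Set<string>(["did:cheqd:mainnet:example-issuer"]);

// Gate: accept instructions only from counterparts presenting an unexpired
// credential from a trusted issuer. A production check would also verify the
// proof cryptographically and consult a revocation registry.
function isTrustedCounterpart(vc: VerifiableCredential, now: Date = new Date()): boolean {
  if (!vc.proof) return false;
  if (!trustedIssuers.has(vc.issuer)) return false;
  if (vc.expirationDate && new Date(vc.expirationDate) <= now) return false;
  return true;
}
```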

Human-Centred Relationship Guide

A practical guide for designing AI systems that embed trust, ethics, and human-centred design approaches. Best for teams developing consumer-facing AI systems and services.

Entangled Relationship Guide

For teams building AI agents operating within machine networks, ecosystems, or shared environments, where responsibility and ethical interdependence matter.

Get in touch

Partner with cheqd

If you are an SSI Vendor, Consultancy, Enterprise, Government agency, or a Web3 company, contact us for a discovery call so we can learn about your use case and problem statements, and get you set up with cheqd.
