The Verifiable AI Hackathon 2025, hosted by cheqd in collaboration with Dorahacks, Verida, and Sprite+, brought together builders, developers, and visionaries working at the intersection of AI, verifiable credentials, and decentralised infrastructure.
The hackathon was split into two main cheqd tracks:
- Agentic Economy & AI Agents
- Content Credentials & Others
Each track challenged participants to solve some of the biggest trust, provenance, and authenticity problems in AI and content workflows.
In addition, a special Verida Bounty was offered to projects that incorporated private, user-owned data via the Verida stack.
We received an impressive number of submissions. After reviewing every solution for creativity, technical skill, and practical use of the cheqd and Verida stacks, the winners have now been selected. Here’s a closer look at the top projects.
cheqd Main Track: Agentic Economy & AI Agents
🥇 First Place – Identone
Project: Identone
Description: Verifiable voice-based AI agent interactions
Submission: https://dorahacks.io/buidl/26280
How it works:
Identone allows users to interact with AI agents via voice in a way that is verifiable, traceable, and provable. The user and the AI agent both receive Verifiable Credentials to verify their identities. During a call, these credentials can then be seamlessly exchanged to prove that the parties on each side of the call are trusted and verified.
How it utilises the cheqd infrastructure:
- User and AI agent receive Verifiable Credentials signed by cheqd DIDs
- Any cheqd DIDs and associated DID-Linked Resources can be validated in real-time during a phone call to prove account ownership or AI agent authorisation.
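The call-time exchange above can be pictured as a mutual verification step: each side presents a credential, and each checks that the other's credential was signed by a trusted issuer. This is a minimal illustrative sketch — the `Credential` fields and the `TRUSTED_ISSUERS` registry are placeholders, not cheqd's actual API or Identone's implementation:

```python
from dataclasses import dataclass

# Hypothetical registry of issuer DIDs both parties accept.
TRUSTED_ISSUERS = {"did:cheqd:mainnet:issuer-identone"}

@dataclass
class Credential:
    subject_did: str   # DID of the caller or the AI agent
    issuer_did: str    # DID that signed the credential
    role: str          # e.g. "human-caller" or "ai-agent"

def verify(credential: Credential) -> bool:
    """A party is trusted if its credential comes from a known issuer."""
    return credential.issuer_did in TRUSTED_ISSUERS

def mutual_verification(caller: Credential, agent: Credential) -> bool:
    """Both sides of the call must present a credential from a trusted issuer."""
    return verify(caller) and verify(agent)
```

In the real system, "verify" would also involve checking the credential's cryptographic signature and resolving the DID on the cheqd network; the sketch keeps only the trust-decision logic.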
🥈 Second Place – Kith
Project: Kith
Description: AI Agent Passport for credentialing and scoring AI agents
Submission: https://dorahacks.io/buidl/26335
How it works:
Kith creates digital passports for AI agents that store their credentials, scores, and behavioural history. This allows other systems to assess trustworthiness, verify claims, and decide whether to interact with or delegate tasks to an AI agent.
How it utilises the cheqd infrastructure:
- Assigns a unique did:cheqd to every AI agent
- Issues Verifiable Credentials for agent accreditation
- Stores agent accreditations as DID-Linked Resources
- Enables a compounding trust score for each agent, based on the number of credentials issued to it and the trustworthiness of their issuers
- Enables third-party verification and score-based selection via the cheqd trust model
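One way to read the "compounding" trust score: each credential contributes trust in proportion to its issuer's own trustworthiness, and independent attestations compound rather than simply add, so the score grows with more credentials but never exceeds 1. The formula below is an illustrative guess at such a mechanism, not Kith's actual scoring model:

```python
def compound_trust_score(issuer_weights: list[float]) -> float:
    """Combine per-credential issuer trust weights (each in [0, 1]).
    Each credential is treated as an independent attestation:
        score = 1 - prod(1 - w_i)
    so extra credentials always raise the score, with diminishing returns.
    """
    residual_doubt = 1.0
    for w in issuer_weights:
        residual_doubt *= (1.0 - w)
    return 1.0 - residual_doubt
```

Under this reading, two moderately trusted issuers (0.5 each) yield a combined score of 0.75, and adding any further credential can only increase it.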
🥉 Third Place – SNAILS
Project: SNAILS
Description: Dail Bot for verifiable identity and content in AI interactions on Telegram
Submission: https://dorahacks.io/buidl/26288
How it works:
SNAILS built “Dail,” a Telegram bot that enables users to verify their identity, authenticate AI agents, and track the credibility of shared content in AI-driven conversations.
How it utilises the cheqd infrastructure:
- Issues Verifiable Credentials to Telegram users for identity proof
- Validates the origin and authorship of AI-generated or human-shared content
- Uses cheqd DIDs to distinguish humans vs AI agents in chats
cheqd Main Track: Content Credentials & Others
🥇 First Place – CheqDeep
Project: CheqDeep
Description: A decentralised solution for verifying media authenticity using cheqd’s blockchain technology.
Submission: https://dorahacks.io/buidl/26299
How it works:
CheqDeep tackles the growing issue of fake or AI-generated media by allowing users to prove that their content is authentic, recorded by a human, at a specific time and place. Users upload a photo or video directly from their smartphone. The platform captures metadata, generates a unique DID for the creator, and links this to a DID-Linked Resource that includes a timestamp and verification data. This results in a verifiable “proof of reality” for any media.
How it utilises the cheqd infrastructure:
- DID-Linked Resources are used to anchor media metadata securely.
- The cheqd blockchain provides the timestamped, tamper-proof record of creation and origin.
- This forms a chain of custody that can be independently verified — critical for journalism, legal evidence, digital art, and content rights.
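The "proof of reality" record can be pictured as a content hash plus capture metadata, anchored under the creator's DID: any later edit to the media changes the hash and breaks the match. The record shape below is an assumption for illustration only; cheqd's actual DID-Linked Resource schema differs:

```python
import hashlib
from datetime import datetime, timezone

def build_anchor_record(media_bytes: bytes, creator_did: str,
                        captured_at: datetime, location: str) -> dict:
    """Bundle a tamper-evident digest of the media with capture metadata.
    In the real system this record would be anchored on-chain as a
    DID-Linked Resource under the creator's DID."""
    return {
        "creatorDid": creator_did,
        "mediaSha256": hashlib.sha256(media_bytes).hexdigest(),
        "capturedAt": captured_at.isoformat(),
        "location": location,
    }

def verify_media(media_bytes: bytes, record: dict) -> bool:
    """Re-hash the media and compare against the anchored digest."""
    return hashlib.sha256(media_bytes).hexdigest() == record["mediaSha256"]
```

Because the anchored record is timestamped and tamper-proof, a verifier only needs the media file and the record to confirm when and by whom the content was registered.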
🥈 Second Place – Trusty Bytes
Project: Trusty Bytes
Description: A marketplace that enables AI agents to access and verify trustworthy datasets using cheqd and the Model Context Protocol (MCP).
Submission: https://dorahacks.io/buidl/26048
How it works:
Trusty Bytes connects AI agents to high-quality datasets in a decentralised data marketplace. Data providers list their datasets, which users can purchase via smart contracts. Once purchased, users receive a Verifiable Credential (VC) containing metadata about the dataset. AI agents authenticate via an access key, connect to the MCP server, and retrieve data. They can also verify the source using trust information from the cheqd network.
How it utilises the cheqd infrastructure:
- Verifiable Credentials are issued on cheqd to prove dataset origin and trustworthiness.
- Each dataset is linked to a DID of the data provider, ensuring agents only consume verified and credible data.
- The trust network ensures the provenance and integrity of every dataset ingested by agents.
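The purchase-to-consumption flow above can be sketched end to end: a purchase yields a credential-like receipt plus an access key, and the agent refuses to consume data whose provider it cannot verify as trusted. The in-memory registries below are simplified stand-ins for the smart-contract, MCP, and cheqd-network pieces:

```python
import secrets

# Hypothetical stand-ins for on-chain marketplace and cheqd trust state.
DATASETS = {"weather-2025": {"provider_did": "did:cheqd:mainnet:acme-data"}}
TRUSTED_PROVIDERS = {"did:cheqd:mainnet:acme-data"}
ACCESS_KEYS: dict[str, str] = {}  # access key -> dataset id

def purchase(dataset_id: str) -> tuple[dict, str]:
    """Simulate a purchase: return a credential-like receipt and an access key."""
    provider = DATASETS[dataset_id]["provider_did"]
    credential = {"dataset": dataset_id, "providerDid": provider}
    key = secrets.token_hex(8)
    ACCESS_KEYS[key] = dataset_id
    return credential, key

def agent_fetch(key: str, credential: dict) -> str:
    """An agent only consumes data whose provider it can verify as trusted."""
    if credential["providerDid"] not in TRUSTED_PROVIDERS:
        raise PermissionError("untrusted data provider")
    dataset_id = ACCESS_KEYS[key]
    return f"rows-of:{dataset_id}"
```

The key design point is that the trust check happens on the consuming agent's side, against the provider's DID, rather than being taken on faith from the marketplace.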
🥉 Third Place – crdbl
Project: crdbl
Description: A platform to make research, journalism, AI workflows, and digital content ownership auditable and verifiable.
Submission: https://dorahacks.io/buidl/26336
How it works:
crdbl turns any human or AI-generated content into a verifiable credential — called a “crdbl” — tied to a decentralised identifier. These crdbls can reference each other, allowing for recursive provenance checking by AI engines. This builds a composable, cryptographically linked graph of trust. Users can issue and verify crdbls through a browser extension or programmatically via an API.
How it utilises the cheqd infrastructure:
- DIDs and VCs form the foundation of crdbl’s trust model.
- cheqd anchors every crdbl, enabling full traceability of claims and sources.
- The system creates a self-reinforcing network of verified assertions, making it easier to distinguish credible content from misinformation or unverified AI outputs.
Verida Bounty: Build an Agent incorporating Private User Data
🎖 Honourable Mention – Viskify
Project: Viskify
Description: An AI-assisted talent verification platform that issues verifiable credentials for candidates, recruiters, and teams.
Submission: https://dorahacks.io/buidl/26297
How it works:
Viskify allows candidates to connect their private data sources (Telegram, Gmail, etc.) through the Verida Vault, and gives AI-assisted summaries of their professional qualifications. After consent-based access is granted, recruiters receive real-time insights and verified credentials. Viskify uses deterministic DIDs for teams and issuers, and integrates credential issuance and verification via API.
How it utilises the Verida infrastructure:
- Verida’s encrypted data vault gives users full control over private data, accessed only with consent.
- LLM-powered analysis of this data provides structured insights while preserving privacy.
- The platform integrates Verida’s secure data layer with cheqd’s credential issuance APIs to build an AI-friendly, privacy-preserving hiring system.
Wrapping Up
The Verifiable AI Hackathon 2025 showcased how powerful trust infrastructure can be when combined with emerging AI use cases. From authenticating media to building AI-aware marketplaces and content provenance layers, these projects pave the way for a more trustworthy digital future.
We’re incredibly proud of what the builders accomplished, and we’re just getting started. The future of the cheqd ecosystem is bright!
If you’re interested in enabling trust in your AI model or building content credentials, feel free to give us a shout at [email protected]