Introduction
2024 saw the explosion of Artificial Intelligence (AI), especially generative AI, and at cheqd, we wondered about its impact on the world, the market, and ourselves. Would it be as simple as using MidJourney-generated images, or would it be more transformative? (Clearly, we believed the latter.)
At the start of the year we began exploring ways to combine our already established identity technology (DIDs, Verifiable Credentials, and more), our privacy-preserving payment network, and AI.
We started by pulling together hypotheses around the challenges and opportunities AI could introduce, then tested these in the market with prospective partners and clients. We joined associations such as Coalition for Content Provenance and Authenticity (C2PA) and Content Authenticity Initiative (CAI), and published thought leadership pieces to share our insights. All of this helped us validate which of our hypotheses currently held, or would eventually hold, genuine demand.
After testing these solutions and establishing partnerships, we identified an area within AI where we could lead. This area lies within our core business of trust and reputation, which becomes even more critical in a world where content, agents, and other entities can all be fraudulently generated, creating a pandemic of mistrust.
This led us to coin the term verifiable AI, or vAI for short, which has been adopted by the wider industry.
The next section is the culmination of our explorations in vAI and the wider AI industry, followed by more details on our journey and approach.
Our direction
Whilst we started out with a variety of solutions and hypotheses to perceived problems, as documented in the sections below, we have since significantly narrowed our focus. Based on lessons learned, market interest, and a deeper understanding of AI and its intersection with decentralised identity technologies and use-cases, we have identified the most compelling offerings as:
- Powering AI agents (with trust and data), e.g. Proof of verified or empowered agent, providing trusted data for these agents.
- Content Credentials, i.e. proving authenticity and providing provenance for data as it is originated, edited and published.
- Proof of Personhood, e.g. proof of humanity, but without resorting to biometrics or equivalent.
Each of these leverages the unique capabilities of cheqd and DIDs to embed trust and economic incentives into new systems and paradigms, through Verifiable Credentials, Trust Registries, and privacy-preserving payments for Credentials, all ultimately powered by $CHEQ. These solutions contribute to the realisation of the original cheqd mission: to restore privacy, data ownership, and control to individuals, enabling transformational customer experiences.
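To make the first offering concrete, here is a minimal sketch of what a "proof of verified agent" credential could look like, shaped after the W3C Verifiable Credentials data model. All identifiers, the `VerifiedAIAgent` type, and the field names are hypothetical illustrations, not cheqd's actual schema; a real credential would also carry a cryptographic proof from the issuer.

```python
import json

def make_agent_credential(issuer_did: str, agent_did: str) -> dict:
    """Build an (unsigned) credential stating that an AI agent is verified.

    The DIDs and the "VerifiedAIAgent" type are illustrative placeholders.
    """
    return {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential", "VerifiedAIAgent"],
        "issuer": issuer_did,
        "credentialSubject": {
            "id": agent_did,          # the agent being vouched for
            "agentVerified": True,    # the claim made about the agent
        },
    }

credential = make_agent_credential(
    issuer_did="did:cheqd:mainnet:issuer-123",  # hypothetical issuer DID
    agent_did="did:example:agent-456",          # hypothetical agent DID
)
print(json.dumps(credential, indent=2))
```

In practice the issuer would sign this payload and anchor its DID (and any trust registry it belongs to) on the network, so that relying parties can verify both the signature and the issuer's standing.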
Furthermore, in each of these three areas we are either directly building solutions or are aware of our partners doing so. Given the colossal opportunity in these fields, and to accelerate development, we partnered with DIF to launch a hackathon (ending 4 Nov) to fuel the building of solutions in these areas.
In the coming months, we will continue with a refined approach of publishing thought leadership, executing partnerships, and adjusting our product roadmap as necessary to ensure that, as AI is adopted, it remains trustworthy and verifiable.
Initial hypothesis
As the AI mania swept the market, we came together at the end of 2022 and the beginning of 2023 to consider how AI would affect the world and our specific corner of it (identity and trusted data), and vice versa, i.e. how trusted data might feed into AI. Based on our understanding of the market, we landed on the hypotheses below, with the associated rationale behind them:
- Content credentials – we were already seeing deepfakes and generated content causing confusion and being used in fraud.
- Verified datasets – biased data, or data already generated by AI, was already skewing models, highlighting a need for a solution.
- Localised clustering, compliance and energy – AI infrastructure operates best at scale and when localised, posing a challenge for decentralised AI networks to compete with their centralised counterparts.
- Proof of personhood – cases of fraud were emerging rapidly, e.g. Arup Hong Kong being duped into sending HK$200m to fraudsters.
- Personalised AI agents – agents had been discussed for years as a way to help users manage their data in the new SSI / DID paradigm, where individuals own their data; without agents, managing it all would be overwhelming.
You can learn more about these use cases on our blogs.
Over the course of the year, we have presented these solutions to potential partners and clients, attended conferences and events, and observed the markets for shifting trends. We also contributed to industry association reports such as INATBA’s “Report on Artificial Intelligence and Blockchain Convergences”. Our most valuable conversations have been with our partners, where we could dive into the genuine issues that we are specifically and uniquely positioned to address.
Other areas to explore
- Agents consuming and operating on trusted data from a verifiable source, so they make correct decisions.
- Crucial to this, and to our core mission at cheqd, is providing economic incentives to secure, use, and re-use the right data.
- The agent or agents are empowered by the individual to represent them and make decisions on their behalf.
- Others being able to recognise that the agent is empowered and trustworthy when dealing with it.
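The last point, recognising an empowered agent, can be sketched as a simple check: a relying party accepts an agent's credential only if its issuer appears in a trust registry and the credential actually asserts the agent is verified. The registry here is a plain in-memory set and all DIDs are hypothetical; on cheqd, a trust registry would be resolved from the network rather than hard-coded.

```python
# Hypothetical trust registry of issuer DIDs a relying party accepts.
TRUST_REGISTRY = {"did:cheqd:mainnet:issuer-123"}

def is_trusted_agent(credential: dict) -> bool:
    """Accept the agent only if the credential's issuer is in the trust
    registry and the subject is marked as a verified agent."""
    issuer_ok = credential.get("issuer") in TRUST_REGISTRY
    subject = credential.get("credentialSubject", {})
    return issuer_ok and subject.get("agentVerified") is True

credential = {
    "issuer": "did:cheqd:mainnet:issuer-123",
    "credentialSubject": {"id": "did:example:agent-456", "agentVerified": True},
}
print(is_trusted_agent(credential))  # → True
```

A production check would additionally verify the credential's cryptographic proof and the revocation status of both the credential and the issuer's registry entry; this sketch covers only the registry-lookup step.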
Next steps
We’re preparing to announce other partners that we’ve been working with but have not yet made public. Expect announcements in the coming months.
In the same vein, we will be continuing our outreach to identify partners who can help us build this new world and support our understanding of the capabilities we need to deliver as a network.
Finally, we will keep publishing content and thought leadership as we develop our offerings with our partners.
If you’re interested, please get in touch through the form below.