The role of cheqd in Trusted Data markets

A technical approach to building Trusted Data Markets, reducing the time-to-reliance and compliance costs in digital interactions.

Introduction

The “Trust Gap”

As discussed in “The Anatomy of a Trusted Data Market”, the composition of “trust” is a complex and interpersonal relationship between two parties. It is predicated on more than mere reliance on a particular party; namely, it involves an “extra factor”, including the perception of good faith and the willingness to act in a credible way.

Establishing “trust” in a digital context, however, is considerably more challenging. As opposed to an “interpersonal” relationship, digital trust is often a “pseudonymous” relationship. Here we approach what is widely regarded by academics as the “trust gap”: the de facto lack of the capacity to make an informed judgement on the “extra factor” needed to build “trust” beyond “mere reliance”.

Therefore, to build a functional Trusted Data Market with cheqd, we need to augment the requirement for this “extra factor” using a combination of trust-building technologies and techniques.

See: Camila Mont'Alverne, Sumitra Badrinathan, Amy Ross Arguedas, Benjamin Toff, Richard Fletcher and Rasmus Kleis Nielsen, The Trust Gap: How and Why News on Digital Platforms Is Viewed More Sceptically Versus News in General (Reuters Institute for the Study of Journalism, University of Oxford, 2022).

Available at: https://reutersinstitute.politics.ox.ac.uk/

The Technical Components of a Trusted Data Market

A Trusted Data Market rests on three technical components:

  1. Decentralized Identifiers (DIDs)
  2. Verifiable Credentials (VCs)
  3. Trust Management Infrastructure (TMI), such as Trust Registries (TRs) or Status Registries (SRs)

Each component contributes a distinct property of trust:

  • Legitimacy, established by DIDs
  • Integrity, established by VCs
  • Reputability, established by TMI
Technical Composition of Trusted Data

Legitimacy through Decentralized Identifiers

Decentralized Identifiers (DIDs) are a relatively new technical standard, ratified by the W3C as a formal recommendation in 2022, for uniquely identifying a particular entity in a digital domain. Each DID can be “resolved” to fetch a data file called a DID Document, which helps prove legitimacy in three ways:

Verification

DID Documents must contain signing keys, known as Verification Methods, which can be used to cryptographically sign other data files (such as Verifiable Credentials). If a DID and an associated Verification Method are referenced in another data file, that DID and its key can be challenged and authenticated against, to prove that the DID is in fact:

  1. Legitimate;
  2. Associated with a particular DID Document (discussed in point 2);
  3. Associated with any other DID-Linked Resource (discussed in point 3).

If a DID is proved to be legitimate, it is possible to infer that the data file signed by the DID has a higher level of trustworthiness.
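The authentication step described above can be sketched as a simple lookup against a resolved DID Document. The DID, key identifiers and document structure below are purely illustrative, not real cheqd identifiers:

```python
# Illustrative sketch: checking that a signing key referenced in a data file
# is actually listed in the signer's resolved DID Document, and that it is
# authorised for making assertions. All identifiers here are hypothetical.

EXAMPLE_DID_DOC = {
    "id": "did:cheqd:mainnet:example-1234",
    "verificationMethod": [
        {
            "id": "did:cheqd:mainnet:example-1234#key-1",
            "type": "Ed25519VerificationKey2020",
            "controller": "did:cheqd:mainnet:example-1234",
            "publicKeyMultibase": "z6Mk...example",
        }
    ],
    "assertionMethod": ["did:cheqd:mainnet:example-1234#key-1"],
}

def key_is_authorised(did_doc: dict, key_ref: str) -> bool:
    """True if key_ref is a listed Verification Method authorised for assertions."""
    listed = {vm["id"] for vm in did_doc.get("verificationMethod", [])}
    return key_ref in listed and key_ref in did_doc.get("assertionMethod", [])

assert key_is_authorised(EXAMPLE_DID_DOC, "did:cheqd:mainnet:example-1234#key-1")
assert not key_is_authorised(EXAMPLE_DID_DOC, "did:cheqd:mainnet:example-1234#key-2")
```

In a real verification flow this lookup is followed by a cryptographic challenge against the listed public key; the sketch only shows the DID Document side of the check.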

Resolution

Each DID can be deterministically “resolved” to its associated DID Document, allowing a relying party to confirm that the DID is active and to retrieve the current keys and metadata it is bound to.

Resources

Each DID can also be associated with DID-Linked Resources (such as schemas, status lists or trust registries), anchored to and retrievable via the DID, as discussed later in this paper.

Integrity through Verifiable Credentials

Verifiable Credentials (VCs) are another type of data file, again formalised by the W3C as a standard, designed to ensure absolute integrity of the “claims” listed in the data file. A “claim” in this sense is an assertion about a particular entity; for example, this could be attesting to someone’s name, address, date of birth etc.

VCs are able to carry out this function because the “claims” contained in the credential are intrinsically verifiable through cryptographic “proofs”.

VCs dovetail well with DIDs, since the “proof” embedded in the VC can be signed by a DID and its associated Verification Method keys. This allows the VC “proof” to be challenged and authenticated against using the Public Key Infrastructure of the DID and its associated DID Document.

Once the proof is embedded in the VC, the VC may also be serialised as a JSON Web Token (JWT) or use a Data Integrity proof (VC-DI), to create a representation of the Credential that is tamper-evident. This means that if any modification is made to the serialisation, the embedded “proof” will become unverifiable.
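This tamper-evidence can be illustrated with a minimal JWT sketch. HMAC-SHA256 (the JWT HS256 algorithm) stands in here for the public-key signature scheme a real issuer DID would use, and the claims and secret are illustrative:

```python
# Minimal sketch of why a JWT-serialised credential is tamper-evident.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> bool:
    header, payload, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

secret = b"issuer-signing-key"
token = sign_jwt({"name": "Alice", "over18": True}, secret)
assert verify_jwt(token, secret)

# Modifying any serialised claim changes the signing input,
# so the embedded proof becomes unverifiable:
header, payload, sig = token.split(".")
tampered = f"{header}.{b64url(json.dumps({'name': 'Mallory', 'over18': True}).encode())}.{sig}"
assert not verify_jwt(tampered, secret)
```

Tampering with even one serialised claim changes the signing input, so the signature check fails exactly as described above.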

Commonly, therefore, VCs are issued to a “holder”, who stores them in a data wallet, and these VCs are cryptographically signed by a DID of the issuing entity (the “issuer”). This enables the “holder” to prove to a third party that the Verifiable Credential has both:

  1. Legitimacy — since it is signed by a particular entity's DID; and
  2. Integrity — since the cryptographic proof is tamper-evident.

Different cryptographic signature schemes can also be layered on top of VCs to provide additional benefits, such as:

  1. Selective disclosure: where only a selected subset of VC claims, or selected claims from multiple VCs, are presented in one tamper-evident format (e.g. SD-JWT).
  2. Zero-Knowledge Proofs (ZKPs): where a VC can use its legitimacy and integrity to prove a particular fact, in a yes/no challenge/response mechanism, without revealing the actual “claims” written into the VC (e.g. AnonCreds).
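The core idea behind selective disclosure (as used in SD-JWT) can be sketched with salted hashes: the issuer signs only digests of salted claims, and the holder later reveals the salt and value for just the claims they choose. The issuer's signature itself is omitted below, and all values are illustrative:

```python
# Sketch of the salted-hash mechanism behind selective disclosure:
# the verifier re-hashes a disclosed (salt, name, value) triple and checks
# it against the digests the issuer signed. Values are illustrative.
import hashlib, json, secrets

def digest(salt: str, name: str, value) -> str:
    return hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()

# Issuer: salt every claim and sign the set of digests (signing omitted here).
claims = {"name": "Alice", "date_of_birth": "1990-01-01", "address": "10 High St"}
salts = {k: secrets.token_hex(8) for k in claims}
signed_digests = {digest(salts[k], k, v) for k, v in claims.items()}

# Holder: disclose only one claim, as (salt, name, value).
disclosure = (salts["date_of_birth"], "date_of_birth", "1990-01-01")

# Verifier: re-hash the disclosure and check it is among the signed digests.
assert digest(*disclosure) in signed_digests
# The undisclosed "name" and "address" claims stay hidden behind their digests.
```

Because each digest is salted, a verifier cannot brute-force the hidden claims, yet any disclosed claim remains bound to the issuer-signed set.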

VCs are highly flexible in their design, with certain flavours being useful for specific use cases; however, each type maintains the same underlying focus on data integrity. This data integrity, coupled with the legitimacy of DID authentication, is in many cases enough for a verifier to build a level of “trust” in a digital interaction, reducing the time-to-reliance significantly.

Reputability through Trust Management Infrastructure

Trust Management Infrastructure (TMI) can be used to move the needle from “low/medium” trust digital interactions to “high” trust digital interactions. As such, this infrastructure may not always be required in a trusted data market — but may be relied upon when necessary.

DID-Linked Resources (DLRs) may be used to establish TMI in a decentralized way. Examples of common TMI for Trusted Data Markets are Trust Registries (TRs) which may ascertain whether a DID belongs to a trusted set; or Status Registries (SRs), which may be used to check if the VC status has been revoked or not. However, for the purposes of this paper, we will use TRs as the canonical TMI to explain the concept of reputability.

A TR is a data object where one entity publicly attests to the legitimacy of other entities. For example, a Health Regulator such as the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK may create multiple trust registries of pharmaceutical manufacturers or wholesalers that are legally regulated to provide certain types of medicines, drugs or pharmaceutical products in the UK.

In the context of decentralised identity technology, TRs contain lists of DIDs pertaining to specific entities for a particular purpose. In the example above, MHRA could create a TR including the DIDs of each pharmaceutical manufacturer or wholesaler regulated to carry out a particular action.

By resolving and parsing a TR, a verifier can traverse the DIDs and metadata listed within it to establish a root-of-trust, and confirm that the data they are receiving meets the requisite levels of assurance for a specific governance framework.
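A minimal sketch of this traversal, with a hypothetical registry, DIDs and Levels of Assurance (LoA):

```python
# Illustrative Trust Registry check: is the issuer's DID listed, and does
# its entry meet the Level of Assurance required by the verifier's
# governance framework? The registry contents are hypothetical.

TRUST_REGISTRY = {
    "governanceFramework": "example-pharma-framework-v1",
    "entries": [
        {"did": "did:cheqd:mainnet:manufacturer-01", "role": "manufacturer", "loa": 3},
        {"did": "did:cheqd:mainnet:wholesaler-07", "role": "wholesaler", "loa": 2},
    ],
}

def meets_assurance(registry: dict, issuer_did: str, required_loa: int) -> bool:
    """True if issuer_did is listed at or above the required Level of Assurance."""
    return any(
        e["did"] == issuer_did and e["loa"] >= required_loa
        for e in registry["entries"]
    )

assert meets_assurance(TRUST_REGISTRY, "did:cheqd:mainnet:manufacturer-01", 3)
assert not meets_assurance(TRUST_REGISTRY, "did:cheqd:mainnet:wholesaler-07", 3)
```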

TRs provide relying parties with additional assurance through this link to a root-of-trust, resulting in:

  1. Reputability, since the “verifier” will be able to check that the “issuer” DID signing the “holder” VC is attested to by one or multiple other entities through a public TR; this layers on top of:
  2. Legitimacy (as previously discussed)
  3. Integrity (as previously discussed)

To conclude this section, the diagram below helps explain how the three technological components described in this section work in conjunction with one another — to build a comprehensive web of trust.

Interplay between DIDs, VCs and TRs

This diagram illustrates the following flows:

  1. A DID cryptographically signs a VC, which establishes Legitimacy and Integrity in the data the VC contains;
  2. The VC references a TR (or other TMI), which establishes, with Legitimacy and Integrity, that the TR is the one intended to be used by the verifier;
  3. The TR provides additional information about the reputability of the DID, which establishes Legitimacy, Integrity and Reputability in the DID and the signed VC, and which can be used to meet governance and compliance requirements.

Bridging the Trust Gap

Bringing these components together, a “verifier” receiving a Verifiable Credential can establish that the data presented to them is:

  1. Legitimate, since it is attested to by a particular “issuer”;
  2. Cryptographically untampered, because the VC data model enables proof serialisation and data integrity; and
  3. Reputable, since one or multiple TRs can be referenced in which the issuer's DID is attested to by third parties.

This allows parties in a digital interaction to assure themselves that:

  1. Other parties they are interacting with meet compliance requirements for their industry or use case, creating trusted markets; and
  2. They themselves meet compliance requirements, as they can demonstrably assure third-party regulators that the data they receive from other parties has absolute legitimacy, integrity and sufficient reputability for a particular governance framework.

Making the Market

In cheqd's model, the three trust-building components are priced as follows:

  • Legitimacy, via the authentication of a DID = Free
  • Integrity, via the verification of a VC = Free
  • Reputability, via the verification of a TR (or other TMI) = Paid

This model creates:

  1. A cost-saving opportunity for entities to achieve a high level of trust, compared to existing KYC and KYB mechanisms;
  2. A time-efficiency bonus for achieving a high level of trust, with trusted data being instantaneously verifiable, reducing the burden of regulatory compliance; and
  3. A never-before-seen revenue opportunity for “issuers” of trusted data.

Payment Gating Reputation

The way that cheqd supports the above objective is by payment gating the reputational arm of Trusted Data: Trust Management Infrastructure (TMI), in the form of Trust Registries (TRs) or, in an alternative use case, Status Registries (SRs).

Payment gating a Trust Registry (TR)

Payment gating a Status Registry (SR)

In this payment-gated model:

  1. “Issuers” (and in some cases “Regulators”) are able to set the market price of unlocking a TR or SR;
  2. “Verifiers” are able to choose whether they want to pay to unlock the result of a TR or SR check, to achieve a higher level of assurance;
  3. Payments will be made back to the “issuers” of the VC that is being presented to the “verifier”.

Several market dynamics follow from this design:

  1. If a TR for a particular DID, or an SR for a particular VC, has a high Level of Assurance (LoA), such as being created by a reputable entity, it is reasonably foreseeable that the price for that check may be higher than average.
  2. If the price of a TR or SR check is too high, the verifier will either: (a) choose not to make the extra payment; or (b) choose another TR to make the check against (if available).
  3. Once organisations and industries see the revenue opportunities from creating TRs, it is hypothesised that a competitive market will emerge, with a range of TRs offering differing LoAs at an associated range of prices.
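The verifier-side economics hypothesised above can be sketched as a simple selection rule over competing TRs. All registry names, LoAs and prices are illustrative:

```python
# Sketch of a verifier choosing among competing Trust Registries:
# pick the cheapest registry that meets the required Level of Assurance
# within budget, or decline to pay at all. Values are hypothetical.

OFFERED_REGISTRIES = [
    {"name": "regulator-tr", "loa": 3, "price_cheq": 5.0},
    {"name": "industry-body-tr", "loa": 2, "price_cheq": 1.5},
    {"name": "community-tr", "loa": 1, "price_cheq": 0.2},
]

def choose_registry(registries, required_loa: int, budget: float):
    """Cheapest registry meeting the required LoA within budget, else None."""
    candidates = [
        r for r in registries
        if r["loa"] >= required_loa and r["price_cheq"] <= budget
    ]
    return min(candidates, key=lambda r: r["price_cheq"], default=None)

assert choose_registry(OFFERED_REGISTRIES, required_loa=2, budget=2.0)["name"] == "industry-body-tr"
assert choose_registry(OFFERED_REGISTRIES, required_loa=3, budget=2.0) is None  # declines to pay
```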

We will explore how these use cases present a clear product-market fit for cheqd, cheqd's partners and the wider SSI ecosystem, which is projected to capture $550 billion of value by 2030.

cheqd is now supported in walt.id’s SSI Kit!

Integration of cheqd into SSI Kit provides greater flexibility for adopters of cheqd, opens up a new customer-base for increased utility on the network and helps future-proof cheqd for upcoming EU regulations!

Introduction

We are excited to announce that cheqd is now fully supported in walt.id's SSI Kit. This integration expands support for cheqd across a greater array of SDKs, and provides end-customers with the flexibility to choose from a wider breadth of options for credential exchange protocols.

SSI Kit leverages the cheqd/sdk, slotting neatly alongside other supported SDKs, including Veramo and the soon-to-be-released Aries SDKs, offering a wide range of SDK choices that SSI app developers can select depending on their needs and existing stack.

What is SSI Kit?

SSI Kit is a holistic and standard-compliant open source tool created and maintained by the team at walt.id. It offers everything you need to use Self-Sovereign Identity (SSI) with ease, including the creation, issuance, management and verification of Verifiable Credentials across various ecosystems.

SSI Kit utility for different parties

walt.id docs — SSI Kit | Basics — Learn what the SSI Kit is.

Integrating cheqd with SSI Kit provides an array of benefits for both cheqd and walt.id’s existing end-customers and users:

cheqd customers:

Supporting cheqd within the SSI Kit means that anyone that wants to use cheqd, can now do so through walt.id’s intuitive and easy to use tools — available here. Through this, SSI developers can:

  • Create DID — Create your first did:cheqd
  • Issue VC — Issue your first Verifiable Credential based on a did:cheqd
  • Verify VC — Verify your Credential based on a did:cheqd

This offers cheqd’s customers:

  • Greater flexibility for end-customers: Through expanding support for cheqd into SSI Kit, end-customers can now choose a more specific technical stack that suits their needs best — with Veramo and Aries Framework JavaScript as other enterprise options.
  • Simple APIs for credential operations: walt.id offers a selection of enterprise-ready APIs for creating, updating and revoking credentials. With this new integration, all of the operations can be carried out with cheqd DIDs which makes integrating cheqd DIDs, DID-Linked Resources and Credentials into client applications lightweight and simple!
  • Future proofed for upcoming regulations: SSI Kit uses the OpenID for Verifiable Credentials stack for establishing peer-to-peer connections and for credential exchange. This is notable because it aligns with the proposed European Digital Identity Architecture and Reference Framework, which will accompany upcoming European regulations such as eIDAS v2.
  • Streamlining the bridge from Web2 identity into Web3: Through walt.id's IDP Kit (Identity Provider Kit), cheqd customers can use cheqd-issued Verifiable Credentials with traditional identity infrastructure, such as IAM tools including Keycloak, Gluu and Okta.

walt.id customers:

The SSI Kit abstracts complexity for developers by following a “multi-stack approach” that enables developers to use different implementations or “flavours” of SSI. Adding cheqd as the latest “flavour” offers walt’s customers all the benefits cheqd has to offer, including:
  • Support for DID-Linked Resources: cheqd is the first identity network to build and support DID-Linked Resources (now a draft W3C standard) to support various identity data structures such as schemas, trust registries and status lists. Support for cheqd enables walt.id’s existing customer-base to utilise this innovative functionality.
  • Support for upcoming Payment Rails: cheqd’s vision is to become the de-facto payment mechanism for trusted data. By supporting cheqd within the SSI Kit, walt.id’s customers can benefit from already having existing integrations with cheqd, making it far easier and faster to leverage payment rails when released.
  • Offering a higher-performance network at a lower cost: cheqd is designed as a highly performant Layer 1 with high throughput. cheqd can process an estimated 7,500 Transactions Per Second (TPS), benchmarking well beyond other leading networks such as Cardano (250 TPS), Ethereum (15–30 TPS), Avalanche (5,000 TPS) and Bitcoin (10 TPS). Gas fees on cheqd are a fraction of the cost of other networks, making it far cheaper to transact on the network.

Why is walt.id’s SSI Kit important for cheqd?

When it comes to Self-Sovereign Identity, different technical components need to work together to construct an end-to-end solution. Combining different protocols into a tech stack is vital for interoperability between different ecosystems.

The Trust over IP Foundation describes these different components clearly within the ToIP Stack:

Trust over IP (ToIP) Stack

With reference to the image above, cheqd sits at Layer 1 of the stack: cheqd is a Verifiable Data Registry with a DID method which supports the anchoring of DIDs and associated DID-Linked Resources.

SSI Kit works at Layers 2 and 3 of the stack, supporting a suite of protocols for credential exchange and peer-to-peer connections which hit a different market compared to those cheqd supports in its other SDKs. These include: OAuth, OpenID for Verifiable Credential Issuance (OpenID4VCI), OpenID for Verifiable Presentations (OpenID4VP) and Self-Issued OpenID Provider v2 (SIOP V2).

The image below offers a cheqd specific overview which helps to further illustrate SSI Kit’s place in the stack.

cheqd capability model

By supporting these protocols, cheqd gives end-customers more flexibility in choosing a tech stack that fits their use case, jurisdiction and existing identity management systems. This is especially important as:

  1. The OpenID for Verifiable Credential stack is closely related to OpenID Connect in terms of how the authentication flows work between different parties. This makes it less daunting for companies to transition from something more traditional or federated, such as OpenID Connect, to decentralised identity.
  2. The OpenID for Verifiable Credential protocols are also supported by a range of prominent SSI vendors, such as Microsoft (Entra), Mattr, Yes, Ping and Workday, within the VC-JWT Presentation Profile, meaning that cheqd can now support and interoperate with a wider array of large vendors and their clients.
  3. These protocols form part of the European Digital Identity Architecture and Reference Framework, which is a new interoperability profile for companies to exchange trusted data in the European Union. Conforming with the technical stack described here will help future-proof cheqd’s tech stack for the upcoming European regulatory changes, which will give legal effect for credentials as a means of data exchange.

If you are interested in learning more about these regulatory changes, we would recommend that you read Avast’s takeaways from the regulatory changes, or watch Nacho Alamillo’s presentation on the proposed eIDAS 2 Regulation.

A bright future ahead

Interoperability, flexibility, simplicity and cost-efficiency are the key ingredients for adoption of Self-Sovereign Identity. With an eye on all of these, cheqd is positioning itself strategically for any vendor or organisation looking to implement an SSI solution. SSI Kit was the perfect fit for providing another enterprise software product, while also covering a new set of connection and credential exchange protocols.

Oh, and if you made it this far — we also have a lot of exciting developments to come, using this tech stack and the cheqd <> walt.id partnership 🔜👢👢

As always, if this blog resonates with you and you want to learn more about building on cheqd, please get in touch with our product team here and cheq-out our identity documentation here.

Universal Registrar: DID utility off-the-shelf

cheqd’s new Universal Registrar driver enables easy and efficient integration with cheqd’s DID and DID-Linked Resource utility.

Introduction

We are excited to announce that we have successfully built a cheqd driver for the Decentralized Identity Foundation's (DIF) Universal Registrar, enabling out-of-the-box, highly efficient DID and DID-Linked Resource transactions on cheqd. This is a big step in simplifying the developer journey, allowing client applications to use cheqd's DID and DID-Linked Resource utility in a more lightweight way than integrating with our Software Development Kits (SDKs).

Understanding the value of the Registrar

The Universal Registrar is an open source application created by the Decentralized Identity Foundation (DIF) which aims to make it far easier to create, update and deactivate Decentralized Identifiers (DIDs) across a range of DID Methods without full integration.

EASILY CONSUMABLE DIDS IN A COMMON FORMAT

The aim of the Universal Registrar is similar to that of the Universal Resolver: to transform method-specific APIs for DID transactions into a common format that client applications can easily call. In simpler terms, remember the kids' toys with different shapes and different shaped holes? Yep, this one!

Imagine each DID Method driver is a different shape. If you run an application and have to consume all the different shapes and sizes, that is a huge uplift to maintain. What the Universal Registrar does is convert all of these shapes into one common shape, which makes it far easier for any application to consume any of the listed DIDs (in technical terms, it wraps an API around a number of co-located Docker containers).
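The "common shape" idea can be sketched as a thin adapter over method-specific drivers. The driver classes and DID formats below are simplified stand-ins, not the real Universal Registrar drivers:

```python
# Sketch: each DID method driver has its own API, and a registrar-style
# wrapper exposes them all behind one uniform create() call.
# Driver classes and generated DIDs here are illustrative stand-ins.

class CheqdDriver:
    def create(self, options: dict) -> str:
        return f"did:cheqd:{options.get('network', 'mainnet')}:generated-id"

class KeyDriver:
    def create(self, options: dict) -> str:
        return "did:key:z6Mk-generated"

DRIVERS = {"cheqd": CheqdDriver(), "key": KeyDriver()}

def registrar_create(method: str, options: dict) -> dict:
    """One common entry point, whatever shape the method-specific driver is."""
    did = DRIVERS[method].create(options)
    return {"didState": {"state": "finished", "did": did}}

assert registrar_create("cheqd", {"network": "testnet"})["didState"]["did"].startswith("did:cheqd:testnet")
assert registrar_create("key", {})["didState"]["state"] == "finished"
```

A client application only ever sees the common response shape, regardless of which driver did the work.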

DID Operations with minimal integration

Not only does it make it easier for client applications to support DIDs from multiple DID methods, but it also makes it far quicker and easier to create, update and deactivate DIDs — as it calls the method-specific driver with a common API.

Imagine our SDK as a flatpack #IKEA product for DIDs: it's simple to put together, but you need the instructions and the right tools (and a bit of patience).

The Universal Registrar, by contrast, is like buying cheqd DID functionality straight off-the-shelf — simple, efficient and quick! It allows our partners and customers to use cheqd's utility within minutes.

Therefore, the barrier for integrating cheqd DIDs into existing client applications has been greatly reduced by the Registrar. Instead of having to integrate with the cheqd SDK, applications can now create a simple workflow to call the relevant APIs for issuing, updating or deactivating cheqd DIDs and creating DID-Linked Resources.

Going beyond other DID Registrar Drivers

cheqd’s DID Registrar driver also supports the creation of DID-Linked Resources which goes beyond any other existing DID Method on the market. This provides the functionality for any developer to easily create the likes of schemas, trust registries and status lists on cheqd.

This week, the W3C has also formally approved the DID-Linked Resource work item which will be developed as a formal standard over the next few months here! 🥳

Getting started with the Registrar

We have created a simple setup guide for using the Registrar with Docker or locally. You can also find us on the Universal Registrar frontend.

Once you have setup the registrar, you can use the cheqd Registrar driver APIs here coupled with the Universal Registrar to build into your workflows!

For more information, we have created an Architecture Decision Record which describes the workflow for building cheqd DIDs and DID-Linked Resources into existing client applications using the Registrar.
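As a rough illustration of such a workflow, a client might assemble a create-DID request body along these lines. The exact endpoint and field names are assumptions, so consult the cheqd DID Registrar API documentation for the real interface; only the general shape (a DID Document plus method-specific options) is the point here:

```python
# Hypothetical sketch of a create-DID request body for a registrar-style
# API. Field names and the key value are illustrative assumptions.
import json

def build_create_request(network: str, verification_key: str) -> str:
    body = {
        "options": {"network": network},
        "didDocument": {
            "verificationMethod": [
                {
                    "type": "Ed25519VerificationKey2020",
                    "publicKeyMultibase": verification_key,  # hypothetical key
                }
            ]
        },
    }
    return json.dumps(body)

request = build_create_request("testnet", "z6Mk...example-key")
assert json.loads(request)["options"]["network"] == "testnet"
```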

Conclusion

We were clear in our Product Vision blog for 2023 that the path to adoption for cheqd goes hand in hand with the simplicity of integrating with its identity functionality. Using a DID Registrar abstracts away a lot of the complexity of fully integrating with cheqd's SDK, while providing all the same benefits for DIDs and DID-Linked Resources. This is therefore a huge step towards wider adoption across a broad array of applications and SDKs, as the uplift for supporting cheqd DIDs is now much smaller.

As always, if this blog resonates with you and you want to learn more about building on cheqd, please get in touch with our partnerships team here, try out our SDK for issuing and verifying credentials here, or set up the DID Registrar here!

cheqd Product Reflections 2022

A retrospective on a year building a Decentralized Identity network on Cosmos. Co-authored by Ankur Banerjee, Ross Power and Alex Tweeddale.

TL;DR

2022 has been a huge year for the cheqd Product & Engineering team. We’ve made three major software upgrade releases to the cheqd network and several minor upgrades, including:

Each upgrade comes at the end of a development cycle (shown by the spikes on the graph below), contributing towards our mission to build a stable, trusted and secure decentralised identity network, known within SSI as a Verifiable Data Registry.

Cadence of our commits on cheqd node over 2022

From these releases, here’s a quick overview of what our overall journey looks like so far:

cheqd Journey to Payment Rails

You can think of everything we have been working on to date as:

  1. Feature parity with all other SSI networks;
  2. Identity functionality that goes beyond existing SSI networks; and
  3. The tooling and scaffolding to lay the foundations for payment rails for Verifiable Credentials.

Bringing this all together into a visual representation, using the Trust Over IP stack as we have done in the past, helps to make sense of what cheqd’s capabilities look like both now and what’s to come…

cheqd Capability Stack

Our top product takeaways

Before diving into how we measured up against our 2022 roadmap, we wanted to lay out five key product takeaways from this year.

1. SDKs are a crucial product driver to adoption

To get cheqd integrated into our partners’ software applications, we must focus on supporting the broadest range of SDKs, across Hyperledger Aries and non-Aries spheres. Until functionality is supported fully in an SDK, the fact that it is supported on the cheqd network is less relevant.

This year, we successfully built out a working Javascript-based SDK, the Veramo SDK for cheqd, which offers end-to-end functionality for developers to build their identity applications. We have also made significant progress with integration into Walt ID’s SSI Kit and our partners Animo Solutions are in the process of building cheqd support into Aries Framework JavaScript.

That said, we also want to be honest with ourselves and with you in some areas. We had hoped to make more progress and have a larger suite of SDK support this year. However, the complexity of SDKs, and the effort we put into building other first-of-a-kind aspects of our ledger such as the Resource module, has slowed this progress down. Aries SDKs are largely a community challenge: with each SDK so tightly wedded to Hyperledger Indy, it is taking a mammoth effort from the whole AnonCreds community to rework this into a ledger-agnostic AnonCreds specification.

2. Payment rails are worth the cost / benefit analysis

Mass adoption for cheqd is predicated on the success of the payment rails. Earlier in the year, we conducted a survey and gathered that payment rails would swing partners to invest the time and resources to fully integrate with cheqd because they would offer a new angle and incentive for clients and customers to bite on SSI technology.

Data showing how payment rails will help cheqd’s partners gain adoption

While admittedly, we didn’t make the progress into payment rails that we set out at the beginning of the year, we have laid the groundwork in terms of identity functionality, partnerships and interoperability for payments to feature centre stage in 2023.

3. Interoperability is a USP

In terms of identity on-ledger, we generally feel a great sense of accomplishment with our progress. Being able to bridge the AnonCreds crowd with the VC-JWT and VC-DI (JSON-LD) proponents using innovative and highly requested standards will help cheqd establish itself as a respected alternative to other leading identity networks such as Sovrin and ION.

We are also leading the charge to broaden Hyperledger Aries SDKs’ dependencies and innate ties to Hyperledger Indy. We joked about the interoperability pitfalls of Aries at the start of the year in our presentation on the Seven Deadly Sins of Commercialising SSI, and now meaningful interop changes are finally coming to fruition.

Meme about Hyperledger Aries interop

4. Never underestimate the difficulty of refactoring to support upstream changes

The Cosmos-wide Dragonberry vulnerability and patch (see 0.6.9) was a big shakeup in the Cosmos SDK ecosystem. As a result of the vulnerability, there have since been widely coordinated sets of changes for security reasons across the Cosmos ecosystem.

In just two months there have been six Cosmos SDK bumps addressing security issues. In two of these releases (0.46.5 and 0.46.7), prior versions of the Cosmos SDK were retracted for security reasons, meaning that expedited shifts to the latest SDK versions were a priority.

Additionally, software for decentralised identity gets developed very rapidly and requires fast catchup to upstream projects/codebases. For example, our Veramo SDK for cheqd requires changes to be made when there are upstream changes in the Veramo SDK versions.

What that means for us is that a lot of developer resources need to be dedicated to keeping up with the quick succession of upstream releases, which has inevitably taken focussed time away from cheqd-specific product features.

5. Demos speak louder than words (and code…)

Getting general and Web3 audiences to understand the value of decentralized identity can be challenging. Blogs, docs and written information are often difficult to consume for newcomers unfamiliar with our terminology. For this reason, it is important that we focus on demonstrating the value of cheqd's identity technology, rather than simply explaining it.

This year we’ve created two initial demos of how credentials may work in real world scenarios. Firstly, we partnered with Animo Solutions to demo AnonCreds on cheqd being issued and used in various scenarios. This went down very well with identity audiences, but again, newcomers and Web3 companies struggled to understand why AnonCreds were cool!

Secondly, we created a cheqd wallet demo which shows how Verifiable Credentials can be stored and used alongside your crypto. In this demo, we enable a user to get a social media credential for authenticating with Twitter or Discord, then upload a QR code of a ticket for an event, and present a combined presentation of their social media credential and the event credential.

This demonstrates how credentials can be used to prove a level of trust and reputation in someone’s identity and also their validity for entering an event, a topic we’ve since expanded on. Demos like this go a long way to showing people the power of the identity technology we’ve built, and we want to go even further with simple, gamified demonstrations of the tech in 2023.

Looking back on our 2022 Product Vision

We strongly believe that transparency in our product development is integral to the success of cheqd. As such, we want to take a candid look at how we have measured against our initial goals set at the beginning of 2022. We hope that you, our partners and community members, also continually challenge and hold us to account.

Looking back to the start of the year, our three focus areas for development laid out in our January Product Vision blog were broken down into:

  1. Identity: Core identity functionality for our partners to build compelling self-sovereign identity use-cases on top of the cheqd network.
  2. Web 3.0 Core: Core Web 3.0 functionality adds deeper integration for our network and token into the Cosmos and other blockchain ecosystems.
  3. Web 3.0 Exploratory: Emerging Web 3.0 use-cases such as decentralised exchanges (DEX) ecosystems; decentralised autonomous organisations (DAOs); identity for non-fungible tokens (NFTs), and in general, DeFi applications.

So, how did we match up against these goals and objectives? We’ll take a look at each Core section and give them a score based on how many goals we successfully achieved.

Identity Retrospective

  • TOTAL SCORE: 5/6

1. Tutorials for developers

STATUS: COMPLETED

Tutorials are critical for having actual utility built on the network, since if users don’t know how to use what we’ve built, there is little value in it. We have been hard at work on expanding our documentation for developers using cheqd’s identity and ledger functionality:

To utilise the Veramo SDK for cheqd, you can follow the setup guide here. You can then follow our tutorials to begin creating DIDs and DID-Linked Resources on cheqd, or create Verifiable Credentials and Presentations.

2. Integrations with industry-standard identity projects

STATUS: COMPLETED

We set out at the beginning of 2022 to integrate cheqd into the DIF Universal Resolver. The Universal Resolver utilises REST APIs and other interfaces to enable the resolution of any DIDs which have a supported driver. We have successfully made this integration and you can now find did:cheqd on the list of supported drivers. Over 2023, we will improve and refactor our DID Resolver and our integration to make it fully enterprise-ready.
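To give a feel for this, resolving a did:cheqd identifier through a Universal Resolver instance is a single HTTP GET against its `/1.0/identifiers/` endpoint. A minimal sketch (the hostname is the public DIF development instance, and the DID is an illustrative placeholder, not a real identifier on mainnet):

```typescript
// Build the Universal Resolver request URL for a DID.
// /1.0/identifiers/{did} is the standard Universal Resolver REST path;
// swap RESOLVER_BASE for your own deployment if you run one.
const RESOLVER_BASE = "https://dev.uniresolver.io/1.0/identifiers";

function resolutionUrl(did: string): string {
  if (!did.startsWith("did:")) {
    throw new Error(`not a DID: ${did}`);
  }
  return `${RESOLVER_BASE}/${did}`;
}

// Illustrative identifier -- not a real DID on mainnet.
const exampleDid = "did:cheqd:mainnet:4a71319b-00b1-4db9-bc05-56dc426f7062";
console.log(resolutionUrl(exampleDid));

// A GET on that URL returns the DID Document plus resolution metadata:
//   const { didDocument } = await (await fetch(resolutionUrl(exampleDid))).json();
```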

The graph below shows our work on the cheqd DID Resolver and how the bulk of the work was carried out within the second and third quarters.

Cadence for DID Resolver commits

We have also made significant progress to integrate cheqd into the DIF Universal Registrar. This will enable parties to create, update and deactivate cheqd DIDs through a standard interface. The Universal Registrar can also be leveraged to support cheqd in a wider range of applications and SDKs. You can cheq out our progress in our Open Source repository here.

3. New & Improved Identity Functionality

STATUS: COMPLETED

We have gone above and beyond other identity networks on the market. Firstly, looking at the cheqd DID method, we have incorporated the ability to have multiple DID controllers and to have more complex verification method relationships not present in the did:indy method.
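As a concrete illustration of these relationships, the sketch below shows a DID Document with two controllers and a key that is authorised for authentication only; every identifier and key value is made up for illustration:

```typescript
// Illustrative DID Document for a did:cheqd DID (identifiers are made up).
// Two controllers share authority over the document, and the verification
// method is authorised for authentication only -- the kind of granular
// verification relationship discussed above.
const didDocument = {
  id: "did:cheqd:mainnet:1111aaaa-2222-bbbb-3333-cccc4444dddd",
  controller: [
    "did:cheqd:mainnet:1111aaaa-2222-bbbb-3333-cccc4444dddd", // self
    "did:cheqd:mainnet:5555eeee-6666-ffff-7777-888899990000", // co-controller
  ],
  verificationMethod: [
    {
      id: "did:cheqd:mainnet:1111aaaa-2222-bbbb-3333-cccc4444dddd#key-1",
      type: "Ed25519VerificationKey2020",
      controller: "did:cheqd:mainnet:1111aaaa-2222-bbbb-3333-cccc4444dddd",
      publicKeyMultibase: "z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK",
    },
  ],
  // The key may authenticate as the DID subject, but is deliberately
  // NOT listed under assertionMethod, capabilityInvocation, etc.
  authentication: [
    "did:cheqd:mainnet:1111aaaa-2222-bbbb-3333-cccc4444dddd#key-1",
  ],
};

console.log(JSON.stringify(didDocument, null, 2));
```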

We have also published implementation reports for our DID method, DID resolver and DID URL dereferencer against the DID Core Specification Test Suite to transparently show our DID method capabilities.

Most notably, we have built a ‘resource’ module which supports DID-Linked Resources identifiable with unique, DID Core conformant, DID URLs. This functionality is available to begin using through the Veramo SDK for cheqd. You can learn more about DID-Linked Resources in our guide here.
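The resulting resources are addressed with plain DID URL paths of the form `<did>/resources/<resourceId>`. A small sketch of composing and parsing such a DID URL (both UUIDs are illustrative placeholders):

```typescript
// A DID-Linked Resource is addressed as <did>/resources/<resourceId>.
function resourceUrl(did: string, resourceId: string): string {
  return `${did}/resources/${resourceId}`;
}

function parseResourceUrl(url: string): { did: string; resourceId: string } {
  const marker = "/resources/";
  const i = url.indexOf(marker);
  if (i === -1) throw new Error(`not a DID-Linked Resource URL: ${url}`);
  return { did: url.slice(0, i), resourceId: url.slice(i + marker.length) };
}

// Both UUIDs below are illustrative placeholders.
const did = "did:cheqd:mainnet:4a71319b-00b1-4db9-bc05-56dc426f7062";
const url = resourceUrl(did, "9cc97dc8-ab3a-4a2e-a18a-13f5a54e9096");
console.log(url);
console.log(parseResourceUrl(url));
```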

Using DID-Linked Resources, we have been able to natively support AnonCreds on cheqd, a type of credential format used by Hyperledger Indy and Hyperledger Aries libraries which has increased privacy-preserving qualities compared to VC-JWT and VC-DI (JSON-LD) based credentials.

You can see how cheqd benchmarks against Hyperledger Indy in a side-by-side comparison here.

4. Payment rails for identity

STATUS: IN DESIGN

Payment rails for identity has been cheqd’s flagship offering since the network was launched. In the last year we have laid the groundwork and foundations for payment rails to layer on top.

In our update 1.0.x we are introducing new tokenomics for the network which will tie the CHEQ token to the core identity utility on the network. You can see cheqd’s updated pricing and comparison against Sovrin and Indicio here.

This is the first step towards payment rails, and early in 2023 we hope to release a phased architectural plan for achieving full payment rail functionality.

5. Client SDKs in more programming languages

STATUS: COMPLETED

At the beginning of 2022 we were planning to use Verifiable Data Registry (VDR) Tools SDK, from one of our key partners Evernym (acquired by Avast, now merged into Gen) as the primary enterprise-ready SDK for cheqd. However, towards the beginning of 2022 we conducted a product survey which established that software vendors prefer to use programming languages/frameworks based on JavaScript (81.2%), Python (62.5%), and Go (28.1%), instead of Rust which VDR Tools SDK uses (16%).

Since then, we have built our own SDK (cheqd SDK) and integrated cheqd into a JavaScript based SDK, the Veramo SDK, as a plugin. You can take a look at the modular architecture for our SDK packages here.

In terms of our product development on the cheqd SDK, below you can see the commits over the year for the cheqd SDK and cheqd Veramo plugin.

Cadence for cheqd SDK commits
Cadence for cheqd Veramo plugin commits

We have also successfully demoed cheqd using AnonCreds through Aries Framework JavaScript, and full SDK support is due to be completed following some updates to the AnonCreds spec to decouple dependencies on Hyperledger Indy. Going forward into 2023, we want to continue to integrate cheqd into as wide an array of SDKs as possible, starting with Aries Cloud Agent Python (ACA-Py), followed by Aries Framework Go and Aries Framework .NET.

6. Better interoperability and support for emerging identity standards

STATUS: COMPLETED

cheqd has made a splash in the identity world with heavy influence on two emerging technical standards. Firstly, a W3C specification for DID-Linked Resources, as an extension to the DID Resolution Specification. Secondly, an updated ledger-agnostic Hyperledger AnonCreds specification that will complement cheqd’s approach to supporting AnonCreds objects using its DID-Linked Resources.

Through supporting AnonCreds on cheqd, cheqd is the first network to support all major credential types, with VC-JWT fully supported and VC-DI (JSON-LD) at the final stages of being production ready.

Web 3.0 retrospective

  • TOTAL SCORE: 3/6

1. Wider integration with Cosmos ecosystem

STATUS: IN PROGRESS

We have expanded the number of platforms on which cheqd functionality is available. For example, at the beginning of 2022, staking and governance functionality was only available through our web-based cheqd x OmniFlix dashboard, built with our friends at OmniFlix.

Now, we natively support staking and governance in our own cheqd wallet web-app and separately support governance operations at our cheqd x Commonwealth forum.

Notably, cheqd is also fully supported by Leap Wallet on their browser extension and on their mobile app. The beta Leapboard enables you to manage your CHEQ alongside your other Cosmos-based tokens in one place.

Going forward into 2023, we still want to push for full Keplr integration, and importantly, we want to introduce the ability to manage Verifiable Credentials within existing Cosmos-based applications.

2. Bridge to Ethereum networks

STATUS: COMPLETED

In Q1 2022, we successfully set up a bridge to Ethereum for the cheqd network using the Gravity Bridge. A blockchain bridge or ‘cross-chain bridge’ enables users to transfer assets or any form of data seamlessly from one entirely separate protocol or ecosystem to another (i.e. Solana to Ethereum, or in our case Cosmos to Ethereum and vice versa). The ERC20 version of CHEQ, the CHEQ-ERC20 wrapped token, can be found here!

Read more about the cheqd x Gravity bridge and our decisions around it here.

3. Improved automation and tooling

STATUS: COMPLETED

We have made significant progress in how we have architected our cheqd node tooling and infrastructure. At the beginning of 2022, for example, it was a manual process to upgrade your cheqd node from scratch. We have since introduced an interactive installer which automates the process of upgrading cheqd nodes, making it significantly easier and less time consuming to run a cheqd node.

In addition, we have introduced several new components to streamline the setup and management of nodes in the form of infrastructure-as-code. We have started using HashiCorp’s Terraform to define consistent and automated workflows. This automation gives prospective network Validators the choice of whether they want to just install a validator node (using our install instructions), or whether they want to set up a sentry-plus-validator architecture for more security.

To complement Terraform, we have also introduced Terragrunt which performs the role of a wrapper to make our infrastructure available in Hetzner and DigitalOcean, as well as making it easier to utilise with AWS or Azure.

And to make cheqd configurations reusable across other Cosmos networks, we have begun using Ansible, which again acts as a wrapper or envelope to take cheqd configurations between separate Cosmos projects for node operators.

For a condensed list of the tooling and automation improvements we have made during the year, take a look through our Open Source-a-thon blog.

4. Native, cross-chain primitives for the Cosmos ecosystem

STATUS: IN PROGRESS

Our demo wallet showcases how cheqd DIDs and DID-Linked Resources can be used to sign and build Verifiable Credentials in Cosmos-based identity wallets. Using the demo wallet, you can obtain credentials for authenticating with a social media profile, as well as import event tickets, and combine both into one proof. These Credentials reference schemas and images which are stored as resources on the cheqd network. You can watch a video of the full demo here.

This is a huge step for demonstrating how cheqd’s identity functionality can begin to slot into other Cosmos-based applications. A priority for 2023 will be exploring how cheqd’s identity primitives can be utilised across other Cosmos chains, perhaps through opening up cheqd’s DID and Resource module to the rest of the Cosmos ecosystem via Interchain Accounts.

5. Smart contracts using CosmWasm

STATUS: BACKLOG

Smart contracts using CosmWasm will likely be a crucial component of creating a privacy-preserving payment flow for credentials on cheqd. We are waiting until the first iteration of payments on cheqd has gone live before looking to integrate smart contracts. This is because CosmWasm would significantly increase the computational cost of running a cheqd node, so we want to make sure it is not introduced before it is ready to be used.

6. Establish ourselves as a leader in decentralised governance for identity

STATUS: COMPLETED

cheqd’s Governance Framework is poised to become the first fully Trust over IP conformant Layer 1 Governance Framework. This will be enabled through cheqd’s approach to DID-Linked Resources which will identify the Governance Framework with a unique DID URL.

The cheqd governance framework also aligns tightly with the latest governance standards coming out of the ISO/TC 307 technical committee on governance (ISO/TS 23635:2022), where the concept cheqd refers to as “Entropy” is a core component of the ISO approach to blockchain governance.

cheqd’s Governance Framework has since been lauded by leaders in the decentralised identity space. Drummond Reed, Director of Trust Services at Gen and co-author of the DID Core Spec stated:

“As SSI matures we’re seeing innovation at every layer of the Trust Over IP stack. cheqd is the only ToIP Layer 1 public utility I’ve seen with a governance framework designed explicitly to evolve from permissioned to permissionless. Add to that cheqd’s commitment to interoperability across all SSI ecosystems and its unique focus on SSI-based value exchange and you have one of the most exciting projects in SSI today.”

We’ll also be continuing to engage with working groups, consortia and organisations in the SSI space, such as the Decentralized Identity Foundation (DIF), Trust over IP (ToIP), the World Wide Web Consortium Credentials Community Group (CCG), European Blockchain Services Infrastructure (EBSI), and LACCHAIN, to align on best practices and standards for governance and identity technology.

Conclusion

In 2022 we made huge amounts of progress in product development across the board, completing 8/12 objectives we set at the beginning of the year. Our development cadence and quantity of work is also illustrated by the sheer volume of commits (5,365), pull requests (262) and frequency of PRs (1.3 per day) we achieved this year, shown in the overview below (based on just two of our repositories: cheqd/cheqd-node and cheqd/sdk).

Overview of cheqd’s development metrics

Putting this into perspective, cheqd ranks at the top of all Cosmos-based chains for total commits in the month of December (as of 22/12/2022) with 874, and this wasn’t even our highest month (we reached 1,175 commits in November).

Graphic showing monthly commits of Cosmos chains

This is a huge testament to the dev and dev ops teams who have worked around the clock to bring our product visions to life, and we’d be nowhere without them!

In summary, 2022 was the year of building solid foundations, and this can often go under the radar. 2023 will be the year of functional applications, utility and deployments on cheqd.

In fact, we intend to start the year with a bang (hint hint you may have heard of a project with the codename Boots.)

cheqing out,

Ankur, Ross and Alex

Verifiable Credentials to streamline the events and ticketing industry

SSI for events-1

The ticketing industry has been in the press this week for failing to provide loyal and “Verified Fans” the early access to tickets they deserve. New types of data files, known as Verifiable Credentials, are here to help, enabling punters to easily present verified identity information to secure the process of buying tickets and entering events. Here’s how.

We’ve grown too accustomed to the way events handle ticketing. Long queues, ticket touts, and ticket handling fees are all part of the scenery if we were attending a gig, sports event or festival.

Every time we register for an event or purchase a ticket, we are constantly providing the same information over and over again. Meanwhile, each time we attend an event, the organisers need to spend time and money verifying our name, ticket validity and sometimes our age.

This process is repetitive, insecure, costly and, most importantly, easily avoidable. The use of Verifiable Credentials (VC) – a tamper-evident data file with a set of claims about a person, organisation, or thing that can be cryptographically verified – is poised to streamline the entire events industry. With VCs, it would be far more difficult for bots and scalpers to gain the level of trust necessary to bulk-buy and resell tickets, while event-goers would be able to prove trusted attributes about themselves and a level of reputation that they are who they claim to be.

Problem one: Scalpers

Taylor Swift has been in the news this week for the wrong reasons. The release of tickets for her latest tour has been met with a clear demonstration of one of the events industry’s biggest issues: ticket scalpers and bots.

Just minutes after the pre-sale release, her face-value tickets, which cost between $49 and $449 each, had sold out and Ticketmaster had crashed; simultaneously, on the secondary market, the same tickets were being resold and flipped for as much as US$22,700 (£19,100) each.

This is not a one-off. It is a problem that has consistently plagued the ticketing industry, as shown by research from Distil Research Labs, which estimated that bots were behind 42.2% of activity in online ticket sales.

There have been attempts to resolve the issue. For example, in 2017, Ticketmaster introduced a new scheme for “Verified Fans”, where the most clued-in fans could pre-register their personal information for the event prior to the tickets coming on sale. Through this process, lucky fans would receive an email with a presale code to access an early-bird ticket sale.

This process was in effect for the Taylor Swift gig; however, bots had been able to register themselves with fake names and fake email addresses in advance, skirting around the extra measures put in place.

The issue here is that becoming a “Verified Fan” does not actually “verify” your identity in any way: it relies on self-attestations.

Problem two: Security

There is a lack of security around ensuring the identity of the attendees of an event because the existing process is too clunky. It usually involves one physical ticket or digital QR code PLUS a physical identity document check. More recently, it may even involve a Covid vaccination certificate.

Since traditional tickets aren’t tied to an identity, bouncers and door staff are tasked with the role of ascertaining people’s identities. This is expensive for the organisers, inefficient as it causes long queues, and ultimately, insecure.

Various UK YouTubers such as Niko Omilana, Max Fosh and The Zac & Jay Show have shown how farcical the existing ticketing system is by creating videos of themselves sneaking their way through bouncers into various events. The most telling was Max Fosh getting into the International Security Expo in London, armed with only a fake lanyard and a strut of confidence.

This is potentially a huge risk vector for event organisers, which they need a new answer for.

Additionally, due to the lack of options to prove your identity, around 10,000 passports are lost each year while on a night out in a bar or club in the UK, according to the Identity and Passport Service (IPS). It makes little sense that there is no digital way to represent the same identity data or level of trust for entering a venue.

Verifiable Credentials to the rescue

Verifiable Credentials are a new digital standard from the World Wide Web Consortium (W3C) to create a more trustworthy way of holding and presenting data. From a more technical lens, VCs are tamper-evident data files with a set of claims about a person, organisation, or thing that can be cryptographically verified. Using Verifiable Credentials, it would be possible to reduce the risk of identity fraud, ticket scalpers and unauthorised access to events.
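To make the shape of this concrete, here is a minimal sketch of a W3C Verifiable Credential for the ticketing scenario; the issuer, holder, dates and claims are all illustrative, and the cryptographic proof the issuer attaches is elided:

```typescript
// Minimal W3C Verifiable Credential shape (all values illustrative).
// A real credential carries a cryptographic proof produced by the
// issuer's key, e.g. a JWT signature or a Data Integrity proof.
const loyalFanCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "LoyalFanCredential"],
  issuer: "did:cheqd:mainnet:artist-label-did", // illustrative issuer DID
  issuanceDate: "2022-11-01T12:00:00Z",
  credentialSubject: {
    id: "did:key:fan-holder-did", // illustrative holder DID
    fanClubMember: true,
    memberSince: "2019-03-15",
  },
  // proof: { ... } -- omitted; added by the issuing SDK
};

console.log(JSON.stringify(loyalFanCredential, null, 2));
```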

This is because a Verifiable Credential that has been issued to you from a trustworthy party is:

  1. Verifiable: affording a much higher degree of trust than something like Ticketmaster’s “Verified Fan” service, which is self-attested.
  2. Tamper-evident: meaning it’s much more difficult to fake or copy without being caught out.
  3. Reputable: You can combine credentials from multiple sources to present a digital reputation, not just an attestation.

For example, if a music artist issues you a VC for being a loyal fan, this becomes something that cannot be replicated as easily by a bot and becomes far more meaningful. Event organisers could then create a gated presale, requesting trusted attestations from multiple sources – requesting, for example:

  1. A loyal fan credential issued by a band; and
  2. A credential for authenticating with a social media platform

Having both of these would make it much more difficult for bots and scalpers to be included in the presale. For more high-profile or higher-risk events, you could request additional, higher Level of Assurance (LoA) credentials, such as:

  1. A credential issued by a bank or government (trusted third party); or
  2. A credential issued by an employer attesting to your name and identity.

Therefore, with digital credentials, it becomes a lot harder to fake your way into an event, or scalp tickets, since you need to prove a level of reputation to get them in the first place. Currently, it is easy to spin up a new identifier like an email, but Verifiable Credentials make it much harder to fake a digital reputation.
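The gate a verifier applies here can be sketched as a simple policy over credential types, loosely in the spirit of DIF Presentation Exchange; the type names and check below are illustrative, not a real Presentation Exchange implementation:

```typescript
// Sketch of a presale gate: the verifier requires one credential of each
// listed type before issuing a presale code. Types are illustrative.
type Credential = { type: string[]; issuer: string };

const requiredTypes = ["LoyalFanCredential", "SocialMediaCredential"];

function meetsPresaleGate(presented: Credential[]): boolean {
  return requiredTypes.every((required) =>
    presented.some((cred) => cred.type.includes(required))
  );
}

const wallet: Credential[] = [
  { type: ["VerifiableCredential", "LoyalFanCredential"], issuer: "did:cheqd:mainnet:band" },
  { type: ["VerifiableCredential", "SocialMediaCredential"], issuer: "did:web:social.example" },
];

console.log(meetsPresaleGate(wallet));              // true -- both types present
console.log(meetsPresaleGate(wallet.slice(0, 1)));  // false -- social media credential missing
```

In a real deployment the verifier would also check each credential's proof and whether the issuer appears in a trust registry, rather than trusting the type labels alone.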

Verifiable Credentials in action

From theory to action: to illustrate how straightforward this solution is, cheqd demoed how VCs could address this exact issue at the Internet Identity Workshop (IIW). In this demo, Verifiable Credentials were used to combine a verified identity and an event ticket into one QR code proof.

Check out the demo recording here, or feel free to have a go yourself at wallet.cheqd.io

Learn from the demo how you could:

  1. Sign in to a web-based application using a wallet (in this case, the Cosmos-based Keplr wallet)
  2. Prove your identity with a social media account (authentication)
  3. Get a credential with your name and social media details
  4. Add your event ticket to your wallet, providing a QR code
  5. Present your event ticket alongside your name and social media details.

The combined proof could be shown on the door of the event to securely “scan in”. This would hugely reduce the amount of time needed to physically check identity documentation and would make it much more secure for event organisers, who could rely on a combined proof of “identity” plus an “event ticket”, all issued by trusted third parties. 

You can play with the cheqd wallet and get yourself a credential here.

You can learn more about Verifiable Credentials, and their interaction with Decentralised Identifiers (DIDs), here.

Conclusion

Using Verifiable Credentials would greatly streamline the events and ticketing industry, solving some of the core problems that have been causing huge financial losses and frustrating punters over the past decade.

Firstly, requesting Verifiable Credentials in a ticket sale would make it far more difficult for bots and scalpers to get the requisite level of trust necessary to bulk-buy tickets and resell them.

Secondly, using Verifiable Credentials to enter an event venue would streamline the process, reducing queue times and giving organisers a greater level of confidence in who is attending the event.

Overall, the technology is developing quickly, and we at cheqd are ready to provide the backbone and network for the growing adoption of Verifiable Credentials for events, with Decentralised Identifiers anchored on the cheqd network. If this blog resonates with you and you are in the events industry, please get in touch with our partnerships team here, or you can try out our SDK for issuing and verifying credentials here!

AnonCreds Indy-Pendence: Part Two

Part 2: Bringing cheqd AnonCreds to life with Animo Solutions in Aries Framework JavaScript

Co-authored by Alex Tweeddale (Product Manager at cheqd), Ross Power (Product Manager at cheqd), Timo Glastra (CTO, co-founder at Animo), Berend Sliedrecht (Software Engineer at Animo).

Read part 1 here.

Introduction

In our previous blog we explained how we’re using cheqd’s on-ledger resources module to decouple AnonCreds from Hyperledger Indy and natively support AnonCreds and AnonCreds Objects on cheqd.

This work has laid the foundation for the next piece of the puzzle, bringing cheqd AnonCreds to life with easily accessible and usable Software Development Kits (SDKs).

Over the past few months we have been collaborating with Animo Solutions to support AnonCreds on cheqd within Aries Framework JavaScript. We are thrilled to announce that we have achieved this goal. Via this integration, users will be able to issue, verify and present AnonCreds with cheqd DIDs, and with AnonCreds Objects written as cheqd Resources, identifiable via DID URLs.

Showcasing AnonCreds on cheqd

During the demo we co-hosted with Animo at their office in Utrecht as a pre-event for Rebooting the Web of Trust (RWoT) on the 19th September 2022, we were able to show a full end-to-end journey of Ankur, our CTO, using AnonCreds. You can also watch a demo of this, here.

As you’ll see in the video, using a custom version of Animo’s self-sovereign identity demo we were able to showcase:

  1. AnonCreds issuance: Ankur gets his Animo identity for his name, date of birth and address as a fully functional AnonCred on cheqd.
Accepting Animo Identity AnonCred Credential into identity wallet

2. AnonCreds presentation and verification: Ankur signs up to attend the RWoT conference using his Animo identity AnonCred which is verified against the cheqd network.

Signing up for RWoT using Credentials
Connecting to RWoT to sign up for conference
A Credential request from RWoT for Ankur’s Credentials

3. AnonCreds issuance: Ankur receives a RWoT AnonCred for his entry to the conference.

Accepting the RWoT Credential sent to Ankur

4. AnonCreds presentation and verification: Ankur presents his RWoT AnonCred to gain access to the conference.

Presenting the RWoT as an AnonCred to attend the conference

Under the hood, you can check out the schema and credential definitions used for this AnonCred, which are identifiable using a DID URL, and stored as cheqd resources, via our DID Resolver.

Animo Identity schema:

Animo Identity credDef:

RWoT schema:

RWoT Credential Definition:

Integration into Aries Framework JavaScript

Aries Framework JavaScript (AFJ) is a framework that enables users to quickly and easily issue, hold and verify Verifiable Credentials. The main goals of AFJ are:

  • Interoperability between all the Aries implementations;
  • Usability for non-SSI developers;
  • Remaining agnostic of the credential format or DID method used, leading the way in terms of interop.

In recent months, AFJ has been expanding heavily into a more modular and “less specific” framework. The integration with cheqd is a prime example of this, being the first true showcase of anchoring AnonCreds on non-Indy ledgers. Supporting more credential formats, ledgers and DID methods is crucial to the continual development of AFJ.

How Animo leveraged the cheqd SDK to accelerate the integration into AFJ

One of the catalysts for Animo to support AnonCreds on cheqd was the release of the cheqd SDK in August, which was integrated into the Veramo SDK for cheqd. The Veramo SDK for cheqd was built in a modular fashion, which made it a very useful frame of reference for integrating cheqd into AFJ and made it easier to leverage the architectural design that the cheqd team put together.

Looking ahead

To continue the work to decouple AnonCreds from Hyperledger Indy, Animo is currently raising funds to fully implement Ledger Agnostic AnonCreds in AFJ and ACA-Py with cheqd. cheqd and Animo are also working closely with Esatus to support the development of Aries Framework .NET. This means, over the coming months users will be able to interact with cheqd and build their identity applications in any of the key SDKs they already use, whether this is AFJ, ACA-Py, Veramo or .NET. The image below provides a visual aid to where things are currently at:
The cheqd “stack” and tooling at different technical layers

Animo and cheqd continue to collaborate in this effort. The cheqd team is continuing to offer support for widening the SDKs available, for example by working on a Universal Registrar driver which will speed up the ACA-Py development. cheqd is also seeing the community get behind these efforts, with a new initiative to utilise Community Pool funds to help financially support the progress.

AnonCreds Indy-Pendence: Part One

PART 1: DECOUPLING THE RELIANCE ON HYPERLEDGER INDY AND CREATING MORE EXTENSIBLE ANONCREDS OBJECTS WITH CHEQD.

Co-authored by Alex Tweeddale (Product Manager & Governance Lead), Ross Power (Product Manager), and Ankur Banerjee (CTO/Co-founder).

Read part 2 here.

Introduction

🚀 We are very excited to announce that the on-ledger resources feature released on cheqd mainnet allows developers to support AnonCreds natively on the cheqd network.

Supporting AnonCreds on cheqd is a landmark achievement, since they have previously been tightly coupled with Hyperledger Indy chains. We are the first network to successfully decouple this dependency.

cheqd now provides support for AnonCreds and, in doing so, remains compliant with the W3C DID Core Spec, enabling Schemas, CredDefs, Revocation Registry Definitions and Revocation Registry Entries to be created using on-ledger resources, identifiable via DID URLs. This work is seminal in creating a broader, ecosystem-agnostic ledger layer, which supports greater interoperability for self-sovereign identity (SSI).

What are “AnonCreds”?

AnonCreds are a type or “flavour” of Verifiable Credentials that predominantly rely on Hyperledger Indy for ledger code and interactions, Hyperledger Ursa for cryptographic libraries, and the Hyperledger Aries codebase for agent, wallet, and credentialing code.

AnonCreds are now widely adopted by organisations such as the Government of British Columbia, IDunion and the IATA Travel Pass.

We carried out a survey earlier in 2022, finding that AnonCreds constituted the highest adopted (or roadmapped) Credential type amongst our respondents in 2022:

  • 45.9% AnonCreds;
  • 40.5% JSON based JWT;
  • 40.5% JSON-LD with BBS+ signatures; and
  • 32.5% JSON-LD.

(Percentages show the share of respondents who selected each Credential type to support in 2022.)

The respondents here were largely partners of cheqd. Therefore, to fully support all our partners and their existing clients within our network, we needed to build a way to support AnonCreds.

Why are AnonCreds a contentious issue?

For anyone in the self-sovereign identity (SSI) community, there is no avoiding the fact that AnonCreds divide the opinion of the community.

AnonCreds were originally designed to provide additional privacy benefits, compared with JSON and JSON-LD based Verifiable Credentials. To achieve these privacy benefits, AnonCreds utilise a series of bespoke technical components, including:

  1. Camenisch-Lysyanskaya signatures (CL-Signatures) to encode individual claims for the purpose of enabling selective disclosure.
  2. Link secrets written into Credential Definitions, used when issuing AnonCreds to: (a) bind the issued Credentials to a particular holder; and (b) enable the holder to present a Zero-Knowledge Proof to a verifier without using a correlatable identifier.
  3. Hyperledger-Indy specific transaction syntax for on-ledger schemas, credential definitions, revocation registry definitions and revocation registry entries.
  4. Cryptographic accumulator deltas on-ledger, compiled into off-ledger ‘tails files’, for the purpose of asserting proofs of non-revocation while maintaining privacy for AnonCreds holders.

These bespoke technical components afford AnonCreds a degree of ingenuity, but this comes at the cost of interoperability.

Vendor Lock-in

A bit like how Apple products only really work with other Apple products, AnonCreds only really work with Hyperledger Indy, and are largely tied to Hyperledger Aries libraries to be issued and consumed.

Image comparing AnonCreds and Apple compatibility. Source.

Historically, AnonCreds could not be written to other Layer 1 utilities, since Credential Definitions and CL-Schemas are custom Indy-specific transactions which are not conformant with W3C DID Core. This shoehorns adopters into a particular tech stack which, although open source, is largely reliant on the Indy/Aries community for updates, since there is no formalised standard approved by a reputable international body like the IETF or W3C. It is worth noting, however, that AnonCreds v1 is in the process of being proposed as a standard to the Internet Engineering Task Force (IETF).

Scalability

The use of ZKP-CL signatures gives AnonCreds benefits from a privacy perspective; yet it also requires the computation of very large files, which can lead to inefficiencies when used at scale in production environments.

Kaliya Young’s recent blog on Being “Real” about Hyperledger Indy & Aries / Anoncreds highlights many of the scalability issues around CL-Signatures well.

The “tails files” used in Indy revocation suffer from similar inefficiencies. Each “tails file” can contain up to 20,000 revocation entries, and in a highly utilised ecosystem there may be a large number of these 20,000-entry files archived. A non-revocation request that queries a tails file of this size may take much longer than usual to return a proof, creating a slower user experience than standard centralised models.

Owing to these benefits and tradeoffs, we previously concluded that AnonCreds in their current format have created a split between short-termism and long-termism within the community. Vendors with clients knocking on their door and asking for a product tomorrow sway towards short-term solutions which work now (such as AnonCreds, which offer privacy-preserving revocation and the capability for predicate proofs).

Whereas, enthusiasts and SSI visionaries are looking at a longer-term vision of harmonisation and wide-barrelled interoperability (such as JSON-LD with BBS+ signatures or even AnonCreds V2 with BBS+ signatures). This is because BBS+ signatures provide a lot of the same benefits as AnonCreds (CL signatures), but with much smaller file sizes. Nonetheless, at cheqd we have acknowledged that there is a value in supporting both types of Verifiable Credentials.

Supporting AnonCreds on cheqd

Our first port of call in supporting AnonCreds was differentiating what makes AnonCreds distinct from other types of Credentials in terms of what goes on the ledger, and looking at ways that we could accommodate for those differences using cheqd functionality.

Decoupling AnonCreds from Indy

The key features of what makes an AnonCred exist at two distinct levels:

  1. Ledger level: What must be written to a verifiable data registry for AnonCreds to be created and functional in practice?
  2. Credential and SDK level: What cryptographic techniques must be employed within an SDK to give AnonCreds their privacy-preserving features?

For the existing AnonCreds stack, on Hyperledger Indy, these two levels can be represented by Figure 1 below:

Figure 1: How Indy and Aries interact for AnonCreds support

Here, Hyperledger Indy is important for supporting AnonCreds since it has to-date been the only identity-blockchain which can natively support DIDs, Schemas, Credential Definitions (and optional Revocation Registry) transactions written to the ledger.

Our hypothesis in supporting AnonCreds was that if we were able to replicate the functionality of the Ledger layer, the SDK and credential layer would be able to fit on top, without any wholesale changes in client applications that currently use AnonCreds. This would decouple the dependency on Hyperledger Indy, creating a much broader and widely interoperable ecosystem for AnonCreds.

Figure 2: How cheqd and Aries theoretically can interact for AnonCreds support

However, we did not want to build a carbon copy of Hyperledger Indy. As previously discussed, Hyperledger Indy is contentious for multiple reasons, including scalability and interoperability.

For this reason, we wanted to explore the option of supporting Credential Definitions and Schemas on-ledger, but in a way which directly conformed with W3C DID Core and with greater scalability.

Composition of AnonCreds Object identifiers

To reach a composition of Schemas, CredDefs, RevRegDefs and RevRegEntries on-ledger which is W3C DID Core compliant, it is important to understand exactly how these bespoke transactions are composed on Hyperledger Indy, and why this did not achieve conformance with the standard.

What goes into a Legacy AnonCreds ID?

Historically, AnonCreds Identifiers for schemas, CredDefs and Revocation writes have been composite strings, recognised only by Hyperledger Aries and Indy applications.

For example, AnonCreds on-ledger schema_id, contain the following information:

  • Publisher DID: a string; the DID of the Schema Publisher.
  • Object type: an integer denoting the type of object; 2 is used for Schemas.
  • Name: a string; the name of the schema.
  • Version: a string; the version of the schema in semver format. The three-part, period (“.”) separated format MAY be enforced.

The schema_id therefore was formatted in the following way:

<publisherDid>:<objectType>:<name>:<version>

For example, a Legacy AnonCreds schema_id could be:

7BPMqYgYLQni258J8JPS8K:2:degreeSchema:1.5.7
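To make the composite format concrete, here is a minimal sketch (not an official Indy utility) of splitting a legacy schema_id into its four parts. It assumes an unqualified Indy DID, i.e., one containing no colons of its own:

```python
# Hypothetical helper: split a legacy AnonCreds schema_id into its parts.
# Format: <publisherDid>:<objectType>:<name>:<version>
# Assumes the publisher DID is an unqualified Indy DID (no colons).
def parse_legacy_schema_id(schema_id: str) -> dict:
    publisher_did, object_type, name, version = schema_id.split(":")
    if object_type != "2":
        raise ValueError("object type 2 is expected for schemas")
    return {
        "publisherDid": publisher_did,
        "objectType": int(object_type),
        "name": name,
        "version": version,
    }

parsed = parse_legacy_schema_id("7BPMqYgYLQni258J8JPS8K:2:degreeSchema:1.5.7")
```

Any application consuming this identifier must hard-code knowledge of the positional format, which is exactly the coupling problem described below.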

The problem with this approach is that it:

  1. Ties AnonCreds to Hyperledger Indy, since Indy is the only possible chain which can provide all the required content for schemas, CredDefs and Revocation writes;
  2. Limits client applications to expect a very specific identifier format for AnonCreds Objects.

Therefore, to decouple AnonCreds from Hyperledger Indy it has been important to move away from this identifier format and to create a more extensible approach to providing client applications the required information.

AnonCreds Specification expansion

Recently, the AnonCreds specification has evolved to allow different ‘AnonCreds Object Methods’ which do not necessarily need to conform to the same representation as the legacy identifiers.

This approach gives different Object Methods the flexibility to define their own AnonCreds Object Identifier formats. This is a welcome change which provides greater flexibility in how AnonCreds Objects may be represented on different chains. Using this extension of the AnonCreds specification, cheqd has been able to create its own AnonCreds Object Method.

cheqd AnonCreds Object Method

In an earlier blog, we discussed our approach to resources on cheqd at length and highlighted the benefits of using on-ledger schemas, compared to other existing solutions such as schema.org.

cheqd AnonCreds Objects build directly on this approach, in that we wanted to create an identifiable path to each specific AnonCreds Object using DID URLs.

Example of cheqd DID URL for schema resource

We have now created an extensive body of documentation explaining how we support AnonCreds on cheqd, including how we represent each of the AnonCreds Objects using DID Core conformant DID URLs. Central to this approach has been removing all dependencies on Hyperledger Indy from the core “data” contents of an AnonCreds Object, and moving anything specific to a particular network into “AnonCreds Object Metadata”. This mimics how DID Document representation supports multiple different approaches, where anything network-specific is represented within the “DID Document metadata” section, rather than in the core body of what is returned.
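The data/metadata split described above can be sketched as follows. This is an illustrative structure only; the field names (`data`, `metadata`, `network`, `resourceId`) are assumptions for the sake of the example, not the cheqd Object Method schema:

```python
# Sketch of the data/metadata split: the core AnonCreds object stays
# network-agnostic, while anything ledger-specific lives in a separate
# metadata section. Field names here are illustrative assumptions.
def wrap_anoncreds_object(schema: dict, network: str, resource_id: str) -> dict:
    return {
        "data": schema,   # network-agnostic AnonCreds content
        "metadata": {     # network-specific details, kept out of the core body
            "network": network,
            "resourceId": resource_id,
        },
    }
```

Because nothing Indy- or cheqd-specific leaks into `data`, the same core object could be anchored by any conformant AnonCreds Object Method.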

Conclusion and next steps

Through the use of the resource module, cheqd is able to support all AnonCreds specific Objects within the parameters of DID Core, by identifying each Object and future Object versions with DID URLs.

This creates an elegant platform for AnonCreds to be issued on top, which will allow cheqd to support any SSI vendor in the marketplace at a technical level. We have started with integration into Aries Framework Javascript (AFJ) and are looking to expand into Aries Cloud Agent Python (ACA-Py) as well as Aries Framework .NET. Read more about our plans in Part 2!

We look forward to working with our partners and the broader SSI community to see how we can innovate together using this new functionality, and decouple AnonCreds dependencies further.

As always, we’d love to hear your thoughts on our work and how resources on-ledger, or AnonCreds on cheqd could improve your product offering. Feel free to contact the product team directly — [email protected], or alternatively start a thread in either our Slack channel or Discord.

AnonCreds Indy-Pendence was originally published in cheqd on Medium, where people are continuing the conversation by highlighting and responding to this story.

Our Approach to Resources on-ledger

USING THE CAPABILITIES OF THE DID CORE SPECIFICATION FOR STANDARDS-COMPLIANT RESOURCE LOOKUP

This blog post has been co-written by Alex Tweeddale, Ankur Banerjee and Ross Power.

Introduction

Since the beginning of Q2 2022, cheqd has been assembling the building blocks for anchoring “resources” to the cheqd network.

The concept of resources in self-sovereign identity (SSI) ecosystems is not new; however, as we will discuss throughout this blog post, existing approaches to resources in SSI oblige adopters to make compromises between security, availability and interoperability. We first noticed this when we were looking at how we could securely reference credential schemas, something we will expand on throughout this post.

Our objective in building resources on cheqd is to improve the way resources are stored, referenced and retrieved for our partners and the broader SSI community, in line with the existing W3C DID Core standard.

Within this blog, we will answer three simple questions:

  1. What are resources?
  2. What are the problems with the way resources are stored?
  3. How have we implemented resources on cheqd?

By answering these questions, we aim to provide a conceptual understanding of why we have chosen this approach, how it improves on existing approaches, and the timelines for this being implemented on cheqd in practice.

In self-sovereign identity (SSI) ecosystems, “resources” are often required in tandem with W3C Verifiable Credentials, to provide supporting information or additional context to verifiers receiving Verifiable Presentations.

For example, common types of resources that might be required to issue and validate Verifiable Credentials are:

Schemas

Describe the fields and content types in a credential in a machine-readable format. Prominent examples of this include schema.org, Hyperledger Indy schema objects, etc. You can think of them as a template for what is included in a Verifiable Credential.

Below is an example of a schema.org residential address with full URLs:

{
 "@type": "http://schema.org/Person",
 "http://schema.org/address": {
   "@type": "http://schema.org/PostalAddress",
   "http://schema.org/streetAddress": "123 Main St.",
   "http://schema.org/addressLocality": "Blacksburg",
   "http://schema.org/addressRegion": "VA",
   "http://schema.org/postalCode": "24060",
   "http://schema.org/addressCountry": "US"
 }
}

This might also take the form of evidence schemes, which describe additional information about the processes used to validate the information presented in a Verifiable Credential in common, machine-readable format.

Revocation status lists

Allow recipients of a Verifiable Credential exchange to check the revocation status of a credential for validity. Prominent examples of this include the W3C Status List 2021 specification, W3C Revocation List 2020, Hyperledger Indy revocation registries, etc.

Visual representations for Verifiable Credentials

Although Verifiable Credentials can be exchanged digitally, in practice most identity wallets want to present “human-friendly” representations. A resource, using something like Overlay Capture Architecture (OCA) may enable a credential representation to be shown according to the brand guidelines of the issuer, internationalisation (“i18n”) translations, etc. Such visual representations can also be used to quickly communicate information visually during identity exchanges, such as airline mobile boarding passes.

In the example above from British Airways, the pass at the front is for a “Gold” loyalty status member, whereas the pass at the back is for a “standard” loyalty status member. This information can be represented in a Verifiable Credential, of course, but the example here uses the Apple Wallet / Google Wallet formats to overlay a richer display.

While it’s useful to have digital credentials that can be verified cryptographically, the reality is that there are often occasions when a quick “visual check” is done instead. For example, at an airport, an airline staff member might visually check a mobile boarding pass to direct people to the correct queue they need to join. The mobile boarding pass does get scanned at points like check-in, security and boarding to digitally read the information, but scenarios where this is not done are equally valid. However, most Verifiable Credential formats do not explicitly provide such “human-friendly” forms of showing the data held in a credential.

Documents

More broadly, there are other types of resources that might be relevant for companies beyond SSI vendors, that want a way to represent information about themselves in an immutable and trustworthy way.

Many companies require documentation such as Privacy Policies, Data Protection Policies or Terms of Use to be made publicly available. Moreover, Trust over IP (ToIP) recommends making Governance Frameworks available through DID URLs, which would typically be a text file, a Markdown file, PDF etc.

Logos

Companies may want to provide authorised image logos to display across different websites, exchanges or block explorers. Examples of this include key-publishing sites like Keybase.io (which is used by Cosmos SDK block explorers such as our own to show logos for validators) and “favicons” (commonly used to set the logo for websites in browser tabs).

The current uses for resources are therefore very broad across the SSI ecosystem, and in addition, for other companies that may want to use DIDs to reference relevant information on ledger. For this reason, it is essential that the SSI community strengthens the way that resources are stored, referenced and retrieved in SSI ecosystems.

What are the problems with the way resources are stored?

There are multiple approaches to decentralised identity which rely on centralised infrastructure across different technical layers. Decentralised Identifiers (DIDs) are often stored on ledgers (e.g., cheqd, Hyperledger Indy), distributed storage (e.g., IPFS in Sidetree), or non-ledger distributed systems (e.g., KERI). Yet DIDs can also be stored on traditional centralised-storage endpoints (e.g., did:web, did:git).

Predominantly, however, the issue of centralisation affects resources providing extra context and information to support Verifiable Credentials. These resources, such as schemas and revocation lists, are often stored and referenced using centralised hosting providers.

Using centralised hosting providers to store resources can make a significant difference to the longevity and authenticity of Verifiable Credentials. For example, a passport (which typically has a 5–10 year validity) issued as a Verifiable Credential anchored to a DID (regardless of whether the DID was on-ledger or not) might stop working if the credential schema, visual presentation format, or other necessary resources were stored off-ledger on traditional centralised storage.

This section will therefore explain the pain points that should be addressed to improve the way resources are stored, managed and retrieved in SSI ecosystems.

SINGLE POINTS OF FAILURE

Even for highly-trusted and sophisticated hosting providers who may not present a risk of infrastructure being compromised, a service outage at the hosting provider can make a resource anchored on their systems inaccessible.

The high centralisation of cloud providers and a history of noteworthy outages clearly demonstrate why we should not host resources on centralised cloud storage in production environments. In Q1 of 2022, the three largest players in the cloud (AWS, Google Cloud, Microsoft Azure) dominated with a combined 65 per cent market share in nearly all regions (outside of China).

Furthermore, beyond cloud providers, other events exemplify the risks of relying on larger players. The Facebook outage of 2021 (shown in the graph below) took down apps that used “Login with Facebook” functionality. This highlights the “contagion impact” of centralised digital systems, even ones run by extremely capable tech providers (e.g., a different Facebook outage took down Spotify, TikTok and Pinterest).

Ed Skoudis, president of the SANS Technology Institute amusingly commented on this issue:

“In the IT field, we sometimes joke about how we spend 15 years centralizing computing, followed by 15 years decentralizing, followed by another 15 years centralizing again,” he said. “Well, we have spent the past 10 years centralizing again, this time on [the] cloud.”

Likewise, with decentralised identity, there has been excellent work to decentralise, with standards that remove the need for centralised intermediaries — notably around Verifiable Credentials and the decentralised trust provided by DID Authentication. Yet, all of this excellent work may be eroded in practice, unless every component of an SSI ecosystem is able to maintain an equivalent level of decentralised trust. Resources are currently an area that has been centralised for the sake of convenience.

LINK ROT

“Link rot” happens when URLs become inaccessible over time, either because the endpoint where the content was stored is no longer active, or the URL format itself changes. The graph below from an analysis by The New York Times shows the degradation over time of URLs.

For this reason, keeping an up-to-date version of the links themselves is crucial. Furthermore, a study of link rot found that at least 66.5% of links to sites from the last nine years are dead. This can have an adverse impact on the digital longevity of Verifiable Credentials if there’s “link rot” in the resources necessary to process the credential. For this reason, projects such as The Internet Archive’s Wayback Machine exist to snapshot digital ephemera before they are lost forever.

This illustrates that link rot can affect a significant proportion of links in a relatively small amount of time, and once again, looking at how resources are currently stored in SSI ecosystems, if the resource locations are moved and the links are broken, the Verifiable Credentials relying on these resources become unusable. Therefore, resources, once defined, should be architected to be used and referenced indefinitely, without being changed.

TAMPER-EVIDENT CHANGES AND CENSORSHIP RESISTANCE

Finally, the centralised way that resources are currently stored and managed is not immutable, and as a result, it is liable to tampering. For example, if a hosting provider is compromised, or if malicious actors are working for the company, resources may be changed and previous resource versions may be purged from the central database.

As we move towards a new web infrastructure with Web 3 (and beyond…), and as more projects leverage blockchain and distributed ledgers, it’s important not to port the previous issues of the web, and instead find novel ways to better manage information, with longevity in mind. This is why at cheqd, we have decided to redesign the way resources are captured on the ledger.

How have we implemented resources on cheqd?

cheqd’s on-ledger resources can be defined as data files, stored in an organised collection and common format on a distributed ledger. As such, these resources will be highly available, resilient to attack and with immutable archives showing expired, revoked or deprecated resources.

Resources, using the same method of referencing may also be stored off-ledger.

When working towards this objective, we laid out a set of requirements we benchmarked our implementation against:

  1. Resources must be immutable, and uniquely identifiable and referenceable using Decentralised Identifiers (DIDs).
  2. Resources can be stored on-ledger if they are sufficiently small. (If a resource is too large to be stored on-ledger, e.g., an image or video file, it should still be referenceable via its DID.)
  3. Resources must be versioned, with each version easily accessible in the future.
  4. Resources can be indexed, to promote reuse of resources.
  5. Existing DID resolvers should be able to either resolve resource URLs or get references to them without significant modification to how they currently function and behave.
  6. There should be an ability to mark resources as deprecated or superseded by new versions.
  7. On-ledger resources must fit within the existing W3C standards for decentralised identity.
  8. Resources should be assigned a media type, to allow client applications to apply logic to what resources they expect and want to consume.
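Requirements 3, 6 and 8 can be illustrated with a minimal model of a resource record. The field names mirror those shown in the on-ledger resource example later in this post (collectionId, id, name, resourceType, created, checksum, previousVersionId, nextVersionId); this is a sketch for intuition, not the cheqd ledger schema:

```python
from dataclasses import dataclass
from typing import Optional

# Minimal model of a resource header: each version carries a media/resource
# type and links to its neighbouring versions, so deprecated versions stay
# retrievable while the latest is easy to find.
@dataclass
class ResourceHeader:
    collection_id: str
    id: str
    name: str
    resource_type: str
    created: str               # ISO 8601 timestamp
    checksum: str              # integrity check over the resource data
    previous_version_id: Optional[str] = None  # None => first version
    next_version_id: Optional[str] = None      # None => latest version

    def is_latest(self) -> bool:
        return self.next_version_id is None
```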

Is this similar to Hyperledger Indy’s approach to Schemas and CredDefs on-ledger?

We heavily considered the schema implementation used by AnonCreds on Hyperledger Indy in our design phase, since it tackles many of the problems highlighted above. However, the issue we have with schemas and Credential Definitions for AnonCreds is that they are very tightly coupled with Indy-based ledgers. Both schemas and Credential Definitions require Indy-specific transactions, limiting the interoperability of these Credentials outside of Indy ecosystems.

Our implementation will enable resources on the ledger to be far more interoperable, beyond cheqd, since the architecture does not tie resources to a specific ledger and builds within the parameters of the W3C DID Core spec. This means that partners using AnonCreds and proponents of JSON or JSON-LD Verifiable Credentials can all benefit from this approach. We have tried to design our architecture to be as flexible as possible, allowing new resource types to be created without any vendor lock-in.

We will be writing a specific blog post on how cheqd supports AnonCreds and Credential Definitions using its resource architecture.

On-ledger resources on cheqd

To explain the details of how we have structured this, we will start by breaking down the high-level overview diagram shown in figure 1.

Figure 1: High-level architecture flow for resources on-ledger

The diagram above shows multiple layers to this architecture:

ISSUER DID DOCUMENT

This Issuer DID Document is created as per usual. It may be updated to reference a Collection DID Document within the service section, and also may specifically link to a Resource using a service endpoint.

This allows an Issuer to explicitly cite which resources it uses when issuing Verifiable Credentials.

COLLECTION DID DOCUMENT

This Collection DID Document references an on-ledger Collection, using the unique identifier of its DID URL, which is the same as the Collection ID. It also acts as the keys and gating mechanism for controlling, updating and managing on-ledger resources within that Collection.

The same verification methods listed in this DID Document are used to authenticate with and manage the resources within the collection it refers to.

RESOURCE COLLECTION

Resource Collections are a way of organising resources into different versions or media types. This enables new resources to be added to a collection, and old resources to be indexed and still be retrievable through querying a collection by version time.

The Collection ID is the same as the unique identifier of the Collection DID Document’s DID URL and ‘id’.

RESOURCE

The resource contains the actual data of the resource itself, including its name and media type. A resource is directly retrievable through the service endpoints of both the Collection DID Document, and optionally within an Issuer DID Document.

The full architecture including specific layer-by-layer details about how each component references and links to the other can be found in the Resources section of our Identity Documentation.

REFERENCING RESOURCES WITH DIDS

We decided to identify and reference each ‘resource’ with its own unique identifier, within a ‘collection’ tied to a DID. This enables us to reference a resource in the following way:

Figure 2: resource configuration via DIDs

Each Collection and Resource is identified with its own Universally Unique Identifier (UUID). However, the Resource Collection ID is also the same as the unique identifier of the DID that controls the collection.
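Putting this together, a resource is addressable at a DID URL of the form `did:cheqd:<network>:<collectionId>/resources/<resourceId>`, the path shape used in the service-endpoint example below. A small sketch, assuming both identifiers are UUIDs as described above:

```python
import uuid

# Sketch: build the DID URL for a resource within a collection. The
# collection ID equals the unique identifier of the controlling DID.
def resource_did_url(network: str, collection_id: str, resource_id: str) -> str:
    # Validate that both identifiers are well-formed UUIDs before composing.
    uuid.UUID(collection_id)
    uuid.UUID(resource_id)
    return f"did:cheqd:{network}:{collection_id}/resources/{resource_id}"
```

For example, `resource_did_url("mainnet", "46e2af9a-2ea0-4815-999d-730a6778227c", "688e0e6f-74dd-43cc-9078-09e4fa05cacb")` yields the DID URL for the DegreeLaw schema resource used throughout this post.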

UTILISING THE ‘SERVICE’ SECTION

We decided to reference ‘resources’ by using the ‘service’ section, rather than creating a new section in a DIDDoc for multiple reasons:

  1. While the DID Core spec technically allows creating new sections, most client apps expect the specific default/minimum list, and would not know how to handle the contents within a new section.
  2. Service Types are already designed to be extended. They are a well-trodden and well-recognised part of DID Documents. For example, the DID Spec Registries currently list two service types: LinkedDomains and DIDCommMessaging.

We used the LinkedDomains service type in our first DID to directly reference an image hosted on IPFS using a DID.

{
  "id": "did:cheqd:mainnet:zF7rhDBfUt9d1gJPjx7s1JXfUY7oVWkY#non-fungible-image",
  "type": "LinkedDomains",
  "serviceEndpoint": "https://gateway.ipfs.io/ipfs/bafybeihetj2ng3d74k7t754atv2s5dk76pcqtvxls6dntef3xa6rax25xe"
}

If you look up the IPFS link above through any valid IPFS gateway, you’ll find our Data Wars poster.

Likewise, we are using the ‘service’ section of DID Documents to reference specific resources:

{
  "id": "did:cheqd:mainnet:46e2af9a-2ea0-4815-999d-730a6778227c#DegreeLaw",
  "type": "CL-Schema",
  "serviceEndpoint": "https://resolver.cheqd.net/1.0/identifiers/did:cheqd:mainnet:46e2af9a-2ea0-4815-999d-730a6778227c/resources/688e0e6f-74dd-43cc-9078-09e4fa05cacb"
}

Linking to the resource this way makes it highly accessible and easily consumable for client applications, DID resolvers and developers. Furthermore, if a client app doesn’t understand a service section, most will skip and ignore it rather than throwing an error and causing the system to fail, potentially catastrophically.
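That skip-unknown behaviour can be sketched as follows: a client walks the ‘service’ section, consumes the types it understands, and ignores the rest. The set of known types here is illustrative:

```python
# Sketch: a client application walking a DID Document's 'service' section.
# Service types the client does not recognise are ignored, not errors.
KNOWN_TYPES = {"LinkedDomains", "CL-Schema"}

def usable_endpoints(did_document: dict) -> list:
    endpoints = []
    for service in did_document.get("service", []):
        if service.get("type") in KNOWN_TYPES:
            endpoints.append(service["serviceEndpoint"])
        # Unknown service types fall through silently.
    return endpoints
```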

Creating and retrieving a resource

In order to create a resource on the ledger, the following steps laid out in figure 3 should be followed:

Figure 3: Creating and retrieving a resource on cheqd

Writing resources to the ledger

CREATE COLLECTION DID DOCUMENT

Anchor Collection DID and associated Collection DID Document to the ledger through a create DID operation

CREATE RESOURCE

Anchor Resource to the ledger and specify Collection ID as the same identifier as the unique identifier from the Collection DID Document. Sign createResource transaction with the same private key as the verification method listed in the DID Document

UPDATE COLLECTION DID DOCUMENT

Update Collection DID Document and reference the Resource within the ‘service’ section

This Collection DID Document and Resource may also be referenced within an Issuer DID Document, as shown in figure 1.
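The three writes above can be sketched end-to-end. The client functions (`create_did`, `create_resource`, `update_did`) are hypothetical stand-ins for a real cheqd SDK; only the ordering, the shared Collection ID, and the shared signing key reflect the flow described:

```python
# Sketch of the create flow, using hypothetical client functions in place
# of a real cheqd SDK. The same key signs all three transactions.
def publish_resource(client, collection_did_doc, resource, signing_key):
    # 1. Anchor the Collection DID Document via a create DID operation.
    client.create_did(collection_did_doc, key=signing_key)
    # 2. Anchor the Resource, reusing the DID's unique identifier as the
    #    Collection ID, signed with the same verification method key.
    resource["collectionId"] = collection_did_doc["id"].split(":")[-1]
    client.create_resource(resource, key=signing_key)
    # 3. Update the Collection DID Document to reference the Resource
    #    within its 'service' section.
    collection_did_doc.setdefault("service", []).append({
        "id": collection_did_doc["id"] + "#" + resource["name"],
        "type": resource["resourceType"],
        "serviceEndpoint": resource["id"],
    })
    client.update_did(collection_did_doc, key=signing_key)
```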

Retrieving resources from cheqd ledger

QUERY LEDGER FOR RESOURCE

Referencing resources using DIDs, as explained above, makes it far easier to query historic or deprecated resources using DID resolvers and DID URL dereferencing.

For example, the following request could be made to a resolver to fetch a resource from a specific point in time:

https://resolver.cheqd.net/1.0/identifiers/did:cheqd:mainnet:46e2af9a-2ea0-4815-999d-730a6778227c#degreeLaw?versionTime=2022-06-20T02:41:00Z

This may be incredibly powerful where a resource, such as a schema, has evolved over time, but you want to prove that it was issued using the correct schema at the point of issuance.
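The request above can be built programmatically. A minimal sketch, mirroring the resolver base URL from the example (this only constructs the URL; actual availability of the resolver endpoint is not guaranteed):

```python
from urllib.parse import urlencode

# Sketch: compose a resolver request for a resource at a point in time,
# matching the example URL shown above.
def resolver_url(did_url: str, version_time: str) -> str:
    base = "https://resolver.cheqd.net/1.0/identifiers/"
    return base + did_url + "?" + urlencode({"versionTime": version_time})

url = resolver_url(
    "did:cheqd:mainnet:46e2af9a-2ea0-4815-999d-730a6778227c#degreeLaw",
    "2022-06-20T02:41:00Z",
)
```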

RETURN RESOURCE

Full Resource is returned including any data files attached.

Tutorials for creating a resource on-ledger can be found here on our identity documentation site. Further technical detail about creating resources can be found in our Architecture Decision Record 008.

How this improves the way resources are stored and retrieved

Through storing resources on ledger, referencing them through resolvable DID URLs, and authenticating them using DID Documents, the resources on-ledger will be:

HIGHLY AVAILABLE AND EASILY RETRIEVABLE

Resources are identified by a DID URL which allows them to be retrieved easily from a distributed ledger using existing DID Resolvers.

Using a distributed ledger like cheqd to store and index resources removes the single-point-of-failure problem identified with centralised systems such as schema.org.

Schemas, for example, would therefore become on-ledger resources, represented in the format of the following example:

Resource1
{
  "header": {
    "collectionId": "46e2af9a-2ea0-4815-999d-730a6778227c",
    "id": "688e0e6f-74dd-43cc-9078-09e4fa05cacb",
    "name": "DegreeLaw",
    "resourceType": "CL-Schema",
    "created": "2015-02-20T14:12:57Z",
    "checksum": "a7c369ee9da8b25a2d6e93973fa8ca939b75abb6c39799d879a929ebea1adc0a",
    "previousVersionId": null,
    "nextVersionId": "0f964a80-5d18-4867-83e3-b47f5a756f02"
  },
  "data": "<CLSchema.json containing '{\"attrNames\":[\"last_name\",\"first_name\",\"degree_type\",\"graduation_year\",\"degree_percentage\"]}'>"
}

This schema could be resolved through a DID Resolver, with an input such as the following:

did:cheqd:mainnet:46e2af9a-2ea0-4815-999d-730a6778227c#degreeLaw?versionTime=2015-09-08T02:41:00Z

You can dive into further detail on the syntax of resources and how they can be retrieved within the Resources section of our Identity Documentation.

CONTROLLABLE AND SELF-ATTESTABLE

Resources can be tied to DID Documents and control over resources can be exerted via the same verification method keys as those written into an associated DID Document.

This allows a party to authenticate using a DID Document in order to update or prove control of a Resource, which addresses the tampering problem identified with centralised cloud providers.

BUILT TO BE CONSUMED BY CLIENT APPLICATIONS

Resources must specify a name and a resource type, and resolve to a media type, which tells any client application what data format and syntax to expect when the resource is retrieved.

This allows client applications to apply business rules to what types of resources are expected, and which should be accepted, making resources far easier to be consumed by third-party software. This differs from existing Hyperledger Indy resources, which require the client applications to be familiar with Indy in order to process and consume Indy resources.

Conversely, our method gracefully allows “dumb” applications, that do not understand specific DID protocols, to still fetch and access a resource over HTTP/REST APIs. In addition, “smart” applications, that do understand these protocols, can process, query, and get their own resources from DID resolution and dereferencing.

INDEXABLE

Resources are versioned with unique identifiers (UUIDs), allowing previous versions to be archived within a collection, and retrieved through querying a unique ID or a version time.

This mitigates the problem identified of link rot when using centralised storage systems since each version is archived immutably.
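The version-time query can be sketched as a simple selection over archived versions: given each version’s ISO 8601 ‘created’ timestamp, return the version in force at the requested time. This illustrates the lookup semantics, not the ledger’s actual implementation:

```python
from datetime import datetime

# Sketch: pick the archived resource version in force at a given time.
def _parse(ts: str) -> datetime:
    # Accept the trailing 'Z' used in the examples above.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def version_at(versions: list, version_time: str):
    target = _parse(version_time)
    candidates = [v for v in versions if _parse(v["created"]) <= target]
    if not candidates:
        return None  # no version existed yet at that time
    # Latest version created on or before the target time wins.
    return max(candidates, key=lambda v: _parse(v["created"]))
```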

For more information, please refer to the section on Versioning and Archiving Resources in our identity documentation.

Conclusion

In building resources on-ledger, we want to avoid the risks associated with relying on centralised infrastructure for storing resources, while importantly, remaining conformant with W3C standards and avoiding ledger lock-in. We believe that we have achieved this compromise by enabling the management of resources through the use of DID Documents, identifying resources using DID URLs and retrieving resources using existing DID Resolvers. Not only does this work solve existing problems, but it opens the door for far more innovation using DIDs, including:
  • Fully identifiable and resolvable governance documentation and schemes on ledger, tied to DIDs and DID Documents
  • Full AnonCreds support on non-Indy ledgers (to be explained further in a future blog)
  • On-ledger revocation lists, where each tails file can be uniquely versioned and retrieved efficiently (this is our next priority roadmap item)
  • Logos and company information easily accessible on-ledger, referenced within that company’s DID
We look forward to working with our partners and the broader SSI community to see how we can innovate together using this new functionality, and properly securing SSI architecture in a decentralised end-to-end flow. As always, we’d love to hear your thoughts on our writing and how resources on-ledger could improve your product offering. Feel free to contact the product team directly — [email protected], or alternatively start a thread in either our Slack channel or Discord.

Our Approach to Resources on-ledger was originally published in cheqd on Medium, where people are continuing the conversation by highlighting and responding to this story.

Entropy & Decentralisation: a cheq up


The concept of Entropy in decentralised governance was created by the team at cheqd to model how the control of the network changes over time, from the initial launch where the core team had a larger portion of control (Low Entropy), to a state where the community and users of cheqd have a decentralised spread of control over the Network (High Entropy).

This blog post intends to cheq up on the progress to date.

Increasing Entropy was something very important to cheqd because it:

  1. Correlates with higher Network security and resiliency across countries;
  2. Means broader contributions to the Network from a multidisciplinary and diverse collective;
  3. Enables increased integration capabilities with other technologies to improve the ecosystem as a whole;
  4. Dilutes the control from a select group of people to a genuinely decentralised and diverse collective.

In terms of modelling this change, we focussed on a number of key metrics for the network and created a scoring model which could be easily digested and understood based on five distinct Entropy levels. 

| Variable | Entropy Level 1 | Entropy Level 2 | Entropy Level 3 | Entropy Level 4 | Entropy Level 5 |
| --- | --- | --- | --- | --- | --- |
| Number of Node Operators (Validators) | 5 | 10 | 25 | 50 | 100 |
| Number of commits from outside the core team | 5 | 10 | 25 | 50 | 100 |
| Number of distinct Participants with bonded tokens | 100 | 500 | 1,000 | 5,000 | 10,000 |
| Number of stakeholders to achieve 51% of Network (Nakamoto coefficient) | 2 | 4 | 8 | 15 | 30 |
| Exchanges (CEX and DEX) supported | 1 | 2 | 4 | 6 | 8 |
| Country distribution of node operators | 5 | 10 | 20 | 40 | 60 |
| Number of accepted Proposals after genesis | 5 | 10 | 20 | 40 | 60 |
If you are interested in learning more about the scoring model and how we designed it, jump into our Entropy blog series here.
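One of the variables above, the Nakamoto coefficient, can be computed directly from a stake distribution: sort stakeholders by bonded stake and count how many are needed to pass 51% of the network. A minimal sketch, with illustrative stake figures:

```python
def nakamoto_coefficient(stakes, threshold=0.51):
    """Smallest number of stakeholders whose combined stake exceeds
    `threshold` of the total bonded stake."""
    total = sum(stakes)
    running = 0
    for count, stake in enumerate(sorted(stakes, reverse=True), start=1):
        running += stake
        if running > total * threshold:
            return count
    return len(stakes)

# Illustrative distribution of bonded tokens per validator:
stakes = [400, 300, 200, 100, 50, 50, 25, 25]
print(nakamoto_coefficient(stakes))  # → 2: the top two control >51%
```

A higher coefficient means more independent parties must collude to control the network, which is why it climbs with each Entropy level.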

So, where are we now?

We can use our Entropy scorecard and table to pinpoint where cheqd is in terms of Entropy.

| Variable | Result | Entropy Level |
| --- | --- | --- |
| Number of Node Operators (Validators) | 62 | 4 |
| Number of commits from outside the core team | 17 | 2 |
| Number of distinct Participants with bonded tokens | ~8,000 | 4 |
| Number of stakeholders to achieve 51% of Network (Nakamoto coefficient) | 8 | 3 |
| Exchanges (CEX and DEX) supported | 4 | 3 |
| Country distribution of node operators | 20+ | 3 |
| Number of accepted Proposals after genesis | 2 | 1 |
| OVERALL SCORE | | 20 |

In terms of modelling this on our scorecard, this is how it looks:

This is an excellent start, given it’s been less than six months since we launched cheqd mainnet. Comparing this to where we started at cheqd mainnet launch, we have decentralised in almost all categories, improving from a score of 9 to a score of 20. 
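One plausible reading of the scorecard: each variable scores the highest Entropy Level whose threshold has been met (floored at level 1), and the overall score is the sum across variables. A sketch using the thresholds and current results from the tables above:

```python
# Thresholds per variable, taken from the Entropy Level table.
THRESHOLDS = {
    "validators":       [5, 10, 25, 50, 100],
    "external_commits": [5, 10, 25, 50, 100],
    "bonded_accounts":  [100, 500, 1000, 5000, 10000],
    "nakamoto":         [2, 4, 8, 15, 30],
    "exchanges":        [1, 2, 4, 6, 8],
    "countries":        [5, 10, 20, 40, 60],
    "proposals":        [5, 10, 20, 40, 60],
}

def entropy_level(variable, value):
    # Highest level whose threshold is met, floored at level 1.
    met = sum(1 for t in THRESHOLDS[variable] if value >= t)
    return max(1, met)

# Current results, as reported in the scorecard table.
results = {
    "validators": 62, "external_commits": 17, "bonded_accounts": 8000,
    "nakamoto": 8, "exchanges": 4, "countries": 20, "proposals": 2,
}

overall = sum(entropy_level(k, v) for k, v in results.items())
print(overall)  # → 20
```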

But there is still a long way to go, both in achieving a higher overall score and also consistently higher individual scores.

Given where we are now, two areas where we can improve are clear:

  • Firstly, by encouraging the community and partners to focus on codebase commits, through better documentation and tutorials; and
  • Secondly, by driving more community participation in on-chain governance.

We intend to continually improve our existing processes by:

  • Making it easier to contribute to governance processes by having clear instructions on how to use the cheqd forum to make governance proposals and decisions;
  • Increasing the amount of discussion on the cheqd forums on technical topics regarding SSI and cheqd’s product;
  • Running workshops with our partners to increase understanding about where experts and vendors could build alongside the core team;
  • Suggesting that funds from the Community Pool are put towards the community (technical and non-technical initiatives)
  • Creating an Entropy dashboard to increase the visibility of what metrics need to be focussed on the most.

And finally,

High Entropy was never designed to be reached overnight; it is a gradual process. What is important, however, is cheqd’s Foundational Principle of Increasing Entropy. This is why it’s crucial to take stock, reflect and assess where core processes can be improved and iterated – to cheq up.

We, at cheqd, help companies leverage SSI. cheqd’s network is built on a blockchain with a dedicated token for payment, which enables new business models for verifiers, holders and issuers. In these business models, verifiable credentials are exchanged in a trusted, reusable, safer, and cheaper way — alongside a customisable fee.

Find out more about our solution here or get in touch if you wish to collaborate and/or join our ecosystem by contacting us at [email protected].


Understanding the SSI stack through 5 trends and challenges

Co-authored by Alex Tweeddale (Governance & Compliance Lead), Ross Power (Product Manager), and Ankur Banerjee (CTO/co-founder)

In the early months of 2022, the team at cheqd conducted two surveys diving into self-sovereign identity (SSI) and digital identity in Web 3.0. We analysed responses from a general audience as well as from an expert audience to tease out key trends.

This second article, following on from our first article, focuses on trends and key takeaways from the deep-dive survey that was shown to an expert technical audience of self-sovereign identity (SSI) vendors.

Specifically, this article will focus on trends and challenges that can be drawn from looking at each Layer of the SSI technical stack.

Key technical trends identified in digital identity / Web 3.0 in cheqd’s deep dive survey

What do we mean by the SSI technical stack? The best example of what we mean by this ‘stack’, is shown by the Trust over IP Model which splits into a distinct Technology Stack and a Governance Stack. The Technology Stack looks at Public Utilities, Peer-to-Peer Communication, Credential Exchange and Technology Application Ecosystems, which are all essential components of a functional SSI ecosystem, and will be the focus of this analysis (shown in Figure 1 below). Governance is also a vital component for real-world SSI use cases and should be the focus of future work and research.

Figure 1: the Trust over IP Technology Stack

We will use the ToIP classifications to split up the survey responses into how they correspond with each specific technical Layer. Through this, we will reach a set of conclusions and five trends around the entire technical stack, top to bottom.

Introduction

From our deep dive survey we have drawn 5 distinct trends across the four layers of the stack:
  • Trend 1: (Layer 1) Hyperledger Indy is still the most supported Layer 1, but there are signs it may be losing its dominance
  • Trend 2: (Layers 1, 2, 3) Aries-based SDKs are dominant, correlating with Indy at Layer 1
  • Trend 3: (Layer 2) OIDC SIOP may be starting to catch up with DIDComm in terms of a peer-to-peer connection layer
  • Trend 4: (Layer 3) The lack of harmonisation on Credential type/exchange standards is more stark than ever
  • Trend 5: (Layer 4) Ecosystem adoption could be driven by stronger commercial models and payment rails
All five trends culminate in one main challenge:

Interoperability and harmonisation is lacking at each layer of the stack, which is a barrier for adoption

In addressing this challenge, respondents indicated that cheqd could help boost adoption:

Since large-scale interoperability is not yet a selling point, SSI may benefit from new commercial models and payment rails to kickstart and incentivise adoption

To show the workings of how these conclusions were drawn, we will go through each trend individually and explain how the trend has developed and how challenges have emerged from the trends, using evidence in the form of data collected from our survey.

Finally, it is important to make clear that when we set out to gain a better understanding of what our community and our SSI partners wanted, we did not enter with a specific agenda or set of assumptions we were hoping to prove. That said, we are thrilled to see the direction we’re moving in at cheqd is generally supported by the data we collected.

Buckle up!

Trend 1: Hyperledger Indy is still the most supported Layer 1, but there are signs it may be losing its dominance

#Layer1

DID methods are the set of rules and instructions used for interacting with a specific Layer 1 Public Utility in order to write, update and resolve DIDs on that Utility. There are over 100 distinct DID methods already, with increasing diversity in the approaches these methods take.

We asked our respondents what DID methods they currently support or plan on supporting in 2022. The most prominent supported DID Methods amongst our respondents were Indy-based DID Methods, as well as specifically the Sovrin DID Method. This is most likely because Indy and Sovrin have been largely the only options for functional, identity-specific use cases.

did:ethr has also been around for a long time; however, since it is built on Ethereum, identity use cases have never been the main priority of its Layer 1. For this reason, did:ethr has not gained the same attention or traction within the identity sphere as Ethereum has across the rest of Web 3.0.

Interestingly, looking at the responses on a more granular level, the majority of the companies that support any of the alternatives to did:indy and did:sov also support one of did:indy or did:sov.

Only 3 out of the 37 respondents supported did:web or did:key without also supporting did:sov or an Indy-based method, for example.

This does suggest that for SSI vendors, did:sov and did:indy are the first DID Methods to be looked at and supported, before expanding to other alternatives afterwards. did:web and did:key both make sense in this regard, as they are off-ledger options for anchoring DIDs, which is arguably much more practical for testing environments or Proof of Concepts since nothing written into the DID is immutable.

Of the ledger-based alternatives, there is not currently a clear frontrunner, although did:ion is probably the alternative gaining the most momentum, as it is the main supported Utility for the VC-JWT Interop Profile between the likes of Mattr, Microsoft, Ping and Workday — working alongside the Decentralised Identity Foundation (DIF).

ION uses a protocol called Sidetree to store DIDs in temporary storage, as well as within IPFS / MongoDB, in order to roll up and batch DID operations to the main blockchain (Bitcoin in ION’s case), which makes it more operationally cost-efficient.

One DID Method that we did not include but was brought up within the comments and ‘other’ sections is did:ebsi. This is a ledger that we do expect to gain much more traction over the coming months and years, especially as the European Blockchain Services Infrastructure (EBSI) has begun releasing conformance criteria for European Digital Identity Wallets, mandating support for did:ebsi. We would expect to see an increase in uptake of specifically did:ebsi off the back of this body of work.

In terms of analysis here, it is important to stress that diversity at Layer 1 is by no means a bad thing. The emergence of new methods to compete with did:indy and did:sov should be encouraged, especially if such approaches comply directly with the W3C DID Core specification. The DID Core Spec was not written to be a rigid standard; rather, it promotes innovation, extensibility and flexibility — meaning that it is possible to innovate to a certain degree at the DID layer without compromising interoperability. We hope that in the next year, did:cheqd will become another name on the above list.

In terms of making interoperability more seamless here, we would push for the community to strengthen the DID Method Test Suite in order to better highlight the degree to which DID Methods interoperate and what functionality exists within each individual DID Method.

Trend 2: Aries-based SDKs are dominant, correlating with Indy at Layer 1

#Layer1, #Layer2, #Layer3

Software Development Kits (SDKs) are less talked about than they should be. In fact, SDKs are pivotal to interoperability because they enable third parties to carry out functions such as establishing connections, issuing Verifiable Credentials and verifying the Decentralised Identifiers within the Credential, against a Layer 1 Public Utility and DID Method(s).

Hyperledger Aries was initially designed to be an agnostic set of Protocols (Request For Comments ‘RFCs’) for carrying out SSI-based operations. These Protocols, however, and a lot of the Hyperledger Aries SDKs, have since been designed to work specifically with Hyperledger Indy — which is why, largely, there has been a dominance of Indy-based Layer 1s and Indy-based DID methods.

This was shown within the survey results, with the largest proportion, 35.1%, of respondents using Aries Cloud Agent Python (ACA-Py); and secondly, 27.0% of the respondents using Evernym’s VDR Tools SDK (tied to Hyperledger Indy).

As a trend here, it is likely that Aries SDKs will remain dominant, especially as there is ongoing work to decouple the dependence between Aries and with Indy. We see decoupling Aries from Indy as a vital part of building a more interoperable SSI technical stack.

At cheqd, we are also planning to support this work by helping expand one of the main Aries Frameworks to support cheqd, and therefore, multiple Layer 1s other than Indy-based networks. Once Aries or any SDK is able to communicate with a variety of Layer 1s and route requests to specific DID Methods accordingly, this will be a key milestone for SSI.
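The kind of routing described above can be sketched as a registry mapping DID methods to resolver drivers. The driver functions here are hypothetical stubs rather than real ledger clients:

```python
# Sketch: routing DID resolution by method. A real SDK would call out
# to the relevant ledger (or, for did:web, an HTTPS endpoint).
def resolve_indy(did):  return {"id": did, "source": "indy-ledger"}
def resolve_cheqd(did): return {"id": did, "source": "cheqd-ledger"}
def resolve_web(did):   return {"id": did, "source": "https"}

DRIVERS = {
    "indy": resolve_indy, "sov": resolve_indy,
    "cheqd": resolve_cheqd, "web": resolve_web,
}

def resolve(did):
    method = did.split(":")[1]  # did:<method>:<identifier>
    driver = DRIVERS.get(method)
    if driver is None:
        raise ValueError(f"unsupported DID method: {method}")
    return driver(did)

doc = resolve("did:cheqd:mainnet:abc123")
```

The point of the milestone is exactly this decoupling: the calling code above never needs to know which Layer 1 it is talking to.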

On this slightly different topic, we were surprised to see Aries Framework JavaScript and Aries Framework Go with such a low adoption vector (both 10.8%), especially as most companies indicated that they use JavaScript/TypeScript or related frameworks as their primary language for development, as seen in the graphic below:

The skew towards ACA-Py may simply be down to the fact that much more work is being done on the project. Looking at the Contributions over the last 2 years, ACA-Py is far ahead of the likes of Aries Framework Javascript just in terms of general contributions and commits, with roughly four times more activity over a sustained period.

Figure 2: ACA-Py Contributions

Figure 3: Aries Framework JavaScript contributions

What this result perhaps does show is that there is no standout SDK for SSI yet; even within Aries, different implementations may be useful for different purposes, with pros and cons to each.

This may also explain why Indy is the most utilised Layer 1, as Aries is currently the main SDK to use alongside it. Until this tie is severed, we do not see this changing in most SSI implementations.

Trend 3: OIDC SIOP may be starting to rival DIDComm in terms of a peer-to-peer connection layer

#Layer2

Layer 2 in the SSI stack is all about creating secure connection channels between different parties in identity transactions. If you think of Verifiable Credentials as the contents of a letter, Layer 2 provides the envelope and the postal service.

Figure 4: Peer-to-peer communication envelope

Through the use of a peer-to-peer communication channel, Verifiable Credentials or messages can be sent securely between parties, in a way which is completely off-chain. This is often a concept that newcomers to SSI do not fully understand since most uses of blockchain have peer-to-peer transactions take place on-ledger.
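A simplified sketch of the kind of plaintext message a Layer 2 protocol such as DIDComm v2 carries, before it is encrypted into an envelope and routed peer-to-peer entirely off-chain. The DIDs and field values are illustrative:

```python
import json
import uuid

# Simplified DIDComm-v2-style plaintext message. In a real exchange this
# would be signed and/or encrypted before being routed between agents;
# the encryption step is omitted in this sketch.
message = {
    "id": str(uuid.uuid4()),
    "type": "https://didcomm.org/basicmessage/2.0/message",
    "from": "did:example:alice",
    "to": ["did:example:bob"],
    "body": {"content": "Here is the credential you requested."},
}

envelope = json.dumps(message)
```

Nothing in this flow touches a ledger: the ledger is only consulted to resolve the parties’ DIDs to keys and service endpoints.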

The survey results showed that Layer 2 is relatively split between two frontrunners: DIDComm (v1 and v2) and Self-Issued OpenID Provider for OpenID Connect (SIOP OIDC).

73.7% of respondents ranked DIDComm v1 and v2 as being either the most or second most important protocol here. This demonstrates that it is a clear leader in the community for how to create trusted communication channels between wallets and agents. This is also supported by interoperability profiles such as WACI-DIDComm.

However, 68.4% of respondents also clearly acknowledged the importance of SIOP OIDC. In fact, there may even be a more current trend toward SIOP OIDC, with both the European Commission as well as the VC-JWT Interop Profile recently selecting it as a communication channel over DIDComm v2. This may be because it focuses on bridging Web 2.0 and more federated identity models into the self-sovereign paradigm, through the thoroughly tested and well-trodden path of OpenID Connect.

Through this bridge, there may be a larger adoption vector as there are already millions of OpenID Connect Relying Parties which may be able to access and issue Verifiable Credentials through this model.
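The bridging role of SIOP can be made concrete with a sketch of its request syntax: a self-issued-provider request reuses ordinary OpenID Connect parameters, so existing OIDC relying parties need relatively little modification. The parameter values here are illustrative and simplified relative to the full SIOP specification:

```python
from urllib.parse import urlencode

# Sketch of a SIOP-style authorisation request. The custom "openid://"
# scheme invokes the user's self-issued provider (e.g. a wallet) instead
# of a hosted identity provider; everything else is familiar OIDC.
params = {
    "response_type": "id_token",
    "scope": "openid",
    "client_id": "https://verifier.example.com/callback",
    "redirect_uri": "https://verifier.example.com/callback",
    "nonce": "n-0S6_WzA2Mj",  # illustrative value
}
request_uri = "openid://?" + urlencode(params)
```

That reuse of well-trodden OIDC plumbing is precisely the adoption vector the respondents seem to be betting on.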

Whether DIDComm or SIOP OIDC comes out on top in practice is yet to be seen; however, it is vitally important that the two communities work towards functional compatibility between the approaches.

This is also understood within the WACI-DIDComm interop profile which states that it is waiting until version 2 of SIOP OIDC before considering adoption into its own interop profile.

Hopefully, after the latter publishes its v2 spec, it will spark a convergence around a particular interop profile, or a combination of the two. A Killer Whale Jello Salad. If you know, you know.

Trend 4: The lack of harmonisation on Credential type/exchange standards is more stark than ever

#Layer3, #Layer4

Credential Exchange

Being issued a set of claims and Verifiable Credentials into your identity wallet, and then presenting them to a third party, is at the core of SSI. This is what Layer 3 in the technology stack seeks to achieve.

For SSI to become an interoperable ecosystem, the mechanism that transports one Verifiable Credential to a Holder’s wallet must be able to communicate with a completely different piece of software receiving a Verifiable Credential or Presentation on the end of the Verifier. Yet, currently, the different approaches do not enable Credential interoperability.

In this category, Aries Present Proof (88.3%), WACI Presentation Exchange (58.8%) and Verifiable Presentation Request (76.5%) all scored very highly, measured by the proportion of respondents ranking them first, second or third in importance — with Credential Manifest and OIDC Credential Provider being recognised as very important by a smaller segment of the respondents.

The WACI-DIDComm Interop Profile encompasses both the Wallet and Credential Interop work on Verifiable Presentation exchange, alongside Aries Present Proof, as supported means of exchange, on top of DIDComm v2. WACI PEx and Aries Present Proof work together here, tackling slightly different components of Credential and Presentation Exchange and Proofs.

Looking at the data on a more individual basis, it is not surprising that the people who voted SIOP OIDC highly were also the same people who voted OIDC Credential Provider highly. Similarly, the people who voted WACI PEx and Aries Present Proof highly also favoured DIDComm v1 and 2 as the Layer 2 communication envelope.

This separation between, on the one hand, the VC-JWT Interop profile and EBSI, which both focus on SIOP OIDC and OIDC Credential Provider; and on the other hand, the WACI-DIDComm interop profile, which focuses on Aries Present Proof and WACI PEx, demonstrates the lack of harmonisation in the industry right now.

Credential type

It will come as no surprise to anyone who has been in the SSI industry for a while that there is a stark lack of agreement on which semantic and syntactic format Verifiable Credentials should take.

Within the survey results, the respondents indicated a fairly even split between Credential formats:

  • 45.9% AnonCreds;
  • 40.5% JSON-based VC-JWT;
  • 32.5% JSON-LD; and
  • 40.5% JSON-LD with BBS+ signatures.

There has been plenty written on the differences between these Credential types, we will point to Kaliya Young’s work on Credential flavours here.
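To make the syntactic differences concrete, here is a minimal W3C-style credential in JSON-LD form next to the equivalent VC-JWT claim set. Issuer, subject, and claim values are illustrative, and both structures are simplified (proofs/signatures omitted):

```python
# Minimal W3C Verifiable Credential in JSON-LD form (illustrative values,
# proof omitted).
vc_jsonld = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:university",
    "issuanceDate": "2022-06-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:alice",
        "degree": "BSc Computer Science",
    },
}

# The same content as a VC-JWT claim set: the credential sits inside a
# `vc` claim, and issuer/subject map onto registered JWT claims, which
# would then be signed as an ordinary JWT.
vc_jwt_claims = {
    "iss": vc_jsonld["issuer"],
    "sub": vc_jsonld["credentialSubject"]["id"],
    "vc": vc_jsonld,
}
```

AnonCreds, by contrast, uses its own non-W3C claim encoding with CL signatures, which is exactly why bridging these formats is non-trivial.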

However, the key point is that this lack of alignment and agreement in technical standards has acted as a barrier to SSI adoption. This response shows directly the split that there is in the community between Credentials types and different standards on this topic.

Here the respondents clearly indicated that both the lack of maturity in technical standards and the limited interoperability afforded by the standards were two of the main barriers to real-world adoption by clients.

Without technical and semantic interoperability, Self-Sovereign Identity can only exist within closed silos or consortiums, rather than as a global model for trusted data.

This is not something that any of us want in the community, and as such, it is important that we work together towards:

  1. Robust technical Standards published by Standards bodies (such as W3C, IETF, ISO) which anyone is able to adopt, in an interoperable fashion
  2. Interoperability profiles, outwards communication and industry dialogue about technology stacks
  3. Middleware to connect any natively incompatible implementations with one another

Through a combination of these three points, we would hope that SSI could converge to a point of interoperability, rather than creating a larger divide.

Trend 5: Ecosystem adoption could be driven by stronger commercial models and payment rails

#Layer4

Quite interestingly, the respondents made it clear that the main driver for the adoption of SSI is currently to reduce the compliance burden on companies and to make it easier to comply with new regulations. This option weighed the highest, with 65% selecting it as a driver for interest in SSI amongst their customers.

This is not entirely surprising, as many regulations which impose strict identity or KYC requirements on businesses are coming into force or have recently been proposed, such as the US Drug Supply Chain Security Act within product supply chains, eIDAS 2.0 for data sharing in the European Union, or the Financial Action Task Force’s Recommendation 16 (the “Travel Rule”) within Web 3.0.

Through these regulatory changes, companies are being shoehorned into looking beyond the purview of existing technologies to comply with the expanded scope of these new regulations. Or, in other words, changes in regulation are making SSI adoption more viable, as was selected by 55% of the respondents.

35% of respondents also recognised the potential for revenue streams as a driver for SSI. Similarly, 35% recognise that KYC/KYB is currently too expensive with existing providers. Finally, 35% of respondents see a reduction in costs through greater operational efficiency as being a driver.

These latter three responses indicate that the current customers that SSI vendors are working with are currently more focussed on the compliance benefits of SSI than the potential cost benefits and revenue opportunities.

This response is most interesting when compared to another question we asked on where cheqd’s product roadmap could help our respondents’ customers. Here, 70% highlighted that Payment Rails for Identity would help drive adoption with their customers.

This indicates that functional payment models may increase interest from customers already interested in the compliance benefits. However, it is likely that this is not an existing driver, since the technology to realise the operational and cost benefits, alongside new revenue opportunities is not yet readily available.

This data reinforces the product direction and roadmap we have laid out at cheqd, which is pleasing to see. We also expand and dive deeper into this specific rationale in our third trend of the General Product Survey response — Trend 3: Privacy-preserving commercial models for digital identity exchange could radically accelerate the adoption of self-sovereign identity.

Key Takeaways

One of our founding principles at cheqd is to build out the network with and for our community. We are confident in the knowledge we have within the cheqd team, but fundamentally, we believe in the wisdom and experiences of those that will be the ultimate beneficiaries of the network; a utility after all is for everyone, and should therefore be designed by everyone.

So, bringing it all together, a reminder of the 5 Trends identified:

  • Trend 1: (Layer 1) Hyperledger Indy is still the most supported Layer 1, but there are signs it may be losing its dominance
  • Trend 2: (Layers 1, 2, 3) Aries-based SDKs are dominant, correlating with Indy at Layer 1
  • Trend 3: (Layer 2) OIDC SIOP may be starting to catch up with DIDComm in terms of a peer-to-peer connection layer
  • Trend 4: (Layer 3) The lack of harmonisation on Credential type/exchange standards is more stark than ever
  • Trend 5: (Layer 4) Ecosystem adoption could be driven by stronger commercial models and payment rails
 

Looking at the trends holistically, they each spiral into one main challenge: interoperability and equivalency at each technical layer is still one of the largest barriers to mainstream SSI adoption — this is an overarching trend in itself.

Resolving this challenge is not as easy as converging around one formal Standard at each layer, or just using “W3C” Standards. This is because companies, governments and consortia each have individual requirements for SSI that cannot be currently addressed by one interoperability profile.

This has led to a degree of short-termism and long-termism within the community.

Vendors with clients knocking on their door and asking for a product tomorrow are swaying towards short-term solutions (such as AnonCreds which contains privacy-preserving revocation and the capability for predicate proofs); whereas, enthusiasts and SSI visionaries are looking at a longer-term vision of harmonisation and wide-barrelled interoperability (such as JSON-LD with BBS+ signatures or even AnonCreds V2 with BBS+).

Both approaches, short-term and long-term, need to be recognised as valid by the broad SSI community; which does not make the resolution a quick fix.

Whether one mature enough Standard does emerge over the next few years, creating a convergence effect; or, we move towards a middleware marketplace between different implementations and standards, we think it is essential to maintain an open conversation about technical stacks and interoperability profiles. It is only by having this conversation that interoperability may be inched closer.

What do these trends mean for cheqd going forwards?

Our immediate focus is to make the cheqd network more accessible and usable by our SSI vendors, whilst offering an opportunity to help educate the wider community and beyond on the need for SSI. Our medium-to-long term focus is delivering new commercial models which will drive ecosystem adoption. We are pleased to see that this focus is also supported by the survey results.

As such, over the past few weeks we’ve been building an MVP which will act as a launchpad from which we’ll build out the payment models. This work is largely informed by the results from Trend 2 (Aries-based SDKs are dominant, correlating with Indy at Layer 1): we’ve been working on a cheqd Identity MVP which leverages the Veramo SDK (a JavaScript SDK), combined with a refactored Cosmos wallet (Lum Wallet), to make it possible to issue and hold a credential in the SAME wallet in which other DeFi activities are performed, using the cheqd DID Method.

By offering the ability to hold a Verifiable Credential in the same location as one holds tokens and performs DeFi activities (such as staking and delegating), we see an opportunity to begin demonstrating the intrinsic need for a tokenised network for identity. In parallel, this provides us with a starting point from which to build out the payment rails that are clearly desired, beginning with Verifier-pays-Issuer, enabled through our approach to Revocation Registries, coming soon.

If you’re at IIW, we’d love to share our demo with you. Co-founders Fraser Edwards and Ankur Banerjee will be sharing this on Wednesday, 27th, 13:30–14:30 PDT (full details will be on our social and community channels).

We’d love to hear your thoughts on our analysis and what this means for your company. Feel free to contact the product team directly — [email protected], or alternatively start a thread in either our Slack channel or Discord.

Understanding the SSI stack through 5 trends and challenges was originally published in cheqd on Medium, where people are continuing the conversation by highlighting and responding to this story.