Reimagining Banking with Reusable KYC



Self-sovereign identity (SSI), also known as decentralised identity, enables both KYC and creditworthiness checks in an efficient manner and in a way that prevents fraud. The verifier of data can always verify credentials directly with their issuer. Furthermore, KYC credentials can be reused, further saving costs and time while improving the user experience. In short, SSI can significantly reduce friction for users, improving the customer experience while still providing a compliant service. Where current KYC is “single-use”, SSI makes KYC “reusable”.

Here’s how.

Why current know your customer processes don’t work

Most financial services require identity verification, for example when opening a bank account. KYC is the process of verifying the identity of a customer; without it, fraud is impossible to prevent.

Driven by regulatory requirements, banks and corporates must undertake KYC and anti-money laundering (AML) checks to remain compliant.

While effective KYC processes are vital for successful compliance and risk management, endless identity checks are, at the same time, a hurdle for customers. According to SWIFT, AML and KYC compliance is growing in importance as more stringent regulatory requirements come into force, making it even harder for banks to balance compliance with frictionless customer service. That tension is exacerbated by the old-fashioned compliance methods currently in use.

Banks, including neobanks, continue to fight a losing battle with fraud in their industry. It is indeed a “losing battle” because they continue to use the same age-old AML and KYC processes to defeat ever-evolving digital threats. Oversharing personal information in order to verify an identity (e.g. showing a utility bill to prove an address) or storing all personal data in a centralised database are neither future-proof nor secure KYC methods.

A well-known example is the neobank darling Monzo: nearly half a million of its customers fell victim to a data breach. It shows that even digitally savvy banks’ KYC and data-handling methods aren’t sustainable or fit for the digital age.

Part of the problem is current regulation: financial institutions are burdened with heavy compliance requirements, which mean significant costs. There is no incentive, time or resources for a company in the financial sector to develop an identity solution of its own. A deeper point may be that they don’t feel it is their responsibility to find one. Given the amount of regulation imposed, one may assume that the regulators must know what they are doing and should be the ones to carry this burden. If a bank is already footing a hefty compliance bill, e.g. paying the salaries of a compliance team of hundreds along with the cost of the associated systems, why would it go further and find, let alone develop, a solution to the problem?

Yes, one can argue that neobanks are far ahead of the rest and are actively using user-friendly identity verification methods, such as selfies, a short video of the applicant or even live verification systems. However, these checks can easily be bypassed with deepfakes, some of which are literally mind-blowing (cheq out DuckDuckGoose).

Decentralised KYC

In order to understand how decentralised identity can aid the banking sector by re-imagining KYC, we need to understand the core concepts of SSI itself.

Self-sovereign identity, or decentralised identity, is an approach to identity that centres control of information on the user, hence it is also sometimes referred to as “self-managed identity”. It safeguards privacy by removing the need to store personal information in a central database and gives individuals greater control over what information they share. Unlike the existing system, it is a user-centric, user-controlled approach to exchanging authentic, digitally signed information in a far more secure way.

Acting as an enabler of decentralised identity, verifiable credentials are tamper-evident data files with a set of claims about a person, organisation, or thing that can be cryptographically verified.

Banks can adopt decentralised identity to streamline their entire KYC process by reusing verifiable credentials (VCs) across banks, whether by forming a consortium or by building on initiatives like open banking here in the UK. If one bank issues a VC, others can simply reuse it and verify it with the issuing bank. The VC can be updated on expiry or on a more regular basis (e.g. annually), following the existing policies of the bank and regulators and incorporating any further requirements set by the ecosystem or the consortium itself. This accommodates various commercial business models: the renewal could be done by the issuing bank, or perhaps by the bank that receives a VC which has just expired. The scenarios are endless.
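The reuse flow above reduces to a simple policy check by the receiving bank. The following is an illustrative sketch only; the credential shape, the example DIDs and the one-year validity period are assumptions for this example, not a real SSI library API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class KycCredential:
    """Hypothetical reusable KYC credential shared across a bank consortium."""
    subject_did: str   # the customer's DID
    issuer_did: str    # the bank that performed the original KYC
    issued_on: date
    expires_on: date

# Membership list maintained by the consortium (invented DIDs).
CONSORTIUM_ISSUERS = {"did:example:bank-a", "did:example:bank-b"}

def is_reusable(vc: KycCredential, today: date) -> bool:
    """A receiving bank accepts a VC if it was issued by a consortium member
    and has not expired; otherwise the KYC must be redone or the VC renewed,
    per whatever policy the consortium agrees."""
    return vc.issuer_did in CONSORTIUM_ISSUERS and today <= vc.expires_on

vc = KycCredential(
    subject_did="did:example:alice",
    issuer_did="did:example:bank-a",
    issued_on=date(2024, 1, 10),
    expires_on=date(2024, 1, 10) + timedelta(days=365),
)
print(is_reusable(vc, date(2024, 6, 1)))   # True: Bank B onboards Alice without repeating KYC
print(is_reusable(vc, date(2026, 6, 1)))   # False: expired, must be renewed
```

The point of the sketch is that the receiving bank's decision is a cheap local check plus a verification call to the issuer, rather than a full re-run of the KYC process.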

It would lead to saving costs for banks and a massive improvement to the onboarding experience.

Furthermore, banks can start actually becoming issuers of additional, useful data through SSI to their clients, e.g. credit scores, evidence of salary payments or bank balances, bank account title VCs, etc. They can get paid each time another entity (bank or otherwise) needs the VC verified, e.g. a mortgage provider, a new employer or a visa application. The payer can be the verifier of the data or the holder of the data. As a result, this will:

  • Make payments fair – banks get paid because they have actually done something valuable for the client by verifying a particular detail to a third party.
  • Put the data owner in control of their data. The data is issued to the bank’s client, who is the owner and user of that data, and so the user is always in control of where and what data to use.
  • Save time for each participant in the process.
  • Reduce the chances of fraudulent transactions or activity, including ID theft.
  • Allow applications that currently take hours or days, including bank accounts, mortgages, visas, loans and property purchases, to be verified in a matter of seconds.
  • Improve security – since users keep their own data, banks do not have to create databases and data silos, making them less of a target for hackers and decreasing their exposure to data leaks and hacks.

Through SSI, banks can go further and, in partnership with non-financial and/or government organisations or within their consortia, create a list of verified issuers of data, or even adopt a scale or scoring system. In such a system, the level of trust placed in each VC by the user or verifier of that VC is based on several factors, one of which can be the status or credibility of the issuer. An individual can then use those verified credentials to prove their identity. And so, if an individual VC has a lower credibility score due to the VC’s issuer, the holder may need to use multiple VCs from different issuers to prove that particular aspect of their identity.
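One minimal way to model such a scoring system is to give each issuer a credibility weight and let lower-weight VCs from independent issuers combine towards a verifier-defined threshold. The issuers, scores and threshold below are invented for illustration:

```python
# Invented issuer-credibility weights; in practice these would come from
# the consortium's or regulator's verified-issuer list.
ISSUER_CREDIBILITY = {
    "did:example:tier1-bank": 1.0,
    "did:example:fintech": 0.5,
    "did:example:utility-co": 0.3,
}

REQUIRED_SCORE = 1.0  # policy set by the verifier or consortium

def claim_is_proven(issuer_dids: list) -> bool:
    """A single high-credibility VC suffices; lower-credibility VCs from
    *distinct* issuers can be combined to reach the threshold (duplicates
    from the same issuer are deduplicated and add nothing)."""
    score = sum(ISSUER_CREDIBILITY.get(did, 0.0) for did in set(issuer_dids))
    return score >= REQUIRED_SCORE

print(claim_is_proven(["did:example:tier1-bank"]))                         # True
print(claim_is_proven(["did:example:fintech"]))                            # False
print(claim_is_proven(["did:example:fintech", "did:example:utility-co"]))  # False (0.8)
print(claim_is_proven(["did:example:fintech", "did:example:utility-co",
                       "did:example:tier1-bank"]))                         # True
```

A real scheme would likely weight other factors too (recency of the VC, level of assurance of the original check), but the additive-threshold shape captures the idea of multiple weaker credentials substituting for one strong one.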

Companies pioneering decentralised KYC

Decentralised identity is already being used by a number of banks and financial services firms for their KYC. As part of their Regulatory Sandbox, the UK’s Financial Conduct Authority (FCA) tested how decentralised identity can make it easier for customers to sign up for financial products while maintaining a high level of fraud and anti-money laundering protection.

There are also a number of web3 companies already improving KYC processes with the help of decentralised identity. Two particularly interesting ones are Umazi and Verida. Umazi speeds up corporate identity verification by streamlining due diligence and replacing repetitive, paperwork-heavy processes. Verida is a multi-chain protocol for interoperable database storage and messaging, built on decentralised identity.

We at cheqd believe that SSI helps solve this problem effectively. The decentralised nature of the solution makes it resilient to phishing, hacking or similar attacks.

In addition, the identity data is held by individuals themselves. So there is no longer a single, ripe, fruitful target (the bank, in this case). Even if a perpetrator is successful, there is no single honeypot storing all that sensitive and highly valuable personal data for thousands or millions of individuals.

Finally, cheqd payment rails will create commercial models for trusted data marketplaces, which will incentivise all the participants of the process. And as mentioned above, an organisation that receives identity data or credentials from an individual can have it verified by the issuer of that identity (see the image below), which further helps fight fraud.

cheqd Trust Triangle


In short, while current KYC is “single-use”, SSI makes KYC “reusable”, decentralised, privacy-preserving, cheaper, and future-proof.

The SSI market is around the $0.55 trillion mark, and this number may be significantly underestimated as further unexplored opportunities present themselves with SSI adoption. Regardless, experts believe the adoption of this technology will accelerate in the coming years.

Read more about how cheqd infrastructure enables Trusted Data Markets.

The role of cheqd in Trusted Data Markets

A technical approach to building Trusted Data Markets, reducing the time-to-reliance and compliance costs in digital interactions.


The “Trust Gap”

As discussed in “The Anatomy of a Trusted Data Market”, the composition of “trust” is a complex and interpersonal relationship between two parties. It is predicated on more than mere reliance on a particular party; namely, it involves an “extra factor”, including the perception of good faith and a willingness to act in a credible way.

However, “trust” becomes increasingly challenging in a digital context. As opposed to an “interpersonal” relationship, digital trust is often a “pseudonymous” relationship. Here we approach what is widely regarded by academics as the “trust gap”: the de facto lack of capacity to make an informed judgement on the “extra factor” needed to build “trust” beyond “mere reliance”.

Therefore, to build a functional Trusted Data Market with cheqd, we need to augment the requirement for this “extra factor” using a combination of trust-building technologies and techniques.

See: Camila Mont'Alverne, Sumitra Badrinathan, Amy Ross Arguedas, Benjamin Toff, Richard Fletcher and Rasmus Kleis Nielsen, “The Trust Gap: How and Why News on Digital Platforms Is Viewed More Sceptically versus News in General”, Reuters Institute, University of Oxford, 2022.

The Technical Components of a Trusted Data Market

  1. Decentralized Identifiers (DIDs)
  2. Verifiable Credentials (VCs)
  3. Trust Management Infrastructure (TMI) such as Trust Registries (TRs) or Status Registries (SRs).
Together, these components establish three properties:

  • Legitimacy established by DIDs
  • Integrity established by VCs
  • Reputability established by TMI
Technical Composition of Trusted Data

Legitimacy through Decentralized Identifiers

Decentralized Identifiers (DIDs) are a relatively new technical standard, ratified by the W3C as a formal recommendation in 2022, for uniquely identifying a particular entity in a digital domain. Each DID can be “resolved” to fetch a data file called a DID Document, which helps prove legitimacy in three ways:


DID Documents must contain signing keys, known as Verification Methods, which can be used to cryptographically sign other data files (such as Verifiable Credentials). If a DID and its associated Verification Method are referenced in another data file, that DID and its key can be challenged, and authenticated against, to prove that the DID is in fact:

  1. Legitimate;
  2. Associated with a particular DID Document (discussed in point 2);
  3. Associated with any other DID-Linked Resource (discussed in point 3).

If a DID is proved to be legitimate, it is possible to infer that the data file signed by the DID has a higher level of trustworthiness.
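The resolve-then-authenticate flow might look like the following sketch. For simplicity it uses a shared-secret HMAC where a real DID would publish an asymmetric public key (e.g. Ed25519) in its DID Document; the registry, DIDs, key type and key material below are all invented for this example.

```python
import hashlib
import hmac
import secrets

# Stand-in for a verifiable data registry / ledger holding DID Documents.
DID_REGISTRY = {
    "did:example:issuer-bank": {
        "id": "did:example:issuer-bank",
        "verificationMethod": [{
            "id": "did:example:issuer-bank#key-1",
            "type": "ExampleHmacKey2024",          # invented type for this sketch
            "keyMaterial": b"issuer-bank-secret",  # would be a *public* key in practice
        }],
    },
}

def resolve(did: str) -> dict:
    """Resolving a DID fetches its DID Document."""
    return DID_REGISTRY[did]

def sign(did: str, message: bytes) -> bytes:
    """Controller-side: sign a message with the key behind the DID."""
    key = resolve(did)["verificationMethod"][0]["keyMaterial"]
    return hmac.new(key, message, hashlib.sha256).digest()

def authenticate(did: str, challenge: bytes, response: bytes) -> bool:
    """Verifier-side: a random challenge is issued; a valid response proves
    control of the key referenced in the DID Document."""
    key = resolve(did)["verificationMethod"][0]["keyMaterial"]
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)
response = sign("did:example:issuer-bank", challenge)
print(authenticate("did:example:issuer-bank", challenge, response))        # True
print(authenticate("did:example:issuer-bank", b"stale-challenge", response))  # False
```

The structure, not the crypto primitive, is the point: resolution yields the Verification Method, and a fresh challenge prevents replay of an old response.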



Integrity through Verifiable Credentials

Verifiable Credentials (VCs) are another type of data file, again formalised by the W3C as a standard, designed to ensure absolute integrity of the “claims” listed in the data file. A “claim” in this sense is an assertion about a particular entity; for example, this could be attesting to someone’s name, address, date of birth etc.

VCs are able to carry out this function because the “claims” contained in the credential are intrinsically verifiable through cryptographic “proofs”.

VCs dovetail well together with DIDs, since the “proof” embedded in the VC is able to be signed by DIDs and their associated Verification Method keys. This allows the VC “proof” to be challenged and authenticated against using the Public Key Infrastructure from the DID and associated DID Document.

Once the proof is embedded in the VC, the VC may also be serialised as a JSON Web Token (JWT) or use a Data Integrity proof (VC-DI), to create a representation of the Credential that is tamper-evident. This means that if any modification is made to the serialisation, the embedded “proof” will become unverifiable.
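The tamper-evidence property can be demonstrated with a minimal JWT-style serialisation. An HMAC stands in for the issuer's asymmetric signature so the sketch stays dependency-free; this is an illustration of the principle, not a conformant VC-JWT.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # stands in for the issuer's private key

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def serialise(claims: dict) -> str:
    """JWT-style: base64url(payload) + '.' + base64url(signature)."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return f"{b64(payload)}.{b64(sig)}"

def verify(token: str) -> bool:
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4))
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return b64(expected) == sig_b64

token = serialise({"name": "Alice", "kycPassed": True})
print(verify(token))  # True: untampered

# Flip one claim in the payload: the embedded proof no longer verifies.
_, sig_b64 = token.split(".")
tampered_payload = json.dumps({"name": "Mallory", "kycPassed": True},
                              sort_keys=True).encode()
tampered = f"{b64(tampered_payload)}.{sig_b64}"
print(verify(tampered))  # False: tamper-evident
```

Any modification to the serialised payload invalidates the signature over it, which is exactly the property the paragraph above describes.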

Commonly, therefore, VCs are issued to a “holder”, who keeps them in a data wallet, and these VCs are cryptographically signed by a DID of the issuing entity (the “issuer”). This enables the “holder” to prove to a third party that the Verifiable Credential has both:

  1. Legitimacy — since it is signed by a particular entity’s DID; and
  2. Integrity — since the cryptographic proof is tamper-evident.

Different cryptographic signature schemes can also be layered on top of VCs to provide additional benefits, such as:

  1. Selective disclosure: where only a selected subset of VC claims, or selected claims from multiple VCs, are presented in one tamper-evident format (e.g. SD-JWT).
  2. Zero-Knowledge Proofs (ZKPs): where a VC can use its legitimacy and integrity to prove a particular fact, in a yes/no challenge/response mechanism, without revealing the actual “claims” written into the VC (e.g. AnonCreds).
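Selective disclosure in the SD-JWT style can be sketched as follows: the issuer commits to salted hashes of each claim, and the holder later reveals only the salt and value for the claims they choose to disclose. The claim names and hashing scheme here are simplified illustrations (and the issuer's signature over the digests is omitted for brevity):

```python
import hashlib
import json
import secrets

def digest(salt: str, name: str, value: str) -> str:
    """Salted commitment to a single claim."""
    return hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()

# Issuer: hash every claim with a fresh random salt; the signed VC carries
# only the digests, so undisclosed claims stay hidden.
claims = {"name": "Alice", "dob": "1990-01-01", "address": "1 Main St"}
disclosures = {n: (secrets.token_hex(8), v) for n, v in claims.items()}
signed_digests = {digest(s, n, v) for n, (s, v) in disclosures.items()}

# Holder: reveal only the date of birth to this verifier.
revealed_name = "dob"
salt, value = disclosures[revealed_name]

# Verifier: recompute the digest and check it is among the signed ones;
# the other claims remain hidden but are still covered by the signature.
print(digest(salt, revealed_name, value) in signed_digests)          # True
print(digest(salt, revealed_name, "1985-05-05") in signed_digests)   # False
```

The fresh salt per claim is what prevents a verifier from brute-forcing undisclosed low-entropy values (e.g. guessing dates of birth) against the published digests.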

VCs are highly flexible in their design, with certain flavours suited to specific use cases. However, each type maintains the same underlying focus on data integrity. This data integrity, coupled with the legitimacy of DID authentication, is in many cases enough for a verifier to build a level of “trust” in a digital interaction, reducing the time-to-reliance significantly.

Reputability through Trust Management Infrastructure

Trust Management Infrastructure (TMI) can be used to move the needle from “low/medium” trust digital interactions to “high” trust digital interactions. As such, this infrastructure may not always be required in a trusted data market — but may be relied upon when necessary.

DID-Linked Resources (DLRs) may be used to establish TMI in a decentralised way. Common examples of TMI for Trusted Data Markets are Trust Registries (TRs), which ascertain whether a DID belongs to a trusted set, and Status Registries (SRs), which can be used to check whether a VC has been revoked. However, for the purposes of this paper, we will use TRs as the canonical TMI to explain the concept of reputability.

A TR is a data object where one entity publicly attests to the legitimacy of other entities. For example, a Health Regulator such as the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK may create multiple trust registries of pharmaceutical manufacturers or wholesalers that are legally regulated to provide certain types of medicines, drugs or pharmaceutical products in the UK.

In the context of decentralised identity technology, TRs contain lists of DIDs pertaining to specific entities for a particular purpose. In the example above, MHRA could create a TR including the DIDs of each pharmaceutical manufacturer or wholesaler regulated to carry out a particular action.

By resolving and parsing a TR, a verifier can traverse the DIDs and metadata listed in it to establish a root-of-trust and confirm that the data they are receiving meets the requisite levels of assurance for a specific governance framework.
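At its simplest, the TR lookup described above reduces to a set-membership check against the registry's stated purpose. The registry contents and DIDs below are invented, and a real TR would be resolved as a DID-Linked Resource rather than hard-coded:

```python
# Invented TR, modelled on the MHRA example above: the regulator publicly
# attests to the DIDs it has licensed for a particular purpose.
TRUST_REGISTRY = {
    "controller": "did:example:mhra",
    "purpose": "pharmaceutical-wholesale",
    "trustedDids": {
        "did:example:wholesaler-1",
        "did:example:wholesaler-2",
    },
}

def issuer_is_reputable(issuer_did: str, registry: dict, purpose: str) -> bool:
    """After checking a VC's legitimacy and integrity, the verifier confirms
    that its issuer is attested to in the TR for the purpose at hand."""
    return purpose == registry["purpose"] and issuer_did in registry["trustedDids"]

print(issuer_is_reputable("did:example:wholesaler-1", TRUST_REGISTRY,
                          "pharmaceutical-wholesale"))  # True
print(issuer_is_reputable("did:example:unknown", TRUST_REGISTRY,
                          "pharmaceutical-wholesale"))  # False
```

Checking the purpose as well as the DID matters: a DID trusted to wholesale medicines is not thereby trusted for unrelated attestations.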

TRs provide relying parties with additional assurance through this link to a root-of-trust, resulting in:

  1. Reputability, since the “verifier” will be able to check that the “issuer” DID signing the “holder” VC is attested to by one or multiple other entities through a public TR; this layers on top of:
  2. Legitimacy (as previously discussed)
  3. Integrity (as previously discussed)

To conclude this section, the diagram below helps explain how the three technological components described in this section work in conjunction with one another — to build a comprehensive web of trust.

Interplay between DIDs, VCs and TRs

This diagram illustrates the following flows:

  1. A DID cryptographically signs a VC, which establishes Legitimacy and Integrity in the data the VC contains;
  2. A VC references a TR (or other TMI), which establishes Legitimacy and Integrity in the TR that the verifier is intended to use;
  3. The TR provides additional information about the reputability of the DID, which establishes Legitimacy, Integrity and Reputability in the DID and signed VC, and can be used to meet governance and compliance requirements.

Bridging the Trust Gap

Putting these components together, a verifier receiving a Verifiable Credential can establish that it is:

  1. Legitimate, since it is attested to by a particular “issuer”;
  2. Cryptographically untampered, because the VC data model enables proof serialisation and data integrity;
  3. Reputable, since one or multiple TRs can be referenced in which the issuer’s DID is attested to by third parties.

This bridges the trust gap in both directions, as relying parties can be confident that:

  1. Other parties they are interacting with meet compliance requirements for their industry or use case, creating trusted markets;
  2. They themselves meet compliance requirements, as they can demonstrably assure third-party regulators that the data they receive from other parties has absolute legitimacy, integrity and sufficient reputability for a particular governance framework.

Making the Market

In cheqd’s model, the first two trust checks are free, while the reputability check is payment-gated:

  • Legitimacy, via the authentication of a DID = Free
  • Integrity, via the verification of a VC = Free
  • Reputability, via the verification of a TR (or other TMI) = Paid

This creates:

  1. A cost-saving opportunity for entities to achieve a high level of trust, compared to existing KYC and KYB mechanisms;
  2. A time-efficiency bonus for achieving a high level of trust, with trusted data being instantaneously verifiable, reducing the burden of regulatory compliance;
  3. A never-before-seen revenue opportunity for “issuers” of trusted data.

Payment Gating Reputation

cheqd supports the above objective by payment gating the reputational arm of Trusted Data: Trust Management Infrastructure (TMI), i.e. Trust Registries (TRs) or, in an alternative use case, Status Registries (SRs).

Payment gating a Trust Registry (TR)
Payment gating a Status Registry (SR)
In this model:

  1. “Issuers” (and in some cases “Regulators”) are able to set the market price of unlocking a TR or SR;
  2. “Verifiers” are able to choose whether they want to pay to unlock the result of a TR or SR check;
  3. Payments are made back to the “issuers” of the VC that is being presented to the “verifier”.

Several market dynamics follow:

  1. If a TR for a particular DID, or an SR for a particular VC, has a high Level of Assurance (LoA), such as being created by a reputable entity, it is reasonably foreseeable that the price for that check may be higher than average.
  2. If the price of a TR or SR check is too high, the verifier will either: (a) choose not to make the extra payment; or (b) choose another TR to make the check against (if available).
  3. Once organisations and industries see the revenue opportunities from creating TRs, it is hypothesised that a competitive market will emerge, with a range of TRs offering differing LoAs at a range of prices.
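The hypothesised market dynamic can be sketched as a verifier choosing the cheapest registry that still meets its required Level of Assurance, or skipping the paid check entirely when nothing fits its budget. The registries, prices and LoA values below are invented for illustration:

```python
# Invented marketplace of trust registries with differing LoAs and prices.
REGISTRIES = [
    {"name": "regulator-tr", "loa": "high", "price_cheq": 5.0},
    {"name": "consortium-tr", "loa": "high", "price_cheq": 3.5},
    {"name": "community-tr", "loa": "medium", "price_cheq": 0.5},
]

LOA_RANK = {"low": 0, "medium": 1, "high": 2}

def choose_registry(required_loa: str, budget_cheq: float):
    """Return the cheapest TR that meets the required LoA within budget,
    or None if the verifier would skip the paid reputability check."""
    candidates = [r for r in REGISTRIES
                  if LOA_RANK[r["loa"]] >= LOA_RANK[required_loa]
                  and r["price_cheq"] <= budget_cheq]
    return min(candidates, key=lambda r: r["price_cheq"], default=None)

print(choose_registry("high", budget_cheq=4.0))   # cheapest high-LoA registry within budget
print(choose_registry("high", budget_cheq=1.0))   # None: too expensive, check skipped
```

Competition among registries then shows up directly in this selection step: a cheaper registry at the same LoA wins the verifier's payment.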

We will explore how these use cases present a clear product-market fit for cheqd, cheqd’s partners and the wider SSI ecosystem, projected to capture 550 billion dollars’ worth of value by 2030.