Reimagining Banking with Reusable KYC Thanks to Decentralised Identity

Introduction

Self-sovereign identity (SSI), also known as decentralised identity, enables both KYC and creditworthiness checks to be carried out efficiently and in a way that prevents fraud: the verifier of data can always verify credentials directly against the issuer. Furthermore, KYC credentials can be reused, saving cost and time while improving the user experience. In short, SSI significantly reduces friction for users, improving the customer experience while still delivering a compliant service. While current KYC is “single-use”, SSI makes KYC “reusable”.

Here’s how.

Why current know your customer (KYC) processes don’t work

Most financial services require identity verification before a user can access them, for example when opening a bank account. KYC is the process of verifying the identity of a customer; without it, fraud is effectively impossible to prevent.

Driven by regulatory requirements, banks and corporates must undertake KYC and anti-money laundering (AML) checks to remain compliant.

While effective KYC processes are vital for successful compliance and risk management, endless identity checks are, at the same time, a hurdle for customers. According to SWIFT, AML and KYC compliance is growing in importance as more stringent regulatory requirements come into force, making it even more difficult for banks to balance compliance with frictionless customer service. That balancing act is exacerbated by the old-fashioned compliance methods currently in use.

Banks, including neobanks, continue to fight a losing battle against fraud. It is indeed a “losing battle” because they keep using the same age-old AML and KYC processes against ever-evolving digital threats. Oversharing personal information to verify an identity (e.g. showing a utility bill to prove an address) and storing all personal data in a centralised database are neither future-proof nor secure KYC methods.

A well-known example is the neobank darling Monzo, where nearly half a million customers fell victim to a data breach. It shows how even digitally savvy banks’ KYC and data-handling methods are neither sustainable nor fit for the digital age.

Part of the problem lies in current regulations: financial institutions are burdened with heavy compliance requirements, which translate into significant costs. There is little incentive, time or resource for a company in the financial sector to develop an identity solution of its own. A deeper point may be that they do not feel it is their responsibility to find one: given the amount of regulation imposed, one might assume the regulators know what they are doing and should carry this burden. If a bank is already footing a hefty compliance bill, e.g. paying the salaries of a compliance team of hundreds of people along with the cost of the associated systems, why would it go further and try to find, let alone develop, a solution to the problem?

Yes, one can argue that neobanks are well ahead of the rest and actively use user-friendly identity verification methods, such as selfies, a short video of the applicant or even live verification systems. However, these checks can easily be bypassed with deepfakes, some of which are literally mind-blowing (cheq out DuckDuckGoose).

Decentralised KYC

In order to understand how decentralised identity can aid the banking sector by re-imagining KYC, we need to understand the core concepts of SSI itself.

Self-sovereign identity, or decentralised identity, is an approach to identity that centres control of information on the user, which is why it is also sometimes referred to as “self-managed identity”. It safeguards privacy by removing the need to store personal information in a central database and gives individuals greater control over what information they share. Unlike the existing system, it is a user-centric and user-controlled approach to exchanging authentic and digitally signed information in a much more secure way.

Acting as an enabler of decentralised identity, verifiable credentials are tamper-evident data files with a set of claims about a person, organisation, or thing that can be cryptographically verified.

Banks can adopt decentralised identity to streamline their entire KYC process by reusing verifiable credentials across banks, either by forming a consortium or potentially by building on initiatives such as open banking here in the UK. If one bank issues a VC, others can simply reuse it and verify it with the issuer of the VC. The VC can be renewed on expiry or on a more regular basis (e.g. annually), based on the existing policies of the banks and regulators, and incorporating any further requirements set by the ecosystem or the consortium itself. This accommodates various commercial business models: the renewal could be performed by the issuing bank, or perhaps by the bank that receives a VC which has just expired. The scenarios are endless.
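
To make the reuse pattern concrete, here is a minimal TypeScript sketch of the flow described above: one bank performs KYC once and issues a credential, and another bank verifies it instead of repeating the process. All names (KycCredential, issueKycCredential, reuseKycCredential) and the signing helpers are hypothetical illustrations, not a real cheqd or banking API; a production flow would use a W3C-conformant VC library and a DID resolver.

```typescript
// Hypothetical types and functions illustrating reusable KYC credentials.
// This is a sketch under assumed interfaces, not a real SDK.

interface KycCredential {
  issuer: string;            // DID of the issuing bank, e.g. "did:cheqd:mainnet:abc" (made-up)
  subject: string;           // DID of the customer
  claims: { fullName: string; dateOfBirth: string; address: string };
  issuedAt: string;
  expiresAt: string;
  proof: string;             // cryptographic signature over the credential
}

// Bank A performs a full KYC check once and issues a credential to the customer.
function issueKycCredential(issuerDid: string, customerDid: string,
                            claims: KycCredential["claims"],
                            sign: (payload: string) => string): KycCredential {
  const issuedAt = new Date().toISOString();
  const expiresAt = new Date(Date.now() + 365 * 24 * 3600 * 1000).toISOString(); // e.g. annual renewal
  const payload = JSON.stringify({ issuerDid, customerDid, claims, issuedAt, expiresAt });
  return { issuer: issuerDid, subject: customerDid, claims, issuedAt, expiresAt, proof: sign(payload) };
}

// Bank B reuses the credential: instead of repeating KYC, it verifies the proof
// against Bank A's public key (obtained by resolving Bank A's DID) and checks expiry.
function reuseKycCredential(vc: KycCredential,
                            verify: (payload: string, proof: string, issuerDid: string) => boolean): boolean {
  const notExpired = new Date(vc.expiresAt) > new Date();
  const payload = JSON.stringify({ issuerDid: vc.issuer, customerDid: vc.subject,
                                   claims: vc.claims, issuedAt: vc.issuedAt, expiresAt: vc.expiresAt });
  return notExpired && verify(payload, vc.proof, vc.issuer);
}
```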

It would lead to saving costs for banks and a massive improvement to the onboarding experience.

Furthermore, banks can start becoming issuers of additional, useful data through SSI to their clients, e.g. credit scores, evidence of salary payments or bank balance, bank account title VCs, etc. They can get paid each time another entity (bank or otherwise) needs the VC verified – e.g. a mortgage provider, a new employer or a visa application. The payer can be the verifier of the data or the holder of the data. As a result, this will:

  • Make payments fair – banks get paid because they have actually done something valuable for the client by verifying a particular detail to a third party.
  • Put the data owner in control of their data. The data is issued to the bank’s client, who is the owner and user of that data, so the user is always in control of where and which data is used.
  • Save time for every participant in the process.
  • Reduce the chances of fraudulent transactions or activity, including identity theft.
  • Turn applications that currently take hours or days – bank accounts, mortgages, visas, loans, property purchases and so on – into ones verified in a matter of seconds.
  • Improve security – since users keep their own data, banks do not have to build databases and data silos, making them less of a target for hackers and decreasing their exposure to data leaks and hacks.

Through SSI, banks can further (in partnership with non-financial and/or government organisations, or just within their consortium) create a list of verified issuers of data, or even adopt a scale or scoring system. In such a system, the level of trust placed in each VC by the user or verifier of that VC is based on several factors, one of which can be the status or credibility of the issuer. An individual can then use those verified credentials to prove their identity. And so, if an individual VC has a lower credibility score due to the VC’s issuer, the holder may need to use multiple VCs from different issuers to prove that particular aspect of their identity.
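
As an illustration only, a scoring scheme of the kind just described might combine the credibility of each VC’s issuer and require a minimum aggregate score before accepting a claim. The scale, threshold and DIDs below are hypothetical, not something defined by cheqd or any existing consortium.

```typescript
// Hypothetical issuer-credibility scoring; the scale and threshold are illustrative only.
interface PresentedCredential { issuerDid: string; claimType: string }

const issuerCredibility: Record<string, number> = {
  "did:example:regulated-bank": 1.0,   // fully regulated issuer (example DID)
  "did:example:utility-co": 0.6,       // lower-assurance issuer (example DID)
};

// A claim is accepted if the combined credibility of the issuers attesting to it
// meets the threshold; weaker issuers can be combined to reach the same level.
function claimAccepted(creds: PresentedCredential[], claimType: string, threshold = 1.0): boolean {
  const total = creds
    .filter(c => c.claimType === claimType)
    .reduce((sum, c) => sum + (issuerCredibility[c.issuerDid] ?? 0), 0);
  return total >= threshold;
}
```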

Companies pioneering decentralised KYC

Decentralised identity is already being used by a number of banks and financial services firms for their KYC. As part of their Regulatory Sandbox, the UK’s Financial Conduct Authority (FCA) tested how decentralised identity can make it easier for customers to sign up for financial products while maintaining a high level of fraud and anti-money laundering protection.

There are also a number of web3 companies already improving KYC processes with the help of decentralised identity. Two particularly interesting ones are Umazi and Verida. Umazi speeds up corporate identity verification by replacing repetitive, paperwork-heavy due-diligence processes. Verida is a multi-chain protocol for interoperable database storage and messaging built on decentralised identity.

We at cheqd believe that SSI helps solve this problem effectively. The decentralised nature of the solution makes it resilient to phishing, hacking or similar attacks.

In addition, the identity data is held by individuals themselves. So, there is no single, ripe, fruitful target (a bank in this case) any longer. Even if the perpetrator is successful, there is no longer a single honeypot that stores all that sensitive and very valuable personal data for thousands or millions of individuals.

Finally, cheqd payment rails will create commercial models for trusted data marketplaces, which will incentivise all the participants of the process. And as mentioned above, an organisation that receives identity data or credentials from an individual can have it verified by the issuer of that identity (see the image below), which further helps fight fraud.

cheqd Trust Triangle

Conclusion

In short, while current KYC is “single-use”, SSI makes KYC “reusable”, decentralised, privacy-preserving, cheaper, and future-proof.

The SSI market is estimated at around the $0.55 trillion mark, and even this figure may be significantly underestimated as further areas of opportunity open up with SSI adoption. Irrespective, experts believe that the adoption of this technology will accelerate in the coming years.

Read more about how cheqd infrastructure enables Trusted Data Markets.

The role of cheqd in Trusted Data markets

A technical approach to building Trusted Data Markets, reducing the time-to-reliance and compliance costs in digital interactions.

Introduction

The “Trust Gap”

As discussed in “The Anatomy of a Trusted Data Market”, the composition of “trust” is a complex and interpersonal relationship between two parties. It is predicated on more than the mere reliance on a particular party; namely, it involves an “extra factor”, including the perception of good faith and the willingness to act in a credible way.

However, when considering “trust” in a digital context, it becomes increasingly challenging. As opposed to an “interpersonal” relationship, digital trust is often a “pseudonymous” relationship. Here we approach what is widely regarded by academics as the “trust gap”; the de facto lack of the capacity to make an informed judgement on the “extra factor” to build “trust” beyond “mere reliance”.

Therefore, to build a functional Trusted Data Market with cheqd, we need to augment the requirement for this “extra factor” using a combination of trust-building technologies and techniques.

See: Camila Mont'Alverne, Sumitra Badrinathan, Amy Ross Arguedas, Benjamin Toff, Richard Fletcher, and Rasmus Kleis Nielsen. The Trust Gap: How and Why News on Digital Platforms is viewed more Sceptically versus News in General. 2022. University of Oxford. Reuters Institute.

Available at: https://reutersinstitute.politics.ox.ac.uk/

The Technical Components of a Trusted Data Market

  1. Decentralized Identifiers (DIDs)
  2. Verifiable Credentials (VCs)
  3. Trust Management Infrastructure (TMI) such as Trust Registries (TRs) or Status Registries (SRs).
  • Legitimacy established by DIDs
  • Integrity established by VCs
  • Reputability established by TMI
Technical Composition of Trusted Data

Legitimacy through Decentralized Identifiers

Decentralized Identifiers (DIDs) are a relatively new technical standard, ratified by the W3C as a formal recommendation in 2022, for uniquely identifying a particular entity in a digital domain. Each DID can be “resolved” to fetch a data file called a DID Document, which helps prove legitimacy in three ways:

Verification

DID Documents must contain signing keys, known as Verification Methods, which can be used to cryptographically sign other data files (such as Verifiable Credentials). If a DID and associated Verification Method are found referenced in another data file, that DID and its key can be challenged, and authenticated against, to prove that the DID is in fact:

  1. Legitimate;
  2. Associated with a particular DID Document (discussed under Resolution below);
  3. Associated with any other DID-Linked Resource (discussed under Resources below).

If a DID is proved to be legitimate, it is possible to infer that the data file signed by the DID has a higher level of trustworthiness.
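
For illustration, a minimal DID Document with a verification method might look like the following (the field names follow the W3C DID Core vocabulary, but the DID value and key are made-up examples), and a verifier could authenticate a signed payload against it roughly as sketched. The `resolveDid` and `verifySignature` helpers are assumptions, not a specific library.

```typescript
// A minimal DID Document shape (per W3C DID Core); all values are illustrative.
const didDocument = {
  id: "did:cheqd:mainnet:zABCDEF123",            // example DID, not a real one
  verificationMethod: [{
    id: "did:cheqd:mainnet:zABCDEF123#key-1",
    type: "Ed25519VerificationKey2020",
    controller: "did:cheqd:mainnet:zABCDEF123",
    publicKeyMultibase: "z6Mk...example...",     // issuer's public signing key (placeholder)
  }],
  authentication: ["did:cheqd:mainnet:zABCDEF123#key-1"],
};

// Sketch of authenticating data signed by that DID: resolve the DID, find the
// referenced verification method, and verify the signature with its public key.
async function isSignedByDid(
  payload: Uint8Array, signature: Uint8Array, verificationMethodId: string,
  resolveDid: (did: string) => Promise<typeof didDocument>,
  verifySignature: (payload: Uint8Array, sig: Uint8Array, publicKeyMultibase: string) => boolean,
): Promise<boolean> {
  const did = verificationMethodId.split("#")[0];
  const doc = await resolveDid(did);
  const vm = doc.verificationMethod.find(m => m.id === verificationMethodId);
  return vm !== undefined && verifySignature(payload, signature, vm.publicKeyMultibase);
}
```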

Resolution

Each DID can be resolved, using a DID resolver, to fetch its associated DID Document. This means any relying party can independently retrieve the signing keys and metadata needed to carry out the verification described above.

Resources

DIDs can also be associated with other data files, known as DID-Linked Resources, such as schemas, status lists or trust registries. These can be fetched and verified in the same way, and are discussed further in the section on Trust Management Infrastructure below.

Integrity through Verifiable Credentials

Verifiable Credentials (VCs) are another type of data file, again formalised by the W3C as a standard, designed to ensure absolute integrity of the “claims” listed in the data file. A “claim” in this sense is an assertion about a particular entity; for example, this could be attesting to someone’s name, address, date of birth etc.

VCs are able to carry out this function because the “claims” contained in the credential are intrinsically verifiable through cryptographic “proofs”.

VCs dovetail well together with DIDs, since the “proof” embedded in the VC is able to be signed by DIDs and their associated Verification Method keys. This allows the VC “proof” to be challenged and authenticated against using the Public Key Infrastructure from the DID and associated DID Document.

Once the proof is embedded in the VC, the VC may also be serialised as a JSON Web Token (JWT) or use a Data Integrity proof (VC-DI), to create a representation of the Credential that is tamper-evident. This means that if any modification is made to the serialisation, the embedded “proof” will become unverifiable.

Commonly, therefore, VCs are issued to a “holder”, who keeps them in a data wallet, and these VCs are cryptographically signed by a DID of the issuing entity (the “issuer”). This enables the “holder” to prove to a third party that the Verifiable Credential has both:

  1. Legitimacy — since it is signed by a particular entity’s DID; and
  2. Integrity — since the cryptographic proof is tamper-evident.
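
As a rough illustration of the tamper-evidence property just described, the sketch below shows a JWT-style flow: the credential claims are encoded and signed, and any change to the encoded payload invalidates the signature. The `signJwt` and `verifyJwt` helpers stand in for a real JWT or VC library and are assumptions for this sketch.

```typescript
// Sketch of JWT-style tamper evidence for a credential payload.
interface VcPayload {
  iss: string;                         // issuer DID
  sub: string;                         // holder DID
  vc: { credentialSubject: Record<string, string> };
}

function demo(signJwt: (p: VcPayload, key: string) => string,
              verifyJwt: (token: string, key: string) => boolean,
              issuerPrivateKey: string, issuerPublicKey: string) {
  const payload: VcPayload = {
    iss: "did:example:issuer",
    sub: "did:example:holder",
    vc: { credentialSubject: { name: "Alice", accountTitle: "A. Example" } },
  };

  const token = signJwt(payload, issuerPrivateKey);    // header.payload.signature
  console.log(verifyJwt(token, issuerPublicKey));      // true: untampered

  // Flipping even one character of the encoded payload breaks verification.
  const [header, body, sig] = token.split(".");
  const tampered = [header, body.slice(0, -1) + (body.endsWith("A") ? "B" : "A"), sig].join(".");
  console.log(verifyJwt(tampered, issuerPublicKey));   // false: tamper-evident
}
```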

Different cryptographic signature schemes can also be layered on top of VCs to provide additional benefits, such as:

  1. Selective disclosure: where only a selected subset of VC claims, or selected claims from multiple VCs, are presented in one tamper-evident format (e.g. SD-JWT).
  2. Zero-Knowledge Proofs (ZKPs): where a VC can use its legitimacy and integrity to prove a particular fact, in a yes/no challenge/response mechanism, without revealing the actual “claims” written into the VC (e.g. AnonCreds).
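
To give a feel for selective disclosure, the following sketch uses the salted-hash idea behind SD-JWT: the issuer signs only digests of the claims, and the holder later discloses just the salt-and-value pairs they choose, which the verifier can hash and match against the signed digests. This is a conceptual sketch of the technique, not the actual SD-JWT wire format.

```typescript
import { createHash, randomBytes } from "crypto";

// Conceptual selective disclosure via salted claim digests (the idea behind SD-JWT).
const digest = (salt: string, name: string, value: string) =>
  createHash("sha256").update(`${salt}.${name}.${value}`).digest("hex");

// Issuer: salt each claim and sign only the digests (signing itself omitted here).
const claims = { name: "Alice", dateOfBirth: "1990-01-01", address: "1 Example St" };
const disclosures = Object.entries(claims).map(([name, value]) => ({
  salt: randomBytes(16).toString("hex"), name, value,
}));
const signedDigests = disclosures.map(d => digest(d.salt, d.name, d.value));

// Holder: reveal only the claims they choose (e.g. just the name).
const revealed = disclosures.filter(d => d.name === "name");

// Verifier: recompute digests of revealed claims and check they are among the signed ones.
const allMatch = revealed.every(d => signedDigests.includes(digest(d.salt, d.name, d.value)));
console.log(allMatch); // true, while dateOfBirth and address stay hidden
```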

VCs are highly flexible in their design, with certain flavours being useful for specific use cases. However, each type maintains the same underlying focus on data integrity. This data integrity coupled with the legitimacy of DID authentication, is in many cases, enough for a verifier to build a level of “trust” in a digital interaction, reducing the time-to-reliance significantly.

Reputability through Trust Management Infrastructure

Trust Management Infrastructure (TMI) can be used to move the needle from “low/medium” trust digital interactions to “high” trust digital interactions. As such, this infrastructure may not always be required in a trusted data market — but may be relied upon when necessary.

DID-Linked Resources (DLRs) may be used to establish TMI in a decentralized way. Examples of common TMI for Trusted Data Markets are Trust Registries (TRs) which may ascertain whether a DID belongs to a trusted set; or Status Registries (SRs), which may be used to check if the VC status has been revoked or not. However, for the purposes of this paper, we will use TRs as the canonical TMI to explain the concept of reputability.

A TR is a data object where one entity publicly attests to the legitimacy of other entities. For example, a Health Regulator such as the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK may create multiple trust registries of pharmaceutical manufacturers or wholesalers that are legally regulated to provide certain types of medicines, drugs or pharmaceutical products in the UK.

In the context of decentralised identity technology, TRs contain lists of DIDs pertaining to specific entities for a particular purpose. In the example above, MHRA could create a TR including the DIDs of each pharmaceutical manufacturer or wholesaler regulated to carry out a particular action.

Through resolving and parsing a TR, a verifier can traverse the DIDs and metadata listed in it to establish a root of trust, and confirm that the data they are receiving meets the requisite levels of assurance for a specific governance framework.
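
A minimal sketch of that traversal might look like this. The registry shape, the resolver and the DIDs are assumed for illustration; cheqd’s actual DID-Linked Resource format is not shown here.

```typescript
// Hypothetical trust-registry shape: an authority's DID attests to a set of member DIDs.
interface TrustRegistry {
  authorityDid: string;                          // e.g. the regulator's DID
  entries: { did: string; role: string; levelOfAssurance: number }[];
}

// Verifier-side check: is the issuer DID listed in the registry with a sufficient LoA?
async function issuerIsTrusted(
  issuerDid: string, requiredLoA: number, registryUri: string,
  fetchRegistry: (uri: string) => Promise<TrustRegistry>,   // assumed resolver for a DID-Linked Resource
): Promise<boolean> {
  const registry = await fetchRegistry(registryUri);
  const entry = registry.entries.find(e => e.did === issuerDid);
  return entry !== undefined && entry.levelOfAssurance >= requiredLoA;
}
```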

TRs provide relying parties with additional assurance through this way of linking to a root of trust, resulting in:

  1. Reputability, since the “verifier” will be able to check that the “issuer” DID signing the “holder” VC is attested to by one or multiple other entities through a public TR; this layers on top of:
  2. Legitimacy (as previously discussed)
  3. Integrity (as previously discussed)

To conclude this section, the diagram below helps explain how the three technological components described in this section work in conjunction with one another — to build a comprehensive web of trust.

Interplay between DIDs, VCs and TRs

This diagram illustrates the following flows:

  1. A DID cryptographically signs a VC, which establishes Legitimacy and Integrity in the data the VC contains
  2. A VC references a TR (or other TMI), which establishes Legitimacy and Integrity in the TR that the verifier is intended to use
  3. The TR provides additional information about the reputability of the DID, which establishes Legitimacy, Integrity and Reputability in the DID and signed VC which can be used to meet governance and compliance requirements.

Bridging the Trust Gap

A verifier receiving a Verifiable Credential can therefore establish that it is:

  1. Legitimate, since it is attested to by a particular “issuer” (I);
  2. Cryptographically untampered, because the VC data model enables proof serialisation and data integrity; and
  3. Reputable, since one or multiple TRs where the issuer’s DID is attested to by third parties can be referenced.

In turn, organisations can demonstrate that:

  1. Other parties they are interacting with meet compliance requirements for their industry or use case, creating trusted markets;
  2. They themselves meet compliance requirements, as they can demonstrably assure third-party regulators that the data they receive from other parties has absolute legitimacy, integrity and sufficient reputability for a particular governance framework.

Making the Market

In a Trusted Data Market built on cheqd, the three trust components carry different costs for the verifier:

  • Legitimacy, via the authentication of a DID = Free
  • Integrity, via the verification of a VC = Free
  • Reputability, via the verification of a TR (or other TMI) = Paid

This model creates:

  1. A cost-saving opportunity for entities to achieve a high level of trust, compared to existing KYC and KYB mechanisms;
  2. A time-efficiency bonus for achieving a high level of trust, with trusted data being instantaneously verifiable, reducing the burden of regulatory compliance;
  3. A never-before-seen revenue opportunity for “issuers” of trusted data.

Payment Gating Reputation

The way that cheqd supports the above objective is by payment-gating the reputational arm of Trusted Data: Trust Management Infrastructure (TMI), i.e. Trust Registries (TRs) or, in an alternative use case, Status Registries (SRs).

Payment gating a Trust Registry (TR)
Payment gating a Status Registry (SR)
In this model:

  1. “Issuers” (and in some cases “Regulators”) are able to set the market price for unlocking a TR or SR;
  2. “Verifiers” are able to choose whether they want to pay to unlock the result of a TR or SR to achieve a higher level of assurance;
  3. Payments are made back to the “issuers” of the VC that is being presented to the “verifier”.

Several market dynamics follow:

  1. If a TR for a particular DID, or an SR for a particular VC, has a high Level of Assurance (LoA), for example because it is created by a reputable entity, it is reasonably foreseeable that the price for that check may be higher than average.
  2. If the price of a TR or SR check is too high, the verifier will either (a) choose not to make the extra payment, or (b) choose another TR to make the check against (if available).
  3. Once organisations and industries see the revenue opportunities from creating TRs, it is hypothesised that a competitive market will emerge, with a range of TRs offering differing LoAs and an associated range of prices.
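
As a purely illustrative sketch of the dynamic above, a payment-gated status check might look as follows. The function names, payment mechanics and parameters here are hypothetical and do not represent cheqd’s actual APIs or pricing model.

```typescript
// Hypothetical payment-gated Status Registry check; not an actual cheqd API.
interface StatusCheckResult { revoked: boolean; levelOfAssurance: number }

async function checkStatusWithPayment(
  credentialId: string,
  registry: { price: number; issuerPaymentAddress: string },        // price set by the issuer
  verifierBudget: number,
  pay: (to: string, amount: number) => Promise<void>,               // assumed payment rail
  readStatus: (credentialId: string) => Promise<StatusCheckResult>, // assumed gated read
): Promise<StatusCheckResult | null> {
  // The verifier decides whether the assurance is worth the price set by the issuer.
  if (registry.price > verifierBudget) return null;                 // skip the paid check
  await pay(registry.issuerPaymentAddress, registry.price);         // payment flows verifier -> issuer
  return readStatus(credentialId);                                  // unlocked registry result
}
```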

We will explore how these use cases present a clear product-market fit for cheqd, cheqd’s partners and the wider SSI ecosystem, which is projected to capture $550 billion worth of value by 2030.

The Anatomy of a Trusted Data Market


A deep dive into the fundamentals of what a Trusted Data Market is and how cheqd’s infrastructure enables them.

This is part of a series, read the second blog ‘The role of cheqd in Trusted Data markets’ here.

In short, a “Trusted Data Market” is a vision for both consumers and businesses where the paradigm of data ownership is inverted to the user, trust is verifiable, and can be transacted upon within a privacy-preserving data market.¹

Introduction

If you look up the word “trust” in the dictionary, the first definition is typically about general trust between two parties, but almost always, the second definition is a financial one. Take, for example, the language around finance: the currency of the United States is backed “by the full faith and credit” of the U.S. government, and “credit”, from the Latin credere, literally means “to trust”. Trust is known to be fundamental to better economic outcomes,² and we all trust in others and reciprocate that trust with trustworthy actions as part of our everyday lives. Economic historians debate the relative importance of models of trade growth and of capital infusion in fuelling innovation, but there is broad agreement on a central notion that goes back to civilisation’s origin: trust undergirds cooperative behaviour. But what is trust’s central role in markets? And how could verifiable data change that role to form a new data paradigm?

Defining Trust

Philosophers have an important definition of what “trust” is. Most agree that the dominant paradigm of trust is “interpersonal” and that this type of trust is a form of reliance, although it is not mere reliance.³ Rather, trust involves reliance “plus some extra factor”.⁴ Although this definition is lacking when applied to trust in current data markets, we will later explore its applicability within a trusted data market.

This extra factor typically concerns why the trustor (i.e., the one trusting) ought to rely on the trustee to be willing to do what they are trusted to do. This is further conditioned by another layer of trust: whether the trustor is optimistic that the trustee will have a “good” motive for acting. This is especially important should the trusted interaction involve a transactional relationship (the exchange of assets or risk), to optimise incentive alignment within a market context.

This is demonstrated in the logic of why one should trust someone.

T(x,y) = x trusts y

R(x,y) = x relies on y

E(x,y) = x ought to believe there is an ‘extra factor’ for trusting y

W(x,y) = y is willing to do what they are trusted to do

M(x,y) = y acts with good motive

For all x and y,

T(x,y) ↔ [R(x,y) ∧ E(x,y) ∧ W(x,y) ∧ (T(x,y) → M(x,y))]

This translates to: for all x and y, x trusts y if and only if x relies on y, x believes there is an extra factor for trusting y, y is willing to do what they are trusted to do, and, if x trusts y, then x believes that y has a good motive for acting. This logic breaks down if y is not willing to do what they were originally trusted to do and/or y acts with a “bad” motive.

Critically, to establish these relations as “trusted”, time and repetition are necessary. The more “trusted interactions” are performed successfully, the more safely one can assume trust, as one can assign a “good” motive for acting and point to a history of actions that consistently shows y is willing to do what they are trusted to do.

In this blog, we’ll explore how a market dynamic could take shape if certain trust conditions described in the introduction became verifiable from inception, what this could subsequently mean for the time-to-reliance required to form trust, and what consequences in terms of transparency, accountability, and reliability this could entail. We will then explore how this could form a “Trusted Data Market”.

The Role of Trust in Markets

Trust is, and always has been, an essential component of markets. Therefore, we must ascertain why trust formed in markets, and why sociologists and anthropologists cite the advent of market economies as representing a significant break in the organisation of human societies.

From an economist’s perspective, markets represent a transition from a system where product distribution was based on personal relationships to one where distribution is governed by transparent rules. While these rules are necessary to ensure that markets operate effectively, trust is also crucial to ensure that transactions are conducted in good faith, and to promote compliance with market regulations. The rules themselves now represent a ‘shared truth’ which replaced the previous social contracts and interpersonal relationships that pre-dated market economies.⁵

Primarily, trust reduces the risk of opportunistic behaviour within rule-based frameworks. When buyers and sellers trust each other to act in good faith, they are more likely to engage in mutually beneficial transactions. As described above, these “trusted interactions” form a basis for trust over time. Therefore, trust is an important property for promoting compliance with market regulations, ensuring that the market functions efficiently and that all participants can compete fairly. Concerning data, trust is also critical for promoting transparency and accountability in markets with rules. When buyers and sellers trust each other to be honest and transparent in their transactions, they are more likely to provide accurate information about the goods and services they offer. This promotes transparency in the market and helps to ensure that all parties have access to the information they need to make informed decisions. Moreover, trust can also foster self-regulation and promote agency, defined by Hickman et al. as:

“… intentionality, responsible for defining strategies and plans; anticipation, related to temporality, in which the future tense represents a motivational guide, driving force of prospective acts to reach goals; self-regulation, which are personal patterns of behaviours that monitor and regulate their actions; self-reflection, responsible for self-inquiry into the value and meaning of their actions.”⁶

And with agency undergirded by trust, businesses and individuals who are trusted by their peers are more likely to adhere to ethical standards and social norms.

For these advancements to occur, verifiable data could reduce the need for trust (in its current role in markets), as defined by time plus trusted interactions, by ensuring data integrity and authenticity, promoting fair competition, reducing fraud risk, improving transparency, and simplifying regulatory enforcement. This forms a powerful tool for the efficiency and effectiveness of modern rule-based markets, and for their ability to innovate with new use cases, commercial models and trust dynamics.

Data in Markets

Data is well-referenced as the “new oil” of the digital economy. This is perhaps more evident now than ever before, as data drives innovations like ChatGPT, creates new knowledge and insights through various machine learning and reasoning techniques, and increases efficiency in many fields.⁷

However, certain types of data markets are not transparent, and in them the user is the product. Typically, consumers are not cognisant of what happens to their data and, without self-sovereign identity, cannot control how it is used.⁸ Researchers and academics have persuasively argued that a legitimate trade of data in a “shadow market”⁹ has evolved; however, this “shadow market” (intermediary platforms that trade on subjects’ data) is not lucrative for the subjects of the data, but it is for data controllers. This presents a conflict and a misalignment of incentives between consumers’ data rights, assumed privileges and increasing desire for privacy on the one hand, and the current market demands for data on the other.

A vision for both consumers and businesses, where the paradigm of data ownership is inverted to the user, and trust is verifiable within a privacy-preserving data market¹⁰ is what we at cheqd refer to as a “Trusted Data Market” powered by cheqd’s infrastructure.

cheqd’s infrastructure provides privacy protection and verifiable informational self-determination for consumers, inverting the current data market paradigm while critically providing economic advantage and innovation for the businesses that currently control and participate in the pre-existing data economy. When data is willingly shared by “Issuers” (companies that own, control and monetise consumer data) through cheqd’s verifiable data registry and infrastructure, and presented by “Holders” (consumers, customers, companies or even objects within a supply chain), it affords low market-entry barriers, transparency for the parties involved and, crucially, verifiable data. The result is a new set of economic data models in which trust gains a new property within the data paradigm: verifiability, forming the foundation for a new, innovative type of data market.

The Trust Game

Before bringing this all together, it’s important to note what Verifiable Data can solve within economic theory for data market models. Economic theory commonly models trust through a thought experiment known as the “Trust Game.” The “Trust Game” involves a principal-agent scenario in which the following economic features of trust — risk, shared values, sacrifice, and reputation — are observed. At its core, the game involves a sequential exchange in which there is no contract to enforce agreements. Most variants endow both subjects with $X. Subjects are then paired anonymously and assigned either the role of “sender” or “receiver.” In Stage 1, the sender (trustor) may pass nothing, or any portion Y of their endowment X, to the receiver (trustee); whatever is sent is tripled by the experimenter, so 3Y is passed on to the receiver. In Stage 2, the receiver (trustee) may pass nothing, or any portion of the money received, back to the sender. The amount passed by the sender is said to “capture trust,” as it signals an expectation that the other party “… will reciprocate a risky move (at a cost to themselves),” and the amount returned to the trustor by the trustee therefore captures trustworthiness.¹¹
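
A minimal worked example of one round, with the standard tripling multiplier and made-up amounts, can make the payoffs concrete:

```typescript
// One round of the Trust Game with illustrative numbers.
const endowment = 10;        // both players start with $10
const sent = 6;              // sender passes $6 (a measure of trust)
const multiplier = 3;        // experimenter triples whatever is sent
const received = sent * multiplier;   // receiver now holds $18 on top of their endowment
const returned = 9;          // receiver sends $9 back (a measure of trustworthiness)

const senderPayoff = endowment - sent + returned;          // 10 - 6 + 9 = 13
const receiverPayoff = endowment + received - returned;    // 10 + 18 - 9 = 19
console.log({ senderPayoff, receiverPayoff });
// Both end up better off than if nothing had been sent (10 each),
// but only because the sender risked trusting and the receiver reciprocated.
```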

Importantly, the “Trust Game” models how trust occurs in the context of a cooperative relationship with repeated interactions over time. In iterations of market dynamics where trust must be assumed, gained, reciprocated, and then maintained, we can only gauge trust by how parties act within a transactional instance, and by how the accumulation of these trusted interactions forms “trust” and creates dependencies on how those actions will affect cooperation in the future. Simultaneously, how market participants behave today is also determined by cooperative behaviour in the past.

In the Trust Game model, repeated interactions over time are the critical factor in determining whether those interactions can be fairly assumed as worthy of a trusted reputation. Players may initially be cautious and invest a small amount, and then gradually increase their investment as they learn more about the other player’s trustworthiness. This may not always lead to high levels of trust and cooperation between the players.

However, if verifiable trust is established from inception, players would have access to data about each other’s trustworthiness before the game begins. For example, they may have access to ratings or reviews from previous games, or other forms of verifiable data that indicate the other player’s trustworthiness, such as data issued by a trusted issuer that provides assurance that the information the player presents has an “extra factor” other players can rely upon. This can create a positive expectation and lead to more initial trust between the players, reducing the need for a learning process to establish trust. Time-to-reliance within the market dynamic shifts, regardless of the use case, as the verifying participant can assume the data is verifiable at its base.

For example, in the Trust Game’s model, if the players have access to verifiable data indicating that the other player has a history of trustworthy behaviour in previous games, they may be more likely to invest a larger amount of money in the current game. This can lead to a higher degree of cooperation and reciprocity between the players, resulting in higher payoffs for both; they may also be more likely to view each other in a positive light and exhibit more cooperative behaviour.

This also reduces the complexity of why a rational agent should trust someone. To return to our previous logic, let’s see how it becomes modified:

For all x and y,

T(x,y) ↔ [R(x,y) ∧ P(x,y)]

where

P(x, y) = x has positive reasons (verifiable trust) to assume or validate that y is trustworthy.

But how can we alter the time variable, and provide a verifiable “extra factor” to form a model where participants can establish that trust is warranted from inception in market dynamics?

Trusted Data Markets

In two essential references for this blog, Altman defines privacy as “the selective control of access to the self,”¹² and Mason describes the individual who trades private information about the self as a kind of currency in exchange for anticipated goods and services.¹³ Essentially, I should be able to exercise personal agency: to select, control and subsequently act (for example, trade) upon information that grants access to my personal data, and to determine with whom I share that access. This is not a new digital precedent; it has been deliberated on in terms of dignity, exceptionalism, and values by philosophers for centuries.¹⁴

Many other well-worked cheqd blogs on self-sovereign identity, trusted data, and cheqd’s payment infrastructure explain how self-sovereign identity and cheqd’s infrastructure can facilitate this paradigm, both from a technical and commercial perspective. What we’ll dive into in this blog is the relevance of this type of data, and selective control of it, to a market dynamic.

In a cheqd trusted data market, “holders” (users, companies, objects) have selective control, and their willingness to share data depends on a variety of factors, e.g. benefits, type of information, programming and culture.¹⁵

In “Trusted Data Markets,” companies issue verifiable data to holders, who in turn actively share their data with interested parties (known as “Verifiers”) who wish to verify that data. The reasoning behind this dynamic forming is multifold, but we will focus on commercial benefit for “Issuers”, and we will explore various use cases in subsequent blogs where Trusted Data Market dynamics form around a payment flow: “Verifier pays Issuer.”

Within a “shadow” data market, this payment flow has already formed, without the user: we all already interact within data markets, but our data is traded upon without our selective control. With cheqd and self-sovereign identity, this paradigm is inverted via a privacy-preserving, standards-compliant data and payment infrastructure. This infrastructure forms the structure for both verifiable trust and payments to support the transactional flows of associated verified and trusted data in the form of verifiable credentials.

  1. The “Issuer” issues Verifiable Data.
  2. The “Holder” receives this data, which can be trusted as 100% verifiably issued by the Issuer.
  3. The “Holder” then presents this data (a Verifiable Credential) to the “Verifier/Receiver”.
  4. Upon presentation, in which the “Holder” maintains selective control, the Verifier can “check” the verifiability of the Verifiable Credential (the data) via cheqd’s network, and through this “check” ascertain whether the data is verifiably issued by the issuer, non-revoked, and of the correct standards.
  5. It is via this “check” that a privacy-preserving payment is released from the “Verifier/Receiver” to the “Issuer”. At no point in this market dynamic is selective control of the data removed from the “Holder”, and at no point is the presentation of the “Holder’s” data gated by a payment wall.
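
Put together, the verifier-pays-issuer flow above could be sketched as follows. Every name here (Credential, CheckResult, the check and pay callbacks) is a hypothetical stand-in for illustration rather than cheqd’s actual interface.

```typescript
// Hypothetical end-to-end sketch of the "Verifier pays Issuer" flow; not a real cheqd API.
interface Credential { id: string; issuerDid: string; holderDid: string; claims: Record<string, string>; proof: string }
interface CheckResult { validSignature: boolean; revoked: boolean; standardsCompliant: boolean }

async function verifierReceivesCredential(
  credential: Credential,                                   // presented by the holder, under their control
  check: (c: Credential) => Promise<CheckResult>,           // assumed check against the cheqd network
  pay: (issuerDid: string) => Promise<void>,                // assumed privacy-preserving payment to the issuer
): Promise<boolean> {
  const result = await check(credential);
  const trusted = result.validSignature && !result.revoked && result.standardsCompliant;
  if (trusted) {
    await pay(credential.issuerDid);  // payment flows verifier -> issuer; never gates the holder's presentation
  }
  return trusted;
}
```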

The verifier can thus place greater trust in the credential received, via the reputation of the issuer, reducing time-to-trust, and can trust that the data was issued by that issuer at genesis.

The price a Verifier is willing to pay correlates to the impact of Verifiable Trust on the market dynamic.

This price is set by the Issuer.

Crucially this solves two significant problems for data markets.

TIME-TO-RELIANCE TO ESTABLISH TRUST

Typically, trust takes significant time to develop and maintain, and this in turn informs market dynamic structures. With the import of Verifiable Data, time-to-reliance in the market dynamic is significantly improved. If, as a “Verifier”, I can ascertain that the data issued to the Holder genuinely comes from the issuing participant, I can mitigate the risk of fraud and form a new trust dynamic. This, in turn, can form new reputation metrics and new commercial models, spurring growth, as this data has an associated value attached to it for all market participants.

SELECTIVE CONTROL WHILST MAINTAINING PRIVACY

Selective control of data, in a privacy-preserving fashion for “Holders,” where they can meaningfully participate in a new data market paradigm is established. No longer will users participate in “shadow” data markets, they can meaningfully participate and retain ownership and control of their information. Whether this is to access benefits, or indeed indicate what they “prefer,” this data will form a value category which we believe will create better economic outcomes.

Conclusion

cheqd’s trusted data markets present novel solutions to both of these problems for institutional and consumer participants alike. While maintaining privacy-preserving transactions and interactions, a new data paradigm emerges, with verifiable trust: one where the user is at the centre of their own data universe, and institutions can discover new revenue streams, new reliable “trusted” data and new ways to innovate with customer data, and participate in an emerging paradigm fit for the new data economy.

Learn more

We will be following this initial blog with a deep dive into how cheqd’s infrastructure supports the advent of Trusted Data Markets, followed by specific use cases we’re exploring. Beginning with credit data.

If you’d like to learn more, please reach out to us directly: [email protected]

[1] Gkatzelis, V., Aperjis, C., & Huberman, B. A. (2015). Pricing private data. Electronic Markets, 25(2), 109–123. https://doi.org/10.1007/s12525-015-0188-8

[2] Arrow, K. (1972). Gifts and exchanges. Philosophy and Public Affairs, I, 343–362, Fukuyama, F. (1995). Trust. New York: Free Press, Putnam, R. (1993). Making democracy work: Civic traditions in modern Italy. Princeton, NJ: Princeton University Press.

[3] Goldberg, Sanford C., (2020), “Trust and Reliance”, in Simon 2020: 97–108.

[4] Hawley, Katherine (2014), “Trust, Distrust and Commitment”, Noûs, 48(1): 1–20. doi:10.1111/nous.12000

[5] https://policyreview.info/open-abstracts/trust-trustless

[6] https://trustoverip.org/wp-content/uploads/Overcoming-Human-Harm-Challenges-in-Digital-Identity-Ecosystems-V1.0-2022-11-16.pdf pp. 30–32

[7] Spiekermann, S., Acquisti, A., Böhme, R., & Hui, K. L. (2015). The challenges of personal data markets and privacy. Electronic Markets, 25(2), 161–167. https://doi.org/10.1007/s12525-015-0191-0

[8] Spiekermann, S., & Novotny, A. (2015). A vision for global privacy bridges: Technical and legal measures for international data markets. Computer Law and Security Review, 31(2), 181–200. https://doi.org/10.1016/j.clsr.2015.01.009

[9] Conger, S., Pratt, J. H., & Loch, K. D. (2013). Personal information privacy and emerging technologies. Information Systems Journal, 23(5), 401–417. https://doi.org/10.1111/j.1365-2575.2012.00402.x

[10] Gkatzelis, V., Aperjis, C., & Huberman, B. A. (2015). Pricing private data. Electronic Markets, 25(2), 109–123. https://doi.org/10.1007/s12525-015-0188-8

[11] Camerer, C. (2003). Behavioral game theory: Experiments in strategic interaction. Princeton, NJ: Princeton University Press.

[12] Altman, I. (1976). Privacy — a conceptual analysis. Environment and Behavior, 8(1), 7–29.

[13] Mason, R.O., Mason, F., Conger, S. & Pratt, J.H. (2005). The connected home: poison or paradise. Proceedings of Academy of Management Annual Meeting, Honolulu, HI, August 5–10

[14] Floridi, Luciano, On Human Dignity and a Foundation for the Right to Privacy (April 26, 2016). Available at SSRN: https://ssrn.com/abstract=3839298 or http://dx.doi.org/10.2139/ssrn.3839298

[15] Hallam, C., & Zanella, G. (2017). Online self-disclosure: The privacy paradox explained as a temporally discounted balance between concerns and rewards. Computers in Human Behavior, 68, 217–227. https://doi.org/10.1016/j.chb.2016.11.033.

Trusted data explained | the rise of the trusted data economy

Trusted data (or authentic data) is information translated into a form usable by computers, whose source is verifiable — it can be checked through a standardised method to demonstrate accuracy. The trusted data economy takes trusted data one step further, encompassing the business models that can enable a fairer, more transparent and decentralised world.

“Systems that expand the radius of trust change societies”

(Werbach, 2016: 4)

Trust is the underpinning of all human contact and institutional interactions; a crucial value in international affairs and a complex interpersonal and organisational construct, embedded in all areas of society, from individuals’ relationships with each other to the global political system.

It is widely seen as one of the most important synthetic forces within society which encompasses values such as reciprocity, solidarity and cooperation, whilst within areas such as technology, law and governance, it is less of a synthetic value and more of an intrinsic and core construct within contracts, regulation and code.

 

But what is trust?

At its core, trust is centred on the reliability of an assertion about someone or something; an indisputable, verifiable claim (the operative word here being ‘verifiable’ — the ability to check or demonstrate accuracy).

In a continuously digitised and globalised world, trust has been increasingly hard to nurture, and as events over the past decade have shown, it has fast become a threatened commodity the world over.

 

The evolution of trusted data

With the mass adoption of the internet, the world has witnessed a rapid acceleration of innovation, and as a result, a diverse range of positive outcomes.

Access to the internet, for example, in a vast portion of the world is now considered a basic human right and an essential component of a functioning society.

However, for the vast majority of people, the reliance on, and somewhat addiction to, the efficiencies it brings to our lives towers over our curiosity about, and knowledge of, the very real risks it poses around what is being done backstage, behind the warm lure of the glossy front ends.

As we engage in our now phenomenally digitised day-to-day lives, the information about the things we write, say, do and read is packaged up into something we hear about a lot, but don’t necessarily question all that often what it is.

You guessed it.… data.

Data, within the context of computing, is simply information, facts provided or learned about something or someone, translated into a form that is efficient for movement or processing.

Put simply, everything we know about anything is information which can be packaged up and utilised as data.

This page addresses the meeting of the two: trust and data.

Trusted data (we also use the term authentic data interchangeably) is, therefore, information translated into a form usable by computers, whose source is verifiable — it can be checked through a standardised method to demonstrate accuracy.

The need for trusted data

For much of history, we have found methods to demonstrate that information itself can be trusted. This is based on the way the issuer of information and the verifier of information agree on what makes something trustworthy.

We trust paper money because we trust the fine details that are imprinted onto each one, which can be verified as being issued by the body that can reliably demonstrate they have the authority to do so.

As a temporary holder of the paper money, one can also conduct their own checks.

A British £20 banknote

For example, the £20, the most used note in Britain, can be verified by checking that:
  1. The hologram image changes between ‘Twenty’ and ‘Pounds’
  2. The foil is gold and blue on the front of the note and silver on the back within the see-through windows
  3. A portrait of the Queen is printed on the window with ‘£20 Bank of England’ printed twice around the edge
  4. A round, purple foil patch contains the letter ‘T’
  5. Under a good-quality ultraviolet light, the number ‘20’ appears in bright red and green on the front of the note
Similarly, if we look at an identity-related example, we trust the information on a passport, driver’s licence or birth certificate, because we note other fine details and unique characteristics which a verifier of this information can reliably identify. This model of having a common societal understanding of what is trustworthy and what is not extends across all aspects of information in the physical world, and now deep into the digital world. However, with the advancement of technology and ease of access to methods and tools to behave in a fraudulent manner, combined with a lack of transparency over where the information (stored as data) is being held, it is more difficult to actually verify a claim and ultimately be able to reliably state that data is trustworthy.  

A world of truly trusted data in Web 3.0 and Decentralised Identity

Without going into too much technical detail of how this data is made trusted in Decentralised Identity (you can find out all you need to know on our learn site here), some of the underlying principles and basics do help illustrate what a world where trusted data is the norm would look like. In Web 2.0, much of the world’s data is held in huge data centres controlled by a small number of large players, acting as gatekeepers. The term ‘cloud’ has been used effectively to create the feeling that one’s data is just held in the air, the ether, for an individual to call on when they need it, yet in reality, our data is locked up and secured by these large gatekeepers.

Our data is not held, controlled or owned by ourselves.

As a result, our understanding of what goes in and what comes out is limited. An issuer may provide a trusted piece of data, but what happens before this arrives to an individual or a verifier is out of your control.

Likewise, with more sophisticated means of cybercrime and hacking, uncovering whether some data has been tampered with is harder and harder to do.

To get around this, a combination of technologies have come together at a poignant moment across different industries. Within the Decentralised Identity / SSI space, three technologies, in particular, are integral:

  1. Decentralised Identifiers (DIDs)
  2. Verifiable Credentials (VCs)
  3. Blockchain technology (used with SSI as a Verifiable Data Registry) (note: for decentralised identity, blockchain is not strictly required, but it does offer some significant advantages in further improving the level of trust, transparency and the overall efficiencies required for it to flourish)
 

Decentralised Identifiers (DIDs) and Verifiable Credentials work in tandem as the foundations of Decentralised Identity to ensure data can be trusted. DIDs act as a form of digital stamp or hologram, making it possible to check the authenticity of the information, whilst VCs contain the very information itself that needs to be checked and verified — more on both DIDs and VCs here.

Blockchain is often described as a “trustless” system, meaning that ultimately one does not have to have some synthetic indeterminate level of “trust” as the rules and structures laid out in code do this.

Although blockchains use complicated technology, which often deters people from further reading, their basic function is quite simple: to provide a distributed yet provably accurate record.

In other words, everyone can maintain a copy of a dynamically-updated ledger, but all those copies remain the same, even without a central administrator or master version.

This approach offers two basic benefits.

First, one can have confidence in transactions without trusting the integrity of any individuals, intermediaries, or governments. Data is therefore trustworthy because no party can tamper with it so the data put in is what comes out.

Second, the distributed ledger replaces many private databases that must be reconciled for consistency, thus reducing transaction costs.
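
As a toy illustration of why such a ledger is tamper-evident (this is a simplified sketch of the general idea, not how any specific blockchain or cheqd’s network is implemented), each record can commit to the hash of the previous one, so changing any earlier entry breaks every later link:

```typescript
import { createHash } from "crypto";

// Toy hash-chained ledger: each block commits to the previous block's hash.
interface Block { data: string; prevHash: string; hash: string }

const hashOf = (data: string, prevHash: string) =>
  createHash("sha256").update(prevHash + data).digest("hex");

function appendBlock(chain: Block[], data: string): Block[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "genesis";
  return [...chain, { data, prevHash, hash: hashOf(data, prevHash) }];
}

// Every copy of the ledger can be independently checked for consistency.
function isValid(chain: Block[]): boolean {
  return chain.every((block, i) => {
    const expectedPrev = i === 0 ? "genesis" : chain[i - 1].hash;
    return block.prevHash === expectedPrev && block.hash === hashOf(block.data, block.prevHash);
  });
}

let ledger: Block[] = [];
ledger = appendBlock(ledger, "credential status: active");
ledger = appendBlock(ledger, "credential status: revoked");
console.log(isValid(ledger));                          // true
ledger[0].data = "credential status: never existed";   // tampering with history...
console.log(isValid(ledger));                          // ...is immediately detectable: false
```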

 

The trusted data economy

The trusted data economy takes trusted data one step further.

Over the past two decades, leveraging the buying and selling of data has become a powerful and incredibly profitable business model. It is what has led to the growth and dominance of the behemoths of the internet, with Google and Facebook the most prominent examples.

By providing a service, totally free for most uses, these companies have quietly deepened their drills into the gold mine of individuals’ data whilst many have unknowingly compromised their privacy and freedoms.

Yet, though this is being exposed more and more, and famous phrases are emerging such as ‘if the product is free, you’re the product’, there has still been very little movement and change at a regulatory or social level.

Enter the trusted data economy

The trusted data economy flips this business model entirely on its head, shifting control away from these internet behemoths and over to the individuals.

Through a range of payment models enabled by these technologies, the individual can now be the ultimate gatekeeper and vendor of their identity: able to choose who their data is used by, for what, and even whether it is sold!

This new data economy of trusted data has been labelled as ‘decentralised identity’ as well as ‘self-sovereign identity (SSI)’ since it directly empowers individuals to have control and engage in trusted interactions in both the physical and digital spheres.

Find out more about what the economy of trusted data might look like in cheqd’s tokenomics for self-sovereign identity.

 

Conclusion

Transparency, freedom, determination, democratisation: these are all features of what Web 3 and the shift in power promise. However, none of these is truly possible without a new era of data management in which we can have faith in where data resides, who has access to it and who is making money from it.

Through revolutions over time, a small number of people and organisations benefit most and power concentrates in the hands of the few. Yet as time progresses and access to the technologies that enabled that revolution increases, the masses can increasingly engage with and challenge the status quo.

Trusted data is a very real solution to many of today’s problems, and the trusted data economy that enables it to gain mass adoption can make that happen.

Find out more about how we at cheqd are helping usher in the trusted data revolution….