How cheqd can help your business embed trust in your AI strategy
In the final piece of our five-part series, we will delve into how cheqd can assist any organisation in developing its Verifiable AI capabilities. Verifiable Credentials (VCs) and Decentralised Identifiers (DIDs) are highly versatile technologies which can be applied to various use cases.
What are Verifiable Credentials and Decentralised Identifiers?
As discussed in detail in the first part of our series on Verifiable AI, Verifiable Credentials (VCs) are portable packets of data held by the individual or entity to which the data pertains. These packets are attested as true by another entity, which digitally signs the VC using a key linked to its Decentralised Identifier (DID).
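For illustration, a credential of this kind, following the W3C Verifiable Credentials data model, might look like the minimal sketch below. The DIDs, field values and key references are purely illustrative:

```typescript
// A minimal Verifiable Credential in the W3C data model. All identifiers
// and values below are illustrative, not real DIDs or signatures.
const exampleCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential"],
  issuer: "did:cheqd:mainnet:zExampleIssuer",   // the attesting entity's DID
  issuanceDate: "2024-06-01T12:00:00Z",
  credentialSubject: {
    id: "did:cheqd:mainnet:zExampleHolder",     // the holder the data pertains to
    accreditation: "KYC-verified",
  },
  // The proof binds the claims to the issuer's DID: a verifier resolves the
  // DID, fetches the referenced public key and checks the signature, with no
  // need to contact the issuer directly.
  proof: {
    type: "Ed25519Signature2020",
    verificationMethod: "did:cheqd:mainnet:zExampleIssuer#key-1",
    proofValue: "z5Example...",                 // truncated for illustration
  },
};
```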
The advantage of using VCs is the reduced need for cross-referencing with third parties to verify the required information about the holder. The ‘holders’ of Verifiable Credentials can range from human beings to ethically sourced bananas to automated AI Agents. Given incoming EU identity regulations, in which decentralised identity technology plays a central role, the integration of VCs into the wider economy seems highly likely.
Why should your company care?
Artificial Intelligence is widely viewed as a huge driver of growth over the coming decade, with Statista reporting that the total addressable market size could reach 15.72 trillion dollars by 2030. For comparison, that is roughly equivalent to the EU’s entire GDP of around 16 trillion dollars.
Moreover, the ongoing trend of open-source, more lightweight models competing with high-parameter models such as those behind ChatGPT means that access to Artificial Intelligence is quickly becoming democratised, with hobbyists gaining access to increasingly powerful tools. Similar to the ‘hobbyist stage’ of the internet, this period will likely see significant innovation as a generation of developers begins to tinker with and explore the underlying technology.
With a Cambrian explosion of different models and methods on the horizon, and the clear financial opportunities AI makes possible, having an AI strategy for the future is essential, not only to take advantage of the technology’s benefits but also to mitigate its costs. With such powerful tools, misinformation can be created at scale; the datasets used to train models may become compromised with bad data or rendered unusable by synthetic (AI-produced) data. Defending against social-engineering attacks will also become ever more difficult as these models improve in output quality.
Cybersecurity threats have themselves increased massively in recent years, with the victim count rising by 1,417% from 2002 to 2021 and $787,000 lost every hour to cybercrime. Data in particular has become a poisoned chalice for companies. Data is not only valuable to the companies that collect it; large, valuable data collections invariably become honeypots for cybercriminals, with the average cost of a data breach to a company standing at 4.35 million dollars. One in two North Americans had an account breached in 2021, making data security a growing concern for consumers and a longstanding issue for regulators worldwide. The reputational damage of a data breach can be enormous: around 40% of the total cost of a breach is attributed to the tarnishing of the company brand among shareholders, customers and employees.
As the technology improves, regulators strive to keep pace, with the European Union often leading the way in setting the regulatory environment. In the EU, the AI Act sets out numerous rules around high-risk applications (such as those used in healthcare, education and policing) to ensure safe interactions with AI. Models in these high-risk areas must adhere to strict obligations, such as training on high-quality, approved datasets, logging activity to improve the traceability of results and decisions, and maintaining high levels of security and accuracy. Coupled with the EU’s eIDAS2 regulation on electronic identity, which mandates interoperability so that multiple forms of electronic ID can be used, it is clear that acceptance of digital forms of identity is going to increase over the coming years, making the integration of Verifiable Credentials into an organisation’s AI tech stack much easier.
Taken together, it becomes clear that Verifiable AI will be required in some form or another if organisations wish to capitalise on the massive growth opportunities while avoiding the potential pitfalls of fraud, misinformation and data loss.
How can the cheqd network help?
cheqd, founded in 2021, is a decentralised identity infrastructure company focused on making Verifiable Credentials as seamless as possible for our partners, and their partners, to plug into and use. In a young industry, we have made sure from the start not only to conform to emerging standards, but also to have a seat at the table when they are established. We are leading members of the Decentralized Identity Foundation, W3C, OpenWallet Foundation, Trust Over IP, INATBA and the Content Authenticity Initiative. Recently we were invited to join the European Blockchain Sandbox, where we are developing a ledger-agnostic approach to trust registries to aid our journey towards becoming a ‘Qualified Electronic Ledger’ under eIDAS2.
In addition to building interoperable, compliant infrastructure, we offer Credential Payments, enabling any identity transaction to be commercialised. This helps remove one of the main barriers to adoption: the lack of a commercial model to replace the current system for KYC providers and others in the identity stack. Using the cheqd network, holders, issuers and verifiers can pay or get paid quickly and easily, with full account abstraction.
Throughout this piece, we will discuss how the global need for data is changing dramatically in various areas and how the cheqd network can help your organisation develop Verifiable AI in a safe, compliant and efficient manner.
If you think your organisation could benefit from working with the cheqd network, please don’t hesitate to contact us at [email protected].
Content Credentials
What is a content credential?
Discussed in detail in the third article of this series, a Content Credential is, in essence, a Verifiable Credential embedded in a picture or video’s metadata. Just as a VC contains verifiable, trustworthy information signed with a DID, so too is a Content Credential signed with a trusted organisation’s DID. These credentials can tell you many things about a piece of content, such as who created it, whether it has been edited, when it was taken, and whether it was created digitally, captured with a camera, or generated by an AI model. Using Content Credentials ensures a traceable history of edits, providing more information about an image or video than would otherwise be available.
The increased value of Intellectual Property Rights and Content Origin
AI-generated content has profound implications for intellectual property rights and content origin. Knowing where our content comes from is more important than ever, as it helps prevent the spread of misinformation, which can impact both markets and democracies.
The need to determine whether content is human- or machine-generated extends beyond avoiding misinformation: AI-generated data can considerably reduce a model’s effectiveness if included in its training process, much as a photocopy of a photocopy is typically of lower quality than the original.
Additionally, it is likely that consumers will continue to value human artistic endeavours over machine-generated ones, further emphasising the need to verify the origin of content. And as companies have begun to realise the value of the IP they hold on their websites, ensuring that the data models are trained on is properly licensed has become more important.
Creating opportunities to protect and monetise reputation
Any form of content on the internet is ripe for disruption with Content Credentials. The fast-moving world of social media means consumers often have to fact-check away from the content itself, with no guarantee of success. News organisations, entities we rely on to deliver truthful information, have faced immense challenges in recent decades as they strive to remain relevant and profitable in the decentralised world of social media journalism.
Content Credentials can help solve this. By enabling any IP holder to accurately label content as their own, and by clearly distinguishing between AI-generated and human-created content, they can transform how we interact with the content we consume every day.
How can cheqd help?
cheqd is a member of the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity (C2PA), two organisations dedicated to the standardisation and proliferation of Content Credentials. This positions us alongside some of the largest organisations in the world, including Adobe, Samsung, Google and OpenAI.
With our emphasis on creating new commercial opportunities through Credential Payments and the broad interoperability of the did:cheqd method, the cheqd network is well-positioned to help any organisation lead the way into the future. Our Credential Payments service enables creators and ‘Trust Anchors’, such as news organisations, not only to protect their IP but also to get paid for creating content or verifying information.
Creating commercial opportunities for Content Credentials in Practice:
- A camera takes a picture with tamper-proof hardware that records key metadata, such as location, time, and camera model.
- The hardware ‘signs’ a Content Credential attesting to the recorded metadata.
- As the picture is uploaded to editing software such as Adobe Photoshop, any changes made to it, including AI generation or the removal of metadata, are recorded as additional Content Credentials.
- Images or videos published online by an organisation such as the Associated Press or the BBC would then carry a Content Credential which consumers can check to see the content’s known origins and whether it was AI-generated. A reputable news agency would not publish a picture that lacked Content Credentials or exhibited inconsistencies within them.
- Republishers who use this content then make payments to the publisher and other rights holders, such as the photographer and editor. Publishers here in effect act as ‘Trust Anchors’, getting paid for their ‘stamp of approval’ on a picture (a sketch of such a provenance check follows below).
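To make the chain concrete, here is a minimal sketch of how a publisher might check an image’s credential history before publication. The types and field names are illustrative assumptions, not the actual C2PA data format:

```typescript
// Illustrative model of a chain of Content Credentials recovered from a
// picture's metadata: one entry per capture or edit step.
interface ContentCredential {
  action: "captured" | "edited" | "ai-generated" | "metadata-removed";
  signerDid: string;       // DID of the camera, editor or publisher that signed
  timestamp: string;
  signatureValid: boolean; // assume a prior cryptographic check set this flag
}

// A reputable publisher's rule of thumb: every step in the history must be
// present and validly signed, and no step may be AI generation.
function isPublishable(history: ContentCredential[]): boolean {
  if (history.length === 0) return false; // no provenance at all: reject
  return history.every(c => c.signatureValid && c.action !== "ai-generated");
}
```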
Verified Datasets
Just as we are increasingly eager to know the origins of our food, it is becoming equally crucial to understand the source of our data. The quality of the data used to train an AI model is one of the most important factors in whether that model is effective and profitable, with implications for model quality, IP infringement and societal bias. In the EU’s recent AI Act, ensuring high-quality data for models used in sensitive areas, such as policing, taxation and hiring, is one of the act’s keystones. This means that any company involved in a critical AI model must prioritise data quality, making trusted datasets significantly more valuable.
Large AI datasets vs small AI datasets
The type of data that one uses is completely dependent on what one is trying to train in a model. Some datasets are vast, encompassing a broad range of information, while others are specialised in very niche areas with comparatively smaller amounts of data.
A large dataset is enormous, often consisting of hundreds of terabytes of data. An example is Common Crawl, a dataset that essentially covers the entire internet. Such datasets are mostly collected, and sometimes sold, by search engines and social media platforms; this extensive data availability is a key reason why companies like Google, Meta and X (through xAI) are heavily invested in Artificial Intelligence. These massive datasets are a requirement for Large Language Models such as those behind ChatGPT or Claude, which require an endless inflow of data in order to improve their effectiveness.
However, using a large dataset is not always the best answer for every AI model. As a dataset grows larger, verifying the accuracy and compliance of the information usually becomes more challenging. With so much data, it is very difficult to confirm that all the information adheres to IP laws, maintains high quality for positive outcomes, and is free from AI-generated content that could lead to model collapse. The sheer size also limits who can effectively train models with these datasets: Decentralised AI projects such as Bittensor, for example, face hardware and latency constraints when working with them.
In contrast, a small dataset, though still in the range of tens to hundreds of gigabytes, often consists of more niche, high-quality data that is easier to verify. If, for instance, one wishes to develop a model capable of recognising number plates from low-quality photos, there is no need to include data on face recognition. Developers would most likely seek datasets from trusted parties to ensure high quality, since using dubious data could compromise the model’s effectiveness. Sourcing specific, high-quality data in a narrow area is of the utmost importance in ensuring the efficacy of a niche model.
Due to their smaller size and specialised nature, these datasets are a perfect first use case for Verifiable Credentials, especially in the Decentralised AI space, where decentralised markets require better data-labelling solutions to remain decentralised. Smaller sets are easier to verify and thus create early opportunities for ‘Trust Anchors’ to monetise their reputations, whilst enabling datasets to carry their VCs as they are bought and sold on decentralised data marketplaces.
How can cheqd help create commercial opportunities for large dataset providers?
Verifiable Credentials can be used by large dataset providers for a number of purposes in the future. As mentioned above, Content Credentials could be an extremely important tool for Verifiable AI, enabling search engines and other crawlers to selectively identify data confirmed as non-AI-generated in order to avoid model collapse.
This presents a particular opportunity for more niche search engines, which may lack the heft of Google but still have the capability to create new IP-compliant, organic large datasets. These organisations can then supply Verifiable Credentials to any model using one of their datasets, and potentially receive payment for ‘confirming’ that the supplied data meets the required standards.
Creating commercial opportunities for large dataset providers in practice:
- An image search engine places a dataset on a data marketplace that is confirmed to contain no synthetic (AI-generated) data
- The search engine signs a Verifiable Credential attesting that the dataset is 100% non-synthetic
- On the marketplace, an AI model developer requests Verifiable Credentials confirming that the dataset doesn’t contain synthetic material
- A payment is made to the attesting image search engine when the credential is verified
- Having confirmed that the data is accurate, the model developer purchases the dataset.
- The originally issued Verifiable Credentials can be reused repeatedly, with the attestor continually receiving payments (see the sketch below)
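As a rough illustration of that flow, the sketch below shows a buyer paying the attestor, checking the credential, and only then purchasing. The function names, fee and types are hypothetical stand-ins, not cheqd’s actual APIs:

```typescript
// Hypothetical payment-gated verification of a dataset credential.
interface DatasetCredential {
  datasetId: string;
  issuerDid: string;              // the attesting image search engine
  claim: { synthetic: boolean };  // the attested property of the dataset
}

// Assumed to exist: a signature/revocation check against the network.
declare function verifyCredentialStatus(c: DatasetCredential): Promise<boolean>;

async function checkBeforePurchase(
  credential: DatasetCredential,
  pay: (toDid: string, amount: number) => Promise<void>
): Promise<boolean> {
  // 1. The buyer pays the attestor a small fee; in cheqd's model this is
  //    what unlocks the ability to verify the credential.
  await pay(credential.issuerDid, 0.5);
  // 2. Check the signature and status, then the attested claim itself.
  const valid = await verifyCredentialStatus(credential);
  return valid && credential.claim.synthetic === false;
}
```

Because the credential is reusable, every future buyer repeats the same two steps, and the attestor is paid each time.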
How can cheqd help create commercial opportunities for small dataset providers?
Small datasets present low-hanging fruit for Verifiable AI. Specific datasets for niche subjects require subject-matter experts and other ‘Trust Anchors’ to vouch for the quality of the data contained within, and their smaller size means these datasets are more likely to be found on the kind of decentralised marketplaces those working in the Decentralised AI space prefer to use. Current data marketplaces such as Kaggle and Hugging Face are highly centralised sites in which social proof, through a Reddit-like system of up- and down-votes, is used to establish ‘quality’, a measure that is highly manipulable.
As decentralised data marketplaces continue to evolve, ensuring that any included dataset carries verifiable information about its licensing and quality will help build much greater trust in those datasets. As with larger datasets, this creates commercial opportunities for those verifying the quality of the data. Using cheqd’s Credential Payments, any time someone wishes to verify that the information provided is correct, an opportunity for a small transaction is created.
Creating commercial opportunities for small dataset providers in practice:
- A university publishes peer-reviewed data on a decentralised data marketplace.
- The university and the scientific journal which published it attest to the data’s accuracy by signing verifiable credentials that confirm it is peer-reviewed.
- Before purchasing a dataset, an AI model developer requests verification that the dataset is peer-reviewed.
- Confirming that the data is accurate, the model developer purchases the dataset.
- A payment is made to the university and science journal.
- The originally issued Verifiable Credentials can be reused repeatedly, with attestors continually receiving payments
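In this case the buyer wants valid attestations from both trust anchors before purchasing. A minimal sketch, with illustrative DIDs and types:

```typescript
// Require a valid 'peer-reviewed' attestation from each expected trust
// anchor before buying. All identifiers are illustrative.
interface Attestation { issuerDid: string; claim: string; valid: boolean }

const REQUIRED_ATTESTORS = [
  "did:cheqd:mainnet:zExampleUniversity",
  "did:cheqd:mainnet:zExampleJournal",
];

function hasAllAttestations(attestations: Attestation[]): boolean {
  return REQUIRED_ATTESTORS.every(did =>
    attestations.some(a => a.issuerDid === did && a.valid && a.claim === "peer-reviewed")
  );
}
```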
Hardware verification - Dealing with localised clusters, compliance and energy in a decentralised setting
The two years since the release of ChatGPT have seen a proliferation of new AI projects, with a strong focus in the web3 sector on the idea of DeAI (Decentralised AI): the training of AI (and other compute-heavy activities) can be done not just by enormous corporations with the capital required to purchase and power thousands of GPUs (CeAI), but also by a network of distributed, decentralised GPUs owned by smaller, independent individuals or companies.
Proponents argue that using a decentralised approach offers an alternative to the highly centralised world of Artificial Intelligence, enabling a wider range of developers and users from outside of the tech AI bubble to train and use Artificial Intelligence in a way that is more censorship-resistant and open.
However, the nature of decentralisation creates a number of problems for DeAI projects, some of which Verifiable AI can help to solve:
- Localising compute clusters – a centralised network training an AI model will use GPUs in the same facility, or very close by, in order to reduce latency in the system. Any decentralised, distributed system therefore faces training issues if GPUs are not close to each other. Location is not information that can be easily verified from self-reported specifications, as any vaguely technical person can quite easily obfuscate their IP address with a VPN.
- Compliance – although many DeAI projects tend to live by a ‘move fast and break things’ philosophy, that does not mean that everyone who wishes to train a model wants to do so with no thought for local regulations, industry standards or international sanctions. Even the most ardent crypto-anarchist may be a bit embarrassed to find out that they have been paying the Lazarus Group (North Korean hackers) to train their AI model. If DeAI wishes to compete with centralised AI, it will need to offer the same legal and regulatory assurances that large corporations can, around working with sanctioned individuals and holding the correct data protection certificates, for example, or the industry will not gain any serious users beyond the occasional hobbyist.
- Energy – similar to compliance around sanctions, aiming towards carbon neutrality is increasingly expected of organisations and will often be a deciding factor in which product to use. Training on GPUs is extremely energy-intensive and thus can be a big drain on net-zero targets. Large AI companies such as Microsoft, OpenAI and Google are able to make a carbon-neutral (or carbon-offset) offer to potential clients, enabling any firm using their tech to claim that it is carbon neutral. If DeAI wishes to compete, it will need to offer something similar.
How can cheqd help assure the geo-location of GPUs to reduce latency?
Latency is primarily influenced by GPU performance and location, making accurate verification of these factors of the utmost importance. GPU performance can be tracked easily with up-to-date information shared within a decentralised network, meaning that obfuscating details such as hardware and performance is very tricky. Verifying geo-location, however, is harder due to the existence of VPNs. By using Verifiable Credentials on the cheqd network to confirm the location of different GPUs, a more trustworthy map of the network emerges, enabling developers to train models by picking GPUs which fit their minimum requirements.
This approach not only saves time and money for developers training their models, but also creates new commercial opportunities for auditing firms. These firms, acting as ‘Trust Anchors’, would be responsible for verifying the supplied information, and their established credibility would be critical. This presents an opportunity for these firms to get paid each time someone wishes to confirm the geo-location of a GPU.
Creating commercial opportunities for auditing companies related to geo-location in practice:
- Auditing company audits GPU owner, checking their exact geo-location. If coming from a centralised cloud provider, this data can be pulled from their APIs.
- When confirmed, GPU owner is issued a Verifiable Credential attesting to their geo-location
- User rents GPUs and requests verification that the chosen GPUs have the required geo-location to ensure low latency
- User pays a small fee to the auditing company for the requested verification
- Upon verification, User and GPU owner begin contract of work
- Verifiable credential can be reused for other transactions, with continuous payments to auditor
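To illustrate why attested coordinates matter, the sketch below filters rented GPUs down to a low-latency cluster using credential-verified locations. The types, helper names and the 100 km threshold are assumptions for illustration:

```typescript
// GPU nodes with audited, credential-attested coordinates. Illustrative.
interface GpuNode { id: string; lat: number; lon: number; credentialValid: boolean }

// Great-circle distance in km between two nodes (haversine formula).
function distanceKm(a: GpuNode, b: GpuNode): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

// Keep only nodes with a valid location credential within maxKm of an anchor,
// giving the developer a cluster with predictable latency.
function selectCluster(nodes: GpuNode[], anchor: GpuNode, maxKm = 100): GpuNode[] {
  return nodes.filter(n => n.credentialValid && distanceKm(anchor, n) <= maxKm);
}
```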
How can cheqd help ensure DeAI network users do not work with sanctioned entities and countries?
In every developed nation, ensuring compliance with regulations is a necessary cost of doing business, especially when it comes to anti-money laundering, data protection or Know-Your-Customer rules. Any serious company has to ensure, and be able to prove, that it is playing by the rules of the game. By utilising the cheqd network, auditing companies can verify that GPU providers have the correct certifications, such as being HIPAA- or ISO-compliant, then issue a credential which they can continuously be paid to verify each time someone wishes to confirm it.
Similarly, KYC companies could issue and verify either ‘Negative KYC’ credentials (“this GPU provider is NOT from a sanctioned entity”) or ‘Positive KYC’ credentials (“this GPU provider is this specific entity at this address with this company/passport number”), and also get paid each time these have to be verified (see our article here on reusable KYC).
Using this system simplifies compliance for those renting computing power, making it easier for GPU providers to confirm their compliance without repeatedly undergoing lengthy, expensive KYC and certification processes, and creates a new revenue stream for auditing companies.
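The difference between the two credential types can be pictured as follows. The field names and values are illustrative only:

```typescript
// 'Negative KYC': discloses only that the provider is NOT sanctioned,
// revealing nothing else about who they are.
const negativeKyc = {
  credentialSubject: {
    id: "did:cheqd:mainnet:zExampleGpuProvider",
    sanctioned: false,
  },
};

// 'Positive KYC': discloses the provider's full verified identity.
const positiveKyc = {
  credentialSubject: {
    id: "did:cheqd:mainnet:zExampleGpuProvider",
    legalName: "Example Compute Ltd",
    registeredAddress: "1 Example Street, Dublin",
    companyNumber: "IE-0000000",
  },
};
```

A renter who only needs sanctions assurance can request the first and learn nothing more about the provider, which is exactly the point.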
Creating commercial opportunities for auditing companies related to sanctioned entities in practice:
- Auditing company audits GPU owner, checking that the owner is not from a sanctioned entity
- When confirmed, GPU owner is issued a Verifiable Credential attesting to their non-sanctioned status
- User rents GPUs and requests verification that the chosen GPUs have the correct non-sanctioned credentials.
- User pays a small fee to the auditing company for the requested verification
- Upon verification, User and GPU owner begin contract of work
- Verifiable credential can be reused for other transactions, with continuous payments to auditor
How can cheqd help ensure the use of DeAI networks does not negatively affect company sustainability targets?
As with other hardware certifications done through Verifiable Credentials, the cheqd network can help GPU providers obtain the correct certifications in a reusable credential form. Auditing companies can perform their standard checks to ensure GPU providers use renewable energy or purchase carbon-offsetting credits. They can then issue their usual ‘certificate’ in Verifiable Credential form, allowing anyone planning to use a GPU in a cluster to quickly verify that their GPUs are fully sustainable and do not negatively impact their net zero targets.
Creating commercial opportunities for auditing companies related to sustainability in practice:
- Auditing company audits GPU owner, checking that the owner is fully carbon neutral.
- When confirmed, GPU owner is issued a Verifiable Credential attesting to their carbon-neutrality
- User rents GPUs and requests verification that the chosen GPUs have the correct green credentials.
- User pays a small fee to the auditing company for the requested verification
- Upon verification, User and GPU owner begin contract of work
- Verifiable credential can be reused for other transactions, with continuous payments to auditor
Proof of Personhood - how to reduce bot attacks without ruining the internet
Proof of personhood is a key safeguard against fraud and bot manipulation. DDoS (Distributed Denial-of-Service) attacks, bot manipulation, Sybil attacks and AML requirements mean that we can no longer interact online entirely anonymously, especially in situations where fraud can occur. The ‘proofs of personhood’ needed to demonstrate that one is indeed a human can range from a simple CAPTCHA to a notarised, KYC’d contract, but the lower end of the scale can be gamed: CAPTCHAs are no longer completely effective at distinguishing between humans and bots. This creates a challenging situation for any entity looking to secure their website and interact only with humans – the greater the level of assurance and reputation needed, the higher the costs and complexity, and the harder it is to get real users to comply.
Proof-of-personhood exists on a matrix
Whilst many in the decentralised and digital ID space fret about the need for a strong proof-of-personhood, and the potential privacy and UX disadvantages of using a higher level of KYC for every interaction, it is important to remember that the level of proof-of-personhood needed depends on the type of interaction taking place. If one is just trying to prevent a large number of bots getting through, a CAPTCHA may suffice; but if one is dealing with money (or digital money), the opportunity for fraud increases dramatically, meaning a much stronger level of assurance is needed. Biometric proofs, such as those used by projects like Worldcoin and Humanity Protocol, are not, and should not be, necessary for every interaction one has online.
Low-assurance proof-of-personhood covers methods such as a CAPTCHA, while a site such as X requires only an email address to create an account (hence the deluge of bots). A site like X, however, largely uses follower count, reputation and proof-of-payment (X verified) to define who is worth listening to; one’s reputation is somewhat divorced from the assurance that one is who one says one is.
Low-to-medium-assurance proofs-of-personhood are identity points that are more difficult for a bot to collect. Things that can only be collected by a person, such as attendance at a real-life event, can be used as a more solid form of proof-of-personhood, especially when achieved using Verifiable Credentials. These could be a potential replacement for CAPTCHAs, as many of them could be collected by a person over the course of their life from many different sources, and would say little about a person except which event they attended.
High-assurance proof-of-personhood can be thought of as more in-depth KYC checks: going from just proving that you are in fact a person (negative KYC), to knowing exactly who you are (positive KYC), to potentially also checking one’s reputation, such as when someone applies for a mortgage and must supply their credit score. These kinds of proofs are absolutely essential for high-value interactions, as the incentive to defraud and manipulate is much higher when money is involved.
How can cheqd help companies safeguard against fraud and bot manipulation?
There are many different forms of proof-of-personhood, appropriate for different settings and times. Along with ‘proof-of-attendance’, many different proofs can be turned into Verifiable Credentials to be held in a user’s identity wallet, such as reusable KYC credentials, connected Twitter accounts, GitHub passes and so on. The recently passed EU identity regulation, eIDAS2, is gearing up to ensure that every EU state has a wallet available for citizens capable of storing digital identity credentials by 2026, meaning that the use of Verifiable Credentials may soon become ubiquitous.
As leaders in the space, we have built the cheqd network to the specifications required for EUDI wallets, ensuring it will be interoperable with any wallet used to store government credentials. As a partner, we offer a plug-and-play solution for issuing and verifying the countless proof-of-personhood credentials users will pick up along the way, as well as a way for entities to monetise even small proofs if they wish to.
Creating opportunities to safeguard against fraud and bot manipulation in practice:
- Person collects proofs for their various handles, e.g. email, Discord, Telegram.
- Person completes KYC process and collects re-usable KYC credential.
- Person attends in-person events and collects POAPs.
- Person attends online events and collects POAPs.
- Person works for various companies or on multiple projects and collects either role credentials or endorsements for work done.
- As required, the person can select appropriate handles, credentials and POAPs to establish trust for any interaction (e.g. a verifier may require three proof points, their type depending on the level of assurance needed); see the sketch below.
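A verifier’s policy for the assurance matrix described above might be encoded along these lines. The proof categories and thresholds are illustrative assumptions, not a standard:

```typescript
// Each assurance level demands a minimum number of valid proof points,
// and some levels insist on particular kinds. Categories are illustrative.
type ProofType = "handle" | "poap" | "role" | "kyc";
interface Proof { type: ProofType; valid: boolean }

const POLICY = {
  low:    { count: 1, must: [] as ProofType[] },       // e.g. one linked handle
  medium: { count: 3, must: ["poap"] as ProofType[] }, // e.g. include one in-person proof
  high:   { count: 3, must: ["kyc"] as ProofType[] },  // e.g. must include reusable KYC
};

function meetsAssurance(proofs: Proof[], level: keyof typeof POLICY): boolean {
  const valid = proofs.filter(p => p.valid);
  const { count, must } = POLICY[level];
  return valid.length >= count && must.every(t => valid.some(p => p.type === t));
}
```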
Personal AI Agents - Is your bot allowed to do that?
The idea of a personal AI agent has transitioned from science fiction just over a decade ago (the movie Her was released in 2013) to near reality today with GPT-4o, AutoGPT and other LLM ‘agents’. These LLMs are capable of breaking down and carrying out tasks in a digital environment. While we are still in the early stages of this technology, it is clear that AI agents will grow rapidly. Over time, AI agents are likely to handle more tasks for human beings, starting with roles like customer service reps, trading bots and personal assistants, and eventually taking on increasingly complex tasks on behalf of their ‘owners’.
This evolution means that, just as humans require permissions and login details to work for a company, so will machines. Current permission systems are fairly disjointed and unable to smoothly handle the large number of interactions that are likely to occur with and between AI agents in the future. For example, if one wished to ask an AI agent to book a holiday, one would first need to set up permissions for it to log into one’s accounts and to access one’s email and banking details.
Additionally, over time it is likely that much of the work done by AI agents will be between agents. This means that agents themselves will require methods to trust each other: to leave and read reviews, and to verify that agents are indeed working for whom they claim. These identity features will be absolutely key to interacting with agents safely.
How cheqd can help create smooth, efficient permissions systems for AI agents using Verifiable Credentials.
Verifiable Credentials can help create a portable reputation for AI agents without relying on a central third party to hold that reputation, which could turn into a very large database and a prime target for cyberattacks. The use of VCs here would enable a much more seamless experience when requesting tasks that require access to multiple platforms. Instead of requiring access to countless APIs, password details and two-factor authentication gates, an agent could present a Verifiable Credential which verifies that it has permission from its owner to act on its behalf. This system would also help build reputation systems between AI agents, allowing them to issue VCs to each other for work completed and use these as proof of trustworthiness.
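A delegation credential of this kind might carry scoped permissions, as in the sketch below. The scopes, expiry and DIDs are illustrative assumptions rather than any defined cheqd format:

```typescript
// Sketch of a delegation credential an owner issues to their AI agent.
const delegation = {
  type: ["VerifiableCredential", "AgentDelegation"],
  issuer: "did:cheqd:mainnet:zExampleOwner",  // the human owner signs it
  credentialSubject: {
    id: "did:cheqd:mainnet:zExampleAgent",    // the agent being empowered
    scopes: ["calendar:read", "shopping:purchase", "payments:limit:100EUR"],
    expires: "2026-12-31T23:59:59Z",
  },
};

// A relying service (e.g. a shop) checks expiry and scope before acting,
// instead of handing out passwords or raw API keys.
function allows(cred: typeof delegation, scope: string): boolean {
  const notExpired = new Date(cred.credentialSubject.expires) > new Date();
  return notExpired && cred.credentialSubject.scopes.includes(scope);
}
```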
Creating commercial opportunities to create efficient permissions systems in practice:
- AI agent is requested to organise a small work party for an executive and is issued Verifiable Credentials by the executive that will give it access to their calendar, address, online shopping account and banking.
- The agent contacts the executive’s employees’ AI agents to pick the optimal date. Each agent checks the others’ credentials to ensure they are representing the right people.
- The agents settle on the optimal date, which is then added to everyone’s calendars.
- Online shopping website checks the agent’s credentials before allowing purchase of party hats
- Once verified, the website charges the executive’s bank account and sends the party hats to their address.
Intrigued? Contact Us!
As we have shown throughout this article, the cheqd network has great solutions for some of the trickiest issues brought about by artificial intelligence. If you want to learn more, or speak to us about how our verifiable AI products can be used to improve your organisation, please contact us on this page.