Age verification has rapidly emerged as a critical topic on the Internet, triggered by a pressing wave of global regulations aimed at protecting minors online.
This article explores how decentralized age verification systems can be built without compromising user privacy.
Over the last year, several countries have introduced guidelines, regulations and proofs of concept around age verification for digital service providers. Some examples are:
Although these regulations can be seen as a response to concerns about early access to pornography, gambling, and other hazardous online activities, access to social networks, with their cyberbullying, synthetic pornography, and other toxic dynamics, is also contributing to rising mental health issues among young people.
This means we will see age verification controls not only on porn and gambling sites, but as yet another step required simply to browse the internet: social networks, app stores, content providers, and even operating systems.
Internet freedom advocates such as the EFF, the Internet Society, and EDRi are challenging the need for these controls, arguing that they are yet another excuse to erode our privacy rights and that they could marginalize certain groups of the population by cutting off their access to information.
Most of the criticism focuses on the following areas:
Developing systems that accurately verify age without being easily circumvented is challenging. There's a constant arms race between verification technologies and methods to bypass them.
There are inherent limitations in current technologies, such as facial recognition and document verification, including issues with accuracy across diverse demographics and the potential for spoofing.
The industry must navigate a complex landscape of global and local regulations, including data protection laws like GDPR in Europe and COPPA in the United States, which dictate how personal information can be collected, stored, and used.
Ensuring that age verification solutions are accessible to all users, including those with disabilities or those without access to traditional forms of ID, is a significant challenge.
Ensuring the privacy and security of personal information is paramount. Users are often hesitant to share sensitive data, such as government-issued ID details, due to fears of data breaches and misuse.
The lack of a standardized approach to age verification across different countries and industries complicates the development of universal solutions.
At the scale we are discussing in this article (worldwide ecosystems of trust), the problem of age verification can't be, and shouldn't be, solved by a single actor.
Putting this in the hands of a few tech giants like Google and Apple (which could simply embed the verification into their operating systems and act as a single provider) is a bad idea for the reasons mentioned above (privacy, accessibility), but also from a political and strategic point of view: no single company should regulate our access to information at its discretion.
The alternative is an open ecosystem of trust where multiple credential providers (the companies verifying your age) can offer their services by adopting clear rules defined by the governing bodies of that ecosystem and by implementing well-known digital identity standards that guarantee the user's right to privacy and consent.
Each actor plays a different role in this solution:
Governments can also play the role of credential providers. The digitalization of national IDs creates an easy way to get age credentials directly from the government, but it also closes off the possibility of an open market of private businesses built around age verification. eIDAS has faced some criticism for not making clear what the role of the private sector will be in the credential economy.
In this article, we will focus on the two challenges that must be solved by digital identity systems: privacy and interoperability.
Although all these challenges are equally important, privacy is probably the biggest concern and blocker for the adoption of this type of verification.
There are several reasons why privacy is crucial in age verification –some are obvious, some are more subtle:
This creates a very interesting set of privacy requirements, where the information shared (What) is not as sensitive as how it is obtained, stored and shared (How).
This “how” involves all the checks and verifications required for users to prove to a verifier that they are the rightful owners of a credential satisfying the verifier's query (age > 18). We can see all the verifications required in the following illustration:
There are two privacy-related design decisions to be made at this step: where the verification of the user's evidence takes place, and which identifier the resulting credential is bound to.
For the first, there is a growing consensus in the industry towards performing the verification on the user's device instead of sending the user's information (national documents, biometrics) over the network to a server; providers like privately.eu, outdid.io and regulaforensics.com follow this approach. This is clearly the most privacy-preserving option, and the one preferred by regulators.
For the second, the challenge with identifiers is that they can be tracked across multiple applications and doxxed (linked to real-world identities). For example, imagine that your credentials are linked to your e-mail address or your phone number: how easy would it be to discover which adult sites you visited and link that to your persona?
The good-enough solution would be to link the age credential to your DID (decentralized identifier). This identifier is not linked to any public social profile, but it can still be used to track you across applications (and it can also be doxxed).
The best solution is a nullified DID (like the Profiles feature included in Privado ID), which creates a single-use DID for each issuer and verifier. This way, you can still prove that you are the rightful owner of the credential without disclosing your permanent DID.
Although the technical implementation of nullified DIDs is a bit complex, it's easy to understand how they work:
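Conceptually, the wallet derives a fresh, single-use identifier for each counterparty from the user's genesis identity, and proves in zero knowledge that both trace back to the same owner. The sketch below illustrates the derivation idea only; the hashing scheme, DID method and helper names are illustrative assumptions, not the actual Privado ID Profiles implementation.

```typescript
import { createHash } from "crypto";

// Illustrative only: derive a deterministic, verifier-specific profile DID
// from the genesis DID, the counterparty's identifier and a user-held secret.
// Each issuer or verifier sees a different identifier, so activity cannot be
// correlated across applications.
function deriveProfileDid(genesisDid: string, counterpartyId: string, userSecret: string): string {
  const digest = createHash("sha256")
    .update(`${genesisDid}|${counterpartyId}|${userSecret}`)
    .digest("hex");
  return `did:example:profile:${digest.slice(0, 32)}`; // hypothetical DID method
}

const genesis = "did:example:genesis:4f3c";
console.log(deriveProfileDid(genesis, "adult-site-a.example", "user-secret"));   // one DID for this verifier
console.log(deriveProfileDid(genesis, "gambling-app-b.example", "user-secret")); // a different DID for that one
```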
The benefit is obvious: it's impossible to track the user across applications, and even if a nullified DID is doxxed (linked to your real-world identity), that only exposes your activity in the context of one application; it won't reveal your entire history. Likewise, a data breach at the issuer would not compromise you in the context of other applications.
If privacy were the only concern, the obvious answer would be to store the credentials on the user's device, especially if the credentials are generated locally. But privacy is not the only requirement here: user experience and low-friction verification processes are also essential for the mass adoption of these solutions.
From a user experience perspective, the ideal scenario would be “set it and forget it”: pass the verification process once and be free to reuse that verification seamlessly across all applications and devices. Isolating the credentials to one device means the process has to be repeated on every device the user owns. It also places the responsibility for credential custody (backup and recovery) on the user.
The next best thing in terms of privacy is storing the credentials outside the device (e.g. in the cloud) in a way that makes the credential information readable only on the device. This allows for a multi-device experience while keeping the information visible only to the user.
To achieve this, most identity solutions that rely on external storage encrypt the credential using the user's private keys (e.g. the same keys used to create the genesis DID). The credential information is stored encrypted and can only be decrypted by the identity client (on the device) that holds the private keys.
Privado ID follows this pattern. In the current implementation of the Web Wallet, a message signed with the user's crypto wallet is used to generate and derive all the necessary keys (for DID creation and storage encryption), as follows:
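A minimal sketch of that pattern, assuming Node.js and hypothetical helper names rather than the actual Web Wallet code: a deterministic signature obtained from the user's crypto wallet is hashed into a symmetric key, and credentials are encrypted with that key before being stored in the cloud.

```typescript
import { createHash, createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Hash the wallet signature of a fixed derivation message into a 32-byte AES key.
function keyFromSignature(signature: string): Buffer {
  return createHash("sha256").update(signature).digest();
}

// Encrypt a credential before it leaves the device.
function encryptCredential(credentialJson: string, key: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(credentialJson, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

// Decrypt it on any device able to reproduce the same wallet signature.
function decryptCredential(payload: { iv: Buffer; ciphertext: Buffer; tag: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, payload.iv);
  decipher.setAuthTag(payload.tag);
  return Buffer.concat([decipher.update(payload.ciphertext), decipher.final()]).toString("utf8");
}

const key = keyFromSignature("0x<signature of the fixed derivation message>");
const stored = encryptCredential(JSON.stringify({ type: "AgeCredential", over18: true }), key);
console.log(decryptCredential(stored, key)); // readable only where the key can be re-derived
```

The external storage only ever sees ciphertext; any device that can reproduce the wallet signature can re-derive the key and read the credentials.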
We can divide the presentation of credentials to a verifier into two major blocks: proving that the credential itself is valid, and satisfying the verifier's query.
For the first part, the privacy challenges come from the level of issuer involvement needed for the credential to be accepted as valid.
Is issuer involvement needed in order to:
For a privacy-preserving credential presentation, the answer to all these questions should be NO. The issuer should not be aware of when, how or where a credential is being used, especially in an age verification context, where the age verification provider could otherwise track the activity of the user across all verifiers.
The best combination of answers to the four previous questions is NO to all of them: the credential should remain verifiable without any issuer involvement at presentation time.
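One way to keep the issuer out of the loop at presentation time, sketched here as an assumption rather than a description of Privado ID's actual revocation mechanism, is for the issuer to periodically publish its revocation data to a public, replicated location (for example IPFS). The wallet or verifier then checks that copy locally, and the issuer never learns when, where or by whom a credential is being used.

```typescript
// Illustrative shape of a published revocation snapshot (not a standard format).
interface PublishedRevocationList {
  issuerDid: string;
  publishedAt: string;     // ISO timestamp of the snapshot
  revokedNonces: number[]; // revocation nonces of credentials revoked so far
}

// statusUrl points at a replicated public copy (e.g. an IPFS gateway),
// not at a live issuer endpoint, so the lookup leaves no trace with the issuer.
async function isRevoked(statusUrl: string, revocationNonce: number): Promise<boolean> {
  const res = await fetch(statusUrl);
  if (!res.ok) throw new Error(`Could not fetch revocation list from ${statusUrl}`);
  const list = (await res.json()) as PublishedRevocationList;
  return list.revokedNonces.includes(revocationNonce);
}
```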
Last, but not least, there is the topic of schemas. Schemas are files that contain the “structure” of the credential, and they are needed for the verifier to understand the format and semantics of the credential. These schemas must be hosted somewhere publicly accessible to all parties (issuers, verifiers and identity wallets).
The challenge here is that every time the schema is used, downloading that file could leave a trace, especially in the case of identity wallets. The best way to prevent this is to treat these files as a public good, beyond the control of any single organization. For that purpose, Privado ID offers the Schema Repository, based on IPFS, where schemas are hosted outside the control of any single organization.
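To illustrate, a schema published on IPFS is addressed by its content identifier (CID), and any party can resolve it through whichever gateway (or local IPFS node) it prefers; the CID below is a placeholder, not a real published schema.

```typescript
// Resolve a credential schema by its IPFS CID. Any gateway or a local node
// works, so no single organization can gate or observe the download.
async function fetchSchema(cid: string, gateway = "https://ipfs.io/ipfs/"): Promise<unknown> {
  const res = await fetch(`${gateway}${cid}`);
  if (!res.ok) throw new Error(`Could not resolve schema ${cid}`);
  return res.json(); // the JSON document describing the credential's structure
}

fetchSchema("QmPlaceholderCidForAgeVerificationSchema").then(console.log);
```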
For the second part (satisfying the query), the privacy risks come from oversharing (sharing more information than is strictly required).
There are three levels of “disclosure” when sharing a credential:
There is no real question here: privacy-preserving age verification needs to implement zero-knowledge proofs; the only question is how. There are three approaches to this:
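Whichever approach is chosen, the verifier's request ultimately boils down to a predicate over a credential attribute. Below is a hedged example of what such a query could look like, loosely modeled on the Iden3/Privado ID query language; the credential type, schema reference and attribute encoding are illustrative assumptions.

```typescript
// The wallet proves, in zero knowledge, that the birthday recorded in the
// credential is earlier than a cutoff date; the birthday itself is never revealed.
const ageProofRequest = {
  circuitId: "credentialAtomicQuerySigV2", // ZK circuit used to build the proof
  query: {
    allowedIssuers: ["*"],                 // or a list taken from a trust registry
    type: "AgeVerification_v1",            // credential type defined by the schema
    context: "ipfs://QmPlaceholderCidForAgeVerificationSchema",
    credentialSubject: {
      birthday: { $lt: 20060101 },         // born before 2006-01-01, i.e. over 18 in 2024
    },
  },
};
```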
When it comes to age verification, privacy is the topic of choice for most regulators and credential providers. The challenges are clearly defined and there are a number of solutions and implementations that can be combined to offer the needed levels of privacy (as presented above).
Interoperability may seem, in comparison, a “nice to have” property for these solutions, but treating it that way would be a deadly mistake for the entire age verification industry.
Age verification providers are under the constant threat of being disrupted by the big tech companies that already dominate the user's browsing experience: Google, Apple, Meta, Microsoft. Any of these players could add age verification to their user accounts and offer it as a service to any application that uses their social logins: sign in with Google to prove your age!
There are very good reasons to avoid that scenario (especially from a privacy point of view), but the threat is real, both for the industry of credential providers and for the users and regulators trying to preserve their data sovereignty and privacy. And what will decide which side wins? As usual: user convenience.
These big players can easily create the “set it and forget it” solution: one single verification that opens all doors at once, across all devices and with minimal friction. If the alternative offered by the credential providers is “pass the process in every single application and on every single device, with different experiences and outcomes”, then the decision for the user is easy.
It's in our society's interest to foster an open ecosystem of competing credential providers that work to improve the accuracy, accessibility and reliability of their methods, and whose business model does not consist of selling user data. But the industry needs to create the same “set it and forget it” experience, and that is a complex task that requires standards, consensus and governance.
Interoperability could be achieved with a single global issuer (if all credentials are issued by one or very few issuers, it is easy to ensure credentials are compatible or that verifiers accept multiple credential formats), but that's not the goal. Our goal is to achieve interoperability in an open, dynamic ecosystem of issuers and verifiers.
From a user perspective, it looks like this:
From an Issuer point of view:
From a Verifier point of view:
An open ecosystem where any number N of providers can deliver reusable credentials to be used in any number M of applications, with N and M growing organically over time.
The first step to meet this requirement is to standardize the components of a self-sovereign identity system:
This would allow any credential issued by the credential providers to be consumed by any application. It solves the technical interoperability of the ecosystem, but it doesn't make it economically sustainable, compliant or easy to use for the end user.
For these challenges, we need to add infrastructure services such as:
Here we lay out all the components of the architecture (in purple the solution offered by Privado ID):
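One of these components worth illustrating is the trust registry, which verifiers (and wallets) consult to decide which issuers to accept. A minimal sketch of such a lookup follows; the endpoint and response shape are assumptions, not an existing service.

```typescript
// Hypothetical trust registry entry: which credential types an issuer is
// accredited for, and in which jurisdictions.
interface TrustRegistryEntry {
  issuerDid: string;
  credentialTypes: string[]; // e.g. ["AgeVerification_v1"]
  jurisdictions: string[];   // e.g. ["EU"]
  status: "active" | "suspended" | "revoked";
}

async function isTrustedIssuer(
  registryUrl: string, // e.g. "https://trust-registry.example/issuers" (hypothetical)
  issuerDid: string,
  credentialType: string,
  jurisdiction: string
): Promise<boolean> {
  const res = await fetch(`${registryUrl}?did=${encodeURIComponent(issuerDid)}`);
  if (!res.ok) return false;
  const entry = (await res.json()) as TrustRegistryEntry;
  return (
    entry.status === "active" &&
    entry.credentialTypes.includes(credentialType) &&
    entry.jurisdictions.includes(jurisdiction)
  );
}
```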
Given these components, any application could easily benefit from the entire ecosystem of issuers and from all previously issued credentials. The question (query) to the user's identity wallet would look like this:
“I need a ZK proof that you are over 18 years old, based on a credential that follows the schema ‘AgeVerification_v1’ and is signed by any issuer included in the trust registry as compliant in the EU. If you don't have such a credential, my preferred issuer is ‘Privately’, and here is a payment token for the issuer (which will pay for your credential issuance).”
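A sketch of how that request could be encoded by the verifier; every field name here is an illustrative assumption, since no standard currently defines this exact shape.

```typescript
const verifierRequest = {
  proof: {
    type: "zk-proof",
    schema: "AgeVerification_v1",
    predicate: { attribute: "birthday", operator: "$lt", value: 20060101 }, // i.e. over 18
  },
  trust: {
    registry: "https://trust-registry.example/eu", // hypothetical EU trust registry
    policy: "age-verification-compliant",
  },
  fallbackIssuance: {
    preferredIssuer: "Privately",
    paymentToken: "<one-time token covering the issuance fee>",
  },
};
```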
From a user point of view, the experience would look like this (including the first-time issuance and the reuse of the same credential in a different application):
In the debate about an “age-aware” Internet, age verification is just one side of the story. Age verification is (or should be) the mechanism to make sure that only adults can access certain services and content. It is not intended to track or identify minors, just to exclude them from certain areas of the Internet.
But what about services and content that are intended for minors but need parental approval? Most countries define an “age of consent”: a minor must be at least a certain age to consent to the terms of use of a service on their own. In some cases, parental consent (or refusal) has to be taken into account.
Managing parental consent is a different and complex topic that needs to take into account multiple challenges:
Most of these challenges are not technological in nature and won't be solved by technology. An interoperable ecosystem of credentials could be a good foundation layer, but this is a challenge that requires clear regulations and perhaps a public debate on the consequences of implementing these controls.
Age verification is a fast-growing industry with a thriving ecosystem of providers. We have seen innovation in the development of new assessment mechanisms that can offer accuracy and reliability in the era of deepfakes and AI, but the full-scale adoption of an age-aware Internet requires a stronger focus on how these credentials are shared after they have been created.
We presented an architecture for a privacy-preserving and interoperable ecosystem of credentials that can meet regulatory requirements and allow the industry to compete with technology giants such as Google and Apple in delivering a “set it and forget it” experience for age verification.
The technology exists; the next step is to work with industry leaders (such as avpassociation.com and euconsent.eu) to rally the key players of the industry to adopt these standards.