
Don’t Trust the Content, Trust the Source - Content Authenticity in the Era of AI

May 23, 2024

In today's digital age, distinguishing between human-created and AI-generated content is increasingly difficult, posing significant risks as AI is used to create deepfakes and propaganda. A recent Microsoft report highlighted China's use of AI to influence Taiwan elections through fake audio and memes. The danger lies in our brain's tendency to accept visual content as reality, leading to manipulation and erosion of critical thinking. Solutions require verifying content authenticity through digital reputations and blockchain technology, but these must balance security with privacy and practicality. Ultimately, addressing AI-generated media challenges is both a technical and philosophical endeavor, emphasizing the need for trusted digital identities and accountability in our online interactions.

It’s getting harder and harder to tell whether content is created by a real person or by an AI.

In an era where anyone can generate AI content, individuals and governments are already using it to create deepfakes and propaganda. According to a recent report from Microsoft, China used AI-generated content to try to influence voters in this year's Taiwan elections –including fake audio posted on YouTube and AI-generated memes.

“While the impact of such content in swaying audiences remains low, China’s increasing experimentation in augmenting memes, videos and audio will continue,” the Microsoft report said.

AI-generated content is dangerous and, as Marc Andreessen said, we need systems where people can certify that the content about them is real.

This article examines the problems and dangers of AI-generated fake videos and images, and why we need better ways to verify whether content is real:

  • AI-generated Media: AI can make very believable fake content that can trick people and spread false information quickly.
  • Challenges of Digital Reputation: Building solutions that can tell if digital identities and content are trustworthy is hard. You need to know if a source has a good reputation before you can trust its content.
  • Blockchain in Identity: Blockchain allows users to store personal identities and manage trust registries for updating revoked credentials or changing security keys. These kinds of changes don't happen often, but they are key for security and trust.

We don’t understand the power of AI Photos & Videos yet

Why are AI-generated disinformation campaigns so effective? Is fake AI content that dangerous?

Our brains haven’t yet adapted to process photos and videos differently from the reality around us. Everything you see in a video or a photo arrives as non-verbal information and is processed the same way you process your physical surroundings. That doesn’t happen with text or a podcast, because those are verbal forms of communication.

As a consequence, your rational “toolbox” (categorization, generalization, abstraction & critical thinking) does not apply “by default” to the information you receive from videos and photos. Your brain takes them as reality, turning off your critical thinking.

This means that our brains are often on 'autopilot', accepting what we see without questioning it deeply. And we wonder, is the decline of our collective critical thinking a consequence of the way information is conveyed to us?

AI-generated photos exploit this hack to take mass manipulation to extreme levels. Does this sequence of events sound familiar?

  1. You receive a photo of a famous person doing something you really despise (like smoking a cigarette next to a baby).
  2. You hate it and that feeling lasts for some time until you scroll to the next thing.
  3. Later you find out the photo was faked.
  4. You feel betrayed and annoyed…and powerless because you know there is nothing you can do.

How likely is it that you have developed a negative unconscious bias toward that person even after you learn that it was fake?

Your brain experienced these things at different times –you lived with that “fact” for a while, and the next time you hear about that person, your brain will not start from a blank page.

Now imagine that every person in the world has easy access to that AI technology.

And next, Sora is announced.  

Don’t trust the content, trust the source

In an era where anyone can generate content with AI, you read a blog post titled “Vladimir Putin confirmed to have pancreatic cancer”. Do you trust it immediately? Would you accept it if no source was given? Do you check who is publishing it?

Later that day, you see a video of Zelenskyy in handcuffs, escorted by Russian soldiers out of a government building in Kyiv –recorded on a soldier’s phone. What’s your reaction?

The difference between the first and the second scenario is that, just as we process verbal and non-verbal channels differently, our trust assumptions differ in each case. In the first case we trust the source; in the second, we trust the content.

We have lived with this trust assumption for so long that it’s difficult to change it – to accept that nothing can be trusted without a trusted source.

Ok - don’t check the content, check the source. Clear. But how?

Newspapers and traditional news channels did that for us for years –they had something to lose (their reputation, or a lawsuit), so they checked their sources before publishing. You just had to decide which news channel to trust, and they would take care of the rest.

But now, every person is a news agency –anyone with basic AI tools can create convincing fake content. The challenge is that the speed at which this content is produced and consumed makes checking sources very costly and time-consuming. Most fakes come from unknown sources and lean on the “somebody saw this and recorded it with their phone” narrative, which makes them more credible and harder to verify.

We need to create accountability and digital reputation for these new sources –any piece of digital content I consume should be linked to a digital identity, and that digital identity should carry a reputation.

“Created by a Human” is not a solution

It might seem that all we need to certify about these images and videos is that they were captured by a human being using a non-tampered device –and that they weren’t modified afterwards.

This is pretty much the approach suggested by the Content Authenticity Initiative and its Content Credentials, but that’s just not good enough:

  • What happens if I take a picture of another picture?
  • Not to mention the nightmare of securing the chain of trust of all the components from your camera sensor to the app that you are using to upload the picture.
  • And even if that worked –which manufacturers are to be trusted as issuers of these credentials?

The real answer lies in the source and its reputation –the fear of losing your credibility for good.

The Challenge with Digital Reputation

The biggest challenge with digital reputation is that you need to decide if you want to work with positive-only reputation or if you want to implement negative reputation systems.

Positive-only reputation systems work by assuming that nobody can be trusted initially (no reviews = not to be trusted). You need to collect positive feedback to build a positive reputation –but then the challenge is assessing the validity of that feedback (e.g. fake reviews on Amazon). Also, people could “farm” good reputation for a while and keep trusted identities ready to use when they need to release fake information (something we can already see in the reputation farming that happens in Web3 for airdrops).

On the other hand, negative reputation is not as simple as letting people down-vote each other –it hinges on making it impossible to create multiple identities (which would reset the counter to zero for bad actors). Negative reputation only works for well-known identities like influencers, famous people or companies, simply because they have something to lose if they damage their brand.

Creating a new online identity is costly (though not impossible), which makes it a good deterrent. This is in fact the first use case that could be implemented right now: public figures “signing” all their content to prevent fakes.
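A minimal sketch of what that signing could look like. Everything here is illustrative: HMAC with a shared key stands in for a real asymmetric signature scheme (such as Ed25519) so the example runs with only the Python standard library –in practice the public figure would sign with a private key and anyone could verify with the published public key.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: a public figure signs every piece of content they
# publish, so anything unsigned (or wrongly signed) is suspect.
# HMAC stands in here for a real asymmetric signature (e.g. Ed25519).

signing_key = secrets.token_bytes(32)  # would live in the publisher's wallet

def sign_content(content: bytes) -> str:
    """Produce a signature the publisher attaches to the content."""
    return hmac.new(signing_key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Check that the content really comes from the claimed source."""
    expected = hmac.new(signing_key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

video = b"official statement, 2024-05-23"
sig = sign_content(video)

print(verify_content(video, sig))              # genuine content -> True
print(verify_content(b"deepfake video", sig))  # tampered content -> False
```

The key property is that verification is cheap and automatic: a browser or app could check the signature on every post and flag anything that fails, with no human fact-checking in the loop.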

For the rest of us, anonymous internet users, creating an effective negative digital reputation implies that we all share a universal, permanently attached unique personal identifier (e.g. a biometric hash or national ID), so we can’t just create disposable identities. The bad news is that this is a direct route to dystopia (see China’s Social Credit System).


Context Based Unique Identifiers to the Rescue

Fortunately, there are some cool technical solutions that can reduce the risk of that dystopian scenario.

As we have described in other publications, “Context Based Unique Identifiers” could reduce the risk of having a unique permanent digital identifier by giving each service provider a different “alias” of that identifier. Let’s see how this would work:

  • You would obtain your unique identifier from a trusted source (Biometric, National ID).
  • For each interaction with a new service provider, your identity wallet would present a cryptographically derived identifier, unique in the context of that service provider.
  • If you are caught distributing fake content, this service provider (e.g: Instagram) could reflect it in your reputation or ban you from the service.
  • Other service providers are unaware of what’s happening on Instagram –they cannot link you to that reputation because it is context-based. Your reputation on other platforms (where you did nothing wrong) is untouched.

So you still have something to lose (being banned from an important social network forever!) - but it will not affect other aspects of your life (you won’t lose access to Google).
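The derivation in the steps above can be sketched as follows. This is a hypothetical illustration, assuming the wallet holds a master secret bound to the user's verified identity; each service sees only an HMAC-derived alias, so aliases from different services cannot be linked to each other (a production system would layer zero-knowledge proofs on top, as the article's summary suggests).

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: the wallet derives a stable, per-service alias from
# a master secret that never leaves the device. Each service provider sees
# a different alias, so reputation cannot be linked across services.

master_secret = secrets.token_bytes(32)  # bound to e.g. a national ID

def context_alias(service_id: str) -> str:
    """Derive the user's unique identifier for one service provider."""
    return hmac.new(master_secret, service_id.encode(), hashlib.sha256).hexdigest()

instagram_alias = context_alias("instagram.com")
google_alias = context_alias("google.com")

print(instagram_alias == context_alias("instagram.com"))  # stable per service -> True
print(instagram_alias == google_alias)                    # unlinkable -> False
```

Because the derivation is deterministic, Instagram always sees the same alias (so bans and reputation stick), while Google sees an unrelated one –exactly the "something to lose, but only in one context" property described above.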

The Role of Blockchain

Don’t trust, verify! Blockchain plays a significant role in certifying the authenticity of the content (everybody seems to agree) –but how?

Any solution to the content-authenticity problem has to meet the following:

  • Scale to billions of users (certifying what they produce and verifying what others have produced).
  • Be integrated into all aspects of our digital lives (web browsers, apps).
  • Make verification almost instant and effortless, and certification transparent to the user.
  • Not be controlled by corporations (a sustainable public good).
  • Keep 99% of transactions free.

Unfortunately, some of these requirements –instant, free & easy– don’t work well with blockchains.

We think blockchain should play a role in storing our personal identities and supporting other parts of the trust infrastructure needed for self-sovereign identity systems, such as credential revocations, key rotations and trust registries. These are rare transactions that represent less than 1% of the total transactions in the system.

Everything else can be done with private keys, signatures and cryptography –all free and private, on the user’s device.
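A minimal sketch of that split, with an in-memory dict standing in for the on-chain trust registry (names and structure are illustrative, and HMAC again stands in for a real asymmetric signature): key rotation is the rare on-chain write, while every verification is a free, local operation on the user's device.

```python
import hashlib
import hmac
import secrets

class TrustRegistry:
    """Stand-in for an on-chain registry mapping identity -> active key.
    Writes (key rotations, revocations) are rare blockchain transactions;
    reads and signature checks cost nothing."""

    def __init__(self):
        self._keys = {}
        self.onchain_writes = 0

    def rotate_key(self, identity: str, new_key: bytes) -> None:
        self._keys[identity] = new_key   # would be a blockchain transaction
        self.onchain_writes += 1

    def active_key(self, identity: str):
        return self._keys.get(identity)  # free read

def sign(key: bytes, content: bytes) -> str:
    # HMAC as a stand-in for a real signature scheme (e.g. Ed25519).
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(registry: TrustRegistry, identity: str,
           content: bytes, signature: str) -> bool:
    key = registry.active_key(identity)
    return key is not None and hmac.compare_digest(sign(key, content), signature)

registry = TrustRegistry()
key = secrets.token_bytes(32)
registry.rotate_key("did:example:alice", key)  # the one rare on-chain write

post = b"signed blog post"
sig = sign(key, post)

# Thousands of verifications later: still only one on-chain transaction.
print(verify(registry, "did:example:alice", post, sig))  # True
print(registry.onchain_writes)                           # 1
```

This is how the "less than 1% on chain" figure falls out: only identity-lifecycle events touch the registry, while the overwhelming majority of operations (signing and verifying content) never leave the device.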

In Summary

Addressing the challenges posed by AI-generated media and deepfakes will require both social and technical solutions –from changing our trust assumptions to building long-lasting digital reputation.

From a technical point of view, we believe that the solution should combine:

  • Decentralized Identifiers.
  • Verifiable Credentials (for people and content).
  • Permanent Unique Identifier sources of trust (preferably governments).
  • Context Based Unique Identifiers using Zero Knowledge.
  • Permissionless and Trustless registries (Trust Registries).

Of course this is not just a technical decision –it’s a philosophical one. We have reached the point where there is too much of our lives on the Internet to keep living there without digital trusted identities and without real accountability.

If you want to explore identity solutions with Polygon ID, please contact the business development team.
