Continuous Compromise: When Deep Fakes Get Too Real

Patrick Dennis

June 26, 2023

At RSA Conference in April, I saw a demonstration of how a movie studio could use an AI-based neural network and other tools to convincingly manipulate video footage and change troublesome lines of dialog in post-production, without having to reshoot scenes. After the technology worked its magic, it was impossible to tell the original line from the AI-generated one. I also learned that it’s now possible to change the entire dialog of a movie, including the actors’ lip movements, from one language to another.

These tools are amazing, and they have legitimate uses in the entertainment industry: think of watching foreign films without annoying subtitles. But in the wrong hands, they have the potential to create chaos. Deep fake technology is now so advanced that fakes are becoming nearly impossible to spot.

The danger is that in many cases, people will choose to believe the deep fake rather than the truth. A deep fake that confirms a large group’s existing biases may gain major traction among the people who want it to be true.

The vast reach of social media compounds society’s deep fake problem. We’ve already seen how quickly millions of people can be exposed to a deep fake before its subject has a chance to respond.

Ultra-convincing deep fakes will be a huge tool for disinformation. In many ways, they’re the new version of the long-running Russian KGB-style kompromat campaigns, in which agents spread damaging, sometimes true, information about business leaders, politicians, and other public figures. These campaigns, which started decades ago and continue to this day, have often been used to generate bad publicity, blackmail targets, or exert other types of influence.

This type of smear campaign is not a new threat vector, but we have to reimagine what the threat looks like given the availability of this technology, near-universal distribution via social media, and near-zero production costs.

Imagine a deep fake video of a U.S. president declaring war on another world power, or of that power’s leader declaring war on the U.S. Is it hard to envision a scenario in which the targeted country responds immediately?

Imagine a deep fake video showing an important political figure taking a bribe, using a racial slur, or engaging in another form of misconduct.

Imagine a foreign economic competitor creating a deep fake video allegedly showing a prominent U.S. business executive concocting a price fixing scheme, insulting customers, or admitting to using child labor.

Once those deep fakes have spread worldwide, how do the individuals whose likenesses have been manipulated meaningfully correct the record? More importantly, what happens when the victims of the disinformation are citizens of entire countries whose democratic institutions are threatened by both foreign and domestic actors using this technology to advance their political goals?

In a recent U.S. Senate hearing on AI, Sam Altman, CEO of ChatGPT maker OpenAI, called for watermarks on videos made with deep fake technology. ExtraHop supports legislative efforts in this area, even as we recognize that watermarks alone won’t stop bad actors. If tagging and watermarking are optional, those actors will simply use deep fake technologies without these controls in place.

U.S. democracy, an institution that I hold dear, is at stake, and quick action is critical to stop this rapidly advancing threat. I’d like to see individuals, academics, AI researchers, bipartisan political leaders, and business figures come together to develop and implement pragmatic, lasting solutions to this problem. If we need to develop technology to spot deep fakes as quickly as they’re created and disseminated, let’s get our brightest minds on that. If we need to pass new federal laws preventing political parties, campaigns, and donors from creating ads that use deep fakes, let’s get it done. If we need public service campaigns to raise awareness of deep fakes, what’s stopping us?

We may not be able to prevent what foreign adversaries do, but we should be able to limit the impact of their activity in our own country. Protecting democracy from deep fakes should be an issue that unites us as a country, and I invite you to join me in taking action to recognize and counter them.
