
The rise of the deepfake: How worried should we be?

By James O'Malley

3rd Jan 2023


Just what are deepfakes? James O'Malley explores how internet deepfakes are ushering in a new generation of digital fakery

If you’re a geek like I am, then you’ll remember the dramatic moment at the end of 2016 Star Wars prequel Rogue One when, just before the closing credits, we catch a brief glimpse of a fresh-faced Princess Leia, looking as though Carrie Fisher had just stepped out of a time machine from 1977. 

The digital recreation was a big technical achievement. It was almost perfect, and was clearly the result of thousands of hours of work by the animators and artists working on the film.

"Two years later, a new technology arrived that would render Rogue One almost quaint: the Deepfake"

But just two years later, a new technology arrived that would render Rogue One almost quaint: the “Deepfake”. Suddenly instead of requiring skilled artists, digital face-swaps could be created with little more than a bedroom PC. And the technology can be used to put words in the mouth of, well, pretty much anyone. 

How do deepfakes work?

One famous early deepfake saw a video of President Obama modified to show him calling Donald Trump a “total and complete dipsh*t”. And in 2020, Channel 4 created a similar fake video of the Queen for its annual Alternative Christmas Message.

Deepfakes are made using the same clever “deep learning” techniques as the AI face recognition on your phone: you “train” the deepfake app by showing it video footage of your target, and it crunches through the images to spot patterns, identifying the person’s facial features and how their face looks when smiling, when certain sounds are made, and so on. Then you show it a video of another face saying the things you want the fake to say, and it uses that as a template to generate new images that match the mouth movements, eyebrows and other facial tics—making it look as though your target really was saying those shocking things.
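For the curious, the classic face-swap setup can be caricatured in a few lines of code. This is only a toy sketch of the idea—a shared “encoder” that compresses any face into a small code, plus one “decoder” per person—with made-up random weights standing in for the millions of parameters a real deepfake app would train on hours of footage:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: flattened 8x8 "face" images, 16-number latent code.
IMG, LATENT = 64, 16

# One shared encoder, plus a separate decoder per identity (A and B).
# These weights are random placeholders; a real system trains them on video.
W_enc = rng.standard_normal((LATENT, IMG)) * 0.1
W_dec_a = rng.standard_normal((IMG, LATENT)) * 0.1
W_dec_b = rng.standard_normal((IMG, LATENT)) * 0.1

def encode(face):
    # Compress a face into a small code capturing expression and pose
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    # Rebuild a face, in one particular person's likeness, from that code
    return W_dec @ code

# The swap itself: encode a frame of person A, but decode it with person B's
# decoder, yielding B's face wearing A's expression.
frame_a = rng.standard_normal(IMG)
swapped = decode(encode(frame_a), W_dec_b)
print(swapped.shape)
```

The trick is that because both people pass through the same encoder, the code it produces describes *what the face is doing* rather than *whose face it is*—so swapping decoders swaps the identity while keeping the performance.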

"With grim inevitability, many female celebrities have been deepfaked into much worse"

Unsurprisingly, the technology has already led to some mischief. There are already countless viral videos where Nicolas Cage has been “deepfaked” to star as Indiana Jones, James Bond and so on. And with grim inevitability, many female celebrities have been deepfaked into much worse.

Deepfake danger

Perhaps the area where there is the most concern about the new technology, though, is in politics. What is striking about deepfakes is that almost anyone with a little technical know-how can make them. You can literally download the software to do it for free, and have it running in minutes. And it is easy to imagine how someone with bad intentions could use the technology to cause harm: imagine a video of President Biden or President Putin announcing that he was launching nuclear missiles… Let’s hope that Washington and Moscow don’t respond in kind too hastily.

"Usually we think of 'video evidence' as the unimpeachable truth"

In fact, both the Obama and Queen videos were designed, in a sense, to warn of the dangers that deepfakes could pose. Because usually we think of “video evidence” as the unimpeachable truth—but now, the worry goes, how can we trust anything we see, if it can, in theory, be faked? 

And the scale of the challenge from AI-generated imagery is, if anything, getting even worse. Last year saw an explosion of new AI tools that have been “trained”, like a deepfake, on tens of millions of images from across the web, and which can generate entirely new images from just a few keywords. For example, an app called “Stable Diffusion”, which like the deepfake apps can run on almost any computer, can be fed a text prompt, say “Princess Leia in the style of Monet”, and it will generate almost exactly that—no humans required.
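The core loop behind tools like Stable Diffusion is surprisingly simple to caricature: start with pure noise, then repeatedly remove a little of the noise, steering toward whatever the model has learned the prompt should look like. The sketch below is a heavily simplified stand-in—the `target` array plays the role of the learned answer to the prompt, whereas the real system predicts the noise with a large neural network conditioned on the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend this is what the model "knows" the prompt should produce
# (in reality this comes from a trained network, not a fixed array).
target = rng.standard_normal(64)

# Step 0: the "image" is pure random noise.
image = rng.standard_normal(64)

for step in range(50):
    # Toy "denoiser": estimate how far the image is from the target...
    predicted_noise = image - target
    # ...and strip away a fraction of that noise each step.
    image = image - 0.1 * predicted_noise

# After enough steps, the noise has been sculpted into the target.
print(np.abs(image - target).max())
```

Each pass removes only a little noise, which is why these tools generate images over dozens of steps rather than in one shot.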

Princess Leia deepfake

Princess Leia in the style of Monet

However, here’s the strange thing. I’m not actually that worried about deepfakes and AI-art tools being used to spread fraud and misinformation. Why? Because we don’t actually need them. 

We’ve all clicked like on an image purporting to show the crowd at a protest, even though it’s a photo of people at an entirely different gathering. And we’ve all shared a made-up news story, just because it is unflattering to a politician we don’t like. So we don’t need sophisticated new technical wizardry to fool ourselves into believing something we really want to believe. Even if deepfakes do become widespread it won’t make much difference—we’re already too good at tricking ourselves. 


