Opinion

Is Deepfake Technology Capable of Destroying Democracy?

Are you familiar with the identity-swapping power of deepfake technology? Armed with just a collection of photos, a video to superimpose your face onto, and deepfake software capable of mapping your face, anyone can make it look as though you’ve done something you haven’t.
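
To give a rough idea of what that face-mapping step involves, here is a minimal, illustrative Python sketch using the open-source face_recognition and OpenCV libraries (the file names are hypothetical, and real deepfake tools go much further, training neural networks on large photo collections to transfer expressions and lighting rather than simply pasting pixels):

```python
# A toy face "swap": locate a face in one image and crudely paste it onto
# another frame. Assumes both images contain a detectable face and that
# face_recognition and opencv-python are installed.
import cv2
import face_recognition

source = face_recognition.load_image_file("your_photo.jpg")    # hypothetical file
target = face_recognition.load_image_file("video_frame.jpg")   # hypothetical file

# face_locations returns (top, right, bottom, left) boxes for each face found
src_top, src_right, src_bottom, src_left = face_recognition.face_locations(source)[0]
tgt_top, tgt_right, tgt_bottom, tgt_left = face_recognition.face_locations(target)[0]

# Crop the source face and resize it to the size of the target face region
face = source[src_top:src_bottom, src_left:src_right]
face = cv2.resize(face, (tgt_right - tgt_left, tgt_bottom - tgt_top))

# Overwrite the target face region. There is no blending or expression
# transfer here; that gap is exactly what trained deepfake models fill.
target[tgt_top:tgt_bottom, tgt_left:tgt_right] = face
cv2.imwrite("swapped_frame.jpg", cv2.cvtColor(target, cv2.COLOR_RGB2BGR))
```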

Creepy, right?

But how damaging could this emerging tech really be? Is it, as some people have suggested, a ticking time bomb capable of dividing us and destroying democracy as we know it?

In this Northcoders Discuss, we asked three members of the Northcoders team to weigh in with their thoughts on the subject: content manager, Lucy; onboarding manager, Josh; and senior tutor, Jonny.

First up with her thoughts on the topic is Lucy...

Deepfake technology is easy to use and has resulted in a tidal wave of face-swapped fake celebrity porn and doctored images flooding the internet. But celebrities aren’t the only ones falling victim to these identity scams. Since deepfakes first gained mainstream attention a couple of years ago, the software has become easier to access and attacks are no longer limited to public figures. AI-generated revenge porn targeting ordinary people, especially women, is on the rise. The technology is devastating for its victims because the intent behind it is almost always to hurt and degrade, and the results can look convincingly real.

Deepfake technology may have given rise to some murky posts on the internet, but there’s way more to it than just creepy Redditors, vengeful exes and porn. Take the recent Star Wars movie, where Carrie Fisher was recreated using AI, or the hours of Nicolas Cage deepfakes on YouTube.

Nick Cage DeepFakes Movie Compilation

It's clear that the major danger of this tech is that it’s getting harder and harder to distinguish what’s real from what’s not. And, as impressive as that is, it’s also damn scary.

After all, it might be possible to spot a fake now, but what about in six or 12 months? Who is going to police the internet, helping victims to remove defamatory content? Once released, is it ever really going to be possible to wipe all traces of a video from the internet, even with the new GDPR regulations and the ‘Right to Be Forgotten’ and ‘Right to Erasure’ clauses?

"It’s almost impossible to erase a video once it’s been published to the internet,” law professor Eric Goldman told the Verge earlier this year. “We have to look for better ways of technologically verifying content. We also have to train people to become better consumers of content so that they start with the premise that this could be true or this could be fake, and I have to go and figure out that before I take any actions or make any judgements.

"It absolutely bears repeating that so much of our brains' cognitive capacities are predicated on believing what we see. The proliferation of tools to make fake photos and fake videos that are indistinguishable from real photos and videos is going to test that basic, human capacity."

What then, we have to wonder, does deepfake tech mean for our society? A society that is so driven by the notion of photographic and video ‘evidence’. One that is so focused on documenting everything, from birthdays to robberies, with video or photos to prove that it did, or did not, happen.

Take a look at the video below. It's pretty obvious that the video of Donald Trump is a deepfake, but only when it’s seen side by side with the original sketch featuring Alec Baldwin.

Comedy it may be, but that hasn’t stopped it from being banned on YouTube in Canada and the USA. Advanced deepfakes have the power to influence the political views of people across the globe.

Trump | Deepfakes Replacement

If deepfake tech has the power to reinvent reality, doesn't it then follow that it also has the power to erode any trust we have left in a fair democracy? After all, what is a democracy built on if not trust and belief? Does fake news + deepfakes = democratic disaster?

It’s not hard to imagine that an American citizen shown the Trump deepfake on the right of the video above, without seeing the original, might at first glance mistake it for the genuine article. The result? The video could shape their view of the president and influence how they vote in the future. Deepfake tech wields a significant amount of power.

I believe deepfake technology will (and should) make us question everything we see online. If we're not vigilant, it could very easily erode our trust in democracy until there's nothing left but suspicion, misinformation and lies. 

Josh, what do you think?

Before I watched a Northcoders graduate talk about deepfakes at this summer's lightning talks, I was aware of the concept but had no idea how easy it is to create these videos. A quick online search provides countless links to free software packages, tutorials and examples.

My biggest concern is not how this content affects those we traditionally associate with fakery, like politicians and celebrities, but ordinary people who are not prepared to be targeted by it. It’s much harder to discredit a video when you don’t have a lawyer, a PR team and the financial muscle to prove it’s a fake.

By combining deepfake technology with someone’s Facebook or iCloud account, creating a video to discredit a colleague, ex or rival becomes trivial. Even after you prove a video is fake, the damage has already been done. As technology and these videos become more elaborate, it will become harder for ordinary people to prove deepfake videos are fake. It seems inevitable that this blurring of fiction and reality will raise questions about how much we can trust any image or video.

It seems deterrence is the best long-term solution to preventing the damage caused by deepfake videos. That is currently problematic because in the UK there are no laws around the creation of fake non-consensual images. In my opinion, it’s vital that the law changes to protect the victims of deepfake content.

Thank you, Josh. Jonny, over to you…

I think Josh is absolutely right that any individual could be subjected to all kinds of malicious behaviour - financial fraud, inappropriate representation, sabotage, and so on. These all have potentially devastating consequences, but arguably we do in some way have the structures to deal with such criminal intent. One could see how laws regarding slander and fraud could be adapted to deal with people who used technology in this way, even as the technical challenges in identifying the crime mount. These sorts of criminal and judicial systems have a rocky history, but they have a history nevertheless, and people are at least capable of adopting them as a lens through which to condemn this behaviour.

I’m more concerned with what Josh alluded to when he talked about trusting ‘any’ image or video. The last few years in particular have highlighted how credulous we are as a general population, and how the assertion of untruth as truth can have real political impact on society. Theoretically, technology should be empowering us more and more to be critical thinkers as data gets closer to our fingertips. But while I don’t believe past generations were any more or less gullible in essence, technology has not risen as a unified grand dome above us, inspiring and informing us all equally, but as little bubbles in which our prejudices can be reflected, perhaps magnified, often normalised, back at us.

So if and when the time comes that our ability to ascertain whether videos of a certain event are faked or not falls below an acceptable confidence level, what is to stop people in positions of power carrying out their whims in full glare of cameras, knowing they can rely on a defence of ‘that’s a fake video’?

Sources where we are not accustomed to particularly high-quality video come to mind: police body cameras, CCTV, purported ‘home video’ footage. That defence being applied to such footage seems an inevitable result of the situation.

When the ability to analyse DNA matured, we were able to look back at the past, uncover miscarriages of justice and apprehend many who thought they were safe. I wonder if there will be a period decades in the future akin to this, when our current abilities feel primitive once again and we discover how many mistakes we had made - in any case, it would be small comfort to those deceived or deprived now.

What does the future hold?

Thank you to all of our Northcoders Discuss contributors. It's safe to say the deepfake debate won't be going away anytime soon. What do you think? Is deepfake tech the future? Will it cause more harm than good? Let us know your thoughts!

Anyone can learn to code

With coding bootcamps now available in both Leeds and Manchester, it's never been easier to access our industry-leading coding education. Ready to get started? Download our free ultimate guide to your first four weeks of code.