
AI and deepfakes might actually save American democracy from itself

September 6, 2024

(This column originally appeared in The Hill)

Last week, California’s legislature passed a bill banning deepfakes, citing concerns over how AI tools are increasingly being used to trick voters, among other crimes. I say, good luck with that!

Deepfakes are just one part of the political misinformation campaigns carried out by some of our most trusted platforms over the last few years.

Mark Zuckerberg admitted that Facebook had suppressed news during the pandemic at the behest of the White House. Elon Musk, when he took over X, released a trove of documents showing that the former leaders of his company had done the same. And the Russians, Chinese and Iranians are all accused of manipulating social media with bots delivering misinformation in order to dupe American voters.

In case you’ve been hiding under a rock these past few years, a “deepfake” is, according to the U.S. Government Accountability Office, “a video, photo, or audio recording that seems real but has been manipulated” using artificial intelligence technology, and can “depict someone appearing to say or do something that they in fact never said or did.”

My firm implements software and technology that rely on AI, and I frequently discuss the inherent risks with my clients. One of the biggest is the misinformation created by deepfakes. And it’s a serious problem.

The technology has evolved significantly and rapidly. It can do some pretty crazy stuff. Deepfake voices can dupe an employee into transferring millions to a criminal account. Deepfake developers can impersonate CEOs by creating new videos from existing content. The technology can trick customer service reps into revealing private information, or make it seem like celebrities like Taylor Swift are giving away prizes or — much worse — doing porn.

With all due respect and sympathy to Swift, deepfakes in politics can be even more damaging. AI has been used to imitate campaign phone calls from President Biden, insert profanity into former President Obama’s talks, and even to show Donald Trump and Kamala Harris walking hand in hand on the beach.

OK, that last one’s kind of funny. But really, it’s not.

Last month, a manipulated video ricocheting across X appeared to show Joe Biden cursing his critics — including using anti-gay slurs — after he announced he would not seek reelection. There was also an image shared across platforms that appeared to show police forcibly arresting Trump. These may seem obviously fake, but such images can change people’s opinions. They can sway elections and alter politics.

Regulators from D.C. to Colorado (and of course California) are getting involved, passing laws to limit the power of AI and the companies that build these applications. But regulation may not be the ultimate solution to the deepfake problem. Instead, it’s worth considering just letting it all play out.

Yes, that’s right: Don’t ban deepfakes. Allow them. Encourage them. Let them proliferate.

Why? Because the more deepfakes appear in social media news feeds, the more that news gets diluted with non-news. And that’s a good thing.

This is already happening. When you see a video of a politician online spouting some controversial view or doing something outrageous, do you find yourself wondering whether it’s real? Why are you asking this question? Because you’ve probably been duped by deepfakes.

Maybe you thought that Pope Francis was wearing a huge white puffer jacket, or that The Weeknd and Drake collaborated on a song. Nope. So now you’re starting to doubt, right?

You may say that the American people are dumb and will always be duped. But that’s not really true. It’s just that some take longer than others to learn.

Already, according to one recent survey, 90 percent of people claim that they fact-check news stories. People do become wiser, given time. So as AI and deepfake technology get better and better, we will question more and more. Fear and doubt about deepfakes will invade the minds not only of the young and tech-savvy but also of grandmas and grandpas in Florida. It won’t happen overnight, but it will happen. It’s already happening.

So what then? People will inevitably stop relying on social media for their news. They won’t just take every video or story on their Facebook or X feed at face value. They will decide that maybe it’s fake and maybe it’s not.

This doubt will inevitably drive voters to seek out better, independent, objective and more reliable media sources they can trust. You’re still going to have your left and right media — that will never change — but at least it won’t be full of misinformation from China or AI-generated videos of fake events like we see now on social media.

Say what you want about today’s big media platforms, but the reputable ones aren’t posting videos of Donald Trump and Kamala Harris walking hand in hand on a beach.

Am I overly optimistic? Naive? Probably. Experts are concerned about the effects of deepfake technology. But I’m inclined to think that it will ultimately soften the impact of the social platforms, neutralize their influence on public opinion and improve our news sources.

Banning deepfakes isn’t the answer. Letting them run their course may actually save the media — and our political system.
