From Israel to the U.S., Deepfake Videos Are Becoming a Major Threat to Democracy

AI-generated images, audio and video are increasingly affecting our ability to separate fact from fiction in the political sphere. Today, Donald Trump could plausibly deny the infamous ‘Access Hollywood’ tape, one expert warns

Omer Benjakob
A combination photograph showing an image purporting to be of British student and freelance writer Oliver Taylor, and a heat map of the same photograph produced by Tel Aviv-based deepfake detection company Cyabra. Credit: Cyabra/Reuters

If 2016 ushered in a wave of fake news and automated social media bots spreading misinformation, 2020 will be the year of the “deepfake election,” experts warn.

Citing the rise of synthetic media services that make it easier to purchase or create manipulated images, audio and video, they warn that deepfakes – as well as so-called cheapfakes – could pose a real threat to democratic processes across the world, including the U.S. presidential election on November 3.

Deepfakes, whose underlying techniques date back to the 1990s but which rose to prominence only in the past two years, are manipulated videos or other digital representations produced by sophisticated machine-learning methods that yield seemingly realistic images and sounds. They can be used to fabricate convincing faces, or videos that make it appear as if people said or did things they hadn’t.

At the beginning of the month, Facebook announced it had taken down two closed groups and more than a dozen accounts linked to a Russian disinformation campaign. Graphika, a social analytics company that specializes in identifying disinformation and helped Facebook uncover the operation, revealed in a subsequent report that all of these fake accounts had used artificial intelligence-generated images as seemingly real profile pictures on Facebook, Twitter, LinkedIn and other social media platforms, lending the users an air of legitimacy.

Graphika noted the small number of bogus accounts used in the apparently Russian operation – just 13 in all – to highlight what set it apart from previous fake news operations. “Rather than the thousands of accounts that the original IRA ran to reach for a mass audience in 2014-2017 ... this operation conducted its targeting with pinpoint precision,” the report stated. IRA refers to the Internet Research Agency, a Russian troll farm based in St. Petersburg that successfully operated numerous fake social media accounts in the years surrounding the 2016 U.S. presidential election.

“This is the first time we have observed known IRA-linked accounts use AI-generated avatars,” the report added. “The ease with which influence operations can now leverage machine learning to generate fake profile pictures is an ongoing concern.”

In Israel recently, anti-Netanyahu activists said they had uncovered a small network of fake users that likewise used preexisting, manipulated images as profile photos. Meanwhile, in the United Kingdom, an activist couple known for taking the Israeli surveillance firm NSO to court and advocating for Palestinian rights was branded “known terrorist sympathizers” in a smear campaign that Reuters revealed to be the work of a single fake avatar whose profile picture was also a deepfake.

In this instance, too – as in the case involving the Russian-linked accounts – the deepfake was used across media platforms, the fake user even maintaining a media presence with bylines in legitimate news outlets that had unwittingly allowed the avatar to publish op-eds. Cases like these, Graphika wrote, “continue a trend of influence operations trying to use a smaller number of more convincing and carefully crafted accounts.”

The rise of cheapfakes

Deepfakes are becoming an increasingly serious concern worldwide, compounding the problems created by fake news. For example, according to Graphika, last year saw the first known cases of Chinese deepfakes, and a Chinese network of English-language deepfake videos was taken down last month, along with the scores of accounts used to disseminate them.

Like the Russian-linked accounts, these had AI-generated images as profile pictures – highlighting how synthetic media technology can be used for both distribution and content creation.

Giorgio Patrini is the founder and CEO of Amsterdam-based Sensity, a “visual threat intelligence” firm that is working with Microsoft to raise public awareness of the threat of political deepfakes. He tells Haaretz that he and others in the field “believe the tools for creating synthetic media, both visual and audio, are increasingly being used as weapons.”

Video: An online demo of Neural Voice Puppetry, an audio-driven facial reenactment system.

Inbal Orpaz, who researches synthetic media as a potential national security threat at Israel’s Institute for National Security Studies, agrees. She warns that the proliferation of deepfakes “has clear ramifications for our ability to have a political discourse clean of external influence efforts.”

The most common use of deepfakes – about 96 percent – is “deep porn,” in which the faces of famous actors are grafted onto the bodies of performers in pornographic videos. But Patrini says he has also seen a rise in politically focused activity over the past year. Indeed, Sensity – then called Deeptrace – revealed in 2019 that online campaigns in Gabon and Malaysia had employed deepfakes for political purposes. The firm says over 30 U.S. politicians have been targeted in similar ways.

“We focus on what attackers or bad actors didn’t have in their tool kits in 2016: namely, tools to manipulate images and video and also audio, not just bots or fake news,” Patrini says.

“When we talk about targeted attacks on political figures, they’re usually targeted with a single video that then gets amplified on social media,” he continues. “With regard to the potential danger that deepfakes pose to the political sphere, what we’ve seen most of – and it’s still a massive part – is cheapfakes, or low-end tools for simple modifications like speeding up or slowing down video, or partially altering images with splicing.”
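To illustrate how low the bar is, here is a minimal sketch of such a speed-change cheapfake in Python with OpenCV. The file names and the slowdown factor are illustrative assumptions; the point is that the frames themselves are never touched and no AI is involved at all.

```python
# A minimal sketch of a speed-change "cheapfake," assuming OpenCV
# (pip install opencv-python) and a local clip named "speech.mp4".
import cv2

SLOWDOWN = 0.75  # illustrative: play back at 75 percent speed

reader = cv2.VideoCapture("speech.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Re-encode the identical frames at a reduced frame rate. When played
# back, the subject appears to slur and stumble over their words.
writer = cv2.VideoWriter(
    "speech_slowed.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps * SLOWDOWN,
    (width, height),
)

while True:
    ok, frame = reader.read()
    if not ok:
        break
    writer.write(frame)  # frames are copied untouched; only timing changes

reader.release()
writer.release()
# Note: OpenCV drops the audio track; a real cheapfake would also slow
# the audio, which lowers the pitch of the speaker's voice.
```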

One prominent example was seen in April, when President Donald Trump retweeted a cheapfake of Joe Biden making a contorted face. The image in question was a GIF manipulated with the help of simple image-editing tools. A month later, Trump tweeted an altered video of House Speaker Nancy Pelosi that purported to show her stammering. That video, it was soon revealed, was made with the help of more complex tools that make use of AI.

“If you want to target someone and have a specific video in mind, you don’t need to resort to a high-end deepfake, since you need only one image or video to go viral,” Patrini explains. “On the other hand, if you’re going to create something that will scale, then you need an algorithm. These tools allow most of the hard work to be done by the algorithm. So, if you want to create bots and want them to have realistic profile images, Photoshop will take tons and tons of hours. But a program will do the imagery for free, and be [ready] within minutes.”
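Patrini’s point about scale is easy to demonstrate. The sketch below is a hypothetical example rather than anything tied to a real operation: it pulls avatar images from thispersondoesnotexist.com, a well-known public demo site that returns a new GAN-generated face on every request (assuming the site is still online). The loop count and file names are illustrative.

```python
# Hypothetical sketch: collecting GAN-generated faces for bot avatars.
# thispersondoesnotexist.com serves a fresh StyleGAN face per request;
# requires the requests library (pip install requests).
import time

import requests

for i in range(10):
    resp = requests.get(
        "https://thispersondoesnotexist.com",
        headers={"User-Agent": "research-sketch"},
    )
    resp.raise_for_status()
    with open(f"avatar_{i:02d}.jpg", "wb") as f:
        f.write(resp.content)  # a photorealistic face of a nonexistent person
    time.sleep(1)  # rate-limit politely; this is a demo site, not an API
```

Each image is unique, free and produced in about a second – exactly the economics Patrini describes.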

According to Orpaz, “It’s the rise in the different types of tools for creating synthetic media that allows operations like these to be scaled, and generally makes it economically lucrative to develop these tools.” Patrini concurs, saying his firm has noticed a rise in “both supply and demand” for such services, which are usually offered illegally or via the darknet.

Cottage industry

There are over 150 private companies worldwide working in the field of “synthetic media” – the industry’s preferred term for legitimate, AI-created images or audio – at least 20 of them Israeli startups. One is CannyAI, a firm whose synthetic video services let companies produce a single video in multiple languages.

The company’s co-founder, Omer Ben-Ami, takes issue with the claim that companies such as his are somehow to blame for the rise in political deepfakes.

“The reason deepfakes are so easily used in nefarious ways so far – mostly for pornography – is because of open source projects or academic papers published on the subject,” he charges. “In a way, if deepfakes as an open source project didn’t exist, there wouldn’t be anything substantial to fight against,” he adds, noting that his company doesn’t provide public access to its technology.

For him, political cheapfakes are just a “form of video editing,” and “the cheapfake of Nancy Pelosi could be done with any common visual effects or video-editing software.”

Ben-Ami recognizes the threat of manipulated videos – “for example, if someone wants to extort someone else” – but explains that “synthetic videos are usually easily discernible. Social media plays a role here when it allows content to spread even when it’s clearly fake. But this is part of a broader discussion about fake news, of which synthetic media is just a small part.”

Firms like CannyAI, Sensity and others – including the Israeli company Cyabra, which works with media outlets and social media firms, helping to detect fake images – all aim to raise awareness of the problem.

“While we think the tech is still far from creating something completely undetectable, we think it’s important for people to be updated on the possibilities,” Ben-Ami says. “What we’re doing, hopefully, is raising the issue that videos shouldn’t be 100 percent trusted at a quick glance, and that the tech is improving and might reach a point where all these previously detectable ‘tells’ disappear.”

In this file photo taken on January 28, 2020, the Microsoft logo is seen at the International Cybersecurity Forum in Lille, France. Credit: AFP

Such “tells” can include asymmetry between a person’s eyes – a glitch known to occur with deepfake technology, and one that lends some AI-generated profile pictures a somewhat uncanny feel.
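One can sketch what an automated check for this particular tell might look like. The snippet below is a crude heuristic, not Cyabra’s or Sensity’s method: it finds two eyes with OpenCV’s bundled Haar cascades, mirrors one and measures how badly the patches disagree. The file name and threshold are assumptions, and real faces with glasses, shadows or a turned head would fool it.

```python
# Rough heuristic for the "mismatched eyes" tell in AI-generated faces.
# Assumes opencv-python and numpy; "profile.jpg" is an illustrative input.
import cv2
import numpy as np

img = cv2.imread("profile.jpg", cv2.IMREAD_GRAYSCALE)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
eyes = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)

if len(eyes) >= 2:
    # Keep the first two detections, ordered left to right in the image.
    (x1, y1, w1, h1), (x2, y2, w2, h2) = sorted(eyes[:2], key=lambda e: e[0])
    size = (48, 48)  # normalize both eye patches to a common size
    left = cv2.resize(img[y1:y1 + h1, x1:x1 + w1], size).astype(np.float32)
    right = cv2.resize(img[y2:y2 + h2, x2:x2 + w2], size).astype(np.float32)

    right = cv2.flip(right, 1)  # mirror, so a symmetric face would match
    score = float(np.mean(np.abs(left - right))) / 255.0  # 0 = identical

    print(f"eye asymmetry score: {score:.3f}")
    if score > 0.25:  # arbitrary illustrative threshold
        print("unusually mismatched eyes - worth a closer look")
else:
    print("could not locate two eyes")
```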

Cyabra CEO and co-founder Dan Brahmy told Haaretz that it’s important to remember the technology also has real value: “Synthetic media is infamous for misleading the public and undermining trust in the media. However, like most things in life, synthetic media can [also] be used for good. Art, entertainment and health are just a few areas that can, and have been, enhanced by deepfake technology.”

According to research conducted by Orpaz, these legitimate players are key to fighting the deepfake phenomenon. However, detection is a pricey business requiring large investments, which individual targets or media outlets generally can’t afford, she says.

Sensity is doing exactly this, and Patrini says his company is about to launch a tool – one Microsoft is set to use – that will allow real-time tracking of what he terms visual threats to politicians as they first appear online, at least until the November U.S. election. CannyAI, too, is in talks over sharing its data with other organizations for detection services.

However, the combined threat of cheap- and deepfakes poses a wider challenge for those offering technological solutions, Patrini admits: “If a bot creates 10,000 fake accounts, then in theory it’s possible to create a filter that will find these. On the other hand, the more humans who are involved in the loop, the more difficult it becomes to automate the detection process.”

Facebook, Amazon Web Services and Microsoft put up $1 million in prize money in their search for such solutions, yet the machine-learning engineer who won the “deepfake detection challenge” they launched in September 2019 achieved an accuracy rate of only 65 percent – barely better than a coin flip on a task with only two possible answers. “This isn’t enough for us to trust automatic detection tools and for other platforms to embed them within their systems,” Orpaz says.

Brahmy says his firm’s “technology uses algorithms that analyze hundreds of unique parameters to determine if a profile is real or fake,” reaching what he says are “unprecedented levels of accuracy.”

However, he does note that “it will take more than technology to win the fight. People are needed, too. Publishers, media outlets, governments and social media platforms all have a responsibility to fight disinformation.”

For all concerned, the real issue behind deepfakes is less a technological one than a social one: they undermine our ability to differentiate between fact and fiction.

“During the last presidential campaign, we saw Trump’s ‘Access Hollywood’ tape with the ‘grabbing’ comment,” Patrini says, referring to the release of the video in October 2016.

“If that were to happen today, any political campaigner could and would say ‘That’s not me’ – and they’d have real and justified plausible deniability. They just couldn’t say that in 2016 – that’s the biggest difference.”
