Sophisticated digital imaging technology is being used to create 'deepfakes': images and videos that are indistinguishable from reality.
Things have moved on from celebrity faces being superimposed onto porn performers. More complicated algorithmic techniques can depict people doing things they've never done and saying things they never said or would never dream of saying. As the doctored video of Nancy Pelosi appearing to slur her speech shows, they can now have a political agenda.
Professor Hany Farid of the University of California, Berkeley, specialises in detecting the signs of digital manipulation, and is trying to design systems that can tell the real from the fake before things go viral.
He's now working on new detection methods for deepfakes, and has worked with a US military research agency on media forensics. But with hundreds of hours of video being uploaded to YouTube's servers every minute, it's an all-but-impossible task, and the stakes are high if things get missed.
He told Kim Hill that while deepfakes are nothing new, the technology has now extended beyond the realm of “the Hollywood studios”, so that the average internet user can use it to create sophisticated and compelling fakes.
“[Combined] with the reality of social media today which is that the same people who can now create this content can distribute it to the world, to the millions and millions of people within hours, in many ways that’s the new risk, it’s not the phenomenon itself, it’s the scale at which it can now be deployed on society.”
However, he says he's aware that every time they develop a detection method, the deepfake innovators will up their game too. It's a constantly evolving game, and he says his aim is to do the best he can: to narrow the space of people who can get away with it and to manage the risks.
“The way I think about these forensic techniques is that they do not eliminate the ability to create a fake, what it does is take it out of the hands of the average person, it makes it more difficult, more time-consuming, more risky to do it because you're more likely to get caught.
“There will always be people, like me, who can create compelling and sophisticated fakes, there will always be the Hollywood studios, but while it's a risk, it's a more manageable risk than millions and millions of people who can bombard the internet with fake video, news and images.”
The pace of advancement has been clear over the past 12 to 18 months, with deepfakes trending towards higher quality that is harder to detect, Prof Farid says.
“I don't think it's a stretch of the imagination to say in the next year or two years, between the synthesis of text, images, audio, video, it's going to become harder and harder to believe what we see and hear online.
“I’ll add that there’s one, in some ways, larger threat here, because as we enter a world where computers can synthesise news stories and images and videos, suddenly nothing is real, because everybody has plausible deniability.
“So any time a politician is caught doing or saying something that is embarrassing or illegal they can simply say it's fake and they'll have plausible deniability.”
That eventually leads to what's been described as “truth decay”, which Prof Farid says, in combination with the ease of access of social media, the increasing polarisation of society and the different agendas at play, stirs up “the perfect storm”.
And the threat is not abstract or far off, Prof Farid says; examples can already be seen around the world.
“My understanding is that the folks at Twitter have deleted many, many accounts that they have linked back to the Chinese because of their use in disinformation [in the Hong Kong protests] ... It’s happened in the Brexit campaign in the UK, it happened here in the US for our last national election … we’ve seen horrific violence in Myanmar, and Sri Lanka, and the Philippines, and India, all surrounded by fake news.”
A large portion of the problem lies with the social media platforms, which have done a poor job of moderating deepfakes, or of investing in the ability to do so, Prof Farid says.
He says there are mechanisms, beyond being able to detect a fake, that platforms can employ as barriers to the spread of fake information, including changing their business models to discourage abuse of their sites.
“For example, because the business model of YouTube and Facebook and Twitter is that we are not the customer, we don’t pay them, they have created a system where it is very easy to get on board.
“So bots can create account after account after account, and we call this a frictionless system: it is designed to have zero friction in the system, so they can maximise their user base, so they can maximise their advertising dollars, that is the underlying business practice of Silicon Valley and social media in particular.”
And now that we've reached an age where technology is being weaponised, it's time for the people in charge to take responsibility and raise ethical questions before making code available to the rest of the world, Prof Farid says.
“You don’t have to literally hand a loaded gun to the average person on the internet in order to push the boundaries of science and technology.
“I'm an advocate of simply thinking through these issues before we rush into things, which is frankly why we're in the mess we are today with social media, because it has been very much the 'move fast and break things' mindset and not 'innovate, move slowly, and don't break things', which is I think the more responsible way to think.”
Meanwhile, in June this year the US House of Representatives Intelligence Committee pressed social media giants on how they're handling the problem of deepfakes. And while the response was underwhelming, Twitter pointed to pre-existing policies that “prohibit coordinated account manipulation, malicious automation, and fake accounts”.
Facebook chief executive Mark Zuckerberg had said a month earlier that posting false information was not against the site's rules, but acknowledged that the company did not respond quickly enough to the Pelosi video.
Zuckerberg also said Facebook is considering developing a specific policy on deepfakes.
However, Prof Farid says it's time for legislators and advertisers to crack down and put some constraints on the platforms, because that will put pressure on them to do better at moderation.
“What the technology platforms will want you to believe is that they can automate the process of content moderation and it’s simply not true ... they need to start investing in more human moderation, they need to start investing in technology and they need to start investing in changes in the underlying business model to deal with this issue.
“You can’t build a monstrosity like this and say 'well I can’t manage it now', you have to find a way to fix the problem you created.
“It is [incumbent] on our legislators to start putting in regulatory constraints and start … putting serious constraints on these platforms and get them to wake up.
“Where change has to come from is where all the revenue is, and that's advertisers. There are 20 CEOs in the world who are spending billions and billions of dollars on these platforms, advertising, which is fuelling Silicon Valley, and they can turn around tomorrow and say we've had enough of the mess that is social media, we're simply going to stop advertising on your platforms … watch how fast these platforms get smart and watch how fast they get good at content moderation.”
Eventually, we may reach a point where devices with cameras employ media verification technologies to signal that content, such as a video or image, can be trusted, Prof Farid says.