How we are losing the arms race against deepfake technology


If a lie can get halfway around the world before the truth has even laced up its boots, consider this scenario.

Late in the evening, a video bearing the presidential seal emerges from the Oval Office. The US President sits behind the Resolute Desk, casting a steely gaze down the barrel of the camera: “My fellow Americans.” The president explains that provocations in the South China Sea, the build-up of troops on the Indian border and other perceived acts of aggression mean that China can no longer be considered a responsible nation state. After consulting with its major allies, including Australia and the United Kingdom, the president says, the United States has no choice but to launch a powerful pre-emptive military strike against China. The president concludes: “God bless our troops.”

The video is not real, of course. It is a “deepfake”, manufactured quickly and inexpensively by a rogue state or a private organization. To add a sense of authenticity, the video appears on a hacked official White House social media channel.

Now consider what happens next. In the fraught minutes that follow, how would the Chinese authorities react? Can we be sure that verification would precede retaliation?

Deepfakes, or synthetic media, are still and moving images manipulated by sophisticated simulation software and artificial intelligence (AI). Using a machine learning system called a deep neural network, the software examines a person’s facial movements and synthesizes images of someone else – possibly an actor filmed in front of a green screen – performing the same movements.
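Conceptually, the classic face-swap pipeline pairs one shared encoder with a separate decoder per identity: the encoder learns identity-agnostic facial structure, and swapping happens by decoding one person’s expression with the other person’s decoder. The toy sketch below illustrates only that wiring – the random weights, layer sizes and names are illustrative assumptions, not taken from any real deepfake toolkit:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random weights stand in for trained parameters in this toy sketch.
    return rng.standard_normal((n_in, n_out)) * 0.1

# One shared encoder learns identity-agnostic facial structure...
W_enc = layer(1024, 64)     # flattened 32x32 face -> 64-dim latent code
# ...while each identity gets its own decoder.
W_dec_a = layer(64, 1024)   # reconstructs faces of person A
W_dec_b = layer(64, 1024)   # reconstructs faces of person B

def encode(face):
    return np.tanh(face @ W_enc)

def decode(code, W_dec):
    return code @ W_dec

# The "swap": run person A's expression through the shared encoder,
# then decode with person B's decoder -> B's face making A's movements.
face_a = rng.standard_normal(1024)
latent = encode(face_a)
swapped = decode(latent, W_dec_b)

print(latent.shape)   # (64,)
print(swapped.shape)  # (1024,)
```

In a trained system the two decoders are fitted on thousands of frames of each face, which is why the swapped output inherits B’s appearance while tracking A’s expression.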

Synthesized videos could be used, for example, to place a religious leader in a compromising position or put words in the mouth of a US president. And with the exponential improvement in synthetic media technology, the results are becoming more realistic and more frightening. In March, the FBI’s Cyber Division issued a private industry notification warning that “malicious actors will almost certainly take advantage of synthetic content for cyber and foreign influence operations in the next 12–18 months”.

Some famous examples – featuring the likenesses of Tom Cruise and Barack Obama – show the potential of deepfakes to create an alternative narrative. In 2020, Indian politician Manoj Tiwari used deepfake technology to produce a video of himself speaking in a Hindi dialect, letting him address more voters directly.

“The problem is, it’s very difficult to counter,” says AI expert Toby Walsh. “You will come to a point where, unless you have literally seen it with your own eyes, you cannot be sure that it has not been rigged. It’s so convincing that you won’t be able to tell it apart from the real thing.”

Walsh is a Laureate Fellow and Scientia Professor of AI in the School of Computer Science and Engineering at UNSW, where he explores the concept of “trusted AI”. He believes the emergence of deepfake technology is a danger to every society, and fears it will have a significant impact on politics.

“An election could easily be swung by a rival who publishes a [deepfake] video at the last moment,” he says. “There will be no time to prove that a video of someone in a compromising situation, or saying something compromising, was not true. That is enough to change the outcome of an election.”

Walsh says the existence of synthetic imaging technology could even allow politicians to claim something real is fake. “Previously, if you were caught on camera saying something unpleasant or unacceptable, you had to own it,” he says. “But now you don’t have to. For example, [former US president Donald] Trump denied something that, to our knowledge, he did say – something unsavory about women. He can just dismiss it as a ‘deepfake’ and get away with it.”

The deepfake “arms race”

As synthetic media technology continues to improve, detection becomes ever more difficult. The State University of New York’s Siwei Lyu, Ming-Ching Chang and Yuezun Li found an early way to distinguish real videos from deepfakes by examining eye-blinking patterns, but understood this would not be a permanent solution as the technology progressed. Meanwhile, media outlets such as The Washington Post and Duke University’s Reporters’ Lab are developing techniques to help fact-checkers tag manipulated videos to alert social media platforms or search engines.
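The blink-based approach hinges on a simple statistic: people blink every few seconds, while early deepfakes rarely did, because training data contained few closed-eye frames. Here is a minimal sketch of that idea; the eye-aspect-ratio values, threshold and blink-rate cutoff are illustrative assumptions, not the SUNY team’s actual method or parameters:

```python
def count_blinks(ear_values, threshold=0.2):
    """Count blinks in a sequence of per-frame eye-aspect-ratio (EAR)
    values: a blink is a run of frames where the EAR dips below the
    threshold (eyes closed) and then recovers (eyes reopen)."""
    blinks, closed = 0, False
    for ear in ear_values:
        if ear < threshold and not closed:
            closed = True        # eyes just closed
        elif ear >= threshold and closed:
            blinks += 1          # eyes reopened: one complete blink
            closed = False
    return blinks

def looks_synthetic(ear_values, fps=30, min_blinks_per_minute=6):
    # Humans blink roughly 15-20 times per minute; far fewer is a red flag.
    minutes = len(ear_values) / fps / 60
    return count_blinks(ear_values) / minutes < min_blinks_per_minute

# 10 seconds of video: a real clip with 3 blinks vs. a "deepfake" with none.
real = ([0.3] * 90 + [0.1] * 5) * 3 + [0.3] * 15
fake = [0.3] * 300
print(looks_synthetic(real))  # False
print(looks_synthetic(fake))  # True
```

As the article notes, this kind of cue is fragile: once generators were trained on footage that includes blinking, the signal largely disappeared.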

Facebook (now Meta) says it has found a method to detect and attribute deepfakes by relying on “reverse engineering from a single image generated by AI to the generative model used to produce it”. The social media giant claims the technique “will make it easier to detect and trace deepfakes in the real world, where the deepfake image itself is often the only piece of information that detectors have to work with.”

Walsh believes the private and public sectors are unlikely to develop tools that can outsmart everyone seeking to deceive. “It’s an ongoing arms race,” he says. “They will always be able to synthesize deepfakes better than detectors can spot them.

“There’s no way you can protect yourself from this unless you get a certificate of provenance – proving that a video is from, say, the ABC or the BBC. Even then, you have to be careful that people cannot forge the watermarks.”
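A certificate of provenance of the kind Walsh describes can be sketched with standard cryptographic signing: the broadcaster signs a hash of the exact bytes it published, and any later tampering breaks the signature. The toy below uses Python’s standard-library `hmac` with a shared secret for brevity; a real scheme (such as the C2PA content-credentials standard) would use public-key signatures so anyone can verify without holding the broadcaster’s secret. All names and values are illustrative:

```python
import hashlib
import hmac

# Illustrative broadcaster signing key; real systems use asymmetric keys.
BROADCASTER_KEY = b"abc-newsroom-secret"

def issue_certificate(video_bytes):
    """Broadcaster signs a hash of the exact video bytes it published."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(BROADCASTER_KEY, digest, hashlib.sha256).hexdigest()

def verify_certificate(video_bytes, certificate):
    """Any tampering with the video changes its hash and breaks the tag."""
    return hmac.compare_digest(issue_certificate(video_bytes), certificate)

original = b"presidential-address.mp4 contents"
cert = issue_certificate(original)

print(verify_certificate(original, cert))             # True
print(verify_certificate(b"deepfaked frames", cert))  # False
```

This also illustrates Walsh’s caveat: provenance only proves who published a clip, not that its content is true, and the scheme collapses if the signing key (or watermark) can be forged or stolen.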

Walsh believes there are only a limited number of ways for the global community to start dealing with deepfakes. The first is better education: “You have to teach people to be skeptical about what they see and hear, and [only] consult reputable sources who will have done the groundwork for you, so you can be confident [in what you see].”

A second is self-regulation – an international standard requiring deepfake creators to mark their work as “synthetic” or “manufactured”. A third is to put the onus on platforms – especially Facebook, YouTube, TikTok and Twitter – to ensure the material they host is “real”.

A fourth could be that governments ban the technology altogether. “We might need to have that kind of a conversation,” Walsh says. “There was a time when you couldn’t export powerful crypto tools from the United States because they were considered too great a threat to national security to fall into the wrong hands.”

Like facial recognition technology, which also relies on AI, synthetic image manipulation offers “so many disadvantages and so few advantages,” says Walsh. “This technology has disturbing implications, and I don’t think we’re prepared for the challenges it will pose.”
