TechDigits

Tech news
Thursday, Apr 18, 2024

Deepfakes are now trying to change the course of war

In the third week of Russia's war in Ukraine, Volodymyr Zelensky appeared in a video, dressed in a dark green shirt, speaking slowly and deliberately while standing behind a white presidential podium featuring his country's coat of arms. Except for his head, the Ukrainian president's body barely moved as he spoke. His voice sounded distorted and almost gravelly as he appeared to tell Ukrainians to surrender to Russia.

"I ask you to lay down your weapons and go back to your families," he appeared to say in Ukrainian in the clip, which was quickly identified as a deepfake. "This war is not worth dying for. I suggest you to keep on living, and I am going to do the same."

Five years ago, nobody had even heard of deepfakes, the persuasive-looking but false video and audio files made with the help of artificial intelligence. Now, they're being used to impact the course of a war. In addition to the fake Zelensky video, which went viral last week, there was another widely circulated deepfake video depicting Russian President Vladimir Putin supposedly declaring peace in the Ukraine war.

Experts in disinformation and content authentication have worried for years about the potential to spread lies and chaos via deepfakes, particularly as they become more and more realistic looking. In general, deepfakes have improved immensely in a relatively short period of time. Viral videos of a faux Tom Cruise doing coin flips and covering Dave Matthews Band songs last year, for instance, showed how deepfakes can appear convincingly real.

Neither of the recent videos of Zelensky or Putin came close to TikTok Tom Cruise's high production values (they were noticeably low resolution, for one thing, which is a common tactic for hiding flaws). But experts still see them as dangerous, because they show the lightning speed with which high-tech disinformation can now spread around the globe. As deepfake videos become increasingly common, they make it harder to tell fact from fiction online, all the more so during a war that is unfolding online and rife with misinformation. Even a bad deepfake risks muddying the waters further.

"Once this line is eroded, truth itself will not exist," said Wael Abd-Almageed, a research associate professor at the University of Southern California and founding director of the school's Visual Intelligence and Multimedia Analytics Laboratory. "If you see anything and you cannot believe it anymore, then everything becomes false. It's not like everything will become true. It's just that we will lose confidence in anything and everything."

Deepfakes during war


Back in 2019, there were concerns that deepfakes would influence the 2020 US presidential election, including a warning at the time from Dan Coats, then the US Director of National Intelligence. But it didn't happen.

Siwei Lyu, director of the computer vision and machine learning lab at University at Albany, thinks this was because the technology "was not there yet." It just wasn't easy to make a good deepfake, which requires smoothing out obvious signs that a video has been tampered with (such as weird-looking visual jitters around the frame of a person's face) and making it sound like the person in the video was saying what they appeared to be saying (either via an AI version of their actual voice or a convincing voice actor).
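
For readers who want to picture what that smoothing involves, here is a minimal, purely illustrative sketch (in Python, using OpenCV and NumPy) of one generic technique: feathering the blend mask so a synthetic face fades into the original frame instead of leaving the hard, jittery edge Lyu describes. The function and its parameters are hypothetical and are not drawn from any specific deepfake tool.

```python
# Illustrative sketch only: feathering the blend mask is one generic way
# synthesis pipelines hide the seam around a swapped face. Hypothetical code,
# not taken from any particular tool.
import cv2
import numpy as np

def blend_face(original: np.ndarray, synthetic: np.ndarray,
               face_mask: np.ndarray, feather_px: int = 15) -> np.ndarray:
    """Alpha-blend a synthetic face region into the original frame with a soft edge."""
    # Soften the hard 0/1 mask so the generated region fades in without a visible seam.
    ksize = feather_px * 2 + 1
    soft = cv2.GaussianBlur(face_mask.astype(np.float32), (ksize, ksize), 0)
    soft = soft[..., None]  # broadcast the mask over the 3 color channels
    return (soft * synthetic + (1.0 - soft) * original).astype(np.uint8)
```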

Now, it's easier to make better deepfakes, but perhaps more importantly, the circumstances of their use are different. The fact that they are now being used in an attempt to influence people during a war is especially pernicious, experts told CNN Business, simply because the confusion they sow can be dangerous.

Under normal circumstances, Lyu said, deepfakes may not have much impact beyond drawing interest and getting traction online. "But in critical situations, during a war or a national disaster, when people really can't think very rationally and they only have a very truly short span of attention, and they see something like this, that's when it becomes a problem," he added.

Snuffing out misinformation in general has become more complex during the war in Ukraine. Russia's invasion of the country has been accompanied by a real-time deluge of information hitting social platforms like Twitter, Facebook, Instagram, and TikTok. Much of it is real, but some is fake or misleading. The visual nature of what's being shared — along with how emotional and visceral it often is — can make it hard to quickly tell what's real from what's fake.

Nina Schick, author of "Deepfakes: The Coming Infocalypse," sees deepfakes like those of Zelensky and Putin as signs of the much larger disinformation problem online, which she thinks social media companies aren't doing enough to solve. She argued that responses from companies such as Facebook, which quickly said it had removed the Zelensky video, are often a "fig leaf."

"You're talking about one video," she said. The larger problem remains.

"Nothing actually beats human eyes"


As deepfakes get better, researchers and companies are trying to keep up with tools to spot them.

Abd-Almageed and Lyu use algorithms to detect deepfakes. Lyu's solution, the jauntily named DeepFake-o-meter, allows anyone to upload a video to check its authenticity, though he notes that it can take a couple hours to get results. And some companies, such as cybersecurity software provider Zemana, are working on their own software as well.
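
The DeepFake-o-meter's internals aren't described here, but upload-and-score detectors of this kind generally share a similar shape: sample frames from the video, score each one with a trained real-vs-fake classifier, and aggregate the scores. The sketch below illustrates that general pipeline under those assumptions; `score_frame` is a hypothetical stand-in for whatever model a given lab actually uses, not the DeepFake-o-meter's real interface.

```python
# Generic frame-sampling detection pipeline (assumptions noted in the text above).
import cv2
import numpy as np

def score_video(path: str, score_frame, sample_every: int = 10) -> float:
    """Return the mean 'fake' probability over sampled frames (0.0 = real, 1.0 = fake)."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            # frame is an HxWx3 BGR array; score_frame is a hypothetical classifier
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    return float(np.mean(scores)) if scores else 0.0
```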

There are issues with automated detection, however, such as that it gets trickier as deepfakes improve. In 2018, for instance, Lyu developed a way to spot deepfake videos by tracking inconsistencies in the way the person in the video blinked; less than a month later, someone generated a deepfake with realistic blinking.
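
The blink-based approach Lyu described in 2018 can be pictured with a simple heuristic: track eye landmarks frame by frame, compute an "eye aspect ratio" that dips when the eye closes, and flag videos whose blink rate is implausibly low. The sketch below is a generic illustration of that idea, not a reconstruction of Lyu's actual detector, and it assumes the per-frame eye landmarks come from an external face-tracking library.

```python
# Illustrative blink-rate heuristic; landmark extraction (e.g. from a face tracker)
# is assumed and not shown. Not a reconstruction of any published detector.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye; small values indicate a closed eye."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(ear_series: list, fps: float,
                      closed_threshold: float = 0.2) -> float:
    """Count dips of the eye aspect ratio below the threshold, normalized by video length."""
    closed = np.array(ear_series) < closed_threshold
    # A blink is a transition from open (False) to closed (True) between frames.
    blink_count = int(np.sum(~closed[:-1] & closed[1:]))
    minutes = len(ear_series) / fps / 60.0
    return blink_count / minutes if minutes > 0 else 0.0
```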

Lyu believes that people will ultimately be better at stopping such videos than software. He'd eventually like to see (and is interested in helping with) a sort of deepfake bounty hunter program emerge, where people get paid for rooting them out online. (In the United States, there has also been some legislation to address the issue, such as a California law passed in 2019 prohibiting the distribution of deceptive video or audio of political candidates within 60 days of an election.)

"We're going to see this a lot more, and relying on platform companies like Google, Facebook, Twitter is probably not sufficient," he said. "Nothing actually beats human eyes."
