China has released new rules banning online video and audio providers from using deep learning to produce fake news, as countries around the world continue to battle online disinformation and so-called deepfake technology.
The new regulation, published on Friday, says that both providers and users of online video and audio news services may not use new technologies such as deep learning and virtual reality to create, distribute or broadcast fake news.
The regulation comes about one-and-a-half months after California introduced legislation to make political deepfakes illegal, outlawing the creation or distribution of videos, images, or audio of politicians doctored to resemble real footage within 60 days of an election. In April, the European Union released a strategy to investigate online disinformation, including deepfakes.
Deepfakes are manipulated videos, or other digital representations produced by sophisticated artificial intelligence, that yield fabricated images and sounds appearing to be real. The technology can be used, for example, to overlay images of celebrity faces on other people’s bodies, fooling viewers.
But China’s regulation is much broader, covering not just political news but wider use of such technologies, including but not limited to deep learning and virtual reality.
“With the adoption of new technologies, such as deepfake, in online video and audio industries, there have been risks in using such content to disrupt social order and violate people’s interests, creating political risks and bringing a negative impact to national security and social stability,” said the cyberspace authority in a notice regarding the introduction of the regulation on Friday.
With smartphones and camera apps becoming increasingly sophisticated, users can now elongate legs, change eye colour and alter a myriad of other features to create false photographs and videos of people that look real, putting such manipulation within reach of the masses.
The new Chinese regulation, jointly published by three government agencies including the country’s top internet watchdog, the Cyberspace Administration of China, takes effect on January 1, 2020.
It requires providers and users of online video and audio news services to clearly label any content created, distributed or broadcast using new technologies such as deep learning.
It also requires content providers to use technology to detect audio and video news content that has been fabricated or potentially manipulated.
Concerns over deepfakes have grown since the 2016 US election campaign, which saw increased use of online misinformation, according to investigations by US authorities.
In China, deepfakes started to grab headlines in September when a Chinese app that lets users swap their faces with film or TV characters – very convincingly – went viral. The face-swapping app known as Zao became No 1 on the free entertainment app list in the Apple App Store within two days of its debut.