A “massive anti-vaccination campaign” has been cited by the European Commission as a reason for social media platforms to intensify their factchecking and revise the internal algorithms that can amplify disinformation.
Under a revised code of practice proposed by Brussels, companies such as Facebook, Google and Twitter would need to show why particular material is disseminated and prove that false information is being blocked.
The code would be voluntary but will work alongside an upcoming digital services act, under which companies could be fined up to 6% of their annual revenue for failing to remove illegal content where harm can be proven. Messaging services such as WhatsApp could also be covered by the code.
Social media companies that sign up will be better able to show they are dealing with online falsehoods and avoid financial penalties.
Věra Jourová, a European Commission vice-president, said the details of how the code will work would be discussed with the signatories, with the intention that it will come into force in 2022.
She said: “We see a very massive anti-vaccination campaign, which can really hinder our efforts to get people vaccinated and to get rid of Covid.
“Also we see the impact not only on individuals but also on our democratic systems, on our elections, because the combination of micro-targeting techniques and well-tailored disinformation is something which can be winning the elections, and this is what we do not want to see in Europe.”
Jourová, a former minister in the Czech Republic, said the commission did not want to hinder freedom of speech but the platforms needed to be more effective in factchecking through independent operators.
In light of Twitter’s decision in January to block Donald Trump from using its platform, Jourová said the commission was seeking to distinguish between fact and opinion, the latter of which it was not the job of the commission to police.
“We would like them to embed the factchecking into the system so that it is systemic action, that the factchecking is more intense and so it also guarantees that the platforms themselves will not be those to decide,” she said. “We have had many discussions in light of what we saw in the United States where the platforms already reacted, for instance on President Trump’s tweets and so on.
“I lived in communist Czechoslovakia and I remember well the functioning, and very bad impact on the society, of the Soviet ministry of information. This is not what we want to introduce in Europe.”
Platforms including Twitter, Microsoft and TikTok signed up to the previous code of conduct, established in 2018, but this was widely seen to have failed in its objective of demonetising disinformation.
“A new stronger code is necessary as we need online platforms and other players to address the systemic risks of their services and algorithmic amplification, stop policing themselves alone and stop allowing to make money on disinformation, while fully preserving the freedom of speech,” Jourová said.