Facebook has reversed its policy banning posts suggesting that Covid-19 was man-made, on the heels of renewed debate over the origins of the virus, which first emerged in China.
The latest move by Facebook, announced late Wednesday on its website, highlights the challenge of policing misinformation and disinformation on the world's largest social network.
"In light of ongoing investigations into the origin of Covid-19 and in consultation with public health experts, we will no longer remove the claim that Covid-19 is man-made or manufactured from our apps," the statement said.
"We're continuing to work with health experts to keep pace with the evolving nature of the pandemic and regularly update our policies as new facts and trends emerge."
The new statement updates guidance Facebook issued in February, when it said it would remove false or debunked claims about the novel coronavirus, which has caused a global pandemic that has killed more than three million people.
The move followed President Joe Biden's directive to US intelligence agencies to investigate competing theories on how the virus first emerged -- through animal contact at a market in Wuhan, China, or through accidental release from a research laboratory in the same city.
Biden's order signals an escalation of the mounting controversy over the origins of the virus.
Facebook's actions impact content posted by some 3.45 billion active users of its applications, including its core social network, Instagram, WhatsApp and Messenger.
Facebook has relied largely on independent fact-checking groups, which until now had widely dismissed the theory of a laboratory release.
One of these groups, PolitiFact, reported last September that public health authorities had "repeatedly said the coronavirus was not derived from a lab." Earlier this month, however, it revised its guidance, saying "that assertion is now more widely disputed" and that it would continue to review the matter.
Facebook, in a separate statement, said it was stepping up its efforts to curb misinformation by limiting the reach of users who "repeatedly" share false content.
Until now, Facebook had only taken this action on individual posts, but it will now clamp down on the users who are the biggest spreaders of false content.