In a letter published on the Center for Democracy and Technology website, a coalition of more than 90 groups from around the world urged Apple CEO Tim Cook to drop plans to introduce the surveillance feature – known as CSAM hash scanning – designed to detect child sexual abuse imagery stored in iCloud.
The letter, published on Thursday, points to the use of “notoriously unreliable” machine learning algorithms to scan for sexually explicit images in the ‘Messages’ service on iOS devices. It notes that this could result in alerts that “threaten the safety and well-being” of young people with abusive parents.
“iMessages will no longer provide confidentiality and privacy to those users through an end-to-end encrypted messaging system in which only the sender and intended recipients have access to the information sent,” the groups warned.
They added that the technology could also open the door to “enormous pressure” and legal compulsion from various governments to scan for images deemed “objectionable” – such as those depicting protests or human rights violations, or even “unflattering images” of politicians.
Signatories to the letter include the American Civil Liberties Union, Electronic Frontier Foundation, Access Now, Privacy International, and the Tor Project. A number of overseas groups have also raised concerns about the policy’s impact on countries with different legal systems.
An Apple spokesman told Reuters the company had previously addressed privacy and security concerns, noting that last week it released a document detailing how the scanning software’s layered architecture would allow it to resist attempts at abuse.
Earlier this month, a separate letter posted on GitHub and signed by privacy and security experts, including former NSA whistleblower Edward Snowden, condemned the “privacy-invasive content scanning technology”. It also warned that the policy “threatens to undermine fundamental privacy protections” for users, under the guise of child protection.
Other concerns have been raised about the possibility of “false positives” in the hash-scanning feature, which computes an image’s ‘hash’ – a string of letters and numbers that is unique to the image – and matches it against databases provided by child protection agencies like the National Center for Missing and Exploited Children (NCMEC).
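The matching step described above can be sketched in a few lines. This is a deliberately simplified illustration using a plain cryptographic hash: Apple’s actual system uses NeuralHash, a perceptual hash designed to survive resizing and re-encoding, and the database entries below are placeholders, not real NCMEC data.

```python
import hashlib

# Hypothetical database of known-image hashes (hex digests), standing in
# for the kind of list an agency such as NCMEC would supply. The value
# here is a placeholder generated from dummy bytes.
known_hashes = {
    hashlib.sha256(b"example-known-image").hexdigest(),
}

def matches_database(image_bytes: bytes) -> bool:
    """Hash the image and check it against the database.

    SHA-256 is used here only for simplicity: a cryptographic hash
    changes completely if even one byte of the image changes, which is
    why real CSAM scanning relies on perceptual hashing instead.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in known_hashes

print(matches_database(b"example-known-image"))        # True: exact match
print(matches_database(b"a completely different image"))  # False
```

The key point for the false-positive debate is the membership test on the last line: any two inputs that happen to produce the same hash are indistinguishable to it.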
Although a recent Apple FAQ claimed the likelihood of a false positive was “less than one in one trillion [incorrectly flagged accounts] per year”, researchers this week reported the first case of a “hash collision” – where two completely different images produce the same hash.
According to TechCrunch, “hash collisions” are a “death knell” for systems that rely on hash matching.
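Why collisions are inevitable in principle is a consequence of the birthday problem: any hash maps an unbounded set of inputs into a fixed number of values, so distinct inputs must eventually share one. The toy sketch below makes this visible by truncating SHA-256 to 16 bits (a far smaller space than any real hash, chosen only so a collision appears in a fraction of a second); the `tiny_hash` helper and the input strings are illustrative inventions.

```python
import hashlib

def tiny_hash(data: bytes, bits: int = 16) -> int:
    """Keep only the top `bits` bits of SHA-256, shrinking the hash
    space so collisions become easy to find by brute force."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

# Feed distinct inputs until two of them land on the same tiny hash.
seen: dict[int, bytes] = {}
collision = None
i = 0
while collision is None:
    data = f"input-{i}".encode()
    h = tiny_hash(data)
    if h in seen:
        collision = (seen[h], data)  # two different inputs, same hash
    else:
        seen[h] = data
    i += 1

a, b = collision
print(a != b, tiny_hash(a) == tiny_hash(b))  # True True
```

With 16 bits there are only 65,536 possible values, so a collision typically turns up after a few hundred inputs. Real hashes make accidental collisions astronomically rarer, but the researchers’ finding shows they are not impossible for a perceptual hash like NeuralHash.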
However, the tech news outlet said Apple downplayed the concerns in a press call, arguing that it had safeguards in place – including human moderators reviewing flagged incidents before they are reported to law enforcement – to guard against false positives.