Why Apple's abuse detection system isn't working for everyone

The tech giant is facing criticism over a new system that finds child sexual abuse material (CSAM) on US users' devices.
9 August 2021

Why is Apple being criticized for its system that detects child abuse? (Photo by Mladen ANTONOV / AFP)

  • The tool, known as neuralMatch, will scan images before they are uploaded to the company’s iCloud Photos online storage
  • Apple will also examine the contents of end-to-end encrypted messages for the first time
  • Concerns are mounting that the technology could be expanded and used by authoritarian governments to spy on their own citizens

Last Friday, Apple unveiled plans to scan US iPhones for images of child sexual abuse, via updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey. The new system will search for matches of known child sexual abuse material (CSAM) before an image is uploaded to iCloud Photos. As promising as it sounds, the announcement has raised alarm among experts who warn that Apple's new system could open the door to surveillance of millions of people's personal devices.

According to Apple’s latest technical paper published this month, the new feature uses a “neural matching function,” called NeuralHash, to assess whether images on a user’s iPhone match known “hashes,” or unique digital fingerprints, of CSAM. It does this by comparing images uploaded to iCloud against a large database of CSAM imagery compiled by the National Center for Missing and Exploited Children (NCMEC). If enough matching images are discovered, they are flagged for review by human operators, who then alert NCMEC.
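At a high level, the matching step amounts to a lookup against a list of known fingerprints plus a reporting threshold. The Python sketch below illustrates only that idea: the hash function, the placeholder database, and the threshold value are illustrative assumptions, not Apple's actual NeuralHash, which is a perceptual hash designed to match visually similar images rather than exact byte copies.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Illustrative stand-in for a perceptual hash such as NeuralHash.
    # A real perceptual hash maps visually similar images to the same value;
    # hashing raw bytes, as done here, only catches exact copies.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of known fingerprints (hashes of toy byte strings).
KNOWN_HASHES = {image_hash(b"known-image-1"), image_hash(b"known-image-2")}

# Hypothetical threshold: flag an account for human review only after
# this many uploads match the database.
MATCH_THRESHOLD = 2

def should_flag_account(uploads: list[bytes]) -> bool:
    """Count uploads whose fingerprint appears in the known database."""
    matches = sum(1 for image in uploads if image_hash(image) in KNOWN_HASHES)
    return matches >= MATCH_THRESHOLD

# Usage: two matching uploads cross the illustrative threshold.
print(should_flag_account([b"known-image-1", b"known-image-2", b"holiday-photo"]))  # True
```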

Separately, Apple also plans to scan users’ encrypted messages for sexually explicit content as a child safety measure, which also alarmed privacy advocates. The company was one of the first major companies to embrace end-to-end encryption, in which messages are scrambled so that only their senders and recipients can read them. Law enforcement has long pressed the company for access to that information. Apple said the latest changes would roll out this year as part of updates to its operating software for iPhones, Macs and Apple Watches.

Notably, the detection system will only flag images that are already in the center’s database of known child pornography. Apple said that new versions of iOS and iPadOS will have “new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy”. Apple also confirmed that the scanning technology is part of a new suite of child protection systems that would “evolve and expand over time”.

To recall, Apple has been under government pressure for years to allow for increased surveillance of encrypted data. Introducing the new security measures means Apple must perform a delicate balancing act between cracking down on the exploitation of children and keeping its high-profile commitment to protecting the privacy of its users.

Tech companies including Microsoft, Google, Facebook and others have for years been sharing digital fingerprints of known child sexual abuse images. Apple has used those to scan user files stored in its iCloud service, which is not as securely encrypted as its on-device data, for child pornography. As recently as 2018, tech firms reported the existence of as many as 45 million photos and videos that constituted child sex abuse material—a terrifyingly high number.

What are the tech implications of the new Apple system?

People are primarily uncomfortable with the prospect of constant, on-device surveillance by an algorithm. It is also reasonable to expect more public discussion before the feature launches, because the potential long-term implications of such a program have many privacy advocates and organizations deeply worried.

Will Cathcart, head of WhatsApp at Facebook, says the Apple system “could very easily be used to scan private content for anything they or a government decides it wants to control. Countries where iPhones are sold will have different definitions on what is acceptable”.

He also argued on Twitter that WhatsApp’s system to tackle child sexual abuse material has reported more than 400,000 cases to the US National Center for Missing and Exploited Children without breaking encryption. Apple, for its part, claims there is only a one-in-one-trillion chance of its NeuralHash system producing a false positive, and that there is an appeals process already in place for users who wish to dispute any results.
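To see how a per-image error rate and a match threshold can combine into a very small account-level rate, here is a back-of-the-envelope sketch in Python. The per-image false-positive rate, the threshold, and the yearly upload count are made-up illustrative numbers; Apple has not published the figures behind its one-in-one-trillion claim, and this is not its model.

```python
from math import comb

# Illustrative assumptions only; these are not Apple's published figures.
per_image_fp = 1e-6       # assumed chance one innocent photo falsely matches
threshold = 30            # assumed number of matches before human review
photos_per_year = 10_000  # assumed uploads by a typical account in a year

# Probability of at least `threshold` false matches in a year, treating each
# upload as an independent coin flip (binomial tail). The terms shrink very
# quickly, so summing a short window past the threshold is enough.
p_account_flagged = sum(
    comb(photos_per_year, k)
    * per_image_fp**k
    * (1 - per_image_fp) ** (photos_per_year - k)
    for k in range(threshold, threshold + 40)
)
print(f"Chance an innocent account is flagged: {p_account_flagged:.1e}")
```

Under these assumed numbers the account-level probability is vastly smaller than the per-image rate, which is the general effect a reporting threshold is meant to achieve; the real figures depend entirely on NeuralHash's actual collision behaviour.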

Still, the Electronic Frontier Foundation, a digital rights group, believes it is “a fully-built system just waiting for external pressure to make the slightest change”. EFF also highlighted that scanning capabilities similar to Apple’s tools could eventually be repurposed to make its algorithms hunt for other kinds of images or text, which would effectively amount to a workaround for encrypted communications, one designed to police private interactions and personal content.

On the other hand, Matthew Green, a security professor at Johns Hopkins University who is believed to be the first researcher to tweet about the issue, said, “This will break the dam — governments will demand it from everyone.”

Separately, in a statement published last week, the Center for Democracy and Technology took aim at the iMessage update, calling it an erosion of the privacy provided by Apple’s end-to-end encryption: “The mechanism that will enable Apple to scan images in iMessages is not an alternative to a backdoor—it is a backdoor,” the Center said. “Client-side scanning on one ‘end’ of the communication breaks the security of the transmission, and informing a third-party (the parent) about the content of the communication undermines its privacy.”

Because of all these concerns, a cadre of privacy advocates and security experts has written an open letter to Apple, asking that the company reconsider its new features. As of Sunday, the letter had over 5,000 signatures. Whether all this will deter the tech giant's plans remains unknown, but in an internal company memo leaked Friday, Apple’s software VP Sebastien Marineau-Mes acknowledged that “some people have misunderstandings and more than a few are worried about the implications” of the new rollout, but that the company will “continue to explain and detail the features so people understand what we’ve built.”