Data privacy backlash pushes Apple, Twitter to tighten user protection

Apple has delayed its controversial child safety tracking features while Twitter introduces Safety Mode to protect users.
6 September 2021

Apple will delay the rollout of its controversial new child sexual abuse material detection tools, which some have accused of undermining the privacy of its devices and services. (Photo by John MACDOUGALL / AFP)

Data privacy issues continue to have a huge influence on the decisions big tech companies like Apple and Facebook make today. As regulations are amended to give users more protection, any change in data privacy rules by a big tech player can have a huge impact on its products.

One of the biggest backlashes over a privacy policy change hit WhatsApp earlier this year. When the messaging app announced that it would change its privacy policy to allow content to be used for targeted marketing, many users were not pleased.

In fact, WhatsApp’s planned privacy policy amendments saw millions of its users uninstall the app and move to alternative messaging services. The app was also criticized globally, with the majority of users unhappy about the changes. Following the backlash, WhatsApp has delayed the changes and insists the amendments will not have any impact on its users’ data privacy.

More recently, Apple announced a planned detection technology update for its iPhones that would allow it to scan images on the devices for child abuse content. While the move is commendable, showing Apple’s seriousness in tackling child sexual abuse, digital privacy advocates and many users did not see it that way.

Data privacy concerns over Apple’s detection technology

Digital privacy advocates heavily criticized the move, concerned that Apple would be infringing users’ data privacy by scanning through the photos on their Apple devices. Weeks of debate followed, with Apple hosting numerous webinars and talks to explain to users how the new features would not infringe on data privacy.

However, a few weeks after the initial announcement and the widespread criticism, Apple delayed its plans to introduce the detection technology on its devices. In a statement, the company said, “Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

Apple had initially said that its method of detecting known child sexual abuse material (CSAM) was designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching against a database of known CSAM image hashes provided by the US National Center for Missing & Exploited Children (NCMEC) and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.
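To make the idea of on-device matching more concrete, the sketch below shows the general shape of checking local photos against a set of known-image hashes. It is a simplified, hypothetical illustration in Python: Apple’s actual system uses a perceptual hash (NeuralHash) that tolerates resizing and re-encoding, plus cryptographic blinding of the database, none of which is reproduced here. The hash values, file paths, and function names are placeholders, not Apple’s implementation.

```python
# Hypothetical illustration only: a real system would use a perceptual
# hash and a blinded, unreadable database rather than plain digests.
import hashlib
from pathlib import Path

# Stand-in for the hashed database shipped to the device (placeholder values).
KNOWN_HASHES = {
    "3f786850e387550fdab836ed7e6dc881de23001b",
}

def image_hash(path: Path) -> str:
    """Stand-in hash; a byte-exact digest is used here purely for illustration."""
    return hashlib.sha1(path.read_bytes()).hexdigest()

def scan_device_photos(photo_dir: Path) -> list[Path]:
    """Return photos whose hash appears in the known-hash set."""
    matches = []
    for photo in photo_dir.glob("*.jpg"):
        if image_hash(photo) in KNOWN_HASHES:
            matches.append(photo)
    return matches
```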

When the system flags a match, Apple manually reviews the report to confirm it, disables the user’s account, and sends a report to NCMEC. Users who feel their account has been mistakenly flagged can file an appeal to have it reinstated.

Critics feel that the manual review by a human, along with the on-device matching, sets a dangerous precedent and that the technology could be abused by authoritarian states.


The short message platform will be rolling out an enhanced ‘Safety Mode’. Source: Twitter

Creating a safer environment 

Meanwhile, Twitter also announced planned updates to improve data privacy. The company is currently testing new privacy-related features that will give users greater control over their follower lists and over who can see their posts and likes.

According to Jarrod Doherty, Senior Product Manager at Twitter, the ‘Safety Mode’ feature temporarily blocks accounts for seven days for using potentially harmful language, such as insults or hateful remarks, or for sending repetitive and uninvited replies or mentions.

“When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet’s content and the relationship between the Tweet author and replier. Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be auto blocked,” said Doherty in a blog post.
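As a rough sketch of how the two signals Doherty describes might be combined, the hypothetical Python below scores reply content and then exempts accounts the user follows or frequently interacts with. Twitter has not published its actual model or thresholds, so the scoring function, data structures, and cutoff value here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    author_id: str
    text: str

def toxicity_score(text: str) -> float:
    """Placeholder classifier returning a 0..1 harmfulness score.
    A real system would use a trained model, not a word list."""
    insults = {"idiot", "trash"}
    return 1.0 if any(word in text.lower() for word in insults) else 0.0

def should_autoblock(reply: Reply,
                     followed: set[str],
                     frequent_contacts: set[str],
                     threshold: float = 0.8) -> bool:
    """Combine the content score with the existing relationship: per the
    blog post, accounts you follow or frequently interact with are
    never auto-blocked."""
    if reply.author_id in followed or reply.author_id in frequent_contacts:
        return False
    return toxicity_score(reply.text) >= threshold
```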

Social platforms like Twitter, Facebook, and Instagram continue to face mounting criticism as abuse claims dog their services. While some countries and organizations have begun taking action against hate speech on these apps, detecting and identifying the offending users is often a major sticking point. Twitter’s new features may help cut down on reported abuse on the platform.

As technology and social media platforms continue to play an important role in society, big tech players and social media companies need to do the best they can not only to protect their users’ data privacy, but also to ensure their platforms are safe environments for vulnerable users.