
Is Your Phone Spying On You?

Fact Sheet: Client-Side Scanning

What It Is and Why It Threatens Trustworthy, Private Communications

Encryption is a technology designed to help Internet users keep their information and communications private and secure. The process of encryption scrambles information so that it can only be read by someone with the “key” to unscramble it. Encryption protects day-to-day activities like online banking and shopping. It also keeps data unreadable if it is exposed in a data breach and ensures private messages stay private. Encryption is also crucial for protecting the communications of law enforcement, military personnel, and, increasingly, emergency responders.

End-to-End (E2E) encryption—where the keys needed to unscramble an encrypted communication reside only on the devices communicating—provides the strongest level of security and trust. By design, only the intended recipient holds the key to decrypt the message. E2E encryption is an essential tool to ensure secure and confidential communications. Adding message scanning, even if it is “client-side”, breaks the E2E encryption model and fundamentally breaches the confidentiality that users expect.

What is Client-Side Scanning?

Client-side scanning (CSS) broadly refers to systems that scan message contents—i.e., text, images, videos, files—for matches or similarities to a database of objectionable content before the message is sent to the intended recipient. For example, your anti-virus software may do this to find and disable malware on your computer.

With major platform providers moving towards implementing more E2E encryption, and calls by some in law enforcement to facilitate access to message contents to help identify and prevent the sharing of objectionable content[1], client-side scanning could emerge as the preferred mechanism to address objectionable content shared on E2E encrypted services without breaking the cryptography.

However, client-side scanning would compromise the privacy and security that users both assume and rely on. By making the contents of messages no longer private between the sender and receiver, client-side scanning breaks the E2E trust model. The complexity it adds could also limit the reliability of a communications system, and potentially stop legitimate messages from reaching their intended destinations.

Client-Side Scanning to Prevent the Sharing of Objectionable Content

When intended to prevent people from sharing known objectionable content, client-side scanning generally refers to a way for software on user devices (often referred to as “clients” and including smartphones, tablets, or computers) to create functionally unique[2] digital “fingerprints” of user content (called “hashes”). It then compares them to a database of digital fingerprints of known objectionable content such as malicious software (malware), images, videos, or graphics.[3] If a match is found, the software may prevent that file from being sent, and/or notify a third party about the attempt, often without the user being aware. Newer approaches to client-side scanning also try to detect previously unseen objectionable content using more sophisticated algorithms. Detecting new content reliably is a harder problem and makes false positives even more likely.
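As a rough sketch of the core mechanism (not any vendor’s actual implementation), the fingerprint-and-compare step might look like the following Python. The example database entry, the blocking behaviour, and the reporting hook are all hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints (hex SHA-256 digests) of known
# objectionable files. In a real deployment this would come from a third
# party and could contain millions of entries.
KNOWN_BAD_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(content: bytes) -> str:
    """Create a functionally unique digital fingerprint of the content."""
    return hashlib.sha256(content).hexdigest()

def scan_before_send(content: bytes) -> bool:
    """Return True if the message may be sent, False if it is blocked."""
    if fingerprint(content) in KNOWN_BAD_FINGERPRINTS:
        # A real system might also silently notify a third party here.
        return False
    return True
```

Deployed systems typically use perceptual fingerprints rather than plain cryptographic hashes so that re-encoded or lightly edited copies still match; the trade-offs of that choice are discussed later in this document.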

How Client-Side Scanning Works

There are two basic methods of client-side scanning for objectionable content on an E2E encrypted communications service. One performs the comparison of digital fingerprints on the user’s device; the other sends only the fingerprints to a remote server for comparison, while the content itself stays on the device.

1. Comparison performed on the user’s device (local digital fingerprint matching)
The application on a user’s device (phone, tablet, or computer) has an up-to-date full database of functionally unique digital fingerprints of known content of interest. The content that the user is about to encrypt and send in a message is converted to a digital fingerprint using the same techniques applied to digital fingerprints in the full database. If a match is found, or an algorithm classifies the content as likely objectionable, then the message may not be sent, and a designated third party (such as law enforcement authorities, national security agencies, or the provider of the filtering services) could be notified.

2. Comparison performed on a remote server
There can be significant challenges with maintaining a full database and sophisticated algorithms that perform real-time analysis on a user’s device. The alternative is to transmit the digital fingerprints of a user’s content to a server where a comparison with a central database is performed.
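A minimal sketch of the server-side variant, assuming a hypothetical HTTPS matching endpoint and JSON response format (both invented for illustration): the device uploads only the fingerprint, but the server operator still learns a fingerprint for everything the user attempts to send.

```python
import hashlib
import json
import urllib.request

# Hypothetical matching service; the URL and response format are
# illustrative only.
MATCHING_SERVICE = "https://css-matching.example.org/check"

def check_remotely(content: bytes) -> bool:
    """Ask a remote server whether this content's fingerprint is known.

    The content itself never leaves the device, but its fingerprint does.
    """
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"fingerprint": digest}).encode("utf-8")
    request = urllib.request.Request(
        MATCHING_SERVICE,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return bool(result.get("match", False))
```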

Problems with Client-Side Scanning for Objectionable Content

When the comparison of digital fingerprints is done on a remote server, it could allow the service provider, and anyone else with whom they choose to share the information, to monitor and filter content a user wants to send. When the comparison takes place on the user’s device, if third parties are notified of any objectionable content found, the same considerations apply. This fundamentally defeats the purpose of E2E encryption. Private and secure E2E encrypted communications between two parties, or among a group, are meant to stay private. If people suspect their content is being scanned, they may self-censor, switch to another service without client-side scanning, or use another means of communication.

It creates vulnerabilities for criminals to exploit: Adding client-side scanning functionality increases the ‘attack surface’ by creating additional ways to interfere with communications by manipulating the database of objectionable content. Adversaries with the ability to add digital fingerprints to the database and receive notifications when matches to those fingerprints are found would have a way to monitor select user content before it is encrypted and sent. This would allow them to track to whom, when, and where certain content was communicated. These fingerprints could include commonly used passwords or other information to enable attacks such as social engineering, extortion, or blackmail. By leveraging a system’s blocking features, criminals could even choose to block users from sending specific content. This could be targeted to impact legitimate uses, potentially impeding the communications of law enforcement, emergency response and national security personnel.

It creates new technical and process challenges: If comparisons are made on the user’s device, maintaining an up-to-date version of the full reference database and algorithms on every device presents its own set of challenges. These include potential process constraints (e.g., the process for adding fingerprints to or removing them from the database, and who has control over or access to it), the bandwidth needed to transmit updated versions of the database, and the processing power devices need to perform the comparison in real time. Installing the reference database on the client device also risks exposing it, giving criminals information about how the scanning system works. If comparisons are made on a central server, the digital fingerprint of content the user is attempting to send will be available to whoever controls that central server—regardless of whether it qualifies as “objectionable” in the view of the surveilling party. This opens a new set of issues around the security and privacy of users, potentially exposing details of their activity to anyone with access to the server.

Function creep—it could be used for other things: The same methods implemented in the hope of combating the worst of the worst (e.g., child exploitation or terrorism content, the two purposes most often cited to justify their use) can also be turned to mass surveillance and repressive purposes. A 2021 paper on the risks of client-side scanning noted that a CSS system could be built in a way that gives an agency the ability to preemptively scan for any type of content on any device, for any purpose, without a warrant or suspicion. Likewise, the same techniques used to prevent the distribution of child sexual abuse material (CSAM) can be used to enforce policies such as censorship and suppression of political dissent by preventing legitimate content from being shared or by blocking communications between users (such as political opponents). Restricting the database to include only fingerprints of images, videos, or URLs related to illegal activity (as some propose) is difficult. By creating digital fingerprints of more content to compare against user content, or by broadening the scope of an algorithm to classify additional types of user content as objectionable, whoever controls the system can screen for any content of interest. A client-side scanning system could be extended to monitor the text content of messages being sent, with clear and devastating implications for freedom of speech.

Lack of effectiveness: E2E encrypted communications systems exist outside the jurisdiction of any one government. A truly determined criminal would be able to switch away from services known to be using client-side scanning to avoid getting caught. It is technically simple for criminals to make modifications to objectionable content, thus changing the digital fingerprint and avoiding detection by the client-side scanning system.
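To see why trivial modifications defeat exact-fingerprint matching, consider the short Python sketch below: flipping a single bit of a file produces a completely different cryptographic digest, so a naive exact-match database no longer recognizes it.

```python
import hashlib

original = b"known objectionable file contents"
modified = bytearray(original)
modified[0] ^= 0x01  # flip a single bit

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(bytes(modified)).hexdigest())
# The two digests share no useful similarity. Perceptual hashes tolerate
# some edits, but published experiments show they too can be evaded or
# collided with modest effort.
```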

Conclusion

Stopping the spread of terrorist and child exploitation material is an important cause. However, it cannot be achieved by weakening the security of user communications to potentially monitor what people say to each other. Client-side scanning reduces overall security and privacy for law-abiding users while running the risk of failing to meet its stated law enforcement objective. E2E encryption guarantees billions of users around the world can communicate securely and confidentially.[4] Major platforms continue to move towards its adoption as a way to underpin trustworthiness in their platforms and services.[5] Client-side scanning in E2E encrypted communications services is not a solution for filtering objectionable content. Nor is any other method that weakens the core of the trusted and private communications upon which we all rely.

References

Internet Society, June 2018. Encryption Brief.

Matthew Green, December 2019. Can end-to-end encrypted systems detect child sexual abuse imagery?

Electronic Frontier Foundation, November 2019. Why Adding Client-Side Scanning Breaks End-To-End Encryption.

Centre for Democracy and Technology (CDT), 2021. Content Moderation in Encrypted Systems.

Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Vanessa Teague, Carmela Troncoso, October 2021. Bugs in Our Pockets: The Risks of Client-Side Scanning.

 


Endnotes:

[1] https://www.newamerica.org/oti/press-releases/open-letter-law-enforcement-us-uk-and-australia-weak-encryption-puts-billions-internet-users-risk/

[2] A system could be developed where the digital fingerprints are less unique, resulting in more pieces of content using the same fingerprint. However, where false positives could result in the use of serious resources (such as a police raid) designers of client-side scanning systems are incentivized to make the digital fingerprints as unique as possible.

[3] Client-side scanning is just one of the ways proposed for law enforcement or security agencies to gain access to encrypted user communications. For more information see: https://www.internetsociety.org/resources/doc/2018/encryption-brief/

[4] https://telegram.org/blog/200-million and https://www.newsweek.com/whatsapp-facebook-passes-two-billion-users-pledges-encryption-support-1486993

[5] https://www.facebook.com/notes/2420600258234172/


 

In addition to making end-to-end encryption available for iCloud Photos, Apple today announced that it has abandoned its controversial plans to detect known Child Sexual Abuse Material (CSAM) stored in iCloud Photos, according to a statement shared with WIRED.

Apple’s full statement:

After extensive consultation with experts to gather feedback on child protection initiatives we proposed last year, we are deepening our investment in the Communication Safety feature that we first made available in December 2021. We have further decided to not move forward with our previously proposed CSAM detection tool for iCloud Photos. Children can be protected without companies combing through personal data, and we will continue working with governments, child advocates, and other companies to help protect young people, preserve their right to privacy, and make the internet a safer place for children and for us all.

In August 2021, Apple announced plans for three new child safety features, including a system to detect known CSAM images stored in iCloud Photos, a Communication Safety option that blurs sexually explicit photos in the Messages app, and child exploitation resources for Siri. Communication Safety launched in the U.S. with iOS 15.2 in December 2021 and has since expanded to the U.K., Canada, Australia, and New Zealand, and the Siri resources are also available, but CSAM detection never ended up launching.

Apple initially said CSAM detection would be implemented in an update to iOS 15 and iPadOS 15 by the end of 2021, but the company ultimately postponed the feature based on “feedback from customers, advocacy groups, researchers, and others.” Now, after a year of silence, Apple has abandoned the CSAM detection plans altogether.

Apple promised its CSAM detection system was “designed with user privacy in mind.” The system would have performed “on-device matching using a database of known CSAM image hashes” from child safety organizations, which Apple would transform into an “unreadable set of hashes that is securely stored on users’ devices.”

Apple planned to report iCloud accounts with known CSAM image hashes to the National Center for Missing and Exploited Children (NCMEC), a non-profit organization that works in collaboration with U.S. law enforcement agencies. Apple said there would be a “threshold” that would ensure “less than a one in one trillion chance per year” of an account being incorrectly flagged by the system, plus a manual review of flagged accounts by a human.

Apple’s plans were criticized by a wide range of individuals and organizations, including security researchers, the Electronic Frontier Foundation (EFF), politicians, policy groups, university researchers, and even some Apple employees.

Some critics argued that the feature would have created a “backdoor” into devices, which governments or law enforcement agencies could use to surveil users. Another concern was false positives, including the possibility of someone intentionally adding CSAM imagery to another person’s iCloud account to get their account flagged.




Client-Side Scanning: Organized espionage on end devices

A basic principle of information security is access control: we are all used to data being available only to people and systems with the right permissions. The debate over searching Apple devices for prohibited image files has drawn renewed attention to so-called client-side scanning (CSS) technology.

Searching for specific content while bypassing access restrictions has always been an attractive shortcut. It is now becoming apparent that CSS creates serious problems that undermine the foundations of information security without delivering the hoped-for benefits. Instead, it opens additional security gaps.

Search of end devices

Recently, the EU Commission and law enforcement agencies have repeatedly raised the issue of circumventing secure encryption. Mathematically, strong encryption cannot be bypassed without escrowed duplicate keys or a deliberate weakening of the technologies used. Attention has therefore shifted to accessing the sought-after data either on the platform itself, i.e. on the operators’ servers, or directly on the end devices. Messenger platform providers are the first choice. Some protect access to customer data with additional encryption using keys held on the client, which shifts the focus back to the end devices.

Apple announced a few months ago that its operating system would search for prohibited images on iPhone and iPad devices. The system computes checksums of digital image files and uses an algorithm to compare them with a database containing the characteristics of the files being searched for. The algorithm is also supposed to recognize slightly altered images, a claim that security researchers have already refuted experimentally.

Microsoft’s PhotoDNA works in a similar way for online services that handle images. The main criticism of Apple’s approach is that the search is anchored in the operating system itself. This puts in place a general-purpose search function that could look for any data. The restriction to the particular image database in use can be changed at any time by an instruction in the software, whether from Apple or a third party. The same applies to updates, which could at any time withdraw the ability to switch the function off.

CSS contradicts security principles

The effects of client-side scanning (CSS) on information security and privacy have now been evaluated by renowned researchers in a published analysis. It considers related approaches from the past and the impact of security vulnerabilities on working with CSS-enabled devices. The result contradicts the promises of all supposedly secure filter and search technologies: shifting scanning capabilities from a platform’s servers to the client enables deep attacks, effectively rendering the protection mechanisms on the end device ineffective.

In addition, any data can be found with such a search infrastructure, since the search is based on configurable comparisons. Because of the deep integration into the operating system, the search can be adapted and re-run at any time. In practice, CSS therefore amounts to a large-scale violation of privacy. Deployed on company systems, the effects are even worse, as sensitive data becomes accessible regardless of company policy. Given possible weaknesses in a CSS implementation, industrial espionage can proceed unhindered: the system need only search for contact data instead of pictures, for example, and a graph showing who is in contact with whom can be generated automatically.

The lack of disclosure of the search infrastructure and its algorithms is a further serious problem. Content on social media platforms is already subject to automated filters whose criteria are not published, and accounts blocked without justification have drawn criticism in the past. Even when wrong decisions are appealed, there is no insight into what caused them. Transfer this behavior to CSS and the same problem carries over to the daily use of smartphones and tablets.

Philosophy of security

The last 50 years of information security have produced a large pool of experience and tested concepts. Secure communication protocols and secure systems have very clear technical requirements that must be met. There is no room for negotiation when it comes to mathematical concepts.

Fundamental building blocks for security are fully controllable platforms for running one’s own software and strong encryption algorithms with no intentionally built-in weaknesses or backdoors. CSS cannot prevent improper use of digital infrastructure. The opposite is true: any complexity artificially introduced through client-side scanning can create further security risks.

CSS was introduced to preserve end-to-end encryption while still allowing investigations into prohibited content. This squaring of the circle is not possible: numerous weaknesses in the design have been found since Apple’s plans became known. If digitalization is to be pursued seriously, then information security is non-negotiable. Business, state authorities, and civil society must be able to rely on the protection of their data.

Current systems already contain numerous components that are poorly documented and harbor potential vulnerabilities. CSS is one more building block from which new threats can be built.

Source: deepsec.net (original article in German).


Bugs in Our Pockets: The Risks of Client-Side Scanning

For more than two decades, U.S. law enforcement has fought against the use of strong cryptography by the public in telecommunications. In 1992, the FBI argued that due to encryption, 60 percent of criminal wiretaps would be useless within three years—and, in the worst case, none might be intelligible. Ever since the U.S. government loosened cryptographic export controls in 2000, the FBI talked of doom and gloom regarding criminal investigations due to the public’s use of encryption.

Since the 1990s, the bureau has tried to thwart the use of end-to-end encryption, a system in which only the sender and the receiver can read the message. First, there was the Clipper, a National Security Agency design in which digitized voice communications would be encrypted with keys that would be split and escrowed by two agencies of the U.S. government. That didn’t fly; neither industry nor other nations were willing to use such a system. Next, there was the effort by FBI Director James Comey to press for exceptional access—strong encryption that provides access to unencrypted content to legally authorized searches. Technologists, including Lawfare contributor Bruce Schneier and me, argued that such solutions weren’t feasible. Mandating such a solution would decrease society’s security, not increase it. The Obama administration agreed, seeing the cost of widely available encryption tools as outweighed by the costs to public safety, national security, cybersecurity and economic competitiveness of imposing access requirements.

Law enforcement, and some national security agencies, haven’t given up. And despite the increasing number of former senior national security and law enforcement officials who have publicly supported the widespread use of encryption, U.S. law enforcement and allied countries around the globe are back with a new proposal to get around encryption. This one, in fact, does exactly that.

The new proposal is client-side scanning, scanning content on a user’s device prior to its encryption or after decryption. Supporters of the technology argue that such scanning can uncover child sexual abuse material (CSAM) without putting people’s privacy at risk. The supporters reason that people whose phones don’t have CSAM will have nothing to fear; the scanning will be local and, if there is no targeted material on the device, no information will ever leak from it.

Paul Rosenzweig wrote a long and thoughtful piece on the law and policy of client-side scanning in these pages a year ago. He followed up with two recent posts, one on Apple’s CSAM effort and another on Apple’s postponement of its deployment. Now my colleagues and I have written a technical analysis of the threats posed by client-side scanning systems, “Bugs in Our Pockets: The Risks of Client-Side Scanning.”

The proponents of these systems argue that they enable privacy while ensuring society’s safety by preventing forbidden content from being sent to the world. But as we describe in our paper, there are multiple ways in which these systems can fail, including by failing to detect targeted content, mistaking innocuous content for targeted material, and the like. It is far from clear that client-side scanning systems can provide the kind of successful evidence gathering that its proponents claim. At the same time, client-side scanning brings great danger. Such systems are nothing less than bulk surveillance systems launched on the public’s personal devices. Currently designed to scan for CSAM, there is little that prevents such systems from being repurposed to scan for other types of targeted content, whether it’s embarrassing personal photos or sensitive political or business discussions.

In 1928, in his dissent in Olmstead v. United States, Justice Louis Brandeis wrote:

When the Fourth and Fifth Amendments were adopted, “the form that evil had theretofore taken” had been necessarily simple. Force and violence were then the only means known to man by which a government could directly effect self-incrimination. It could compel the individual to testify—a compulsion effected, if need be, by torture. It could secure possession of his papers and other articles incident to his private life—a seizure effected, if need be, by breaking and entry. Protection against such invasion of “the sanctities of a man’s home and the privacies of life” was provided in the Fourth and Fifth Amendments by specific language …. But “time works changes, brings into existence new conditions and purposes.” Subtler and more far-reaching means of invading privacy have become available to the government. Discovery and invention have made it possible for the government, by means far more effective than stretching upon the rack, to obtain disclosure in court of what is whispered in the closet.

Moreover, “in the application of a Constitution, our contemplation cannot be only of what has been, but of what may be.” The progress of science in furnishing the government with means of espionage is not likely to stop with wire tapping. Ways may some day be developed by which the government, without removing papers from secret drawers, can reproduce them in court, and by which it will be enabled to expose to a jury the most intimate occurrences of the home.

Client-side scanning, by exposing the personal photos, thoughts, and notes from a user’s phone, does exactly what Brandeis feared might come to pass. Read our paper to understand the technical flaws of client-side scanning solutions and why they provide neither safety nor security for society.


The Apple Client-Side Scanning System

Washington, D.C.’s cyber policy summer was disrupted earlier in August by an announcement from Apple. In an effort to stem the tide of child sexual abuse materials (CSAMs) that are flooding across the cyber network (and it really is a flood), Apple announced a new client-side scanning (CSS) system that would scan the pictures that iPhone users upload to the cloud for CSAM and, ultimately, make reports about those uploads available to law enforcement for action. The new policy may also have been a partial response to criticism of Apple’s device encryption policies that have frustrated law enforcement.

The objective is, of course, laudable. No good actor wants to see CSAM proliferate. But Apple’s chosen method—giving itself the ability to scan the uploaded content of a user’s iPhone without the user’s consent—raises significant legal and policy questions. And these questions will not be unique to Apple—they would attend any effort to enable a CSS system in any information technology (IT) communications protocol, whether it is pictures on an iPhone or messages on Signal.

Last year, I wrote an extended analysis of these legal and policy questions, and if you want more detail than this post provides, you might go back and read that piece. My assessment then (and now) is that many of the potential technical implications of a CSS system raise difficult legal and policy questions and that many of the answers to those questions are highly dependent on technical implementation choices made. In other words, the CSS law and policy domain is a complex one where law, policy, and technological choices come together in interesting and problematic ways. My conclusion last year was that the legal and policy questions were so indeterminate that CSS was “not ready for prime time.”

Clearly, the leadership at Apple disagrees, as it has now gone forward with a CSS system. Its effort provides a useful real-world case study of CSS implementation and its challenges. My goal in this post is simple: first, to describe as clearly as I can exactly what Apple will be doing; and second, to map that implementation to the legal and policy challenges I identified to see how well or poorly Apple has addressed them.

Apple’s efforts, though commendable, raise as many questions as they answer. Those who choose to continue to use iPhones will, essentially, be taking a leap of faith on the implementation of the program. Whether or not one wishes to do so is, of course, a risk evaluation each individual user will have to make.

Apple’s New Program

Apple announced its new program through a series of public comments, including a summary on its web site. The comments explicitly tied the new technology to child safety, linking its efforts exclusively to the proliferation of CSAM. At the outset, it is important to note that Apple’s newly unveiled efforts are really three distinct new technologies. Two of them (providing new tools in the Messages app to allow greater parental control and allowing Siri to intervene and warn when CSAM material may be accessed) have no direct bearing on CSS, and I will leave them aside.

It is the third effort that raises the issues of concern. Here is how Apple describes it:

[A] new technology in iOS and iPadOS* will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.

Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.

Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.

Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.

What does that mean in plain English? Apple has provided an FAQ as well as a technical summary that are intended to clarify the program.

It’s a lot to parse, but here’s a distilled version. A new program, called NeuralHash, will be released as part of iOS 15 and macOS Monterey, both of which are due out in a few months. That program (which will not be optional) will convert the photographs uploaded from a user’s iPhone or Mac to a unique hash.

A “hash” is a way of converting one set of data, like a picture, into a different unique representation, such as a string of numbers. In the past, each unique picture has created a unique hash, and so slight changes in a picture, like cropping an image, have changed the hash value. Notably, NeuralHash is reported to have the capability of “fuzzy matching,” so that small edits or cropping of an image do not change the image’s hash value; that sort of editing has, historically, been an easy way around hash-matching programs.
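NeuralHash itself is a proprietary, neural-network-based perceptual hash, so the sketch below is only a stand-in: the classic “average hash,” a far simpler technique, illustrates what fuzzy matching means in practice. Visually similar images map to hash values that differ in only a few bits, and matching is done by Hamming distance rather than exact equality. The file names and distance threshold are illustrative assumptions.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink, grayscale, threshold on the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel >= mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Images whose hashes differ in only a few bits are treated as the "same"
# picture. That tolerance is what lets small crops or recompression survive
# matching, and also what makes collisions and false positives possible.
if hamming_distance(average_hash("a.jpg"), average_hash("b.jpg")) <= 5:
    print("probable match")
```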

Before a user uploads photos to iCloud, those image hashes will be matched on the device with a database of known hashes of CSAM that is provided to Apple by organizations like the National Center for Missing & Exploited Children (NCMEC). In other words, the NCMEC hashes will also be on the user’s device—again, without the option of turning it off. The matching function will use a technique called private set intersection, which does not reveal what the content of the image is or alert the iPhone owner of a match.

Instead, the matching alert for any positive match will be sent to Apple without initially identifying the user who is the source of the matching alert. Apple has promised that it will not unmask the alert (that is, deanonymize and identify who the user is) unless and until a threshold of CSAM is crossed. In a public defense of the system, Apple suggested that as a matter of policy that threshold would be approximately 30 images on a given phone that matched known CSAM before an alert would be generated.

If an alert passes the threshold specified, Apple will then decrypt the images and have a human manually review them. If the manual review confirms that the material is CSAM, Apple can then take steps such as disabling an account and reporting the imagery to NCMEC, which in turn passes it on to law enforcement. In its public announcement Apple says there is less than a 1 in 1 trillion chance of a false positive, but the company is nonetheless providing an appeals process for users who think their material has been incorrectly characterized.
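As a purely conceptual illustration, the reporting policy Apple describes can be modeled as: accumulate opaque per-image match records for an account and surface nothing for human review until roughly 30 have built up. Apple’s actual mechanism relies on private set intersection and threshold secret sharing, which this sketch does not reproduce; the class and threshold below are illustrative only.

```python
MATCH_THRESHOLD = 30  # Apple's publicly suggested policy threshold

class AccountMatchTracker:
    """Conceptual stand-in for the voucher/threshold behaviour."""

    def __init__(self) -> None:
        self.vouchers: list[str] = []  # opaque per-image match records

    def record_match(self, voucher: str) -> None:
        self.vouchers.append(voucher)

    def ready_for_human_review(self) -> bool:
        # Below the threshold, nothing about the matches is revealed;
        # above it, a human reviewer is brought in before any report.
        return len(self.vouchers) >= MATCH_THRESHOLD
```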

The Pros and Cons of Apple’s Approach

The reaction to Apple’s announcement was swift … and wildly divided. Privacy advocates immediately raised concerns about the implementation. While supportive of the overall goal, many saw significant privacy risks. The executive director of NCMEC characterized these objections as the “screeching voices of the minority.” Meanwhile, some security experts suggested that the limited nature of the scanning contemplated would pose few real privacy risks.

My own assessment is more tentative. Some aspects of what Apple proposes to do are highly commendable. Others are rather more problematic. And in the end, the proof will be in the pudding—much depends on how the system is implemented on the ground next year.

One way to think about the pros and the cons is to borrow from the framework I introduced last year, breaking down the analysis into questions of implementation and of policy implications. Using that earlier framework as a guide, here is a rough-cut analysis of the Apple program:

Implementation Issues

Some implementation choices necessarily have legal and policy implications.

Mandatory or voluntary? The first question I raised last year was whether or not the system developed would be mandatory or voluntary. Clearly, voluntary programs are more user-protective but also less effective. Conversely, a government mandate would be far more intrusive than a mandate from a commercial enterprise, whose products (after all) can be discarded.

On this issue, Apple has taken a fairly aggressive stance—the NeuralHash matching program will not be optional. If you are an Apple user and you upgrade to the new iOS, you will perforce have the new hash-matching system on your device and you will not have the option of opting out. The only way to avoid the program is either to refuse the update (a poor security choice for other reasons) or to change from using Apple to using an Android/Linux/Microsoft operating system and, thus, abandon your iPhone. To the extent that there is a significant transaction cost inherent in those transitions that makes changing devices unlikely (something I believe to be true), it can fairly be said that all Apple users will be compelled to adopt the hash-matching program for their uploaded photos, whether they want to or not.

At the same time, Apple has adopted this system voluntarily and not as the result of a government mandate. That context is one that will insulate Apple’s actions from most constitutional legal problems and, of course, means that Apple is free to modify or terminate its CSS program whenever it wishes. That flexibility is a partial amelioration of the mandatory nature of the program for users—at least only Apple is forcing it on them, not a collection of multiple smartphone software providers, and not the government(s) of the world …. So far.

Source and transparency of the CSAM database? A second question is the source of and transparency of the CSAM database. Apple’s proposed source for the authoritative database will be the National Center for Missing & Exploited Children—a private, nonprofit 501(c)(3) corporation. It is not clear (at least not to me) whether NCMEC’s hash database listings will be supplemented by adding hashes from the private holdings of major for-profit tech providers, like Facebook, who have their own independent collections of hashed CSAM they have encountered on their platforms.

Notably, since some aspects of reporting to NCMEC are mandatory (by law certain providers must provide notice when they encounter CSAM), the use of the NCMEC database may again raise the question of whether NCMEC is truly private or might, in the legal context, be viewed as a “state actor.” More importantly, for reasons of security, NCMEC provides little external transparency as to the content of its database, raising the possibility of either error or misuse. It’s clear from experience that NCMEC’s database has sometimes erroneously added images (for example, family pictures of a young child bathing) that are not CSAM—and there is no general public way in which the NCMEC database can be readily audited or corrected.

Notice provided: When and to whom? How will notice of offending content be provided, and to whom? Here, Apple has done some good work. Before notice is provided to NCMEC, Apple has set a high numerical threshold (30 CSAM images) and also made the process one that is curated by humans, rather than automated. This high threshold and human review should significantly mitigate the possibility of false positives. By providing the notice to NCMEC, which will in turn provide the notice to law enforcement, Apple is taking advantage of an existing reporting mechanism that has proved relatively stable.

To be sure, the apparent decision to not provide contemporaneous notice to the user raises some concern, but delayed notification is common in the law when there is a risk that evidence will be destroyed. Add to this the promise of eventual notification and an appeals process within Apple and, on this score, Apple deserves relatively high marks for its conceptual design.

Accuracy of matching algorithm? A further problem is, naturally, the question of whether or not the new Neural Hash matching system works as advertised. As Matthew Green and Alex Stamos put it: “Another worry is that the new technology has not been sufficiently tested. The tool relies on a new algorithm designed to recognize known child sexual abuse images, even if they have been slightly altered. Apple says this algorithm is extremely unlikely to accidentally flag legitimate content …. But Apple has allowed few if any independent computer scientists to test its algorithm.” Indeed, as I understand it, Apple thinks that any independent evaluator who attempts to test the system without its consent is violating its intellectual property rights (a stance it has taken in many other contexts as well).

To be sure, the added safeguard of having an Apple employee review images before forwarding them to NCMEC may limit the possibility of error, but it is at least somewhat troubling that there is no independent verification of the accuracy of the new program, and that Apple is resisting greater transparency.

Efficacy? A final implementation question is the integration of the whole system. A recent study by the New York Times demonstrated that the systematic linkage between the NCMEC database and screening systems in search engines was incomplete and yielded many false negatives. Again, the new Apple system has yet to be thoroughly tested so it’s impossible to say with certainty that the integration into Neural Hash is successful. It could be a poor system that sacrifices privacy without providing any gains in effectively interdicting CSAM. At this point, only time and real-world testing will establish whether or not Apple’s new system works.

Policy Implications

Irrespective of the details of Apple’s architecture, the company’s implementation choices raise some fundamental policy questions that are also worth considering.

Hash control? The hash database of CSAM will, by necessity, be widely distributed. All databases are subject to degradation, disruption, denial or destruction. While Apple has a good general track record of securing its systems against malicious intrusion, it does not have a record of perfect security—indeed, no one could. As far as I can discern, Apple will be taking no special precautions to control the security of the CSAM hash database—rather, it will rely on its general efforts. Though the risk is small, there is no perfectly secure data storage and distribution system. It requires a heroic assumption to be completely confident of Apple’s hash control. And that means that the hash database may be subject to manipulation—raising the possibility of all sorts of malicious actions from false flag operations to deep fake creation of fictitious CSAM.

Basic cybersecurity? Likewise, I have concerns about the overall security of the system. Any CSS program will necessarily have significant administrative privileges. Again, Apple’s security is quite robust, but the deployment of the NeuralHash as part of the operating system will, by definition, expand the potential attack surface of Apple devices—again, with unknown effect.

Scalability and mutability? The NCMEC database contains more than 4 million distinct hashes. Apple has not said (as far as I am aware) what portion of that database will be pushed down to devices to conduct the on-device hash matching. It seems likely that a smaller, curated list will be distributed to end users to avoid scalability issues. But, again, this is a question that, apparently, has yet to be fully tested; will the curated smaller list suffice for effectiveness, or will a larger list (with concomitant increases in “bloat” inside the device) be necessary?
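Apple has not described how the reference database would be packaged for devices. Purely to illustrate the size trade-off raised above, the generic Bloom filter sketch below (not Apple’s design) shows how millions of fingerprints can be compressed into a few megabytes for on-device membership tests, at the cost of an intrinsic false-positive rate of its own.

```python
import hashlib

class BloomFilter:
    """Compact, probabilistic membership test: false positives are possible,
    false negatives are not. A generic illustration, not Apple's design."""

    def __init__(self, size_bits: int = 8 * 1024 * 1024, hashes: int = 7):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)  # 1 MiB for the default size

    def _positions(self, item: bytes):
        for i in range(self.hashes):
            digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

The trade-off is inherent: the smaller the structure pushed to devices, the higher its built-in false-positive rate, and whatever is distributed also leaks information about the reference database itself.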

Commercial impact? The new CSS system will necessarily increase device processor usage. It will also likely result in an increase in network usage for the database downloads. While other improvements may mask any degradation in device performance, the costs will be small, but real. They will also not be borne uniformly, as the entire CSS application will be imposed only on higher-end devices capable of running iOS 15 (and thus the entire discussion here doesn’t apply in parts of the country and the world where relatively few people use newer iPhones—or use iPhones at all). Finally, it seems likely that Apple will need to lock down the CSS application within the iOS to prevent tampering, exacerbating the trend away from device owner control.

Form of content and loss of privacy and control? To its credit, Apple has decided to, in effect, let users be the masters of their own control and privacy. By limiting the CSAM scanning to images that are uploaded to the iCloud, Apple allows end users to secure their privacy from scrutiny simply by refraining from using the iCloud function. In that sense, users hold in their hands the keys to their own privacy preferences.

But all is not, of course, that simple, and the credit Apple gains for its choice is not that great. First, and most obviously, the iCloud storage function is one of the very best features of Apple products. It allows users to access their photos (or other data) across multiple devices. Conditioning privacy on a user’s decision to forgo one of the most attractive aspects of a product is, at a minimum, an implementation that challenges the notion of consent.

Second, as I understand it, the choice to scan only uploaded images is not a technological requirement. To the contrary, unlike current scanning programs run by competitors such as Google or Facebook, the Apple CSS system will reside on the individual user’s device and have the capability of scanning on-device content. That it does not do so is a policy choice that is implemented in the iOS15 version of the CSS system. That choice can, of course, be changed later—and while Apple promises not to do so, reliance on the company’s assurances will, no doubt, heighten anxiety among privacy-sensitive users.

In addition, Apple’s decision to limit scanning to content that is in image form (and not, say, to scan text in Messages) is also a policy choice made by the company. Apple has already started down the road of message scanning with its decision to use machine-learning tools to help protect children from sexually explicit images (a different part of their new child-protective policies). But in doing so, it has made clear that it can, if it wishes, review the content of Messages material—and there is no technological reason text could not be reviewed for “key words.” As such, much of the user’s privacy is currently assured only by the grace of Apple’s policy decision-making. Whether and how much one should trust in that continued grace is a matter of great dispute (and likely varies from user to user).

Subject matter and scope? Finally, there is the question of subject matter. Initially scanning will be limited to CSAM materials, but there is a potential for expansion to other topics (for example, to counterterrorism videos, or to copyright protection). Nothing technologically prevents the NeuralHash system from being provided with a different set of hashes for comparison purposes.

This is particularly salient, of course, in authoritarian countries. In China today, Apple is subject to significant governmental control as a condition of continuing to sell in that country. According to the New York Times, in China, state employees already manage Apple’s principal servers; the company has abandoned encryption technology; and its digital keys to the servers are under the control of Chinese authorities. Given China’s history of scouring the network for offending images and even requiring visitors to download text-scanning software at the border, one may reasonably wonder whether Apple’s promise to resist future expansion in China can be relied on. It is reasonable to wonder if at some point a government could compel Apple (or any other company that develops similar CSS products) to use this technology to detect things beyond CSAM on people’s computers or phones.

Legal Issues

Finally, most of the legal issues I mulled over a year ago are rendered moot by the structure of Apple’s CSS program. Since the company has adopted the CSAM scanning capability voluntarily, it will not be viewed as a state actor and there is sufficient distance between Apple and the government (and NCMEC) to make any contrary argument untenable. As a result, almost all of the constitutional issues that might arise from a compulsory government-managed CSS program fall by the legal wayside.

It is likely that the Apple CSS program may impact other legal obligations (for example, contractual obligations between Apple and its vendors or customers), and it may even implicate other generally applicable law. I will be curious, for example, to see how the CSS program fares when analyzed against the requirements of the EU’s General Data Protection Regulation. At least as I understand EU law, the near mandatory nature of the system within the iOS will make questions of consumer consent problematic for Apple. But that area of law is well outside my expertise (as is, for example, an analysis under California’s state-level privacy law). I simply note the problem here for the sake of completeness.

Conclusion

So, what’s next? Some observers might reasonably fear that Apple’s step is but the first of many and that other IT communications providers will be pressured to follow suit. So far, though, there seems to be resistance. WhatsApp, for example, has announced that it will not follow Apple’s lead. It remains to be seen whether WhatsApp, and others, can maintain that position in light of the political winds that will surely blow.

For myself, I hope they do. Because as far as I can see, Apple’s new technology has yet to come to grips with some of the hardest questions of policy and implementation. Much of the validation of the system is highly dependent on the degree to which one trusts that Apple will implement the system as advertised and resist mission-creep and other political pressures. The extent to which one has trust in Apple is highly context dependent and variable. As they say, your mileage may vary.

One gets the sense (perhaps unfairly) that Apple felt itself compelled to act as it did by external factors relating to the law enforcement pressures it had felt from the U.S. government. And it is far too early to tell whether the decision to do so will prove wise, from a business perspective. But at least at a first cut, it certainly seems that Apple’s promise—“What happens on your iPhone stays on your iPhone”—now rings hollow.

It is a truism that bad news drops on Fridays, timed to fly under the radar. The Friday before Labor Day is especially auspicious for hiding problems. By that standard, Apple yesterday seems to have acknowledged a rather large public relations error.

Late on Friday, Apple stated that it would postpone its plans to deploy a system that scanned images on iPhones for child sexual abuse material (CSAM). This client-side scanning (CSS) system was first publicly announced in early August.

To review the bidding, the idea was that a new program, called NeuralHash, would be included in the iOS 15 and macOS Monterey releases, which were due out in a few months. That program (which would not have been optional) would convert the photographs uploaded from a user’s iPhone or Mac to a unique hash value. Those hashes in turn would be matched against a database of known hashes of CSAM provided to Apple by organizations like the National Center for Missing & Exploited Children (NCMEC). A matching alert for any positive match would be sent to Apple, which would review the alleged match by hand only after 30 alleged matches and, if apt, report the match to NCMEC and thence to law enforcement authorities.

While the goal was laudable, to be sure, many privacy advocates were concerned about its intrusiveness and the mandatory nature of the system. Many, including me, wrote reviews of the proposal that ranged from highly critical to cautiously doubtful.

It now appears that the degree of controversy was too great for Apple to withstand and that it wanted to go back to the drawing board. In its statement announcing the pause, Apple said: “Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.” It remains to be seen exactly what comes next, but one thing is for sure—this story isn’t over by a long shot.

By Paul Rosenzweig

 
