
More on Apple's Client-side CSAM Scanning

Apple has released more information about their client-side CSAM scanning system (see my original writeup). Though none of this fundamentally changes the situation -- and it's not clear why they didn't just share these details before -- it's worth going through the new details and the points Apple has been making.

Scanning Threshold/False Positive Rate #

Starting small, Apple has published their proposed detection threshold: 30 CSAM images. This is computed by taking a conservatively estimated 10⁻⁶ per-image false positive rate (their measured rate is 3 in 100 million), conservatively assuming an image library bigger than the biggest library of any current iCloud user, and then solving for an overall per-account false positive rate of 10⁻¹².[1]
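As a rough illustration (this is my sketch, not Apple's actual calculation), you can back a threshold like that out of a Poisson approximation; the five-million-image library size below is a number I made up for the example.

```python
import math

def poisson_tail(lam, threshold, extra_terms=200):
    """P(X >= threshold) for X ~ Poisson(lam), summing the tail directly so
    that tiny probabilities aren't lost to floating-point cancellation."""
    return sum(math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
               for k in range(threshold, threshold + extra_terms))

def smallest_threshold(library_size, per_image_fp_rate, target=1e-12):
    """Smallest match threshold whose per-account false positive probability
    (Poisson approximation to the binomial) falls below `target`."""
    lam = library_size * per_image_fp_rate
    t = 1
    while poisson_tail(lam, t) > target:
        t += 1
    return t

# Hypothetical inputs: a 5-million-photo library and the conservative
# one-in-a-million per-image rate Apple describes.
print(smallest_threshold(5_000_000, 1e-6))  # lands in the neighborhood of 30
```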

This threshold-setting procedure seems fairly reasonable for the non-adversarial case. Of course, it doesn't work at all for the adversarial case, for instance where an attacker knows the hash for a CSAM image and then creates an innocuous image that has the same hash. The attacker could learn the hash in at least two ways: first, the database itself could leak in some way. Second, the attacker could know that a particular piece of CSAM is in the database and then compute its hash directly. Either form of attack requires the attacker to know the NeuralHash algorithm, which Apple hasn't disclosed, but they might be able to get that by reverse engineering the binary (in fact, Apple's verifiability claims depend on this, as described below).

Apple's Review #

Apple also published more details on their review process, which seems to involve two steps: (1) checking a second hash before human review in order to minimize the chance of humans reviewing false positives and then (2) human review of the "visual derivative":

Once Apple's iCloud Photos servers decrypt a set of positive match vouchers for an account that exceeded the match threshold, the visual derivatives of the positively matching images are referred for review by Apple. First, as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.

Several points are worth making here. First, to make this work the visual derivative needs to be something that a person can look at and compare to the real image. Apple hasn't been super-clear about what the "visual derivative" is, but they say a "visual derivative of the image, such as a low-resolution version", which is consistent with what one would expect. Second, in order for the second hash to be a useful countermeasure, it needs to be not just independent but also secret. Otherwise, an attacker might be able to create an image which matched both hashes. Of course, because the second hash isn't run on people's phones but rather on Apple's (and probably the child safety organizations') servers,[2] it's less vulnerable to attack. And if it is compromised, Apple can change it and have the child safety organizations recompute the hashes without changing anyone's phone software.
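To make the flow concrete, here is a minimal sketch of what that two-stage server-side check might look like. The `Voucher` shape and the `second_hash` function are stand-ins I invented; Apple hasn't published the real designs.

```python
from dataclasses import dataclass

@dataclass
class Voucher:
    visual_derivative: bytes  # what the server can see once the threshold is exceeded

def second_hash(derivative: bytes) -> int:
    # Placeholder for the independent, server-side perceptual hash; the real
    # one is secret and presumably far more sophisticated than this.
    return sum(derivative) % (2 ** 16)

def review(decrypted_vouchers, second_hash_db):
    """Stage 1: drop anything that doesn't also match the independent hash,
    filtering out adversarial NeuralHash collisions. Stage 2: everything
    that survives goes to human reviewers."""
    return [v for v in decrypted_vouchers
            if second_hash(v.visual_derivative) in second_hash_db]
```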

Multiple Jurisdictions #

There have been a number of concerns that Apple would be forced to include non-CSAM content (CDT, EFF). In response to these, Apple proposes to only include hashes which are provided by at least two separate child safety organizations in different jurisdictions:

The first protection against mis-inclusion is technical: Apple generates the on-device perceptual CSAM hash database through an intersection of hashes provided by at least two child safety organizations operating in separate sovereign jurisdictions – that is, not under the control of the same government. Any perceptual hashes appearing in only one participating child safety organization’s database, or only in databases from multiple agencies in a single sovereign jurisdiction, are discarded by this process, and not included in the encrypted CSAM database that Apple includes in the operating system. This mechanism meets our source image correctness requirement.

...

This approach enables third-party technical audits: an auditor can confirm that for any given root hash of the encrypted CSAM database in the Knowledge Base article or on a device, the database was generated only from an intersection of hashes from participating child safety organizations, with no additions, removals, or changes. Facilitating the audit does not require the child safety organization to provide any sensitive information like raw hashes or the source images used to generate the hashes – they must provide only a non-sensitive attestation of the full database that they sent to Apple. Then, in a secure on-campus environment, Apple can provide technical proof to the auditor that the intersection and blinding were performed correctly. A participating child safety organization can decide to perform the audit as well.

I don't doubt that this is technically possible. In fact, I'm a little surprised that the proof of correctness has to be done on Apple's campus rather than being a zero-knowledge proof that anyone can verify (maybe that's coming?). In any case, I'm not sure how comforting it really should be that Apple requires inputs from child safety organizations in different countries: it's not like two governments couldn't collude to put each other's non-CSAM images into their databases, either on a one-off basis or as part of some more formalized arrangement such as Five Eyes.
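Here's a toy version of that intersection rule (my reconstruction, not Apple's code), which also shows why collusion defeats it: if two governments each submit the other's target hash, the target survives the intersection.

```python
def build_database(submissions):
    """`submissions` maps jurisdiction -> set of perceptual hashes; only hashes
    submitted from at least two jurisdictions are kept."""
    all_hashes = set().union(*submissions.values())
    return {h for h in all_hashes
            if sum(h in hashes for hashes in submissions.values()) >= 2}

# Two colluding governments each add the other's non-CSAM target:
submissions = {
    "country_a": {"csam_1", "csam_2", "non_csam_target"},
    "country_b": {"csam_1", "non_csam_target"},
}
print(build_database(submissions))  # non_csam_target makes it into the database
```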

It would also be good to know which other child safety organization Apple is using to construct their initial database; presumably it's NCMEC in the US, but who outside the US?

iCloud-Only #

One natural question is whether this is limited to iCloud. Apple has been pretty dismissive of this question. Here's Craig Federighi talking to WSJ's Joanna Stern:

I think that's a common but really profound misunderstanding. This is only being applied as part of the process of storing something in the cloud. This isn't some processing that's running over the images you store in your messages or in Telegram or anything else... you know what you're browsing on the Web. This literally is part of the pipeline for storing images in iCloud.

This seems to me like the wrong standard. As I mentioned in my original post, this system could readily be technically applied to images other than those in iCloud. The major difference is that with iCloud, Apple actually has a copy of the original image. However, based on the description they have provided, they don't need the original image because they review the visual derivative. If Apple wanted to (for instance) scan every image in Photos rather than just the ones that were uploaded to iCloud, this seems like it ought to be pretty straightforward.

A more interesting question is whether they could scan images in third party programs. I'd initially thought this would be fairly challenging because they would have to scrape pixels off the screen, but then I realized that Apple provides image rendering APIs such as CoreImage and UIImage. Presumably lots of implementors use these, so in principle Apple could modify them to upload a voucher each time an image is displayed.[3] This would obviously cost some bandwidth, but isn't necessarily prohibitive. The situation is actually even easier for Web browsing on iOS: because Apple prohibits the use of any Web engine other than their own, they already have access to any image that is being rendered.

So, I don't really think this is that profound a misunderstanding. It's certainly true that Apple would need to rearchitect their system somewhat in order to scan non-iCloud images, but the fact that the system isn't currently built that way doesn't seem like an in-principle reason why they couldn't do so; they've already done the hard part.

List Verification #

Finally, there's the question of verifying the lists. Apple writes:

Since no remote updates of the database are possible, and since Apple distributes the same signed operating system image to all users worldwide, it is not possible – inadvertently or through coercion – for Apple to provide targeted users with a different CSAM database. This meets our database update transparency and database universality requirements.

Apple will publish a Knowledge Base article containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature. Additionally, users will be able to inspect the root hash of the encrypted database present on their device, and compare it to the expected root hash in the Knowledge Base article. That the calculation of the root hash shown to the user in Settings is accurate is subject to code inspection by security researchers like all other iOS device-side security claims.

The general reasoning here is sound: (1) auditors verify that the database is correctly constructed; (2) Apple commits publicly to the hash of the database,[4] so users can verify that they have the same database the auditors looked at, which is the same as everyone else's; (3) researchers can verify that Apple's hash computation code is accurate. However, in practice I don't think this provides that high a level of assurance.

First, as I said above, the database construction procedure -- including the auditing -- doesn't necessarily guarantee that there are no non-CSAM images in the database, just that child safety organizations in two countries were willing to put a given image in. Second, everybody having the same database doesn't actually guarantee that there aren't country-specific entries in the database. Apple could put hashes for every country's images into the database and then sort things out on the server side. At minimum, they could filter the vouchers server-side by matching the independent perceptual hash (see above) against a country-specific database, but there might also be a way to arrange for a separate voucher decryption key for each country so that only the vouchers for a given country decrypt.[5]
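To be concrete about what that kind of server-side filtering could look like (this is my speculation, not anything Apple has described), nothing stops the post-match check from being keyed by the account's country:

```python
def filter_by_country(matched_hashes, account_country, per_country_dbs):
    """Speculative sketch: every phone ships the union of all countries'
    entries, but the server only acts on matches that appear in the
    requesting account's country list."""
    return [h for h in matched_hashes if h in per_country_dbs[account_country]]

per_country_dbs = {
    "country_a": {"csam_1", "csam_2", "non_csam_target"},  # hypothetical extra entry
    "country_b": {"csam_1", "csam_2"},
}
matches = {"non_csam_target"}  # a device anywhere in the world reports this match
print(filter_by_country(matches, "country_a", per_country_dbs))  # acted on
print(filter_by_country(matches, "country_b", per_country_dbs))  # ignored
```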

This brings us to the question of calculating the root hash for the database on a given device. The problem here is that you're trusting the phone to tell you the hash of the database. Apple's response is that security researchers are able to check that the hash computation code is correct, but that just tells you that the code they reviewed is correct, not that the code on an individual phone is correct. In order to verify that, you need to examine the individual phone, not just the code Apple is distributing. Importantly, you can't trust what the phone tells you about what code is running on it, because the phone itself could be compromised.[6] Nor is it just a matter of verifying that the database is correct: you also need to verify that the NeuralHash and matching code behave as expected; for instance, the code could read different parts of the database depending on which geography the device is in. At the end of the day you need to be able to study and verify the whole system.
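The commitment check itself is mechanically trivial -- something like the following sketch, with a stand-in hash construction -- which is exactly why the interesting question is whether you can trust the code and data the device feeds into it, not the check itself.

```python
import hashlib

def root_hash(encrypted_db: bytes) -> str:
    # Stand-in for whatever root-hash construction Apple actually uses; the
    # point is only that it's a deterministic commitment to the database bytes,
    # so any change at all yields a different value (unlike a perceptual hash).
    return hashlib.sha256(encrypted_db).hexdigest()

db = b"example encrypted database contents"
print(root_hash(db))
print(root_hash(db[:-1] + b"!"))  # a one-byte change gives a completely different hash
```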

Finally, all of this depends on researchers being able to inspect iOS code, but of course most of the code in iOS isn't open source, so you have to reverse engineer it, and Apple isn't always that forthcoming with details of how things work (to take an example from this case, they haven't published the details of NeuralHash, even though, as noted above, that's required to verify that the system behaves as claimed). Moreover, Apple historically hasn't been that enthusiastic about security researchers studying iOS software.

I understand Apple's desire to assert that the whole system is independently verifiable, but I think that's a bit of a category error. At the end of the day, neither Apple hardware nor Apple software is an open system, and if you're going to buy an Apple device you're at some level trusting Apple with your data. Obviously it would be better if that weren't the case, but as long as it is, it's not clear to me how useful it is to have just this piece be verifiable.


  1. As a side note, this allows us to solve for the expected size of the library, but I'm too lazy to do it. ↩︎

  2. Apple never gets the images, so the child safety organization already computes the NeuralHash values for the images. They would need to either compute the visual derivative and send it to Apple or compute the visual derivative and then the independent hash and send it to Apple. ↩︎

  3. Probably with some kind of local cache to prevent multiple uploads or maybe a prefilter to remove anything that clearly isn't CSAM. ↩︎

  4. Note that this is a different kind of hash than the NeuralHash, and detects any change. ↩︎

  5. It's obviously the case that you could do this without the independent auditing stage that Apple proposes, just by using a per-country blinding key. I'm not sure whether it's possible with the auditing, however; I suspect it depends on the details of how that is done. ↩︎

  6. Actually verifying the software running on a given device is quite a challenging problem because you need some way to examine the code on the device which isn't mediated by that same code. For instance, you could read the data off the disk (these days a flash drive), but the disk itself isn't just dumb storage: it has a processor in it that controls reading off the disk and runs the interface to the computer, and the device might be able to rewrite the firmware on that processor. ↩︎
