
Feels like nobody here has bothered to read the actual technical specifications.[1]

This already adds a new level of encryption to images stored in iCloud. They have essentially created an E2EE system with a specific access mechanism (or backdoor), while preventing the use of that backdoor for anything other than CSAM detection (so nobody can randomly ask to decrypt something else). Apple can only decrypt images when a user's account reaches the threshold count of CSAM hash matches:

> Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images.

While this is not a perfect end-to-end encryption solution, it is better than server-side encryption alone. There are now two levels of encryption: if someone breaches Apple's servers and also gains access to the server-side private keys, they still need matching NeuralHash values to decrypt the images.
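The threshold mechanism can be sketched with plain Shamir secret sharing. This is a simplification, not Apple's actual construction (which uses threshold private set intersection), and all names and parameters here are illustrative: each matching image contributes one share of a decryption key, and the key is reconstructible only once the number of shares reaches the threshold.

```python
# Sketch only: each CSAM hash match releases one "share"; the decryption
# key is recoverable only at or above the threshold. Not Apple's code.
import random

PRIME = 2**127 - 1  # Mersenne prime; field for polynomial arithmetic

def make_shares(secret, threshold, n):
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def eval_poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's method, highest degree first
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, eval_poly(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation evaluated at x = 0 yields the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789
shares = make_shares(key, threshold=5, n=30)
assert recover(shares[:5]) == key   # at threshold: key recovered
assert recover(shares[:4]) != key   # below threshold: no information (w.h.p.)
```

Below the threshold, any subset of shares is statistically independent of the key, which is what backs the claim that Apple cannot "peek early" at individual vouchers.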

[1]:https://www.apple.com/child-safety/pdf/Expanded_Protections_...



I think the argument here is that (1) the model will have false positives (e.g. revealing pictures of you and your spouse, beach photos, etc.) that will permit access to non-CSAM content (or, at the very least, mark your account as suspicious in the eyes of the authorities), and (2) the model itself can be updated or replaced for any reason, potentially at any government's demand, so the limits on scope are effectively moot.


For argument (1), they are only looking for matches against the existing database of hashes that NCMEC provides. They are not developing a general AI to identify new pictures; they are only trying to stop redistribution of known files. Because of that, their claim of a 1-in-1-trillion false-positive rate might actually be close to correct, since it can be validated during development. There is also human verification before law enforcement is involved.
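The distinction matters: the check is membership in a fixed set of known hashes, so a never-before-seen image cannot match regardless of what it depicts. A minimal sketch (all hash values and the threshold here are made up; the real system matches NeuralHash values via private set intersection):

```python
# Illustrative only: lookup against a fixed database of known hashes,
# as opposed to a classifier judging new images. Hash values are fake.
KNOWN_HASHES = {0x5F3A91C2, 0x8D41E07B, 0x13BC66AA}  # stand-in for the NCMEC list
THRESHOLD = 30  # hypothetical match count before vouchers become readable

def count_matches(image_hashes):
    # Exact set membership: only previously catalogued files can count.
    return sum(1 for h in image_hashes if h in KNOWN_HASHES)

def exceeds_threshold(image_hashes):
    return count_matches(image_hashes) >= THRESHOLD

library = [0x5F3A91C2, 0xDEADBEEF, 0x13BC66AA]
print(count_matches(library))      # → 2
print(exceeds_threshold(library))  # → False
```

Since NeuralHash is a perceptual hash, near-duplicates of a known file still map to the same value, but an unrelated photo has to collide by accident, which is what the per-account false-positive estimate quantifies.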

For argument (2), this might be valid, but yet again, all we can do is trust Apple, as we do all the time by using their closed-source system. The model can be changed, but it is still a better option than storing everything unencrypted, if what you mean is forging hashes to decrypt content.

As for surveillance, it is not a strong argument because, again, the system is closed and we only know what Apple tells us. Creating such a model is trivial, and nothing stops a government from demanding one if Apple were willing to allow that. The situation would be identical with antivirus engines, which have existed since the 1980s.

This is such a PR failure for Apple, because all of these incoming features improve privacy in the CSAM area; everything negative comes from speculation about abuses that were equally possible before.





