Apple’s new abuse prevention system: an antitrust/competition point of view (quintarelli.it)
451 points by simonebrunozzi on Aug 6, 2021 | 287 comments


There is no such thing as a trustworthy third party and even trusting yourself is questionable at the best of times. We are constantly balancing a bunch of different considerations with regards to the way that we compute, purchase devices, and utilize services. Security and privacy are of course important, and Apple to date has had a fairly good (if shallow) track record in this regard, at least in the United States.

With that being said, what Apple is doing here is just a blatant violation of that 'trust' and certainly a compromise to their commitment to privacy. Under no circumstances is it justifiable to essentially enlist people's devices to police their owners, while using the electricity that you pay for, the internet service you pay for, and the device itself that you pay for to perform a function that is to absolutely no benefit to the user and in fact can only ever be harmful to them.

It doesn't matter that the net data exfiltrated by Apple ends up being the same as before (through scanning unencrypted files on their servers). The distinction is so obvious to me that I find it incredible that people are legitimately arguing that it's the same, or that in some way this is actually helping preserve user privacy.

As mentioned in the article, this does absolutely nothing towards protecting children other than directing all but the biggest idiots towards platforms that can't be linked to them, which, I'd imagine, they already are.


> Under no circumstances is it justifiable to essentially enlist people's devices to police their owners, while using the electricity that you pay for, the internet service you pay for, and the device itself that you pay for to perform a function that is to absolutely no benefit to the user and in fact can only ever be harmful to them.

This is an obvious misrepresentation. An opt-in system to detect when adults are trying to groom kids by sending them porn seems like the opposite of harm. I imagine a lot of parents want that.

As for scanning what gets sent to iCloud. That’s also an opt-in service, and frankly it seems entirely reasonable for Apple not to want their servers to be used as a repository or hub for child porn.


It's not a misrepresentation whatsoever.

If Apple wishes to scan what's on their servers, that is their prerogative. They can use their compute resources and energy to do so. You needn't install spyware on a person's device that is of no benefit to the user. I'll reiterate: this can only ever be harmful to the user. Even now, when its utility is at its absolute best and most altruistic, it is still a violation of people's privacy and a theft of computing resources from the device owner.

> That’s also an opt-in service, and frankly it seems entirely reasonable for Apple not to want their servers to be used as a repository or hub for child porn.

This will not stop their servers being used as a repository for illicit materials, if that is what you're suggesting.


Maybe they want to make iCloud fully E2E encrypted, so they can no longer scan on their servers.

They have to scan for CSAM by US law.


Could someone point to the relevant statute/regulation?


As far as I'm aware, there is none, though the EARN IT Act, which advanced out of the Senate Judiciary Committee last year, would have brought us closer by opening a company to liability if it provides E2E encryption without a mechanism for CSAM detection. https://en.wikipedia.org/wiki/EARN_IT_Act_of_2020


They don’t have to scan by US law. They only have to report detections, so if you have a content moderation team, they must report.

The EU and UK are in the process of passing laws requiring scanning.


> stealing computing resources from the device owner.

No, it is opt-in. Nothing is being stolen.


> "As mentioned in the article, this does absolutely nothing towards protecting children other than directing all but the biggest idiots towards platforms that can't be linked to them, which I'd imagine, they already are."

I suspect you're more wrong than you think about this. People share large volumes of CSAM through lots of different services - I knew someone who worked on the problem at LinkedIn(!).

HN likes to downplay the actual reality as if it's always some trojan horse, but the issue is real. It's worth talking to people that work on combatting it if you get the chance. I'm not really commenting on Apple's approach here (which I haven't thought enough about), but I know enough to say that an immediate dismissal based on it 'not helping' does not really appreciate the real tradeoffs you're making.

You can be against this kind of thing from Apple, but as a result more CSAM will be undetected. Maybe that's the proper tradeoff, but we shouldn't pretend it's not a tradeoff at all.

"Robin Hanson proposed stores where banned products could be sold. There are a number of excellent arguments for such a policy—an inherent right of individual liberty, the career incentive of bureaucrats to prohibit everything, legislators being just as biased as individuals. But even so (I replied), some poor, honest, not overwhelmingly educated mother of five children is going to go into these stores and buy a “Dr. Snakeoil’s Sulfuric Acid Drink” for her arthritis and die, leaving her orphans to weep on national television" [0]

[0]: https://www.lesswrong.com/posts/PeSzc9JTBxhaYRp9b/policy-deb...

Edit: After digging in, HN commentary is missing the most relevant details about this particular implementation. iCloud image checking compares to known CSAM image hashes - this means effectively zero false positive rate.

For iMessage child account ML scanning it’s on device and just generic sexual material restriction - more similar to standard parental controls than anything else (alerts only go to the parent, and only happen on child accounts)

My initial impression is this is a good approach. The risk mostly comes from future applications (like the CCP adding hashes of other stuff that isn’t CSAM like tankman or something).

The frankly ignorant knee-jerk responses from technical HN readers do a disservice and weaken the ability of technical people to push back when necessary.


One of the prerequisites to "discussing the tradeoffs" of this ordeal is trust that the people behind it, and those in a position to push things further, are actually interested in keeping the scope limited.

There are diminishing returns to using the same old excuse again and again. I'd say most people are just tired of the whole "Just think of the children!" into "Ah we got this system in place why not use it against <a little less evil but still illegal thing> too" followed by "It's the law in China/Turkey/Russia/wherever, <Company> can't just ignore it (and thus not help putting reporters, critics and other people into prison)" combo.

What you are saying is basically another rendition of "look the problem of child abuse exists and this could help so it is worth discussing", which is also a variant of the same.

> iCloud image checking compares to known CSAM image hashes - this means effectively zero false positive rate.

Actually, we recently had news where Apple tried to help cover up their errors [1]. The system was supposed to be safe™, but that doesn't mean the people in control of it can't make mistakes, or worse.

What details are being missed here, exactly? Ultimately it is trivial to expand the hashes to compare to, isn't it? What does it matter that they use CSAM for now? It doesn't remove the involvement of humans in controlling the system, so false positive rate will never be "effectively zero", and it can easily be expanded.

We are left with the same two arguments as always:

- Think about the children

- Just trust the company, they use technology™, no you aren't allowed to check. Yeah they can easily abuse it, but we don't know for sure!

I wouldn't say HN is ignoring details, many people are just tired of the same old loop, there is no reason to put trust into these endeavors and it is reasonable to doubt even the motives.

[1]: https://news.ycombinator.com/item?id=27878333


It does not imply a zero false positive rate at all, because the hashes used are by necessity fuzzy, and even a completely benign picture can trigger a match.

Now they're saying that if the match is a false positive, it'll be screened by the human reviewer. But that means there's some stranger there potentially looking at my private photos (and, by the way, I wonder which country they'll be located in?). That's already completely and utterly unacceptable - you don't have to wait for any "future complications".


You need to stop assuming everyone is commenting out of ignorance and read up on how perceptual hashing works. It’s about as far as you can get from zero false positives, if configured otherwise it becomes just a poor version of SHA or whatever and can only detect exact matches.
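
To make the fuzziness concrete, here is a minimal sketch of dHash, one common perceptual-hashing scheme. Apple's NeuralHash is a different, neural-net-derived function, but the near-duplicate-matching property is the same in kind:

    # Minimal dHash sketch (Python, Pillow). Grayscale, shrink to 9x8,
    # set one bit per pixel depending on whether it is brighter than
    # its right-hand neighbor.
    import hashlib
    from PIL import Image

    def dhash(path, size=8):
        img = Image.open(path).convert('L').resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | (left > right)
        return bits

    def sha256(path):
        with open(path, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest()

    # A resized or re-encoded copy keeps (nearly) the same dhash value
    # while its SHA-256 changes completely. That tolerance is the whole
    # point of a perceptual hash - and also why collisions with unrelated
    # images are possible in a way they are not for cryptographic hashes.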


At the time of my comments people weren’t discussing the fuzziness of perceptual hashing - they weren’t discussing implementation details at all.

I don’t know enough about the specific implementation or perceptual hashing details and probably pushed back too hard as a result of the other comments at the time (which were comments out of ignorance).

The level of downvotes I received is disproportionate to what I wrote anyway - this is clearly a political issue on HN. The irony is I’d probably align more with the risks being too high position, but it needs to be considered after actually understanding what the true risks are first.


Apple is obviously already scanning for CSAM for whatever non-E2E content hits their servers. With this, they offload some of the cost of computation to the end nodes but that can’t be the only reason for pushing this tech, so it’s only natural to ask why and look for avenues of exploitation. The Trojan horse/foot-in-the-door hypothesis is basically Occam’s razor.


This Daring Fireball blog post is a better summary than the discussion I’ve found on HN: https://daringfireball.net/2021/08/apple_child_safety_initia...

Occam’s razor is that they’re doing what they say they’re doing and not some more complicated future action.


How will you know? And actually, how will Apple know that's all they're doing? They won't, because all they can see is a hash, so they don't actually know if they're checking for anti-winnie the pooh memes, or CP. And there is no public auditing, there is no auditing by your senators, there is no public disclosure of the hashes they are checking, and no public disclosure of the exact algorithm used. And even if they tried to do the latter two, you'd have no way of verifying it.

You said earlier upthread, that HN is ignoring the tradeoffs. I find that distasteful, people have considered the tradeoffs of this and they don't like them, but that doesn't mean they need to preface every single statement about the thing with "well actually i've considered the tradeoffs, and I've found them to be blah, and..." That would quickly get tiresome which is why people don't do it.

I'll make a different, similar generalization. Like most people pushing this, you are pretending to care about the "tradeoffs", but you don't. You don't care about the downsides, you just say you do, and then blithely ignore them when they are brought up. What's the golden rule again?


I would say most (if not all) know how easily you can change the scope of a database like this (like you said yourself) to check for other inconvenient terms, links, memes, etc. that do not "toe the party line" (whether eastern or western - I don't care) and shut down inconvenient discourse.

The point is: This is too easy to abuse.


Being specific with the arguments and addressing the nuance matters imo.

Most of the HN responses are dumb, get the details wrong, and don't take the tradeoffs seriously. That kind of thing causes me to dismiss them entirely.

There are legitimate arguments and risks which I’d concede, but they’re not what most people in the comments are talking about.


The trade-off here is to get pedophiles off of Apple services and on to something else, we give Apple an entry point into examining our private data. I understand it's hashes now, that does nothing to prevent this from expanding.

"People just don't understand!!" has never been a convincing argument.


This is just a slippery slope argument.

The implementation specifically for detecting CSAM can be okay while using it for other purposes cannot be.

The goal is to stop child sexual abuse, FB reports millions of cases a year with a similar hash matching model for messenger.

> ""People just don't understand!!" has never been a convincing argument."

That's not my argument - I just think most of the HN comments on this issue both miss the relevant details, and are wrong.


The slippery slope fallacy is often invoked incorrectly when it comes to people. Human nature is subject to the "foot in the door" sales tactic.

https://en.wikipedia.org/wiki/Foot-in-the-door_technique


I think HN takes the incorrectly perceived moral high ground and ignores the fact that there is a real duty to act here.

There are people who profit from CSAM and in turn this creates a market for abuse. Unless we create structures which disincentivize these behaviors - right now there are none - they will continue to grow. What Apple is building will basically make any criminal who sells to someone sloppy enough to store in iCloud risk being traced as the origin of the CSAM. Back to the darkest web they go.

Anti-CSAM scanning is inevitable. You will eventually find similar systems at all major providers.


I'm pretty sure child abuse is a criminal offense everywhere. That's quite a disincentive.


Is CSAM a specific list of images that does not change, or does a government body actively control this list? Is there anything that would technically prevent the addition of a specific image of an enemy of the state? Hypothetical question, how will Apple respond when the FBI asks Apple to scan for images they know are also on the phone of the latest domestic terrorist? And how likely do you think the FBI will try to expand the scope of images that are scanned?


I don't know - I think these are the good questions and the things I would want to know in more detail. I think this kind of thing is where the risk lies.

I think this is why getting the details right matters, because if people are arguing about unrelated nonsense it's harder to focus on the actual risks represented by the questions you're asking.


> You can be against this kind of thing from Apple, but as a result more CSAM will be undetected.

You cannot claim to be making "the real reasonable analysis" and write this. So much for "you're all geeks stuck on technical details". Quite the contrary: I'm sick of bogus software pretending to solve problems for me, while the quality of tech has exponentially degraded over the last 20 years (often due to trying to solve some unsolvable problem in a bad way that backfires).

Now imagine you have a 16 year old girlfriend. She sends you a nude photo. Your phone calls the cops on you (it doesn't matter if the phone doesn't quite do this now, it will in the future. They will use their ML crap to detect the age of subjects in photos and explicitness of the photo). You normally wouldn't go to jail for this since 16 is legal in 99% of the civilized world, but thanks to America with their super duper "non-technical" innovations that only big boy white collars can understand, you can go to jail for having a photo of your legal girlfriend.


Mostly good...

> big boy white

but why the totally uncalled-for racism and sexism here?

It does nothing to strengthen your argument and it is just dumb.


I am truly sorry for your loss in that your brain is implemented using regex.

I meant "white collar big boys", but I did not bother to edit as I'm writing.

The guy above is claiming everyone who is against Apple's yet-another-bogus-TPM-style-snakeoil is a little geek who does not understand anything outside their little tunnel.

Also now that I re-read his comment:

> Edit: After digging in, HN commentary is missing the most relevant details about this particular implementation. iCloud image checking compares to known CSAM image hashes - this means effectively zero false positive rate.

False: it's a perceptual hash. Set aside the fact that you might, for some reason, let people host stuff in your iCloud account (perhaps as a neat hack) - out of the terms of service, maybe, but certainly not worth 20 years of jail. Perceptual hashes have false positives, and can confuse images that appear harmless but were crafted to look like $badimg. But you don't have to be technical to understand that having your devices police you is bad; you just have to not be blinded by politics and the boogeymen your state has sold you.


It’s not just HN folks, go read the “open letter”.


So this new Apple stuff has made me decide not to buy an M1. I'm leaning towards a Framework laptop.

For my phone though... no idea. My iPhone is honestly such a solid piece of tech. I don't _want_ to go Google either... so what else do i have?

I know lots of people run de-googled Androids, which i guess works, but i'd prefer to avoid them entirely. Is there anything that works?

edit: I know of the Purism phone (https://puri.sm/products/librem-5-usa/) but that's the only one i know of. Anyone know of others?


My MacBook Pro is overdue for an upgrade, and I literally ordered a Framework laptop (on which I plan to run Debian) partially in response to this announcement.


Wow, I'm actually looking at Framework right now as well after this announcement - my 2015 Pro has been great but the tradeoffs are just too high.

I wonder if Framework will see a mysterious bump in sales because of this.


The other thing that pushed me towards Framework was the trivially user-replaceable parts, including the battery.

My MBP's battery failed (swelled up) twice while under warranty, but it was a pain to deal with each time: wipe the SSD (because it's soldered on), hand it over to Apple, wait several days for repair, and then restore from backup. And now it's no longer under AppleCare coverage anyway.

In contrast, Framework designed their motherboard so we can even use it as a standalone PC once we're done using it as part of a laptop. That's such a difference in terms of user repairability.


I am looking at Framework, but it still feels like legacy tech. I have an XPS 15 from 2019 and it feels slow, and the fans drive me crazy. I don't think Framework will be much different. I was looking at the M1 for the fanless experience and performance. I wish they would at least consider ditching Intel and adding a good AMD option - then I would buy it.


You're not wrong, in the sense that an M1 Mac would have better performance and battery life than the laptop I have on order. But to me, the tradeoffs (not only in terms of privacy, but also user-repairability) are worth it. (And my new Intel-based Framework laptop will still be much faster than the laptop I've been using for the last several years.)


The Pinephone running something like Ubuntu Touch (UBports) or PostmarketOS is something to keep an eye on, but it is still in heavy development and in my opinion not quite a reliable "daily driver" yet.


Commenting here as I share the same exact sentiment. I guess I am going to look into LineageOS, which falls in the de-googled category, but I'm not sure I am in love with that idea.

edit: Whoa, 2K for that purism phone. fuckin' a.


The standard model (https://shop.puri.sm/shop/librem-5/) is cheaper - I guess the USA one comes with a large uptick for local sourcing?


Ah, that's much more reasonable. The reviews on this phone are pretty awful, although it looks like it's rather new to the market.

Anyone who comes across this comment use one?


Yep, having spec'd a few boards for assembly in the USA and China, the manufacturing costs seem to always be about double for USA fabrication.


Planting false evidence is getting a new twist here. The attacker doesn't even have to make a report! The victim's computer does it for him. Disk encryption malware may have a new successor. Effective and scalable extortion as a service.


This is a great point. You don't even need to unlock an iPhone to take a picture. So in theory anyone with access to your phone for a few seconds could incriminate you with little effort.


This is already possible today with things like iCloud Photos, Google Photos, OneDrive etc.


Today: "WTF is this?" delete

iOS 15: "WTF is this?" SWAT team crashes through window


This is not true. The check is only against known CSAM hashes.


Just photograph a known bad picture.


Then that's a different photo and will have a different hash.


These aren’t cryptographic hashes. They are perceptual hashes and a picture of a picture could absolutely end up with the same phash value.


is there really no fuzziness to it? If not, can’t this be defeated by simply reencoding the image?


I think it has gotten more sophisticated to detect cropped images and small changes now: https://inhope.org/EN/articles/what-is-image-hashing

The example is somewhat contrived.

If a 'friend' takes your phone, uses it to take photos of CSAM similar enough to the originals to trigger the hash match, does this enough times to exceed Apple's threshold for flagging the account, and those images are uploaded to iCloud without the owner noticing, then yes, it might cause a match.

At that point the match is probably a good thing (and not really a false positive anyway) - since it may lead back to the 'friend' (who has the illegal material).


Or, you know, anyone who wants to plant material on a device and has physical access. Say a disgruntled employee before leaving, or an ex, or a criminal, or...

Or anyone who can just text you, since iMessage backs up to iCloud automatically...


While iMessage backs up to iCloud, this measure only applies to photos stored in the iCloud Photo Library. So sending a text with the photo is not enough.


Are you sure the hash function literally called "NeuralMatch", running on devices with 2+ generations of AI-capable chips, won't have "collisions"?


Picture of a picture?


I don't think this issue is widespread enough and harmful enough to society to risk the privacy of all Apple users. Full stop.

I haven't been particularly impressed with Apple's security record[1] lately, and I don't trust them to not mess this up.

1 - https://bhavukjain.com/blog/2020/05/30/zeroday-signin-with-a...


Why do you think this?


I can't find a widely accepted statistic but these numbers should illustrate the point:

- A 2016 study by the Center for Court Innovation found that between 8,900 and 10,500 children, ages 13 to 17, are commercially exploited each year in the United States. (Center for Court Innovation, 2016) https://www.courtinnovation.org/sites/default/files/document...

- The annual number of persons prosecuted for commercial sexual exploitation of children (CSEC) cases filed in U.S. district court nearly doubled between 2004 and 2013, increasing from 1,405 to 2,776 cases. https://www.ojp.gov/sites/g/files/xyckuh241/files/archives/p...

This is a niche crime from everything I've seen.

Apple, if it were truly interested in the net good of children, could have picked something that impacts more of them (nutrition? early childhood education?), didn't introduce new vulnerability / abuse surface area, and was less politicized.


Here are other statistics that suggest this is anything but a “niche” crime.

https://storage.googleapis.com/pub-tools-public-publication-...


This seems unbelievably high (though I have no reason to doubt the researchers).

If the prevalence is really around 8-30%, this seems a lot bigger than what Apple could even make a dent in. (Because most of the offenders are relatives/acquaintances of the victims. The phones don't seem to have any influence on the underlying numbers.)

Furthermore, criminalizing content again pushes the actual problem deeper into the shadows.

Why are there no routine questions about abuse for kids? At least that would help to identify victims, remove them from the abusive environment, and even potentially help catch the perpetrators.


Thanks for the statistics. To me, these numbers are significant.

Maybe Whole Foods or some popular restaurants are better candidates for working on improving nutrition in public schools? Why don't we let Apple contribute where it thinks it can? Maybe with Apple that number goes down from 10,000 to 2,000. Wouldn't that be a celebrated outcome?


We cannot stop all crime. There will never be a day where we stop all crime. It is not an achievable goal nor is it desirable because what constitutes a crime is written by the governments of the world and we have tens of thousands of years of reigning authorities to tell us that they will abuse the power they are invested with to protect themselves.

Authority and its keeping is the number two law of the jungle. Any power handed over in the name of security, "to stop all crime", is an affirmation, a concretization of its future abuse. You speak of the calculated cost of preventing child abuse as acceptable. What of the abuse of an entire people?

This is not handwavy theoreticals. We already know what happens, in the US, when you push an agenda in the name of protecting the children: it looks like FOSTA/SESTA, which has driven the sex workers of America underground and exposed them to more violence, more danger, in what is already one of the most murderous professions in the world. Those murders, in the name of protecting the young, are at the feet of the people who would protect the children with more authority.


> To me, these numbers are significant.

What would be insignificant? 1 child? 100? There are 73,000,000 children (under 18) in the US alone. 10,000 is about 0.014% of that population.

> Why don’t we let apple contribute where it thinks it can.

Apple is the most profitable company in the world. It's a company that prides itself on its imagination and innovation, I wouldn't discount their ability to come up with something.

> Maybe with apple that number goes down from 10,000 to 2,000. Wouldn’t that be a celebrated outcome?

No, it's not. We make trade-offs all the time. The possible harm to Apple's user base is not worth the possibility that this reduces child abuse. There's a possibility these people move on to another platform and this does nothing.

To get that number down Apple creates an entry point for violating the privacy of half a billion users worldwide. Many of them are in China, where pressure from the government has already moved Apple in directions that are harmful to its customers[1].

1 - https://www.nytimes.com/2021/05/17/technology/apple-china-ce...


> But when a backdoor is installed, the backdoor exists and history teaches that it’s only a matter of time before it’s also used by the bad guys and authoritarian regimes.

Problem is that this scanning is necessarily fuzzy and there is going to be a false positive rate to it. And the way you'll find out that you've tripped a false positive is that the SWAT team will knock your door down and kill your dog (at a minimum). Then you'll be stuck in a Kafkaesque nightmare trying to prove your innocence, having been accused by a quasi-governmental agency that hides its methods so the "bad guys" can't work around them.

It isn't just "authoritarian regimes" abusing it, it is the stochastic domestic terrorism that our own government currently carries out against its own citizens every time there's a bureaucratic fuckup in how it manages its monopoly on violence.

This is the "Apple/Google cancelled my account and I don't know why" problem combined with SWATing.


"Stochastic domestic terrorism" is not a term I ever expected to read, but it seems very fitting for the "data-driven" (but generally poorly evaluated) future we find ourselves drifting into.


"Stochastic terrorism" has been a popular term for a while[0], but it usually refers to the actions carried out by individuals, not government agents. Ironically, though, the word "terrorism" itself originally referred to the actions of a tyrannical government against its own citizens[1], so perhaps the definition has come full circle.

[0] https://www.dictionary.com/e/what-is-stochastic-terrorism/

[1] https://www.merriam-webster.com/words-at-play/history-of-the...


This really has been decades in the making, hasn’t it? Apple is only adding the latest piece of the puzzle. The seeds of everything you’ve described were planted years ago.

How did we get here? How did everything become so politicized and polarized? How did law enforcement become so militarized? How did we as a society become so terrified and distrustful of our neighbours?


>Problem is that this scanning is necessarily fuzzy and there is going to be a false positive rate to it. And the way that you'll find out that you've tripped a false positive is that the SWAT team will knock your door down and kill your dog (at a minimum).

Not true. Hash matches are to be human reviewed. So no, people won't get "swatted" accidentally as you allege.

The other concerns people have been voicing are certainly valid though (IMHO).


>Hash matches are to be human reviewed.

Until it proves too expensive; then a different AI system will do it instead. I have zero faith that a fully competent, well-trained, well-rested, well-paid person will actually be doing these reviews in the long run.


If you actually think there is going to be a fully automated system dispatching police SWAT teams throughout the US without a manual (or judicial) review then.... I really don't know what to tell you.


Yeah, it normally requires an anonymous phone call from a VoIP number.


Is this an allusion to an actual occurrence?



A dodgy phone call from an unknown number abroad by a stupid teenager is enough to get a SWAT team to murder innocent civilians, what makes you think they'll be smarter when the "call" comes from a robot?

I mean, seriously, you're assuming as beyond obvious something that's demonstrably wrong RIGHT NOW.


Those prank SWAT calls are (a) from a human and (b) based on the assumption of IMMEDIATE risk of death/injury. Furthermore, (c) the CP scanning implementation itself requires human review.

So no, under the proposed implementation (and current judicial requirements in the west) robots dispatching armed police to kill pets or people without ANY human oversight because of a situation that is not time sensitive is mere fantasy.

Don’t misunderstand, I am against this whole thing for the reasons many others have well articulated in this thread.

But the claim that robots are dispatching armed police without any human interference RIGHT NOW is provably wrong.

Show me one incident where this has happened already and I will donate $100 to the charity of your choice.

And yes, the judicial rules and Apples implementation could change in the future. That’s certainly a risk but is not the case RIGHT NOW. I mean, seriously?


Guess it's hard to believe that there is a world outside of the United States. Governments elsewhere already do whatever it takes to pin some BS on someone who opposes them and have them executed. This would be no different.


I don't understand. If the government can just claim you had CP without proof, then they can just… do that? They don't need Apple's system.

Especially since anything flagged by this system is manually reviewed by Apple. So, there would exist counter-evidence for the govt claim.

Don’t misunderstand, I see how this system is ripe for abuse. I was just commenting on the specific claim that there will be automated SWAT call outs (presumably in the US).


> If the government can just claim you had CP without proof, then they can just… do that?

The problem is that they can do whatever they want if you have CP. Emphasis on "have": you might not even know you have it, because all it takes for you to be guilty is for a forbidden bit of data to be on your disk or cloud account. How did it end up there? Doesn't matter much. It's unlikely you'll be convicted if someone else maliciously puts CP on your drive, but it's extremely likely your whole neighborhood will know about it before it's settled. Think about that.

> They don’t need Apples system.

Apple's system makes sure that whenever someone puts CP on someone else's account, it rings the police and starts the nightmare. Without it, there are a few extra steps that make the nightmare much less likely to happen in the first place.


This is plausible and exactly why I oppose this system.

The only argument I made was against the claim that fully automated police dispatches will occur because of this system (in the west).

Making outrageous, tinfoil hat level claims does not help our collective argument against this system.

It just makes us look like paranoid conspiracy theorists.

Let’s make coherent arguments (like the one you made above) instead of fantasy ones.


It should be extremely easy for any group that has access to tools like NSO Group's to use the information on a device to convincingly add data to devices. What is generally missing is the question of why certain people's devices were selectively searched. Attacks by the west on journalists often rely on the excuse of border searches for example.


It'll all be shadow-ban type stuff, your account will be turned off, without appeal, things of that nature.


That's a different thread of comments. This one is about automated dispatching of SWAT teams.


I have to imagine that you'll be investigated and can let Apple rehash your photos to find the ones that tripped the system. If Apple rehashes all your photos and suddenly none of them trip, and they accuse you of deleting the bad ones (assuming you haven't deleted anything), the problem isn't the hash algo, it's something else in the system.

That said, I don’t like my phone being a snitch.


Or Mechanical Turked.


Eh, humans make mistakes. That may reduce the false positives, but if this program runs long enough, I'm sure it will happen to someone when a reviewer accidentally presses the wrong button.


Google and Apple have burned all their trust for this kind of thing. They already do terribly with all the people who get banned off their platforms with no notice of what they did wrong and no recourse. These companies cannot be trusted with the power to arrest you. They don't care if they ruin innocent lives due to edge conditions.


> Hash matches are to be human reviewed.

This comment suggests the phrase might be cleverly worded, to make it seem like the images are human reviewed while that is actually not the case: https://news.ycombinator.com/item?id=28096059


Reviewing potential matches sounds like an awful job. Retention might be even lower than in those FB abuse centers.


That's part of why possession is illegal. The material is toxic.


This is wrong - the iCloud check is against known CSAM hashes, the false positive rate is essentially zero.


This isn’t a normal file hash. It’s not SHA or bcrypt. A wide variety of states (1s and 0s) can have the same hash.

The idea is to take an image and have all of its possible derivatives create the same hash.

For example, if a hash was made of the Mona Lisa, any copy no matter how large, small, black and white, would have the same hash.

Think of all the ways the Mona Lisa could be transformed and still be the Mona Lisa.

The combinations of ones and zeros would be in the billions. If not more.

And all those possible combinations of ones and zeros go back to the same “hash.”

That's extremely resource intensive.

My guess is that they are going to transform the images into a very low resolution, black and white thumbnail. Then compare it against known abuse images that have been similarly transformed.

Or they’re using AI. They might be using AI.

Either way, it's guesswork. How many images might be transformable to the same black and white thumbnail? I don't know.
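
For what it's worth, that thumbnail guess is roughly how the classic "average hash" works. A toy sketch of that scheme, not Apple's actual algorithm (Apple describes a neural-network-derived "NeuralHash"):

    # Toy average hash: shrink to an 8x8 grayscale thumbnail, then set
    # one bit per pixel depending on whether it is brighter than the mean.
    from PIL import Image

    def ahash(path, size=8):
        img = Image.open(path).convert('L').resize((size, size))
        px = list(img.getdata())
        mean = sum(px) / len(px)
        bits = 0
        for p in px:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a, b):
        # number of differing bits between two 64-bit hashes
        return bin(a ^ b).count('1')

    # Matching is then "hamming(h1, h2) <= small threshold", so every
    # derivative of an image (rescaled, recompressed, lightly edited)
    # lands near the same value - and occasionally an unrelated image
    # does too.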


Yes but my understanding is that the hashes are perceptual, as opposed to cryptographic.

If the system was matching against known cryptographic hashes the collision / false positive rate would be small, but the fuzzy matching involved with perceptual hashing necessarily has a greater false positive rate.

And that doesn’t even begin to address the detection of sent and received “explicit images” which are detected on device and don’t have a set of known hashes.


I suspect that's why they have some threshold that moves the false positive rate to one in one trillion.

The iMessage bit is different - it's only on device, only on child accounts, and only alerts parents. It's more akin to a parental control feature than anything else.
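
Roughly, the claim rests on thresholding: if each photo falsely matches independently with some small probability, requiring several matches before an account is flagged drives the combined rate down very fast. A back-of-the-envelope sketch with invented numbers (Apple has not published its per-image rate or threshold):

    # P(at least `threshold` false matches among n_photos), assuming each
    # photo falsely matches independently with probability p_false.
    # Computed via the complement so the summed terms stay small.
    from math import comb

    def p_account_flagged(n_photos, p_false, threshold):
        p_fewer = sum(comb(n_photos, i) * p_false**i * (1 - p_false)**(n_photos - i)
                      for i in range(threshold))
        return 1 - p_fewer

    print(p_account_flagged(10_000, 1e-6, 1))   # ~0.01: single false matches happen
    print(p_account_flagged(10_000, 1e-6, 10))  # so small it prints as 0.0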


It was pretty dumb of Apple to announce both of these things at the same time.

They have nothing to do with each other, and I've seen a dozen people on HN confuse them. If the message is getting muddled here, it will be hopelessly conflated in less technical circles.

I'm concerned and upset about the CSAM filter for all the reasons that keep hitting the front page, but don't care about the opt-in parental controls at all, and if I had kids, I might want them.

But if I thought the CSAM filter worked like the nudie-detector filter, I'd be wigging out.


> I suspect that's why they have some threshold that moves the false positive rate to one in one trillion.

So now, instead of sending just one nice, innocent, very high resolution image of "Tokyo City" or something with something horrific hidden somewhere, you have to send a few such images.

That is reassuring. I can never believe that no one except me will think of that.

(If the system is too dumb to detect this it is worthless, and if it is smart enough this opens the floodgates for anyone wanting to make trouble for just about anyone.)


This would require multiple hash collisions.


No need for hash collisions; that's what makes it so bad: just embed multiple known verybad images into multiple nice high-res photos, like I tried to explain above.


It’s a visual hash, not something like an md5. And now you’re talking of a directed attack to intentionally make hash collisions?


It doesn't matter if the cryptographic algorithm's error rate is one in a trillion. If there's a concurrency bug in the application which applies the algorithm, the wrong account could be flagged against a legitimate image in the database. If the human involved overly trusts the system, they may not validate against the actual file in the user's account, or may assume the user deleted it somehow and figure it's "safer" to let law enforcement figure it out.

Bugs in the application of the code, combined with human complacency and mistakes can certainly lead to errors, even if the cryptographic algorithm itself was perfect.

We really need to bring back comp.risks


This is a great point, thanks for adding that detail.


This seems naive, there are always false positives


It’s not naive, it’s math.


It's not a math problem, it's a human problem.

Suppose you have a partner who is a 'petite' woman of 34. She enjoys posting nudies on a website, but without her face in the picture. Someone who collects child porn downloads one, because he enjoys that picture. A year later he gets caught by the police and all his pictures get marked as 'verified child porn'. Suddenly you are marked as owning child porn.


That isn’t CSAM.

https://www.nytimes.com/interactive/2019/09/28/us/child-sex-...

Apple’s thing has some sort of threshold anyway so one image would not trigger it. I don’t buy your example - the CSAM images are not what you’re describing.


Who determines whether something is CSAM? How do you or I know that every single one of those images actually is CSAM? How do we know if the FBI or CIA or CCP adds hashes of innocent pics in order to pin a crime against someone?


No. You can design a system where the FPR is essentially zero. Even if shitty md5 is used.


Perceptual hashes are not exact hashes, otherwise they would be useless for this task; you would just mirror the image or change 1 pixel.

They are instead fuzzy classifiers, and thus have non-zero error rates.


But the trigger requires many of these false flags to exceed the threshold (most likely why it exists). I imagine the "one in a trillion" numbers they claim for the false-flagging rate are probably grounded in reality, and make it trivial for human review.


I've explained the problem here: https://news.ycombinator.com/item?id=28099927


How does this relate to the allegation I was responding to? There is a difference between false positives and false negatives you know. False negative is where the criminal evades the system through measures you describe. False positive — what I was responding to — is where Apple incorrectly accuses an innocent person and thus violates their privacy during the subsequent review.


> No. You can design a system where the FPR is essentially zero. Even if shitty md5 is used.

How can you do that, considering md5 can have collisions?


A confusing thing that I think people haven't addressed clearly for the most part:

For a hash (whether cryptographic or perceptual), there is a chance of random collisions and also a difficulty factor for adversarially-created intentional collisions. The random collision probability has to be estimated based on some model of the input and output space (with cryptographic hash functions, you would usually model them as pseudorandom functions and assume that the collision probability is the same one created by the birthday paradox calculation).

Intentional collisions depend on insight about the structure of the hash function, and there are also different kinds of difficulty levels depending on the nature of the attack (preimage resistance, second-preimage resistance, and collision resistance). Gaining more insight about the structure of the hash function can act to reduce the work factor required for mounting these attacks. That should be true for perceptual hashes just as much as cryptographic hashes, but presumably all of the intentional attacks should start off easier because the perceptual hashes' threat models are weaker and there's much less mathematical research on how to achieve them.

And in AI systems involving classifiers, it was generally easy for people to create adversarial examples given access to the model. Perceptual hashes for estimating similarity to specific known images aren't the exact same thing because it's less like "how much like a cat is this image?" and more like "how much like NCMEC corpus image 77 is this image?", but maybe some of the same techniques would still work. In the cryptographic hash analogy, I guess that would be like trying to break preimage resistance.
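
For scale, the random-collision side can be put into rough numbers with the birthday bound, under the idealized pseudorandom-function model described above (real perceptual hashes cluster similar inputs by design, so treat these as optimistic):

    # Birthday bound: P(any collision among n items hashed to b bits)
    # is approximately 1 - exp(-n^2 / 2^(b+1)) in the idealized model.
    from math import exp

    def p_any_collision(n_items, bits):
        return 1 - exp(-n_items**2 / 2 ** (bits + 1))

    print(p_any_collision(10**12, 64))    # ~1.0: 64 bits is far too few
    print(p_any_collision(10**12, 128))   # ~1.5e-15
    print(p_any_collision(10**12, 256))   # 0.0 at float precision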


Thanks for the detailed explanation. I understand why that works for perceptual hashes if you make them really precise, however I doubt it would work with md5, which is why I asked.


The discussion I thought we were having was about false positives and not adversarially induced false positives. For the former the random collisions have a probability of 1/(2^64).

To mitigate adversarial false positives, one idea is to use the combination of a cryptographically strong hash along with a randomly selected perturbation of the file. Prior to hashing, perturb the file and submit both the hash and the selected perturbation to Apple. Apple selects the DB based on the perturbation and proceeds with matching and thresholding.
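
A sketch of the shape of that protocol, with everything here hypothetical: the salt-keyed hash stands in for whatever perturb-then-hash scheme would actually be used, and it assumes the server precomputes one known-hash database per salt value:

    import hashlib
    import random

    NUM_DBS = 256  # hypothetical: server keeps one hash DB per salt

    def perturbed_hash(image_bytes: bytes, salt: int) -> str:
        # Stand-in perturbation: mix the salt into the hash input. A real
        # scheme would perturb pixels before a perceptual hash; this only
        # shows the shape of the idea described above.
        return hashlib.sha256(salt.to_bytes(2, 'big') + image_bytes).hexdigest()

    def client_submit(image_bytes: bytes):
        salt = random.randrange(NUM_DBS)  # chosen fresh, unknown to an attacker
        # The server looks the digest up in its salt-specific database of
        # known-CSAM hashes computed under the same perturbation.
        return salt, perturbed_hash(image_bytes, salt)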


> The discussion I thought we were having was about false positives and not adversarially induced false positives.

I think the rest of us have been discussing how this can and will be abused, by definition by adversaries.

Many of us have also observed for years how systems are abused so we sadly have a gut feeling for this.


I thought that the random collision rate for md5 was way higher than that. If it's that low, you're right, this would work. I'm not sure I understand the part about the perturbation.


It’s actually 1/(2^128).

If the attacker does not know how the image will be perturbed prior to hashing then he cannot generate an image which matches with known CSAM.


One in one trillion chance per year, per the paper on the Apple site.


I see the 'one in one trillion' but it's a bit vague; I read it as a '1 in a trillion chance' per image.

So when we have a billion iPhones in the wild taking 10 images a day... 1-in-a-trillion chances happen every few months. Now, if that triggers some further review, maybe that's an acceptable false positive. If it triggers a SWAT team, I don't think it is.
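
Putting round numbers on that reading:

    # Rough scale check for the parent's estimate.
    phones = 10**9
    photos_per_day = 10
    p_false = 1e-12  # the "one in a trillion", read as per-image

    expected_per_year = phones * photos_per_day * 365 * p_false
    print(expected_per_year)  # 3.65, i.e. a false positive every few months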


They have also indicated that a single match won't be enough to trigger it, so an accidental match (being that 1 in a trillion person) is not enough.


Yeah, that’s essentially zero.


I would like to bet Apple's market value against that number being correct in an adversarial context.


> the false positive rate is essentially zero

I highly doubt that


Based on what? Unless you point out a flaw in the math, it’s zero.


> Based on what?

Based on the fact that ultimately we can't check the system. And based on the fact that at some point in the chain humans are involved. [1]

What we are left with is "trust in Apple" not "trust in math".

[1]: https://news.ycombinator.com/item?id=27878333


Apple will release this in their desktop PCs. The major consequence is that, a year down the line, Microsoft will also be compelled/forced to join this "coalition of the good". After all, they already scan all your files for viruses; it would be a shame if they didn't also scan for anything deemed incriminating and call the cops on you. Of course, the children are being used as a trojan horse again. We don't even mention the giant logical leap from finding someone possessing CP to automatically considering them a child molester, yet someone possessing a murder video is not considered a murderer.

I'm just imagining the situation where these companies took the initiative to scan all their users' data in a situation like the attack on the US Capitol this year. Creating new affordances for spying always leads to their abuse the first chance an extreme circumstance occurs. So there is no excuse for creating those affordances just "because they can".


Of course, the next step after that is to make "non-authorised" operating systems -- which may include Linux (except for specially signed versions that will also include similar spyware) -- "deprecated" or "discouraged", then "suspicious", and perhaps even eventually illegal.

Stallman predicted a similar outcome, and although he (and many others) thought the end of computing freedom would be due to copyright/DRM, I wouldn't be surprised if "the children" is what eventually pushes things over the edge.

https://www.gnu.org/philosophy/right-to-read.en.html

> in a situation like the attack on the US Capitol this year

...and as much as I'd like to see at least that amount of "watering the liberty tree" directed at Big Tech, it unfortunately would likely lead to even more authoritarian outcomes. Any future fights for freedom will need to happen online and non-violently, but on platforms that are also under their control.


> ...and as much as I'd like to see at least that amount of "watering the liberty tree" directed at Big Tech […]

I’m having a hard time finding a reading of this that isn’t advocating violence against tech workers. Is that what you intended?


Given corporate personhood, destroying the companies could be considered corporate murder. (I admit it's a very stretched reading.)


I agree that is a favorable reading.

But it is especially a stretch in the context of January 6th which involved violence directed at people. And the rest of the paragraph laments that future action must be nonviolent.


So, in fact, it's not advocating violence.


It’s hard to say. Words like “I’d like” and “unfortunately” make me think it’s a desire for violence but an acknowledgement that it’s not worthwhile. Like I said, I’m trying to find the favorable reading. It seems hyperbolic at best.


Tech workers are replaceable. Even if we assume violence was implied, it's better directed at their datacenters.

I did not find it very hard to assume a not-the-worst-possible reading of the comment. What's with the comment police here lately, labeling various comments? I hope this ain't becoming the new Twitter.


Are you aware of what “water the tree of liberty” references?

It is a Thomas Jefferson quote:

“The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.”

In the context of January 6th it’s hard to make a leap to metaphorical server blood. Datacenters still have people inside them, no mention of servers was made. Only a quote from a man who waged a real war.

> tech workers are replaceable.

Speaking as a tech worker, no, my life is not replaceable.

> I did not find it very hard to assume a not-the-worst-possible reading of the comment.

The not-the-worst-possible reading being what? Mobs attacking data centers?

> What's with the comment police here lately that's labeling various comments? I hope this ain't becoming the new twitter

I feel the same way. So let's tone down the rhetoric. If this post was metaphorical hyperbole it is the kind that got a president impeached.


> possessing CP to automatically considering them a child molester, yet someone possessing an action movie is not considered a murderer

That's specious reasoning. Someone who possesses an action movie likes action movies, while someone who possesses child porn likes child porn. One is ok; the other is pretty vile and illegal for a reason.

I don't agree with Apple on this, but let's be clear on what is and what isn't.


> Someone who possesses an action movie likes action movies, while someone who possesses child porn likes child porn.

Unless it's planted. [0] Or sent to you. [1] Or (farther out there) happens to be embedded on a site you visited and ends up in your browser cache.

[0]: https://www.nytimes.com/2016/12/09/world/europe/vladimir-put...

[1]: https://www.nytimes.com/2019/06/17/nyregion/alex-jones-sandy...


It only scans images in your iCloud Photo Library. Not paying for iCloud? No scanning. Not in your photo library? No scanning. And even then, only for known content, not new content.


Conveniently, iOS 15 also syncs images from your messages and many other places into your photos. Whether this will put that file in your iCloud Photo library, I do not know.

https://www.macrumors.com/how-to/see-photos-shared-with-you-...

On top of that, at this stage you are right. How long before they move it to every file in your device's storage "because of the children!"


So the attacker only needs to get access to your iCloud. Your iPhone will happily sync down photos uploaded elsewhere.

You don't have to be paying for iCloud, either. There's a free tier, so I'd imagine almost all iPhones are using some tier of it.

iCloud account break-ins aren't exactly rare. An accusation, even if false, could ruin an innocent person's life.


Not only is there a free tier, but the iPhone defaults are cleverly configured so that you quickly fill it up with random junk on your phone, and feel pressured into paying for more iCloud storage, because the default sync behavior is so non-obvious, and the settings to disable it are buried.

I know multiple people (most of them in their 50s or older) who started paying for iCloud because they thought it was their only option.


> There's a free tier, so I'd imagine almost all iPhones are using some tier of it.

I was quite surprised to see this was the default, or at least was set up without my knowledge.


If this was going to happen, it would have already happened on Android, Gmail, OneDrive, or any of dozens of services which already do what Apple is now doing.

Or are you saying that malicious activity is only interesting if it was on an Apple device?


You can have videos of somebody actually getting killed and it's not a crime. Reddit killed the /r/watchpeopledie subreddit, but it wasn't illegal to go on.

Until people are actively encouraging people getting killed - by paying for gladiator matches, say - I agree with you it's not the same thing, but I don't think the person you're answering to is talking about action movies.


> You can have videos of somebody actually getting killed and it's not a crime.

yet


You realize, if this is as problematic as we are saying and people's lives are ruined by false positives, it will create a political movement that will make this illegal. Stop trying to lobby companies and lobby for politicians who will rein this shit in.


Microsoft already does this.

"Microsoft removes content that contains apparent CSEAI. As a US-based company, Microsoft reports all apparent CSEAI to the National Center for Missing and Exploited Children (NCMEC) via the CyberTipline, as required by US law. During the period of July – December 2020, Microsoft submitted 63,813 reports to NCMEC. We suspend the account(s) associated with the content we have reported to NCMEC for CSEAI or child sexual grooming violations."

https://www.microsoft.com/en-us/corporate-responsibility/dig...


I think that's for files uploaded to OneDrive etc., not for all the files on any Windows PC.


The encouraged Windows 10 setting is to sync your whole profile to OneDrive.


Does this imply that Apple was not checking the photos stored on their servers for CP? Then iCloud must have been the best way to share it.


It at least looks like Apple is the last one to do this. Google, Flickr, Facebook, Twitter, etc. have done this since forever.


Almost like Apple was the last holdout to not give up customers’ privacy, and eventually had to cave. So they came up with a sophisticated system to handle the scanning on-device, and only for data that’s already destined for their servers.


>We don't even mention the giant logical leap from finding someone possessing CP to automatically considering them a child molester, yet someone possessing an action movie is not considered a murderer.

This is a wild and, in my opinion, wrongheaded analogy. Possessing CP is a crime by itself. It doesn't matter if the person possessing it is actually a molester or not. It is just like the possession of drugs being illegal: it does not matter whether the person has actually taken or plans to take those drugs.


The worst thing about it is that CP possession is a strict liability crime in most jurisdictions. Meaning that prosecution doesn't have to prove intent - if they can find it on your machine, you're guilty regardless of how it got there.


But for any serious charge you need indisputable evidence, and usually lots of it; the threshold is usually quite high. But in the United States the system is insane, so I could be wrong.


All the evidence required is that it is there.

It does not matter if someone else put it there without your knowledge. Or maybe it matters, but then it's on you (for all intents and purposes) to prove that it was there without your knowledge.


The punishment for strict liability is much lower than for the same act with intent.


It's still essentially a death sentence if you are innocent.


In criminal law you must be aware of an item for it to be in your possession. If someone plants illegal content on your computer and you are unaware, you aren't legally in possession.


"Typically in criminal law, the defendant's awareness of what he is doing would not negate a strict liability mens rea (for example, being in possession of drugs will typically result in criminal liability, regardless of whether the defendant knows that he is in possession of the drugs)."

https://www.law.cornell.edu/wex/strict_liability


Note the word 'typically'.

The purpose of strict liability in possession is to prevent the defense that someone does not know the legal status of an item in their possession. It does NOT prevent the defense that someone does not _know_ something to be in their possession.

For example, it is not a defense to have drugs and claim "but I didn't know they were illegal". It is a defense to claim "I did not know they were there."

In drug cases with actual possession, it is difficult to support a defense of "I didn't know they were there", which is why charges typically result in criminal liability. The drugs were physically on you, and unless you have evidence that someone planted them, it is unlikely you could establish reasonable doubt.

But in cases of electronic material on networked devices, there is most certainly an affirmative defense to counter actual and constructive possession. Computer devices are hacked all the time, and network and device logs exist. For example, if a prosecutor stipulated that a defendant had no knowledge of the material, a judge would toss the case and a jury would not convict. The law is not meant to pedantically convict you of non-crimes.


Yeah, it wasn't meant to be an analogy. For example, consider a real murder video: is its possession illegal, and does it mean its owner should be a murder suspect?


You are still falling into the same line of thinking here. The person isn't arrested because we assume they are a molester. They are arrested for possessing illegal material. Your argument here really sounds like you are suggesting that CP should be legal and it is the action of molesting that should be illegal. I don't think you would find many people who agree with that. Society has decided both are separate crimes and while there is likely large overlap, overlap is not required to be charged for either crime individually.


> Society has decided both are separate crimes

Actually it's legislators and the courts that come up with these laws, and they are complicated. The question is whether these laws reduce child abuse or simply increase spying, and, in the end, what the acceptable balance between the two is.


Legislators are a proxy for society. Do you believe that if you polled the US that any sizable portion of the country would be in support of legalizing CP?


If you polled the population, chances are they'd support torture, rape and execution as punishment for abusers.

The real crime is child abuse. Material related to that is also illegal because it presumably creates demand for the abuse. Whether that's actually true I don't know.


In liberal democracies legislators also protect fundamental rights from the whims of the majority.

> if you polled the US that any sizable portion

No, but they would also agree that they should have the option to keep their data private. Maybe the public should be polled about this tradeoff?

For example, the case of virtual CP has an interesting legal history: https://www.freedomforuminstitute.org/first-amendment-center...


>No, but they also wouldn't consent to having no option left to record their thoughts privately. Maybe the public should be polled about this tradeoff?

It is fine if this is your argument, but you don't have to wrap this argument in with the very legality of CP. You can acknowledge that something is and should be a crime while also being against these types of automated dragnets to find people guilty of said crime.


> something is and should be a crime while also being against these type of automated

But there is a clear tradeoff between the two, so saying that would be a useless platitude. If we really see the internet as an extension of our vocal cords, then we should have individual rights to it, especially considering that the internet infrastructure is not provided by the governments themselves; only the spying is.


I don't know what to say to you if you consider someone voicing opposition to this move by Apple as a "useless platitude" if they don't also believe that we have a constitutional right to post CP on the internet.


> yet someone posessing an action movie is not considered a murderer.

I think it is more similar to drugs: possessing one doesn't mean you are consuming it, yet it is an illegal substance, and production, transportation, and distribution are understandably not allowed.


Good point. There are over 80,000 drug overdose deaths a year in the US. If these systems can detect nudity and CSAM, then why not save tens of thousands of lives by detecting heroin and fentanyl dealers, too?


[flagged]


That's a stupid assumption to make based on that. If someone emails or messages you a picture, then you're in possession of it whether you chose to be or not.


No, it was not a stupid assumption. You need to read more carefully, as I exposed a flaw in his reasoning. The suggestion was that since CP possession does not prove physical molestation, it is OK. Actually, possession of CP is a crime in itself. This is an objective fact. I've known otherwise intelligent people who do believe it's actually OK to possess child porn if they're not physically molesting anyone.


Of course it is not OK. Does making it illegal and violating everyone's privacy rights reduce child abuse? I'd like to see the evidence. Another example is virtual child pornography, which, evil as it is, does not harm anyone during its production, yet is also illegal (in the US). The question is how far governments will go in instrumentalizing child abuse prevention in order to spy on their own citizens.


> We don't even mention the giant logical leap from finding someone possessing CP to automatically considering them a child molester, yet someone posessing an action movie is not considered a murderer.

Filming an action movie does not require actually murdering people.


And neither does producing child porn. Drawn child porn is banned in most countries.


And that's silly. But I'm talking about actual child porn.

I'm not defending Apple. I'm saying it's absurd to act like "merely" distributing explicit photos taken without the subject's consent is a victimless crime.


>We don't even mention the giant logical leap from finding someone possessing CP to automatically considering them a child molester, yet someone posessing ~~an action~~ a murder video is not considered a murderer.

That is because the consumption of child porn induces people to create it. The consumption of action movies does not induce people to murder anyone.


I'm curious to what extent the demand for new child porn content is directly driven by its illegal nature.

Some proportion of existing content has been discovered, tagged, and hashed. The legally and technically savvy members of the audience would likely know this. In such an environment, "old standard" content would be seen as riskier to hold or exchange. A completely fresh, new abuse image is the least likely to trip any automated monitoring system.

I could also easily see warez-ratio style "you've got to share something new to get access to our content" patterns, to try to discourage law-enforcement infiltration of their groups and to ensure "if I go down, I take you with me" legal leverage.


[citation needed]


Windows has been doing this for years. Not with this new tech, but with older methods.


I'm skeptical of the claim that MS has been A: scanning files on local windows machines and B: forwarding anything concerning to law enforcement. That's a fairly extraordinary claim, and I would like to see the evidence.


Microsoft hasn't been scanning files on local machines, but it has been doing essentially the same thing as Apple for years -- it pioneered this with PhotoDNA back in 2009, in use with OneDrive since back when it was called SkyDrive (https://www.makeuseof.com/tag/unfortunate-truths-about-child...). Apple's implementation scans images 1) stored on iCloud or 2) in Messages, if parents turn that on.

The first seems pretty arbitrary -- why is it worse to scan files you're sharing with the cloud locally rather than in the cloud (aside from the potential performance/battery impact, which seems moot)?

If Apple brings this feature to the desktop, it seems likely they'd be using it the same way: files stored in their cloud.
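
As a side note, the gist of perceptual-hash matching (the family PhotoDNA and Apple's NeuralHash belong to) can be sketched in a few lines of Python. The real algorithms are proprietary and far more robust; the toy "average hash" below, the file name, and the database entries are purely illustrative:

    # Toy perceptual hash ("aHash"): shrink to 8x8 grayscale, then emit one
    # bit per pixel depending on whether it is brighter than the mean.
    # Requires Pillow. PhotoDNA/NeuralHash are far more sophisticated.
    from PIL import Image

    def average_hash(path, size=8):
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # Matching is fuzzy: a small Hamming distance still counts as a hit,
    # which is what lets a match survive re-compression or resizing.
    KNOWN_HASHES = {0x8F3B2C19D4E6A071}  # hypothetical database entries
    flagged = any(hamming(average_hash("photo.jpg"), h) <= 5
                  for h in KNOWN_HASHES)

The point is that matching of this kind only answers "is this visually one of the already-known images?", never "what is in this photo?".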


OK, but this is not Windows, which is what the OP stated.


What are you talking about?


It's inevitable that software and hardware will be controlled by law enforcement. It's just too good an opportunity not to use. Nearly no one will fight it, and the apathy of the masses will lead us directly into a system at least as oppressive as China is now.

You can prepare for it, though. Organize some hardware while you still can, teach your children not to trust any device or service, and, more importantly, keep your mouth shut. It might be hard for a typical US millennial to grasp the concept, but people who grew up in Eastern Europe will understand.


> Do we really think criminals trade CSAM in plain-sight web sites ?

"In 2018, Facebook (especially Facebook Messenger) was responsible for 16.8 million of the 18.4 million reports worldwide of CSAM"

Apparently they do, yeah.


Also

> Do we really think criminals don’t know mathematics or programming ?

Yes, by and large I do. ANOM is a great example of this. Surely, in the world this author is imagining, no criminal would get caught up in a scheme like ANOM, since there are criminals who know "mathematics or programming" -- except no one noticed or blew the whistle, and instead the FBI rounded up the users of that device.


Child abuse, or any abuse, should be stamped out vigorously.

But this is the job of the police, using standard techniques to track down first the people making and distributing this content, and then its consumers.

It is not Apple's job to put a policeman in everyone's phone.


Why stop at scanning photos in your phone?

With lower-power sensors, head-mounted devices with always-on sensors, and whatnot, why not sample real-time hashes that can tip off LEO about potential crimes happening?

Then why sacrifice recall in order to achieve high accuracy?

Err on the side of uploading more hashes. Then feed it all into an ML model so it can use other data to filter out potential false positives.

Then, if in a distant future any LEO wants to investigate you for whatever reason, the set of potential hashes associated with your account will provide sufficient evidence for any court to authorize further non-hashed data access (it doesn't matter if they were all false positives).


For another thought on how easily this could suffer from scope creep: iOS does (did?) store whole-app screen captures as PNGs for use in the app switcher, etc.

If this is already scanning photo libraries locally, just add the temp dir for app screen captures and they're effectively monitoring your screen too.


It's no coincidence this system launched around the same time the whole NSO scandal broke. The NSO leak shows what government-sponsored exploit analysis against a large tech company may yield. I mean, the NSO exploit could've worked the same but been a worm; it could've been absolutely devastating for Apple: imagine something like every phone infected. Something like that was possible with that exploit.

Apple has been a thorn in the side of the IC for a long while. The IC probably saw an opportunity to gain a bit of leverage via the whole NSO thing, and likely offered its cyber support in exchange for some support from Apple.

I mean, c'mon, they've been consistently pressed by the IC for tooling like what they just launched; it's the least invasive thing (compared to a literal backdoor like the alleged NSAKEY that MS put in Windows) they can offer in exchange for some cybersecurity support from the government.

idk if that's what's happened, but it's odd Apple would do this at all, and do it right around the time of the NSO thing.


This system is likely closely related to full encrypted E2E iCloud backups: https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


Were iCloud photos already scanned for CSAM? In the on-device system, if you're not using iCloud, are the photos scanned?

If that's true, as an iCloud user you are exactly as likely to be charged with a crime based on your photos as you were before, but you now get E2E encryption.

Obviously I'd prefer E2E without any scanning. If I uploaded a pirated MP3 to iCloud, I wouldn't want the RIAA knocking on my door. However, given that scanning was already in place, is this a step forward?


> If that's true, as an iCloud user you are exactly as likely to be charged with a crime based on your photos as you were before

Is this strictly true? I feel like evidence that a photo was present on a specific device is different from evidence that a photo was uploaded by a specific account (and probably a specific IP address).

It seems like it would be far easier for the government to justify a search warrant if they have evidence that the photo they are looking for was on a specific device. Just having evidence that a specific account uploaded a photo seems like far shakier grounds for searching a specific device; after all, accounts are often stolen for criminal purposes, and IP addresses don't map cleanly to people or devices.


Maybe there’s some plausible deniability if a warrant uncovered nothing - but I think they would always be able to get a warrant and try to find the device and more evidence based on the upload.

From what I’ve read, the on-phone scanning only alerts after multiple photos match and is designed for a one-in-a-trillion false-positive rate. If the iCloud scan is similar, they would have a strong case for getting a warrant based on uploads.
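
To put rough numbers on that threshold design (the per-image rate, library size, and threshold below are my own assumed values, not Apple's published parameters): if per-image false positives are independent, requiring many matches before any alert drives the per-account rate down astonishingly fast.

    # Back-of-envelope only; all parameters here are assumptions.
    from math import comb

    def p_account_flagged(p_image, n_photos, threshold):
        # P(at least `threshold` false matches among `n_photos` photos),
        # treating per-image false positives as independent Bernoulli trials.
        return sum(comb(n_photos, k)
                   * p_image**k * (1 - p_image)**(n_photos - k)
                   for k in range(threshold, n_photos + 1))

    # A 1-in-a-million per-image rate, a 10,000-photo library, threshold 30:
    print(p_account_flagged(1e-6, 10_000, 30))  # astronomically small (about 4e-93)

Whether per-image false positives really are independent (near-duplicate photos aren't) is exactly the kind of assumption a real analysis would have to defend.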


The entire reason to encrypt data before transferring it is to preserve the privacy of the individual, which is rendered completely irrelevant when you have a system in place to scan everything before it ever gets uploaded.

It's like living in the glass apartments of Yevgeny Zamyatin's We but still thinking we're preserving privacy because we put our items into an opaque box.


But why would Apple not mention this, if that was their intention?


Say that you're an official with the Chinese Communist Party (CCP). You have huge stacks of anti-CCP brochures. You've got them scanned and hashed. Next you call Apple and say, "Please alert us if similar imagery appears on your customers' devices. My assistants will send you weekly updates of the required hashes." Apple would say, "Sure, we're just following your law..."

Hence, when a Chinese citizen photographs such a brochure "in the wild" using an iPhone, someone from "the government" will knock the next day and "strongly enquire" about yesterday's photo. Likewise when a Chinese minor receives an iMessage containing such a brochure.

This is just _one_ example case of "extension" of the CSAM database as seen fit by some regulatory body.


Yep, and now Apple can no longer say "Sorry, we can't put a backdoor into our devices" - one already publicly exists.


Technically it's a "front door", since it _publicly exists_.


It'll start with protecting children. We all want to protect children, don't we? Why do you want children abused? Are you a child abuser, what do you have to hide?

Next it's elderly people. We don't want our forgetful elders to get lost, do we? What if grandma wanders off but shows up in someone's picture? Surely you want the police to know right that second where she is.

Next up, terrorists! Four adult brown men in an unmarked van are certainly suspicious, especially near a government building. Your Instagram selfies will help the police in the USA shoot even more innocent people for no good reason.

Animal abuse is next. You don't like puppies being abused, do you? Why do you hate puppies? Do you take part in illegal underground dog fights?

Gosh, that video looks like it might have been pirated.

Nice house, but based on your estimated income it's really strange that you have such a big television. Is that safe full of cash? How much cash is on your table?

Is that a bit of dust, flour, or maybe crack cocaine?

Is that person asleep or recently murdered?


> It'll start with protecting children. We all want to protect children, don't we? Why do you want children abused? Are you a child abuser, what do you have to hide?

To be clear, this is continued enforcement for years-old regulation. The feature is only enabled in the US where it is required.

The implementation is changing from cloud-based matching (which requires photos to be readable by their cloud infrastructure) to local, on-device matching with threshold tokens, which would potentially allow them to remain in compliance while making the system E2E encrypted, with two additional key-release mechanisms: key escrow via separate audited HSM systems, and threshold disclosure of the image encryption key.
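
For anyone wondering what "threshold disclosure" means mechanically: the textbook construction is Shamir secret sharing, where each matched image releases one share of a decryption key and only a full threshold of shares can reconstruct it. A minimal sketch, with a toy field and parameters of my choosing, not Apple's actual scheme:

    # Toy Shamir secret sharing over a prime field: the server can recover
    # the key only once it holds `threshold` shares (i.e. that many matches).
    import random

    P = 2**127 - 1  # a Mersenne prime; fine for a sketch

    def make_shares(secret, threshold, count):
        # Random polynomial of degree threshold-1 with the secret as the
        # constant term; one share is one point on the polynomial.
        coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, count + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    key = 0xC0FFEE
    shares = make_shares(key, threshold=3, count=10)
    assert reconstruct(shares[:3]) == key  # any 3 shares suffice
    # Any 2 shares are statistically independent of the key.

Of course, whether the escrow/HSM half of that design deserves trust is what the rest of this thread is arguing about.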


> while making the system E2E encrypted, with two additional key-release mechanisms: key escrow via separate audited HSM systems, and threshold disclosure of the image encryption key

such a long sentence to say "backdoor"


>E2E encrypted, with two additional key-release mechanisms

AKA not E2E, and therefore something Cook should face a fraud charge for saying it is.

Apple needs to make it true E2E yesterday, and tell the FBI that they can either approve of it or never use an iPhone again.


I’m the one saying it is, not Apple.


Could you point to the regulation that requires this?


See the “Federal CSAM Law” section here:

https://cyberlaw.stanford.edu/blog/2020/01/earn-it-act-how-b...

> Section 2258A of the law imposes duties on online service providers, such as Facebook and Tumblr and Dropbox and Gmail. The law mandates that providers must report CSAM when they discover it on their services, and then preserve what they’ve reported (because it’s evidence of a crime).


The law doesn't require them to actively go around scanning for it.


They’re scanning for it as it enters their system. It’s only scanned if it’s set to go to iCloud.


Yes, but then there's a way to upload it in plaintext. That's the backdoor. That can, and will, eventually be used to exfiltrate any file, anytime, whether it would be going to the cloud or not.


It does set a terrible precedent, but it's possible this is a step towards E2E encryption on iCloud data; a way to comply with the law while preventing law enforcement from being able to subpoena other data.

Apple is being its usual cryptic self about this, which is once again breeding uncertainty, but I still have hope in the end this will work out.


Again though, what is the guarantee this is only going to be used to scan data uploaded to the cloud? If Apple has the ability to exfiltrate data from a phone at whim, they lose any deniability or leverage they still have with authoritarian regimes that want all the data on a phone. It's not just a terrible precedent, it's a dangerous piece of malware.


They of course have the ability to exfiltrate data; they created the hardware and OS.

However since this is part of the opt-in iCloud photos sharing mechanism, it doesn’t appear they have started to exfiltrate data without consent.


>> They of course have the ability to exfiltrate data; they created the hardware and OS.

I don't know why you're assuming they have a backdoor built in already. The whole point is that they don't, but they're going to add one.


I don’t count this as data exfiltration because it only releases data if you are doing iCloud photos sync, which is opt in. iCloud photo sync has also always been recoverable on the server by government request.

So they have always been one software update away from non-opt-in data exfiltration, and remain so.


“which would potentially allow them to remain in compliance while making the system E2E encrypted…”

Apple’s system is not E2E encrypted if a local program scans files prior to upload.


Remember that it only scans against a known database of already found content, and does not try to find new content.


Is this true? The Messages triggers try to identify nudity within the accounts of minors. Is the notification only going to the parent? Is it stored for possible later use? Are those photos ever reviewed?


Excellent point. The messaging triggers for nudity in child accounts would imply that there is a separate, on-device scanning capability that has nothing to do with hash matching. Taken together with the backdoor for data exfiltration, and considering these come in the same announcement, there's no reason not to consider them part of the same spyware framework.


For now.


Just as "for now" it doesn't try to stop copyright infringement.

Seemingly every aspect of digital technology, from search engines to DNS providers, has been co-opted into the fight against piracy, so I wouldn't be surprised if the media industries started threatening Apple with "contributory infringement" suits if they don't re-purpose this technology for them.


"it would be a shame if AppleTV+ won't get license extension for that popular <insert label/studio> show, isn't it"


There's this Apple TV+, a subscription for Apple's originals. I think in a relatively short amount of time it'll turn into "It would be a shame if we banned <insert label/studio> from our Apple TV platform".


Or, inversely, the FBI/CIA/CCP went to Apple and said "it'd be a shame if it turned out you were a monopoly".

Apple caved to pressure and had to implement this.

Whatever the angle, this isn't about protecting kids whatsoever. It's about power.


I think that’s in essence what the author is arguing; at least the outcomes are the same. The only difference is that maybe none of the 3-letter agencies had to come out and explicitly say it, when Apple is perfectly competent at spotting a bone to toss.

In other words, the author thinks that with Apple’s back to a wall, they only needed to announce this feature for the government to see there are advantages to Apple having tight control as well. Now they’ll be able to make that very same argument in court in a public sense, but there’s always a behind-the-scenes sense with 3-letter agencies as well.

Granted, all of that is speculation, and who knows what is really driving any of this. The author does have a point that if this first step causes bad guys to move on from these services, then that will be future justification to move the scanning further and further upstream, to the point where it’s baked into the APIs or something. At that level, Apple would really need a “monopoly” to accomplish such a feat.

It’s certainly an interesting and creative perspective.


This could be aimed at pre-emptively de-fanging one argument against end-to-end encryption.


That’s the best case.

But even in the best case, the worst abuse cases still exist. That’s the problem. This WILL be abused.


Apple's a $2 trillion company. If not even they have enough legal firepower to stand up to the three-letter agencies, what possible chance does any private citizen have? This should be a wake-up call.

It's time to start dismantling massive chunks of the intelligence community. It no longer works for the citizenry that is supposedly its boss. (If it ever did.) It's become a power black hole unto itself.

Even elected officials, up to POTUS, have found themselves unable to control the unelected and unaccountable fiefdoms that make up the intelligence community.


Most people want the TLAs to be going after child abusers and pedophiles. Good luck using this as your argument for dismantling them.


Similarly, many people want the TLAs to be able to go after $2T companies as well.


Do they? What behavior do they want them to go after?


I imagine the IRS for taxes


Very amusing, although these companies pay their taxes.

The problem isn’t with them, it’s with the loopholes that let them pay less than people would like.

The IRS can’t do anything.


What a messed-up world we live in, where money and power are the only things that matter to the people who get to make a difference. But has it not always been this way? The show must go on.


This is some guy’s theory, and they can’t even spell “anti-trust” (in the headline, no less). It’s not quite enough to lose all trust in society over.


From what I can tell he is Italian, so spelling should not be a reason to judge imo


The biggest question is: who is providing the hashes? Since possession of the images is illegal, it has to be the government. So we have to trust the government enough that they won't abuse it and stick in other hashes. The same government that recommended an encryption method it had figured out how to break.


[flagged]


The vaccine that you've been forced to take?

Please don't turn this into a clown show.

The privacy and openness of our computers and devices are paramount to freedom and a strong democracy.


[flagged]


or just get fired if you're a pilot


Or work at CNN.


That's not quite accurate: both of you would end up having a "semblance of a normal life" without getting vaccinated; you just would've had to wait a bit longer.


Good.


How does a vaccine give power to anyone?


It gave me the power to confidently shop again.


"Do your own research, the 5G implants from the vaccine have a secret server and reverse proxy to let them implant thoughts into you"


I don't know what's more impressive: the fact that someone managed to come up with something so ridiculous, or the fact that some people actually believe this level of technology exists.


The average Joe has no idea how computing and tech work. None.

To such a person, a smartphone is a piece of magic. Expecting them to know what is real tech, and what isn't, is not fair.


There’s no way that’s true. I’d be more likable by now.


lmao


For me the biggest concern is not that Apple is scanning images for $BAD_STUFF, but rather that the scanning now occurs on my device instead of on Apple's cloud servers. The trust has been eroded. I can understand Apple scanning images on their servers (although I don’t think Apple should). However, running the scan on device is taking it too far, even if the assertion is that only images that would eventually be uploaded are scanned. Apple promised security and privacy on their devices, and this breaks the trust. Now I question their future software roadmap, such as:

1. Will the scanning come to macOS?
2. Will the scanning start to include additional $BAD_STUFF, such as political censoring, and/or even other files (video, documents, etc.)?

I really like Apple hardware. The iPads and the M1 Macs are awesome, but this news makes me hesitant to stay in the Apple ecosystem, and I will be looking at alternatives. I already run Linux desktops, and I’ll probably move to Linux on my laptop.


I don't see why we would believe they are not already doing this. Today's news is just that they can now legally take action on it.

Also, the whole SWAT scenario is a bit far-fetched, as they will most likely read through your entire life (don't forget they already have access to it) to make sure they don't look stupid on the news.


Your iPhone is already capable of identifying nude minors in near real time. Is it really that far-fetched that a SWAT team gets triggered into action when an iPhone detects minor abuse?


Pretty far-fetched; for a SWAT raid you need a crisis situation: hostage/bomb/shooting.


It seems Apple only applies this if you sync photos to iCloud. Isn't the solution just to not use iCloud? I've personally never used iCloud sync for photos; I'm happy enough managing the files myself. Demanding that Apple allow you to use iCloud to host illegal pictures seems a bit entitled to me.

I am concerned about the slippery slope if they start scanning files that were never going to go through their servers in the first place, though.


One of the major concerns I've seen arising around here is the question "what's stopping Apple from doing this even without iCloud?" Once you open Pandora's box, there is no going back.


Recently my iPad’s AI made a montage of my pet cats.

Only, my daughter was included in the montage. Why?

She was wearing a shirt with a cat on it.


Apple and Google need to be put under serious investigation and broken up. Their hardware needs to be open enough to install an OS of your choice; this isn't possible on iPhones. It may take decades for this to happen given the lobbying dollars both spend, but by then it will be too late.

We are headed for a China-style surveillance state, and there is no stopping this train.


Does anyone know about a good write up of this discussion in Japanese?


This is optional. You don’t HAVE to use iCloud for your photos. This is no different than YouTube scanning videos you upload for copyrighted music. If you don’t want your photos scanned, don’t upload your images to their servers.


You're 100% correct. I had my images on iCloud because I thought Apple put my privacy first. I was being naive. I've pulled all my photos off and am looking to move the rest of my data too. My iPhone and iPad just got a lot less useful.


At what point do other picture types become surveilled?

Maybe if you take pictures of Trump signs...

Maybe if you take pictures of Red states...

Maybe if you take pictures of cis relationships...

Maybe if you happen to be male and white and take a selfie...

What/when do these get reported to the FBI or other woke agencies in government?

It's a very slippery slope AND WE KNOW there are sociopaths in positions of power who'd gladly do such things without a second thought.


iOS 15 is adversarial and Apple lacks credible neutrality.


Can I just say that I'm fascinated that this is happening under Tim Apple - an openly gay man in his 60s?

I mean, this is someone who would know what being on the wrong side of the law means - not only federally, but probably quite intimately, since Alabama was not exactly known for its acceptance of homosexual people.

Now just imagine if the US still had anti-homosexuality laws (like the majority of the world still does) and your phone were constantly scanning your photos for signs of homosexual behaviour. Forgetfully take a selfie with your boyfriend, it gets flagged and sent to the Ministry of Morality, and the next thing you know you're being dragged into a van. Best-case scenario, you're jailed. Worst case, you're thrown off a building, stoned, or brutally beaten to death.

That's the future Apple is signing us up for. There is zero chance this stops at CSAM, especially with the Democrats convinced that half the country are the absolute worst people on the planet and not being shy about completely ignoring the rule of law and Constitution to extend the reach of the state to levels that would make a totalitarian blush. This will end terribly.


Your last paragraph ruined an otherwise great comment.


[flagged]


[flagged]


> on the command of a Republican President to hang government officials and overturn an election

Literally never happened. The fact that you're parroting this lie shows how deeply media bubbles have disconnected people from reality.

https://www.npr.org/2021/02/10/966396848/read-trumps-jan-6-s...

https://www.npr.org/sections/insurrection-at-the-capitol/202...


What's more galling is that an anti-authoritarian stance should be non-partisan in a country that sells itself on democracy and liberty.


His name is Tim Cook. Good points, though.


> …your phone is constantly scanning your photos to check for signs of homosexual behaviour. Forgetfully take a selfie with your boyfriend, it gets flagged…

Except that’s not what this is doing.

For all intents and purposes, “scanning” is just hashing content before it’s uploaded and comparing that hash to a known database. So, unless your picture had been hashed and flagged before (and if you just took it, how could it have been?), there’s nothing for it to match.

Don’t just conjure FUD from a contrived worst-case scenario.
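
Reduced to its essence, the matching step is set membership against a fixed list. With NeuralHash-style schemes, robustness to resizing or recompression lives inside the hash function itself, so the lookup can be an exact comparison (Apple additionally blinds it cryptographically). The entries and hash values below are made up for illustration:

    # The match step, stripped to its core: membership in a fixed set of
    # flagged hashes. Nothing here classifies image content.
    FLAGGED_HASHES = frozenset({0x7A1FC3D2, 0x19E4B277})  # made-up entries

    def matches_known_database(image_hash):
        return image_hash in FLAGGED_HASHES

    fresh_selfie_hash = 0x5D22A90B  # a brand-new photo, hashed for the first time
    print(matches_known_database(fresh_selfie_hash))  # False: never seen before

A freshly taken photo can only be flagged if its hash collides with a database entry, which is the false-positive scenario the multi-match threshold is designed to contain.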


The hash matching part of this is a red herring. The real issue is that if a hash matches, there's now a mechanism to upload the image in question from your phone, unencrypted, without your consent. What assurance is there that the hidden upload mechanism can't be used to upload other files on demand? If a mechanism is built into the OS that can exfiltrate files on demand, what's to say there even has to be a hash match? Or that iCloud has to be active? What's to stop any government from going to Apple and saying "we know you have this ability, give us all this user's files." Nothing.

The exfiltration mechanism is the problem. If this were saying a hash match could be used to obtain a search warrant, that would be bad, but it would be far less egregious a breach of security and privacy than Apple adding a backdoor to just grab anything it likes.


It’s not a “hidden upload mechanism”. It’s a hash-and-flag system on top of the extant “upload to iCloud” function.

Apple will already have the data. You’re already consenting to that when you enable iCloud Photo Library (the only place this is being implemented).


But all uploads to iCloud are [edit: will be] encrypted. This system includes another, separate mechanism to upload a file unencrypted, not to iCloud, but to some other system where humans can review it. So it's not using the same functionality as a normal upload. The file doesn't have to have been uploaded to iCloud yet, or ever, to be copied off the phone using some other method. The whole point is that they don't want unencrypted copies being uploaded to iCloud, so they can't review what was uploaded via the iCloud route. Therefore they want some other method of checking your files locally - which means they need a backdoor to copy them without your permission.


[flagged]


Then Apple will lose market share and correct its ways.

Conversely, what I've seen does put this at the top of my list as a parent. It will notify me if my child is sending nudes. It will notify me if someone is sending porn to my child. It will notify the police if known child porn is on the device.

When folks talk about competition, part of this MUST include the USER'S preferences (not, as currently done, the preferences of what I see as largely predatory billers and businesses that I don't care about as a user).

I don't want child porn on my systems. I'd be very happy if Apple helps keep it off them.

Are these hash databases available more broadly for scanning (i.e., could I scan all work machines/storage using a tool of some sort)?


I don't think they will ever be able to walk this back. The governments that twisted Apple's arm to get this look-see into everyone's devices will roast them in the court of public opinion (or threaten to, which will be enough). IMO, this will just open up more and more - it will never go back to being what iDevice owners have now.


> The governments that twisted Apple's arm to get this look-see into everyone's devices will roast them in the court of public opinion

Nothing about this technology gives governments a ‘look see’ into everyone’s devices.


The government controls the hash list.


No. A registered charity called The National Center for Missing and Exploited Children controls the hash list.

Yes, they are partly government funded, but I highly doubt they'd let their mission be compromised by allowing the government to inject non-CP hashes. Doing so would compromise all the work they've performed over the last four decades.

These people are (rightfully) very passionate about their work and can't be easily paid off.

There are many, many valid concerns about this Apple initiative. But "the government" injecting non-CP images into the CP databases is not one of them.


> by allowing the government to inject non-CP hashes

I don't think "allowing" is the concern here, because I highly doubt they get to generate the hashes themselves.


I think you misunderstand the Center’s role in this. They review and categorize the images. They maintain the database and provide access to different organizations for the purpose of catching CP. Why wouldn’t they also generate the hashes?


Since they are the only ones legally allowed to possess the images, one can surmise that they must be the ones to create the hashes.


Why not? How can we know this isn’t the case? What technical or practical prevention would there be?


The hash data is secret because, if it were widely known, offenders would know which images were known to law enforcement, and could transform or delete only those.


Would this service notify parents if such a thing happened? I haven’t seen it in their official announcement.


Isn't that most of the internet? I would be surprised if, for example, you didn't get reported by Hacker News if you started making criminal threats or sharing CSAM on here. From the legal tab:

>3. SHARING AND DISCLOSURE OF PERSONAL INFORMATION

>In certain circumstances we may share your Personal Information with third parties without further notice to you, unless required by the law, as set forth below:

...

>Legal Requirements: If required to do so by law or in the good faith belief that such action is necessary to (i) comply with a legal obligation, including to meet national security or law enforcement requirements, (ii) protect and defend our rights or property, (iii) prevent fraud, (iv) act in urgent circumstances to protect the personal safety of users of the Services, or the public, or (v) protect against legal liability.


>Isn't that most of the internet?

No, bad analogy.


Insightful reply, thanks.


Well, it was succinct, and other people got it.

Some website reporting you to the police for doing something illegal is not the same as your hardware/software being stuffed with snake-oil spyware that slows down the UI, all for some made-up cause.


Sophisticated abusers will not be caught this way. This is for the likely significant percentage of abusers who don't know, forget, or make mistakes. Also, this system can slow the spread of CSAM images, which themselves can facilitate abuse.



