In case someone is missing context, this is Google (apparently together with Meta, Microsoft, and Snap) coming out in favour of Chat Control legislation. This is something EU citizens have so far fought tooth and nail to repel. The fact that these US companies known for spying on people and invading privacy in the name of profit are lobbying for the legislation should be a warning to us all to avoid their services.
They're not coming out in favor of Chat Control -- they're coming out in favor of having some option where they can operate without violating the law.
The problem right now is that they can be held liable for hosting CSAM content on their platforms, and, since April 3, they can also be fined if they try to detect that content. It's an impossible situation.
Now, I'm not claiming that these companies always have noble intentions. But there's nothing nefarious here -- they just want regulatory certainty: do X, Y, and Z and you won't be fined or sued.
Interesting way to frame the fact that members of the European Parliament voted 311 to 218 yesterday to reject the companies' right to spy on you.
I'm the first person to admit the EU has a democratic deficit, but MEPs are directly elected by EU citizens and they chose this in a democratic process. The companies are certainly making a choice with this blog post.
I dunno, man. If tech companies responded to a failure to extend interim guidance by terminating their CSAM detection programs, and claimed when challenged that the EU made them do it, I'm pretty confident there would be much more outrage about "malicious compliance". If the EU wants companies to stop detecting CSAM until the final guidance arrives, they should say so directly.
What are the false negative rate and the total numbers? Without those we are missing too much. If the false negative rate (saying it's fine when it isn't) is high, then the whole thing is useless. If the total cases are a few hundred (either CSAM isn't a problem, or those doing it use other platforms because they know they'd be caught on these), I don't care much that some are false positives - odds are it didn't get me.
The report you're referring to by the European Commission [1] shows that the mass surveillance of Chat Control 1.0 is probably not very proportional. They even note themselves that "The available data are insufficient to provide a definitive answer to this question".
However, the "13-20%" that you're quoting is a dishonest propaganda number itself. It's the false positive rate a single small company (Yubo) reported. The reported false positive rates of other companies are between 0.32% and 1.5%, which is still a high error rate in absolute numbers.
Just to be clear: the report itself is full of uncertainty, convenient half truths and false causality. They for example completely rely on Big Tech platforms themselves to count false positives when a moderation decision was reversed. Microsoft apparently even claims that no user ever appealed against a decision ("No appeals reported"). There is no independent investigation into the effectiveness of the regulation at all, while it is in direct conflict with fundamental rights and required to be proportional to its goals.
The section about "children identified" is also a complete mess where most countries can't even report the most basic data, and it isn't clear if mass surveillance contributed anything to new cases at all. But somehow they still conclude "voluntary reporting in line with this Regulation appears to make a significant contribution to the protection of a large number of children", which seems extremely baseless.
I'm sure a lot of HN commenters would agree that a CSAM detection system with a 13-20% false positive rate should be terminated, but we're not EU regulators. And you've got a sibling comment saying this would be malicious compliance, so even on HN it's not unanimous. Is there an example of a specific EU official, MEP, etc. explicitly stating that tech companies should not perform hash-based CSAM detection or should not perform CSAM detection at all?
Yes? The Pirate Party has MEPs, it’s not exactly difficult to find their quotes. 3 seconds of searching was enough to find the following quote from MEP Markéta Gregorová:
„We can now finally say with certainty that Chat Control 1.0 will end on April 3 without replacement. The European Parliament has sent a clear signal: it is time to put an end to this ineffective and disproportionate derogation from privacy rules. Under the pretext of protecting children, millions of private messages from innocent citizens were being scanned for years without delivering adequate results. This system simply did not work and had no place in a democratic society.“
It doesn’t have to be unanimous on HN. It wasn’t even unanimous in the EUP.
But what it was is legal and democratic. And the discussion in the parliament explicitly included the fact that the companies will either have to stop, or find a different legal grounding.
The companies in this blog post are effectively admitting they are making a choice to go against the law.
> I'm pretty confident there would be much more outrage about "malicious compliance".
As there should be.
The big tech companies have done that every time the EU passes some consumer protections, and have been spanked in court several times for the disingenuousness.
So just a recap of what happened between the European Commission and the European Parliament and why the regulation has expired:
- In 2021 the European Parliament voted in favor of a temporary regulation that allowed companies to voluntarily scan private communications. Let's call it Chat Control 1.0. They chose to enact this because US companies were already scanning private messages in violation of the ePrivacy Directive, which had come into force the previous year. Instead of enforcing this directive, they chose to (temporarily) legalize the scanning of private messages while preparing more permanent legislation.
- In 2024 Chat Control 1.0 was extended for another 2 years. An amendment was adopted that explicitly noted that after this time "[the regulation] shall lapse permanently".
- From 2022 to 2025 the European Commission, together with member states, proposed mandatory scanning, later updated with a proposal for client-side scanning (defeating end-to-end encryption), AI classification of image and text content, age verification, and a lot of other invasive measures. This is what is known as Chat Control 2.0. The European Parliament has again and again voted against this proposal.
- In 2025/2026 the European Commission finally (temporarily) backed down from Chat Control 2.0 and instead proposed to extend Chat Control 1.0 for another 2 years, but completely failed to negotiate with Parliament to adopt a text that explicitly puts fundamental rights up front.
- In response to this, the Civil Liberties Committee of the European Parliament tabled amendments [1] that explicitly limit the regulation to its subject matter and prevent it from being used to weaken end-to-end encryption. Many of these amendments were adopted.
- Consequently, many conservative members of the European Parliament voted down the entire extension of the regulation. They apparently felt that it was better to let the regulation expire so that they gain more negotiating power to adopt a version of the regulation that has fewer safeguards or contains measures like those in Chat Control 2.0.
I think your recap is missing a pretty large step at the very beginning, which is that AFAIR, the EU Parliament put together this temporary regulation to a posteriori allow the scanning that was already being done, outside of the law, by those US companies on EU citizens' messages; and the temporary regulation was put in place until a proper framework could be agreed upon.
Yes indeed, thanks for the addition. It has been a complex story, and I already forgot that chapter. I edited it into my post (also modified a wrong date of the first derogation), although I'm probably missing more nuances.
The important thing you need to know about EU Chat Control is that the politicians will be exempted from the mass surveillance they are about to build.
Seems like if it were possible to implement end-to-end encryption where Google had no way to decrypt a communication, Google could avoid liability for facilitating transmission of CSAM?
Shouldn't this big liability be pushing the big tech firms to do so?
Alright, at least now we can confidently put company symbols next to this incessant push towards Chat Control in EU parliament. Know your enemy, I guess.
BS. It's for control and censorship and data harvesting.
Meta alone spent $2 billion lobbying for age-restriction laws, which they tried to hide by pumping it through third parties. We don't know how much the other tech giants spent.
When you see the behemoths of US tech coming together, you can be sure it isn't for anything good! These assholes are supporting and enabling the orange clown (a suspected pedophile) and they want us to believe that they suddenly care about the children.
It is like letting a policeman into your house to make sure you are not committing crimes. The methods (installing an AI module behind your defenses against criminal hackers that is programmed to betray you) are too invasive.
That's not how that works, last I checked. AIUI it's much more fuzzy. Has to be, being scum doesn't automatically make you an idiot, and a single bit change would make plain old hashes entirely useless.
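To make the "single bit change" point concrete, here's a minimal Python sketch (the byte string is just a stand-in for real image data):

```python
import hashlib

data = b"pretend this is an image file"
flipped = bytes([data[0] ^ 0x01]) + data[1:]  # flip a single bit

h1 = hashlib.sha256(data).hexdigest()
h2 = hashlib.sha256(flipped).hexdigest()

# The two digests are completely unrelated, so an exact-hash blocklist
# is defeated by any trivial modification of the file.
print(h1 == h2)  # False
```

Hence the fuzzier perceptual-hash approaches, which trade that brittleness for a tolerance of near-duplicates.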
Insert your favourite dystopia to see where that ends up and how companies benefit from it.
"We tried to build an even deeper panopticon to enslave you. Drats, you and your Democratic process. We thought we'd pulled the wool over your eyes claiming it was for the kids. We'll get you next time you peons. It's just a matter of time."
Too accurate... I hate that they will actually keep trying to force it through until they get the outcome they want. You didn't vote correctly this time, time to hold another referendum. Do try to vote more responsibly this time around.
Seconded. Some common features: emotion-laden language; no new insights (let alone facts); low effort (poor punctuation etc). It's clearly a creeping problem here but I'm hopeful that the activist moderation and culling of politics-adjacent posts can keep a lid on it.
Maybe if all of those companies hadn't paid large sums of money to one of the most famous child sex traffickers, their cries of "think of the children" wouldn't be so creepy
>Maybe if all of those companies hadn't paid large sums of money to one of the most famous child sex traffickers
Source? Specifically that they paid "large sums" after it came out that they were child sex traffickers? Otherwise you can't (and shouldn't) expect companies to be doing private investigations prior to donating.
Larry Page and Mark Zuckerberg, colleagues of Jeffrey Epstein, are committed to protecting your children. From whom? Are they going to scan all emails and use AI to rat on their buddies?
I know people say Apple’s commitment to privacy is all talk, and there are valid criticisms of Apple and their business practices, but they seem better than the other big tech companies like Meta, MS, and Google by a very wide margin when it comes to privacy.
Implementing end-to-end encryption on relevant communication services could mitigate many risks that come with hosting user content.
It would protect users from Big Tech spying and still allow affected users to report if something sketchy is going on. Best of both worlds.
In any case, it would be a good start.
EU Commission reported that the false positive rate was 13-20%.
German police reported that 50% of all reports were wrong.
The system is rubbish and the EU MEPs were quite open about wanting it to go away.
However, the "13-20%" that you're quoting is a dishonest propaganda number itself. It's the false positive rate a single small company (Yubo) reported. The reported false positive rates of other companies are between 0.32% and 1.5%, which is still a high error rate in absolute numbers.
Just to be clear: the report itself is full of uncertainty, convenient half truths and false causality. They for example completely rely on Big Tech platforms themselves to count false positives when a moderation decision was reversed. Microsoft apparently even claims that no user ever appealed against a decision ("No appeals reported"). There is no independent investigation into the effectiveness of the regulation at all, while it is in direct conflict with fundamental rights and required to be proportional to its goals.
The section about "children identified" is also a complete mess where most countries can't even report the most basic data, and it isn't clear if mass surveillance contributed anything to new cases at all. But somehow they still conclude "voluntary reporting in line with this Regulation appears to make a significant contribution to the protection of a large number of children", which seems extremely baseless.
[1] https://www.europarl.europa.eu/RegData/docs_autres_instituti...
A) actually being paid in the end, and
B) high enough to be of any concern to the company.
[1] https://www.europarl.europa.eu/doceo/document/LIBE-AM-784377...
"Reaffirming our commitment to mass surveillance"
That's more like it.
https://fightchatcontrol.eu/
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
They didn't even write this themselves.
Also hash matching is so easily bypassed you can be sure they really want to add some "AI" detector as well
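That's the core trade-off: exact hashes break on any edit, while the fuzzy perceptual hashes actually used for this (PhotoDNA, PDQ, etc.) survive edits precisely because many different images can map to the same hash. A toy "average hash" in Python shows the general idea (nothing like the real algorithms, just a sketch):

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual 'average hash': one bit per pixel, set when that
    pixel is brighter than the image mean. Real systems (PhotoDNA, PDQ)
    are far more sophisticated; this only illustrates the principle."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

# An 8x8 grayscale "image" and a copy with one pixel slightly brightened.
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [row[:] for row in img]
tweaked[0][0] += 1  # a one-unit intensity change

# The perceptual hash tolerates the tiny edit (the two hashes collide)...
print(average_hash(img) == average_hash(tweaked))  # True

# ...while a cryptographic hash of the raw bytes changes completely.
raw = bytes(p for row in img for p in row)
raw2 = bytes(p for row in tweaked for p in row)
print(hashlib.sha256(raw).hexdigest() == hashlib.sha256(raw2).hexdigest())  # False
```

The same tolerance that lets a perceptual hash catch a resized or recompressed copy is what produces collisions on entirely unrelated images.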
That's a weak argument, because they can already do that today with Google's Play Protect and Apple's app notarization.
Except for that pesky detail of hash collisions
"We tried to build an even deeper panopticon to enslave you. Drats, you and your Democratic process. We thought we'd pulled the wool over your eyes claiming it was for the kids. We'll get you next time you peons. It's just a matter of time."
Fuck you.
FTFY
I'd say around at least a quarter of the comments in this thread are generic tribal/populist "outrage bait".