Hacker News

And yet they wanted to push a proposal where the government would have free access to all digital communication, no judge required. So if it happens through a telephone conversation, you need a judge, while with a digital message, you wouldn't need one, since the government would have already collected that information through Chat Control.




The ombudsman will say some strong words and everything will continue as is.

I don't know where you get your information, but that was not in the chat control proposal I read.

Patrick Breyer has some good thoughts on this.[1]

The relevant points I believe to be:

> All citizens are placed under suspicion, without cause, of possibly having committed a crime. Text and photo filters monitor all messages, without exception. No judge is required to order to such monitoring – contrary to the analog world which guarantees the privacy of correspondence and the confidentiality of written communications.

And:

> The confidentiality of private electronic correspondence is being sacrificed. Users of messenger, chat and e-mail services risk having their private messages read and analyzed. Sensitive photos and text content could be forwarded to unknown entities worldwide and can fall into the wrong hands.

[1] https://www.patrick-breyer.de/en/posts/chat-control/


> All citizens are placed under suspicion

> No judge is required to order to such monitoring

That sounds quite extreme; I just can't square it with what I can actually read in the proposal.

> the power to request the competent judicial authority of the Member State that designated it or another independent administrative authority of that Member State

It explicitly states otherwise. A judge (or other independent authority) has to be involved. It just sounds like baseless fear mongering (or worse, libertarianism) to me.


Didn't the proposal involve automated scanning of all instant messages? How is that not equivalent to having an automated system opening every letter and listening to every phone call looking for crimes?

Not from what I can tell. From what I can read, it only establishes a new authority, under the supervision, and at the discretion, of the Member State, which can, with judicial approval, mandate "the least intrusive in terms of the impact on the users’ rights to private and family life" detection activities on platforms where "there is evidence [...] it is likely, [...] that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material".

That all sounds extremely boring and political, but the essence is that it mandates a local authority to scan messages on platforms that are likely to contain child pornography. That's not a blanket scan of all messages everywhere.


> platforms that are likely to contain child pornography

So every platform, everywhere? Facebook and Twitter/X still have problems keeping up with this, Matrix constantly has to block rooms from the public directory, Mastodon mods have plenty of horror stories. Any platform with UGC will face this issue, but it’s not a good reason to compromise E2EE or mandate intrusive scanning of private messages.

I would not be so opposed to mandated scans of public posts on large platforms, as image floods are still a somewhat common form of harassment (though not as common as it once was).


The proposal is about deploying automated scanning of every message and every image on all messaging providers and email clients. That is indisputable.

It therefore breaks E2EE, as it intercepts the messages on your device and sends them off to whatever third party they are planning to use before those messages are encrypted and sent to the recipient.

> It explicitly states otherwise. A judge (or other independent authority) has to be involved. It just sounds like baseless fear mongering (or worse, libertarianism) to me.

How can a judge be involved when we are talking about scanning hundreds of millions if not billions of messages each day? That does not make any sense.

I suggest you re-read the Chat control proposal because I believe you are mistaken if you think that a judge is involved in this process.


> That is indisputable.

I dispute that. The proposal explicitly states it has to be true that "it is likely, despite any mitigation measures that the provider may have taken or will take, that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material;"

> How can a judge be involved

Because the proposal does not itself require any scanning. It requires Member states to construct an authority that can then mandate the scanning, in collaboration with a judge.

I suggest YOU read the proposal, at least once.


You must be trolling.

> it is likely, despite any mitigation measures that the provider may have taken or will take, that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material

That is an extremely vague definition that basically encompasses every service available today, including messaging providers, email providers and so on. Anything can be used to send pictures these days. So anything can be targeted, ergo it is a complete breach of privacy.

> Because the proposal does not itself require any scanning. It requires Member states to construct an authority that can then mandate the scanning, in collaboration with a judge.

Your assertion makes no sense. The only way to know if a message contains something inappropriate is to scan it before it is encrypted. Therefore all messages have to be scanned to know if something inappropriate is in it.

A judge, if necessary, would only be participating in this whole charade at the end of the process, not when the scanning happens.

This is taken verbatim from the proposal that you can find here: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A20...

> [...] By introducing an obligation for providers to detect, report, block and remove child sexual abuse material from their services, [...]

It is an obligation to scan, not a choice based on someone's opinion, like a judge's; ergo no one is involved at all in the scanning process. There is no due process, and everyone is under surveillance.

> [...] The EU Centre should work closely with Europol. It will receive the reports from providers, check them to avoid reporting obvious false positives and forward them to Europol as well as to national law enforcement authorities.

Again here no judge involved. The scanning is automated and happens automatically for everyone. Reports will be forwarded automatically.

> [...] only take steps to identify any user in case potential online child sexual abuse is detected

To identify a user who may or may not have shared something inappropriate, they have to know who the sender is, who the recipient was, what the message contained and when it happened. Therefore it is a complete bypass of E2EE.

This is the exact same thing that we are seeing now with the age requirements for social media. If you want to ban kids who are 16 years old and under, then you need to scan everyone's ID in order to know how old everyone is so that you can stop them from using the service.

With scanning, it is exactly the same. If you want to prevent the dissemination of CSAM on a platform, then you have to know what is in each and every message so that you can detect it and report it as described in my quotes above.

Therefore everyone's messages will be scanned, either by the services themselves or by a third-party business this task is outsourced to, which will be in charge of scanning, cataloging and reporting its findings to the authorities. Either way, the scanning will happen.

I am not sure how you can argue that this is not the case. Hundreds of security researchers have spent the better part of the last 3 years warning against such a proposal. Are you so sure of yourself that you think they are all wrong?


> This is taken verbatim from the proposal that you can find here

You're taking quotes from the preamble, which is not legislation. If you scroll down a little you'll find the actual text of the proposal, which reads:

> The Coordinating Authority of establishment shall have the power to request the competent judicial authority of the Member State that designated it or another independent administrative authority of that Member State to issue a detection order

You see: a judge is required for a detection order to be issued. That's how the judge will be involved BEFORE detection. The authority cannot demand detection without the judge approving it.

I really dislike your way of arguing. I thought it was important to correct your misconceptions, but I do not believe you to be arguing in good faith.


Let me address your points here and to make it more explicit, let me use Meta/Facebook Messenger as an example.

> You see, a judge, required for a detection order to be issued. That's how the judge will be involved BEFORE detection. The authority cannot demand detection without the judge approving it.

Your interpretation of the judge's role is incorrect. The issue is not if a judge is involved, but what that judge is authorizing.

You are describing a targeted warrant. This proposal creates a general mandate.

Here is the reality of the detection orders outlined by this proposal:

1. A judicial authority, based on a risk assessment, does not issue a warrant for a specific user, John Doe, who may be committing a crime.

2. Instead, it issues a detection order to Meta mandating that the service Messenger must be scanned for illegal content.

3. This order legally forces Meta to scan the data from all users on Messenger to find CSAM. It is a blanket mandate, not a targeted one.

This forces Facebook to implement a system to scan every single piece of data that goes through it, even if it means scanning messages before they are encrypted. Meta now has a mandate to scan everyone, all the time, forever.
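To see why a blanket detection order implies scanning every message, here is a minimal sketch of client-side hash matching, the kind of mechanism critics say would be needed. Everything here is invented for illustration (the function names, the hash list, the reporting hook); real deployments would use perceptual hashes (PhotoDNA-style) or classifiers rather than exact SHA-256 matches, but the structural point is the same: the scan has to see the plaintext before encryption.

```python
import hashlib

# Hypothetical sketch, NOT the proposal's actual mechanism: a client-side
# scan that runs BEFORE encryption, which is why it is said to break E2EE.
# The hash set below is a stand-in for a database of known illegal material.
KNOWN_HASHES = {hashlib.sha256(b"known illegal image bytes").hexdigest()}

def send_message(plaintext: bytes, encrypt, transmit, report):
    """Hash the plaintext, report any match, then encrypt and send."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in KNOWN_HASHES:
        report(digest)               # match forwarded before any encryption
    transmit(encrypt(plaintext))     # E2EE applies only after the scan

# Every message goes through the scan, matched or not -- it is universal.
reports, sent = [], []
send_message(b"hello", lambda p: b"enc:" + p, sent.append, reports.append)
send_message(b"known illegal image bytes", lambda p: b"enc:" + p,
             sent.append, reports.append)
```

Note that whether any given user is innocent is irrelevant to the code path: the hash check runs on every message from every user, which is the "blanket mandate" objection in a nutshell.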

Your flawed understanding is based on the model of a traditional wiretap.

Traditional Warrant (Your View): Cops suspect Tony Soprano. They get a judge's approval for a single, time-limited wiretap on Tony's specific phone line in his house based on probable cause.

Detection Order: Cops suspect Tony “might” use his house for criminal activity. They get a judge to designate the entire house a "high-risk location." The judge then issues an order compelling the homebuilder to install 24/7 microphones in every room to record and scan all conversations from everyone (Tony, his family, his guests, his kids and so on) indefinitely.

That is the difference that I think you are not grasping here.

With E2EE, Meta cannot know whether CSAM is being exchanged in a message unless it can see the plaintext.

To comply with this proposal, Meta will be forced to build a system that bypasses their own encryption. There is no other way.

This view is shared by security experts, privacy organizations, and legal experts.

You can read this opinion letter from a former ECJ judge who completely disagrees with your view here:

https://www.patrick-breyer.de/wp-content/uploads/2023/11/Vaj...

I am sorry if you think that I am arguing in bad faith. I am not.

While there is nothing I can do to make you like my arguing style, just know that I am simply trying to help you understand your misconceptions about this law.



