
EU plan to force messaging apps to scan for CSAM risks millions of false positives, experts warn

A controversial effort by EU lawmakers to legally require messaging platforms to scan citizens’ private communications for child sexual abuse material (CSAM) could lead to millions of false positives per day, hundreds of security and privacy experts warned in an open letter Thursday.

Concern over the EU proposal has been building since the Commission proposed the CSAM-scanning plan two years ago – with independent experts, lawmakers across the European Parliament and even the bloc’s own Data Protection Supervisor among those sounding the alarm.

The EU proposal would not only require messaging platforms that receive a CSAM detection order to scan for known CSAM; they would also have to use unspecified detection technologies to try to catch unknown CSAM and identify grooming activity – leading to accusations that lawmakers are indulging in magical-thinking levels of technological solutionism.

Critics argue the proposal is not technically feasible and will not achieve its stated aim of protecting children from abuse. Instead, they say, it will wreak havoc on internet security and web users’ privacy by forcing platforms to deploy blanket surveillance of all their users via risky, unproven technologies such as client-side scanning.

Experts say there is no technology that will be able to achieve what the law demands without causing far more harm than good. Yet the EU is moving ahead regardless.

The latest open letter addresses amendments to the draft CSAM-scanning regulation recently proposed by the European Council, which the signatories argue fail to address fundamental flaws in the plan.

Signatories to the letter – numbering 270 at the time of writing – include hundreds of academics, among them well-known security experts such as Harvard Kennedy School’s professor Bruce Schneier and Johns Hopkins University’s Dr. Matthew D. Green, along with a handful of researchers working for technology companies such as IBM, Intel and Microsoft.

An earlier open letter (last July), signed by 465 academics, warned that the detection technologies the legislation would force platforms to adopt are “seriously flawed and vulnerable to attacks”, and would significantly weaken the vital protections provided by end-to-end encrypted (E2EE) communications.

Little traction for counter-proposals

Last fall, MEPs in the European Parliament united to push back with a substantially revised approach – one that would limit scanning to individuals and groups already suspected of child sexual abuse; limit it to known and unknown CSAM, removing the requirement to scan for grooming; and remove any risks to E2EE by limiting scanning to platforms that are not end-to-end encrypted. But the European Council, the other co-legislative body involved in EU lawmaking, has yet to take a position on the matter, and where it lands will influence the final shape of the legislation.

The latest amendment on the table was put forward in March by the Belgian Council presidency, which is leading discussions on behalf of the governments of EU member states. But in the open letter the experts warn the proposal still fails to deal with fundamental flaws baked into the Commission’s approach, arguing the amendments would still create “unprecedented capabilities for surveillance and control of Internet users” and would “undermine… a secure digital future for our society and can have enormous consequences for democratic processes in Europe and beyond.”

Changes up for discussion in the revised Council proposal include a suggestion that detection orders could be made more targeted by applying risk categorization and risk mitigation measures, and that cybersecurity and encryption could be protected by ensuring platforms are not obliged to create access to decrypted data and by having detection technologies vetted. But the 270 experts suggest this still amounts to a security and privacy disaster.

“From a technical standpoint, to be effective, this new proposal will also completely undermine communications and systems security,” they warn. Meanwhile, relying on “flawed detection technology” to determine cases of interest, so that more targeted detection orders can be sent, won’t reduce the risk of the law ushering in a dystopian era of “massive surveillance” of web users’ messages, in their analysis.

The paper also tackles a Council proposal to limit the risk of false positives by defining a “person of interest” as a user who has already shared CSAM or attempted to groom a child – a determination that is envisaged being made via an automated assessment, such as waiting for 1 hit for known CSAM, or 2 hits for unknown CSAM or grooming, before the user is officially detected as a suspect and reported to the EU Centre, which would handle CSAM reports.
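As a rough illustration (not something taken from the Council text itself), the flagging rule described above could be sketched as follows. The thresholds mirror the 1-hit/2-hit scheme reported in the letter; the type and function names, and the choice to count unknown-CSAM and grooming hits together, are hypothetical assumptions for the example.

```python
# Hypothetical sketch of the automated "person of interest" assessment
# described in the letter: flag after 1 hit for known CSAM, or 2 hits
# for unknown CSAM/grooming. Names and structure are illustrative only.

from dataclasses import dataclass

@dataclass
class UserHits:
    known_csam: int = 0    # matches against a hash list of known CSAM
    unknown_csam: int = 0  # classifier hits for previously unseen CSAM
    grooming: int = 0      # classifier hits for grooming behavior

def is_person_of_interest(hits: UserHits) -> bool:
    """Return True if the user would be reported to the EU Centre."""
    if hits.known_csam >= 1:
        return True
    # Whether unknown-CSAM and grooming hits are counted jointly or
    # separately is unclear from the proposal; joint counting is
    # assumed here for simplicity.
    return (hits.unknown_csam + hits.grooming) >= 2

print(is_person_of_interest(UserHits(known_csam=1)))                # True
print(is_person_of_interest(UserHits(unknown_csam=1)))              # False
print(is_person_of_interest(UserHits(unknown_csam=1, grooming=1)))  # True
```

The experts’ objection, developed below, is that however such thresholds are tuned, the error rates of the underlying detectors dominate the outcome at platform scale.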

Billions of users, millions of false positives

Experts warn that this approach is still likely to generate a large number of false alarms.

“The number of false positives due to detection errors is unlikely to be significantly reduced unless the number of repetitions is so large that detection ceases to be effective. Given the large volume (in the order of billions) of messages sent on these platforms, one can expect a very large amount of false alarms (in the order of millions),” they write, pointing out that the platforms likely to be slapped with a detection order could have millions or even billions of users – such as Meta-owned WhatsApp.

“Given that there is no public information on the performance of the detectors that could be used in practice, let us imagine we would have a detector for CSAM and grooming, as stated in the proposal, with just a 0.1% false positive rate (i.e., one in a thousand times, it incorrectly classifies non-CSAM as CSAM), which is much lower than any currently known detector.

“Given that WhatsApp users send 140 billion messages per day, even if only 1 message in a hundred were tested by such detectors, there would be 1.4 million false positive messages every day. To bring false positives down to the hundreds, statistically one would have to identify at least 5 repetitions using different, statistically independent images or detectors. And this is only for WhatsApp – if we consider other messaging platforms, including email, the number of required repetitions would grow significantly, to the point of not effectively reducing the CSAM sharing capabilities.”
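To make the letter’s arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python using the figures quoted above – 140 billion WhatsApp messages a day, 1 message in 100 tested, and a 0.1% false positive rate per detector – under the idealized assumption that repeated checks are statistically independent:

```python
# Back-of-the-envelope reproduction of the letter's false positive math.
# All figures come from the quoted passage; perfect statistical
# independence between repeated detections is an idealizing assumption.

MESSAGES_PER_DAY = 140e9     # WhatsApp messages sent per day (quoted figure)
SCAN_FRACTION = 0.01         # only 1 message in 100 is tested
FALSE_POSITIVE_RATE = 0.001  # 0.1% per detector, as stated in the proposal

scanned = MESSAGES_PER_DAY * SCAN_FRACTION  # 1.4 billion messages tested

for k in range(1, 6):
    # With k independent detectors/images, an innocent message must be
    # misclassified k times in a row to be falsely flagged: rate ** k.
    false_positives = scanned * FALSE_POSITIVE_RATE ** k
    print(f"{k} independent hit(s): ~{false_positives:,.0f} false positives/day")

# With a single hit this yields ~1,400,000 false positives per day --
# the letter's 1.4 million figure. Each additional repetition divides
# the count by 1,000 only if detector errors are truly independent.
```

This sketch only reproduces the headline 1.4 million figure; the letter’s conclusion that at least 5 statistically independent repetitions would be needed rests on the signatories’ own, more conservative modeling, and real-world detectors with correlated errors would fare worse still.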

In the signatories’ view, another Council proposal to limit detection orders to messaging apps deemed “high-risk” is a useless revision, as they argue it would still “indiscriminately affect a massive number of people”. Here they point out that only standard features, such as image sharing and text chat, are required for the exchange of CSAM – features that are widely supported by many service providers – meaning a high-risk categorization will “undoubtedly impact many services”.

They also point out that E2EE adoption is on the rise, which they suggest will make services that roll it out more likely to be categorized as high risk. “This number may further increase with the interoperability requirements introduced by the Digital Markets Act, which will result in messages flowing between low-risk and high-risk services. As a result, almost all services could be classified as high risk,” they argue. (NB: Messaging interoperability is a key plank of the EU’s DMA.)

A back door to the back door

As far as the security of encryption is concerned, the letter reiterates the message that security and privacy experts have been repeating to lawmakers for years: “Detection in end-to-end encrypted services by definition undermines encryption protection.”

“The new proposal has as one of its goals to ‘protect cyber security and encrypted data, while keeping services using end-to-end encryption within the scope of detection orders’. As we have explained before, this is an oxymoron,” they emphasize. “The protection given by end-to-end encryption implies that no one other than the intended recipient of a communication should be able to learn any information relating to the content of such communication. Enabling detection capabilities, whether for encrypted data or for data before it is encrypted, violates the very definition of confidentiality provided by end-to-end encryption.”

In recent weeks, police chiefs from across Europe have penned their own joint statement – raising concerns about the expansion of E2EE and calling on platforms to design their security systems in such a way that they can still identify illegal activity and send reports on message content to law enforcement.

The intervention is widely seen as an attempt to pressure lawmakers to pass laws such as the CSAM-scanning regulation.

The police chiefs deny they are calling for encryption to be backdoored, but they have not explained exactly which technical solutions they want platforms to adopt to enable the “lawful access” they are seeking. Squaring that circle puts a very oddly shaped ball back in lawmakers’ court.

If the EU continues down its current path – assuming the Council fails to change course, as MEPs have urged it to – the consequences will be “disastrous”, the letter’s signatories warn. “It sets a precedent for filtering the Internet, and prevents people from using some of the few tools available to protect their right to a private life in the digital space; it will have a chilling effect, especially on teenagers who rely heavily on online services for their communications. It will change how digital services are used around the world and is likely to negatively affect democracies across the globe.”

An EU source close to the Council was unable to provide insight into current discussions between member states, but confirmed there is a working party meeting on May 8 at which the proposal for a regulation to combat child sexual abuse will be discussed.
