
Meta pauses plans to train AI using European users’ data, bowing to regulatory pressure | TechCrunch

Meta has confirmed that it will halt plans to train its AI systems using data from its users in the EU and U.K.

The move follows pushback from the Irish Data Protection Commission (DPC), Meta’s lead regulator in the EU, which is acting on behalf of several data protection authorities across the bloc. The U.K.’s Information Commissioner’s Office (ICO) also requested that Meta pause its plans until it could address the concerns the regulator had raised.

“The DPC welcomes Meta’s decision to halt plans to train its large language models using public content shared by adults on Facebook and Instagram in the EU/EEA,” the DPC said in a statement on Friday. “This decision was taken following intensive negotiations between the DPC and Meta. The DPC, together with its fellow EU data protection authorities, will continue dialogue with Meta on this issue.”

While Meta is already tapping user-generated content to train its AI in markets such as the U.S., Europe’s stringent GDPR rules have created hurdles for Meta, and for other companies, looking to improve their AI systems, including large language models, with user-generated training material.

However, Meta began notifying users last month of an upcoming change to its privacy policy, one the company said would give it the right to train its AI on public content on Facebook and Instagram, including comments, interactions with companies, status updates, photos and their associated captions. The company argued that it needed to do so to reflect “the diverse languages, geography and cultural contexts of the peoples of Europe.”

These changes were due to come into effect on June 26, 12 days from now. But the plans prompted the not-for-profit privacy activist organization noyb (“none of your business”) to file 11 complaints with constituent EU countries, arguing that Meta is contravening various facets of the GDPR. One of those concerns opt-in versus opt-out: where the processing of personal data takes place, users should be asked for their consent first, rather than having to take action to refuse.

Meta, for its part, was relying on a GDPR provision called “legitimate interests” to contend that its actions were compliant with the regulations. This isn’t the first time Meta has leaned on this legal basis: it previously did so to justify processing European users’ data for targeted advertising.

It always seemed likely that regulators would at least put a halt to Meta’s planned changes, especially given how difficult the company made it for users to “opt out” of having their data used. The company said it sent out more than 2 billion notifications informing users of the upcoming changes, but unlike other important public messaging that gets pinned to the top of users’ feeds, such as prompts to go out and vote, these notifications appeared alongside users’ standard notifications: friends’ birthdays, photo tag alerts, group announcements and more. So if someone didn’t regularly check their notifications, it was all too easy to miss this.

And those who did see the notification wouldn’t automatically have known that there was a way to object or opt out, as it simply invited users to click through to find out how Meta would use their information. There was nothing to suggest that there was a choice here.

Meta AI notification
Image Credits: Meta

Furthermore, users technically weren’t able to “opt out” of having their data used. Instead, they had to complete an objection form setting out their reasons for not wanting their data to be processed. It was entirely at Meta’s discretion whether to honor that request, though the company said it would honor every one.

Facebook "Objection" Form
Facebook “Objection” Form
Image Credit: meta / screenshots

While the objection form was linked from the notification itself, anyone proactively looking for it in their account settings had their work cut out.

On Facebook’s website, they first had to click their profile photo at the top right; hit “Settings & privacy”; tap “Privacy Center”; scroll down and click on the “Generative AI at Meta” section; then scroll down again, past a bunch of links, to a section titled “More resources.” The first link under that section, “How Meta uses information for generative AI models,” required reading through some 1,100 words before reaching a discrete link to the company’s “right to object” form. It was a similar story in the Facebook mobile app.

link to "Right to object" Form
Link to “Right to Objection” form
Image Credit: meta / screenshots

Asked earlier this week why this process required users to file an objection rather than opt in, Meta’s policy communications manager Matt Pollard pointed TechCrunch to the company’s existing blog post, which states: “We believe this legal basis [“legitimate interests”] is the best balance for processing public data at the scale needed to train AI models, while respecting people’s rights.”

Translated: making this opt-in likely wouldn’t generate enough “scale” in terms of people willing to offer up their data. So the surest way around that was to issue a solitary notification among users’ other notifications; hide the objection form behind half a dozen clicks for anyone seeking to “opt out” independently; and then require them to justify their objection, rather than giving them a straight opt-out.

In an updated blog post on Friday, Stefano Fratta, Meta’s global director of privacy policy engagement, said the company was “disappointed” by the request from the DPC.

“This is a step backwards for European innovation, competition in AI development, and a further delay in delivering the benefits of AI to people in Europe,” Fratta wrote. “We are confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we are more transparent than many of our industry counterparts.”

The AI arms race

None of this is new, of course, and Meta is in an AI arms race that has shone a huge spotlight on the vast stores of data Big Tech holds on all of us.

Earlier this year, Reddit revealed that it is contracted to make more than $200 million in the coming years from licensing its data to companies such as ChatGPT-maker OpenAI and Google. And the latter of those companies is already facing hefty fines for leaning on copyrighted news content to train its generative AI models.

But these efforts also highlight the lengths companies will go to in order to leverage this data within the constraints of existing legislation; “opting in” is rarely on the agenda, and the process of opting out is often needlessly arduous. Just last month, someone spotted some dubious wording in an existing Slack privacy policy that suggested it would be able to leverage user data for training its AI systems, with users able to opt out only by emailing the company.

And last year, Google finally gave online publishers a way to opt their websites out of training its models, by enabling them to inject a piece of code into their sites. OpenAI, for its part, is building a dedicated tool to allow content creators to opt out of training its generative AI smarts; this should be ready by 2025.
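In Google’s case, that “piece of code” is a robots.txt directive rather than anything elaborate: publishers disallow a crawler token called Google-Extended to keep their content out of AI training. As a minimal sketch, here is how a publisher might check that such an opt-out is in place, using Python’s standard robotparser module; the domain is a hypothetical placeholder, not one mentioned in this article.

```python
# Sketch: check whether a site's robots.txt blocks the "Google-Extended"
# crawler token, the mechanism Google offers publishers to opt out of
# having their content used for AI training.
from urllib import robotparser

SITE = "https://example.com"  # hypothetical publisher domain

parser = robotparser.RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()

# A publisher opting out would typically add to robots.txt:
#   User-agent: Google-Extended
#   Disallow: /
if parser.can_fetch("Google-Extended", f"{SITE}/"):
    print("Google-Extended is allowed: content may be used for AI training.")
else:
    print("Google-Extended is disallowed: the site has opted out.")
```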

While Meta’s efforts to train its AI on users’ public content in Europe are on hold for now, they will likely resurface in some form after consultation with the DPC and ICO, hopefully with a different user-permission process in tow.

“To make the most of generative AI and the opportunities it brings, it’s vital that the public can trust that their privacy rights will be respected from the start,” Stephen Almond, the ICO’s executive director for regulatory risk, said in a statement on Friday. “We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have implemented and to ensure that the information rights of U.K. users are protected.”
