
ChatGPT’s ‘hallucination’ problem hit with another privacy complaint in EU | TechCrunch

OpenAI is facing another privacy complaint in the European Union. Filed by privacy rights non-profit noyb on behalf of an individual complainant, it targets the inability of the company’s AI chatbot, ChatGPT, to correct misinformation it generates about individuals.

The tendency of GenAI tools to generate information that is simply wrong has been well documented. But it also puts the technology on a collision course with the bloc’s General Data Protection Regulation (GDPR) – which governs how regional users’ personal data can be processed.

Fines for GDPR compliance failures can reach up to 4% of global annual turnover. More importantly for a resource-rich giant like OpenAI, data protection regulators can order changes to how information is processed, so GDPR enforcement could reshape how generative AI tools are able to operate in the EU.

OpenAI was already forced to make some changes following an early intervention by Italy’s data protection authority, which briefly shut down ChatGPT locally back in 2023.

Now noyb is filing the latest GDPR complaint against ChatGPT with the Austrian data protection authority, on behalf of an anonymous complainant who found that the AI chatbot generated an incorrect date of birth for them.

Under the GDPR, individuals in the EU have a number of rights regarding information held about them, including the right to have inaccurate data corrected. noyb argues that OpenAI is failing to comply with this obligation in respect of its chatbot’s output. It said the company rejected the complainant’s request to rectify the incorrect date of birth, responding that it was technically impossible to correct it.

Instead it offered to filter or block data on certain signals, such as the name of the complainant.

OpenAI’s privacy policy states that users who discover the AI chatbot has generated “factually incorrect information about you” can submit a “correction request” through privacy.openai.com or by email to [email protected]. However, it caveats the line by warning: “Given the technical complexity of how our models work, we may not be able to correct inaccuracies in every case.”

In that case, OpenAI suggests users request that their personal information be removed from ChatGPT’s output entirely, by filling out a web form.

The problem for the AI giant is that GDPR rights are not à la carte. People in Europe have the right to request rectification of their data. They also have the right to request deletion of their data. But, as noyb points out, it is not for OpenAI to pick which of these rights are available.

Other elements of the complaint focus on GDPR transparency concerns, with noyb arguing that OpenAI is unable to say where the data it generates on individuals comes from, nor what data the chatbot stores about people.

This is important because, again, the regulation gives individuals the right to request such information by making a so-called subject access request (SAR). According to noyb, OpenAI did not adequately respond to the complainant’s SAR, failing to disclose any information about the data processed, its sources or recipients.

Commenting on the complaint in a statement, noyb data protection lawyer Maartje de Graaf said: “Creating misinformation is problematic enough in itself. But when it comes to misinformation about individuals, it can have serious consequences. It is clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. Technology must comply with legal requirements, not the other way around.”

noyb said it is asking the Austrian DPA to investigate the complaint about OpenAI’s data processing, as well as to impose a fine to ensure future compliance. But it also said it was “likely” the case would be dealt with via EU cooperation.

OpenAI is facing a similar complaint in Poland, where the local data protection authority opened an investigation into ChatGPT last September, following a complaint by a privacy and security researcher who likewise found he was unable to get OpenAI to correct misinformation about him. That complaint also accuses the AI giant of failing to comply with the regulation’s transparency requirements.

Meanwhile, the Italian data protection authority still has an open investigation into ChatGPT. In January it delivered a draft decision, saying it believed OpenAI had breached the GDPR in a number of ways, including in relation to the chatbot’s propensity to generate misinformation about people. The findings also address other crux issues, such as the lawfulness of processing.

The Italian authority gave OpenAI one month to respond to its findings. The final decision is still pending.

Now, with another GDPR complaint filed over its chatbot, OpenAI faces increased risk of a string of GDPR enforcement actions across different member states.

Last fall the company opened a regional office in Dublin, which appears designed to shrink its regulatory risk by having privacy complaints channeled through Ireland’s Data Protection Commission, thanks to a mechanism in the GDPR that is intended to streamline oversight of cross-border complaints by funneling them to a single member state authority, in the country where the company has its “main establishment.”
