A Norwegian man, Arve Hjalmar Holmen, was left stunned when ChatGPT falsely identified him as a convicted murderer who had killed two of his children and attempted to murder a third.
The fabricated accusation was detailed, incorporating true elements of his personal life—his real hometown and the correct number and gender of his children—making the falsehood even more disturbing.
The privacy advocacy group Noyb has now filed a formal complaint against OpenAI, arguing that ChatGPT’s ability to generate false and defamatory personal information is a clear violation of the European Union’s General Data Protection Regulation (GDPR).
“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, a data protection lawyer at Noyb. “If it’s not, users have the right to have it changed to reflect the truth.”
This is not an isolated case. ChatGPT has previously been accused of falsely implicating individuals in corruption, child abuse, and other serious crimes.
In one instance, the AI wrongly linked an Australian mayor to a bribery scandal, and in another, it accused a German journalist of child abuse. Despite these incidents, OpenAI has not provided a way for individuals to correct false information, only offering to block responses related to their names.
The issue at the heart of the complaint is that OpenAI's model generates responses by predicting, word by word, the statistically most likely continuation of a text, with no step that verifies factual accuracy.
The result? Fabricated stories that can cause real reputational harm. Holmen, the Norwegian complainant, summed up his fears: “Some think that ‘there is no smoke without fire.’ The fact that someone could read this output and believe it is true is what scares me the most.”
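To make that failure mode concrete, here is a deliberately tiny sketch of next-word prediction, the mechanism described above. It is a toy illustration, not OpenAI's actual system: the "model" simply samples whichever word most often followed the previous one in its training text, with no mechanism for checking whether the output is true.

```python
import random

# Toy bigram "language model": it records which word followed which in the
# training text, then generates by sampling from those observed frequencies.
corpus = "the man was convicted the man was acquitted the man was convicted".split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(prev):
    # Sample proportionally to observed frequency. "Statistically likely"
    # is not the same thing as "factually true".
    return random.choice(follows[prev])

# The model asserts whichever outcome dominated its data, regardless of
# what actually happened to this particular man.
print("the man was", next_word("was"))
```

In this toy, "convicted" comes out twice as often as "acquitted" simply because it appears twice as often in the training text. A large language model does the same thing at vastly greater scale, which is why its fabrications sound plausible while being unverified.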
Under GDPR, companies processing personal data must ensure its accuracy. If they fail, they could face fines of up to 4% of their global annual revenue. OpenAI's previous run-ins with European regulators have already led to penalties: Italy's data protection authority fined the company €15 million in late 2024 for processing personal data without a legal basis.
Yet, enforcement across Europe has been inconsistent. A similar Noyb-backed complaint filed in Austria in April 2024 was handed over to Ireland’s Data Protection Commission (DPC), which has yet to make a ruling. The Polish data protection authority has also been investigating a complaint since September 2023, with no conclusion in sight.
Noyb is now pushing for Norway’s regulator to take a firm stance, arguing that OpenAI’s U.S. entity, rather than its Irish subsidiary, should be held responsible. Whether this will speed up enforcement remains uncertain.
In response, OpenAI has altered ChatGPT's behaviour. Instead of generating answers from its internal model alone, it now retrieves information from the internet when asked about individuals. This change appears to have stopped ChatGPT from making false claims about Holmen, but it does not address the underlying problem: the false information may still be retained inside the model itself.
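The behaviour Noyb objects to resembles a retrieval-first pattern. The sketch below is an assumption about how such a system might be wired, not OpenAI's actual implementation; `web_search` and `generate` are hypothetical stubs. It illustrates why grounding answers in live sources can suppress a hallucination at the surface while leaving the base model's learned associations untouched.

```python
def web_search(name: str) -> list[str]:
    """Hypothetical stub for a live web lookup; a real system would query a search API."""
    return []  # pretend no reliable public sources were found

def generate(prompt: str) -> str:
    """Hypothetical stub for the underlying language-model call."""
    return "(answer grounded only in the supplied sources)"

def answer_about_person(name: str) -> str:
    snippets = web_search(name)
    if not snippets:
        # Refusing, or blocking the name outright, hides the hallucination
        # from users; the base model's internal associations are unchanged.
        return f"No reliable public information found about {name}."
    sources = "\n".join(snippets)
    return generate(f"Answer only from these sources:\n{sources}\nWho is {name}?")

print(answer_about_person("Arve Hjalmar Holmen"))
```

The design choice matters legally as well as technically: filtering or grounding the output changes what users see, but the personal data encoded in the model's weights is still being processed, which is precisely the point Noyb's lawyers press next.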
Kleanthi Sardeli, another Noyb data protection lawyer, dismissed OpenAI’s approach: “Adding a disclaimer that you do not comply with the law does not make the law go away.” She warned that AI companies “cannot just ‘hide’ false information from users while they internally still process false information.”