AI ‘hallucinations’ are a well-documented phenomenon. Because large language models are only making their best guess at which word is most likely to come next, with no grounding in whether the result is actually true, they’re prone to simply making things up. Between fake cheese facts and stomach-turning medical advice, this kind of misinformation may be funny, but it’s far from harmless. Now, there may actually be legal recourse.
A Norwegian man called Arve Hjalmar Holmen recently struck up a conversation with ChatGPT to see what information OpenAI’s chatbot would offer when he typed in his own name. He was horrified when ChatGPT allegedly spun a yarn falsely claiming he’d killed his own sons and been sentenced to 21 years in prison (via TechCrunch). The creepiest aspect? Around the story of the made-up crime, ChatGPT included accurate, identifiable details about Holmen’s personal life, such as the number and gender of his children and the name of his home town.
The privacy rights advocacy group Noyb soon got involved. The organisation told TechCrunch it carried out its own investigation into why ChatGPT might be outputting these claims, checking whether someone with a similar name had perhaps committed serious crimes. Ultimately, it could not find anything substantial along these lines, so the ‘why’ behind ChatGPT’s hair-raising output remains unclear.
The chatbot’s underlying AI model has since been updated, and it no longer repeats the defamatory claims. However, Noyb, having previously filed complaints over ChatGPT outputting inaccurate information about public figures, was not satisfied to close the book here. The organisation has now filed a complaint with Datatilsynet (the Norwegian Data Protection Authority) on the grounds that ChatGPT violated GDPR.
Under Article 5(1)(d) of the GDPR, companies processing personal data have to ensure that it’s accurate, and if it isn’t, it must be corrected or deleted. Noyb makes the case that, just because ChatGPT has stopped falsely accusing Holmen of being a murderer, that doesn’t mean the data has been deleted.
Noyb wrote, “The incorrect data may still remain part of the LLM’s dataset. By default, ChatGPT feeds user data back into the system for training purposes. This means there is no way for the individual to be absolutely sure that this output can be completely erased […] unless the entire AI model is retrained.”
Noyb also alleges that, by its nature, ChatGPT does not comply with Article 15 of the GDPR, which grants individuals the right of access to their personal data. Simply put, there’s no guarantee that you can retrieve whatever you feed into ChatGPT, or see what data about you has made its way into its dataset. On this point, Noyb writes, “This fact understandably still causes distress and fear for the complainant, [Holmen].”
At present, Noyb is requesting that Datatilsynet order OpenAI to delete the inaccurate data about Holmen and to ensure that ChatGPT can’t hallucinate another horror story about someone else. Given that OpenAI’s current approach amounts to the disclaimer “ChatGPT can make mistakes. Consider checking important information,” displayed in tiny font at the bottom of each user session, this is perhaps a tall order.
Still, I’m glad to see Noyb apply legal pressure to OpenAI, especially as the US government has seemingly thrown caution to the wind and gone all in on AI with the ‘Stargate’ infrastructure plan. When ChatGPT can easily output defamatory claims right alongside accurate, identifying information, a crumb of caution feels like less than the bare minimum.