Games News Hub

User files legal complaint against ChatGPT after it generated baseless criminal allegations in response to his name

AI ‘hallucinations’ are a well-documented phenomenon. Because Large Language Models only make their best guess about which word is most likely to come next and don’t understand things like context, they’re prone to simply making stuff up. Between fake cheese facts and stomach-turning medical advice, disinformation like this may be funny, but it is far from harmless. Now, there may actually be legal recourse.

A Norwegian man called Arve Hjalmar Holmen recently struck up a conversation with ChatGPT to see what information OpenAI’s chatbot would offer when he typed in his own name. He was horrified when ChatGPT allegedly spun a yarn falsely claiming he’d killed his own sons and been sentenced to 21 years in prison (via TechCrunch). The creepiest aspect? Around the story of the made-up crime, ChatGPT included some accurate, identifiable details about Holmen’s personal life, such as the number and gender of his children, as well as the name of his home town.

