ChatGPT hit with privacy complaint over defamatory hallucinations: ChatGPT created a fake child murderer.
Submitted 2 months ago by Tea@programming.dev to technology@lemmy.world
https://noyb.eu/en/ai-hallucinations-chatgpt-created-fake-child-murderer
OpenAI’s highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it. In many cases, these so-called “hallucinations” can seriously damage a person’s reputation: In the past, ChatGPT falsely accused people of corruption, child abuse – or even murder. The latter was the case with a Norwegian user. When he tried to find out if the chatbot had any information about him, ChatGPT confidently made up a fake story that portrayed him as a convicted murderer. This clearly isn’t an isolated case. noyb has therefore filed its second complaint against OpenAI. By knowingly allowing ChatGPT to produce defamatory results, the company clearly violates the GDPR’s principle of data accuracy.
biofaust@lemmy.world 2 months ago
Despite what others are saying, there is indeed an inaccuracy in calling this a privacy complaint. A lot of people outside the EU conflate privacy with data protection, but they are not the same: the GDPR does not concern itself with privacy, but exclusively with personal data protection.
Accuracy, availability and governance of personal data are indeed important criteria for data protection, and that is what this case is about.
Regarding people making shit up: if they make such things public, the GDPR governs that just as much, while the usual legislation still applies for slander charges.
Uranium_Green@sh.itjust.works 2 months ago
1. The community this has been posted in, for me, is Technology, not Privacy.
2. And those people should also face scrutiny if they are making up potentially life-ruining stuff, such as accusing someone of being a child murderer. The bit I’d want some context for is whether this is a one-off hallucination, or a consistent one that multiple separate users would see if they asked about this person.
If it’s a one-off hallucination, it’s not good, but nowhere near as bad as a consistent ‘hard-baked’ hallucination.
boonhet@lemm.ee 2 months ago
The headline is what says there’s a privacy complaint.
donuts@lemmy.world 2 months ago
OpenAI was hit with a privacy complaint; I don’t think the comment was about which community this was posted in.
thann@lemmy.dbzer0.com 2 months ago
This is literally a story of it doing that…
Eheran@lemmy.world 2 months ago
It is literally not. He chatted with it, and it always gives some answer. This is not privacy-related; it was made up in a private chat.