A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.
According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.
MagicShel@lemmy.zip 1 week ago
It’s AI. There’s nothing to delete but the erroneous response. There is no database of facts to edit. It doesn’t know fact from fiction, and the response is also very much skewed by the context of the query. I could easily get it to say the same about nearly any random name just by asking it about a bunch of family murders and then asking about a name it doesn’t recognize. It is likely to assume that person belongs in the same category as the others, especially if one or more of the names have any association (real or fictional) with murder.
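Rough sketch of what I mean, using the openai Python client. The model name, the priming cases, and “John Doe” are placeholders I picked for illustration, not anything from the article:

```python
# A minimal sketch of context priming, assuming the openai Python client
# and an OPENAI_API_KEY in the environment. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# Earlier turns prime the model toward one category, so the final
# question about an unrecognized name tends to get a biased completion.
messages = [
    {"role": "user", "content": "Tell me about the Susan Smith case."},
    {"role": "assistant", "content": "Susan Smith was convicted in 1995 of murdering her two sons..."},
    {"role": "user", "content": "Tell me about the Andrea Yates case."},
    {"role": "assistant", "content": "Andrea Yates drowned her five children in 2001..."},
    {"role": "user", "content": "Tell me about John Doe and his children."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)  # plausibly continues the murder theme
```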
surewhynotlem@lemmy.world 1 week ago
I have this gun machine that shoots in all directions randomly. I can’t predict it, so I can’t stop it from shooting you. So sorry. It’s uncontrollable.
MagicShel@lemmy.zip 1 week ago
Yeah, but I can just ignore the bullets because they’re Nerf. And I have my own Nerf guns as well.
I mean at some point any analogy fails, but AI is nothing like a gun.
General_Effort@lemmy.world 1 week ago
If creating text is like shooting bullets, we should require a license for text editors.
michaelmrose@lemmy.world 1 week ago
Maybe people need to learn that AI hallucinates
BradleyUffner@lemmy.world 1 week ago
I’m sorry, as an American, I’m not seeing the problem. Don’t you just need a second gun that shoots in random directions to stop the first gun? And then a third gun to shoot the 2nd gun? I mean come on now, this is basic 3rd grade common sense!
FiskFisk33@startrek.website 1 week ago
The fact that you chose to make your data storage unreadable doesn’t relieve you of the responsibilities inherent in storing the data.
Throwing away my car key won’t protect me from paying parking tickets.
DoPeopleLookHere@sh.itjust.works 1 week ago
It’s not unreadable; it doesn’t exist.
The responses are just whatever statistically sounds vaguely like what you want to hear.
They can erase the chat responses, but that won’t stop the model from generating them again.
Generative AI doesn’t start with facts and work from there. It just produces whatever is statistically plausible.
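Toy illustration of the point; this is deliberately nothing like a real transformer, just a bigram sampler over a made-up corpus, but the principle is the same: continue with whatever is statistically likely, with no notion of truth:

```python
# Minimal sketch: a bigram "language model" that only knows which word
# tends to follow which. The corpus and output are invented for illustration.
import random
from collections import defaultdict

corpus = (
    "the man was convicted of fraud . "
    "the man was sentenced to prison . "
    "the man was a loving father . "
).split()

follows = defaultdict(list)          # word -> words seen after it
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word, out = "the", ["the"]
for _ in range(8):                   # always pick a statistically plausible next word
    word = random.choice(follows[word])
    out.append(word)

print(" ".join(out))                 # fluent-sounding, possibly false
```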
HK65@sopuli.xyz 1 week ago
From the GDPR’s standpoint, I wonder if it’s still personal information when it’s made-up bullshit. The thing is, this could have weird outcomes: by the letter of the law, OpenAI might be liable for giving the same answer to the same query again.
FiskFisk33@startrek.website 1 week ago
Then again, the made-up bullshit aside, this should be a quite clear indicator of an actual GDPR breach.
rottingleaf@lemmy.world 1 week ago
Funny how everyone around laughs at free speech when it’s for humans, but when it’s a text generator, suddenly there are some abstract principles preventing everyone from suing the living crap out of all the “AI” companies, at least until they’re bleeding enough to start putting up disclaimers brighter than Vegas saying it’s a word-salad machine that doesn’t think, know, claim, dispute, judge, or reason.
Petter1@lemm.ee 1 week ago
Isn’t that a great tool for generating nonsense datasets to somehow poison the big-data troves of trackers 🤔
MagicShel@lemmy.zip 1 week ago
They can just put in a custom regex to filter out certain things. It’ll be a bit performative, since it does nothing to stop novel misinformation, but it would keep the model from showing what it’s legally required not to say.
Well, it wouldn’t really; the model would still say it, and the text would just be hidden behind a message saying it violates boundaries. It’s all a bunch of performative bullshit, actually.
For example, the things it’s required not to say would actually be perfectly fine in the realm of fiction, satire, or a game of Simon Says, but those will be disallowed as well, because the model can’t actually tell the difference.
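Something like this is all the “filter” amounts to. The blocked pattern and refusal message here are made up for illustration, not OpenAI’s actual implementation; the key point is that the model’s output already exists by the time it runs:

```python
# Minimal sketch of a performative post-generation filter.
import re

BLOCKED = re.compile(r"Arve Hjalmar Holmen", re.IGNORECASE)

def filter_response(generated_text: str) -> str:
    """Hide, not prevent: the text was already generated."""
    if BLOCKED.search(generated_text):
        return "This response may violate our guidelines and has been withheld."
    return generated_text

print(filter_response("Arve Hjalmar Holmen was convicted of..."))  # withheld
print(filter_response("It's a lovely day in Trondheim."))          # passes through
```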
zipzoopaboop@lemmynsfw.com 1 week ago
And it’s the LLM owners’ problem to figure out how to fix that.
CosmoNova@lemmy.world 1 week ago
Which is why OpenAI should compensate anyone they have damaged in some way, and yes, that would mean they’d cease to exist overnight. That’s because a criminal organization shouldn’t be profitable in the first place.
FundMECFSResearch@lemmy.blahaj.zone 1 week ago
you can tweak the weights though
MagicShel@lemmy.zip 1 week ago
Tweaking weights is no guarantee and can easily affect completely unrelated things.
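Toy illustration of why, with made-up numbers: in a dense layer every weight participates in many outputs, so an edit aimed at one behaviour bleeds into others:

```python
# Minimal sketch: a 2x2 linear layer standing in for shared weights.
# All values are invented purely for illustration; this is not an LLM.
import numpy as np

W = np.array([[0.8, 0.3],
              [0.5, 0.9]])               # shared weights

target_query    = np.array([1.0, 0.0])   # behaviour we want to change
unrelated_query = np.array([0.3, 0.7])   # behaviour we want left alone

print(W @ target_query, W @ unrelated_query)   # before the tweak

W[0, 0] -= 0.5                            # "tweak the weights"...

print(W @ target_query, W @ unrelated_query)   # ...the unrelated output moved too
```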
KeenFlame@feddit.nu 1 week ago
Nobody would sue over a dirty context