“… by accident.” It’s more of an emergent feature than anything done deliberately, given the way LLMs work.
Comment on Study finds that ChatGPT will cheat when given the opportunity and lie to cover it up later.
gandalf_der_12te@feddit.de 11 months ago
Bullshit.
It should instead read:
“Humans were stupid and taught a ChatBot how to cheat and lie.”
merc@sh.itjust.works 11 months ago
No, “cheating” and “lying” imply agency. LLMs are just “spicy autocomplete”. They have no agency. They can’t distinguish between lies and the truth. They can’t “cheat” because they don’t understand rules. It’s just that sometimes the auto-generated text happens to be true, and other times it happens to be false.
gandalf_der_12te@feddit.de 11 months ago
I disagree. This isn’t a meaningful distinction; it doesn’t help anyone in practice. Sure, it clears up legal questions of responsibility (and I’m not even sure about that one in the future), but apart from that, drawing an artificial distinction between a human and something that looks and acts like a human provides no real-world value.
merc@sh.itjust.works 11 months ago
Sure it does, because assigning agency to LLMs is like saying “the dice are lucky” or “this coin I’m flipping hates me”. LLMs are massively complex and very good at simulating human-generated text. But there’s no agency there. As soon as people start thinking there’s agency, they start thinking that LLMs are “making decisions” or “being deceptive”. But it’s just spicy autocomplete. We know exactly how it works, and there’s no thinking involved. There’s no planning. There’s no consciousness. There’s just spitting out the next word based on an insanely deep training data set.
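For illustration, the “next word” mechanism merc describes can be sketched in a few lines. Below is a minimal greedy next-token loop using the open GPT-2 model via the Hugging Face transformers library; the model, prompt, and decoding choices are illustrative assumptions, since ChatGPT’s actual model and sampling settings aren’t public:

```python
# Minimal sketch of next-token prediction: no goals, no plans, just
# "which token is most likely next?" applied repeatedly in a loop.
# GPT-2 is used purely as an open stand-in (an assumption); this is
# not ChatGPT's actual model or decoding configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The study found that the chatbot"
for _ in range(20):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits          # (1, seq_len, vocab_size)
    next_id = int(logits[0, -1].argmax())        # greedy: most probable token
    text += tokenizer.decode([next_id])          # append and repeat

print(text)  # whatever continuation the training data makes most probable
```

Whether the continuation comes out “true” or “false” is decided by nothing more than those learned probabilities.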
gandalf_der_12te@feddit.de 11 months ago
I believe that at a certain point, “agency” is an emergent feature. That means that, while all the individual pieces are well understood probability-wise, the total picture is still more than the sum of them.
It makes sense to me to accept that if it looks like a duck and it quacks like a duck, then it is a duck, for many (but not all) important purposes.
barsoap@lemm.ee 11 months ago
The current models that we have, running in inference mode, are type 1 systems (fast, intuitive pattern-matching, not deliberate reasoning). Criminal law requires defendants to be able to understand guilt as a prerequisite for having a guilty mind; that’s why asylums for the criminally insane exist: not even all humans can do that. You’re trying to apply that standard to an overcomplicated thermostat.
Karyoplasma@discuss.tchncs.de 11 months ago
If your parrot or budgie picks up some of the words you frequently use and reproduces them in the wrong context, would you consider your pet to be lying? Because that’s basically what ChatGPT is: a digital parrot.
wildginger@lemmy.myserv.one 11 months ago
ChatGPT is a very, very, very, very large algorithm that uses language instead of numbers, and it runs off of patterns found within the data set that is plugged into the algorithm.
There’s a gulf of meaning between a calculator that uses words instead of numbers and a person.