Comment on Judges Are Fed Up With Lawyers Using AI That Hallucinates Court Cases
sugar_in_your_tea@sh.itjust.works 2 days ago
Technically it’s not, because the LLM doesn’t decide to do anything; it just generates an answer based on a mixture of the input and the training data, plus some randomness.
That said, I think it makes sense to say it is lying if the text it generates deceives the user.
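The "plus some randomness" in the comment above usually refers to temperature sampling at decode time: the model's scores for each candidate token are deterministic, and randomness enters when one token is drawn from that distribution. A minimal sketch (function and parameter names are illustrative, not any real library's API):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Draw one token id from raw model scores (logits).

    Illustrative only: real decoding stacks add tricks like top-k or
    nucleus sampling, but the core idea is the same. Higher temperature
    flattens the distribution, making unlikely tokens more probable --
    one ingredient in why outputs can vary (and sometimes go wrong).
    """
    rng = rng or random.Random()
    # Softmax with temperature (subtract the max for numerical stability).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample an index proportionally to its probability.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

At very low temperature this behaves almost greedily (the highest-scoring token nearly always wins); at high temperature the draws spread across more tokens.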
Ulrich@feddit.org 2 days ago
And is that different from the way you make decisions, fundamentally?
petrol_sniff_king@lemmy.blahaj.zone 2 days ago
I don’t think I run on AMD or Intel, so uh, yes.
Ulrich@feddit.org 2 days ago
I didn’t say anything about either.
sugar_in_your_tea@sh.itjust.works 2 days ago
Idk, that’s still an area of active research. I certainly think it’s very different, since my understanding is that human thought is based on concepts rather than on denoising noise or whatever it is LLMs do.