Comment on Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases

sugar_in_your_tea@sh.itjust.works 2 days ago

Technically it’s not, because the LLM doesn’t decide to do anything; it just generates an answer based on a mixture of the input, the training data, and some randomness.
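Mechanically, that "mixture plus randomness" can be sketched as sampling from the model's output distribution over next tokens. This is a toy illustration, not any real model's API; the token names and scores here are made up:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Pick the next token given raw scores from a model.

    `logits` is a hypothetical dict mapping candidate tokens to the
    scores the trained weights produce for a given input; the only
    randomness is in sampling the resulting distribution.
    """
    rng = random.Random(seed)
    # Softmax with temperature: higher temperature flattens the
    # distribution, making the choice more random.
    scaled = {tok: s / temperature for tok, s in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    tokens = list(exps)
    weights = [exps[tok] / total for tok in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Same input, same weights, same seed -> same output every time;
# the model never "decides" anything, it just samples.
print(sample_next_token({"cites": 2.0, "hallucinates": 1.5, "objects": 0.1}, seed=0))
```

The point is that nothing in this loop models intent: the distribution is fixed by the input and the weights, and only the sampling step varies.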

That said, I think it makes sense to say it is lying if the text it generates convinces the user that a falsehood is true.
