Comment on A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.

gcheliotis@lemmy.world 1 month ago

The AI did not “decide” anything. It has no will, and no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust in the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact, we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence, with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.
