Comment on A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.
gcheliotis@lemmy.world 1 month ago
The AI did not “decide” anything. It has no will. And no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust in the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence, with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.
HelloHotel@lemmy.world 1 month ago
The AI “decided” in the same way the dice “decided” to land on 6 and 4 and screw me over. With AI, some people are just using this informal way of speaking, while others look at it and genuinely think, or want to pretend, that it’s alive. You can never really know without asking them directly.
Yes, if the intent is confusion, it is pretty manipulative.
gcheliotis@lemmy.world 1 month ago
Granted, our tendency towards anthropomorphism is near ubiquitous. But it would be disingenuous to claim that it does not play out in very specific and very important ways in how we speak and think about LLMs, given that they are capable of producing very convincing imitations of human behavior. And as such also produce a very convincing impression of agency. As if they actually do decide things. Very much unlike dice.
HelloHotel@lemmy.world 1 month ago
A doll is also designed to be anthropomorphized, to have life projected onto it. Unlike with dolls, when someone talks about LLMs as alive, most people have no clue if they are pretending or not. (And marketers take advantage of it!) We are fed a culture that accidentally says “chatGPT + Boston Dynamics robot = Robocop”, assuming the only fictional part is that we don’t have the ability to make it, not that the thing we create wouldn’t be human (or even need to be human).