Comment on A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.
wintermute@discuss.tchncs.de 1 month ago
Exactly. LLMs don’t understand what the data means semantically; they just model how often some words appear close to others.
Of course this is oversimplified, but that’s the main idea.
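(A toy sketch of that oversimplified idea, counting raw word co-occurrences — the names here like `cooccurrence_counts` are made up for illustration, and real LLMs learn dense vector representations rather than counting pairs directly:)

```python
from collections import Counter

def cooccurrence_counts(tokens, window=2):
    """Count how often word pairs appear within `window` tokens of each other."""
    counts = Counter()
    for i, word in enumerate(tokens):
        for neighbor in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((word, neighbor)))] += 1
    return counts

tokens = "the court reporter wrote about the trial in the court".split()
print(cooccurrence_counts(tokens).most_common(3))
```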
vrighter@discuss.tchncs.de 1 month ago
Nothing to do with all that. The explanation is simple: the output of the LLM is sampled using a random process, a loaded die with face probabilities taken from the LLM’s output distribution. It’s as simple as that. There is literally a random element that is not part of the LLM itself, yet is required for its output to be of any use whatsoever.
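(Here’s roughly what that “loaded die” looks like in code — a minimal sketch, not any particular library’s sampler; the `sample_next_token` name, the toy logits, and the temperature parameter are all made up for illustration:)

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0, rng=None) -> int:
    """Roll the 'loaded die': draw one token id at random from the model's
    output distribution instead of always taking the most likely token."""
    rng = rng or np.random.default_rng()
    # Softmax with temperature turns raw scores into probabilities;
    # higher temperature flattens the die, lower temperature loads it more.
    scaled = logits / temperature
    scaled -= scaled.max()   # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy example: a 4-token vocabulary. Different runs can give different tokens.
logits = np.array([2.0, 1.0, 0.5, -1.0])
print(sample_next_token(logits))
```

The random draw happens outside the model itself: the LLM only produces the scores, and the sampler decides which token actually comes out.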