Comment on A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.
vrighter@discuss.tchncs.de 1 month ago
here’s that same conversation with a human:
“why is X?” “because Y!” “you’re wrong.” “then why the hell did you ask me if you already knew the answer?”
What you’re describing will train the network to get the wrong answer and then apologize better. It won’t train it to get the right answer.
Rivalarrival@lemmy.today 1 month ago
I can see why you would think that, but to see how it actually goes with a human, look at the interaction between a parent and child, or a teacher and student.
“Johnny, what’s 2+2?”
“5?”
“No, Johnny, try again.”
“Oh, it’s 4.”
Turning Johnny into an LLM: the next time someone asks, he might not remember that the answer is 4, but he does remember that “5” consistently gets a “that’s wrong” response. So does “3”.
But the only way he knows that 5 and 3 get a negative reaction is by training on his own data, learning from his own mistakes.
He becomes a better and better mimic, which gets him up to about a 5th-grade level of intelligence instead of a toddler’s.
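The dynamic being described here can be sketched as a toy bandit learner. To be clear, this is an illustration of the feedback loop in the analogy, not how LLM training actually works; the candidate answers, weights, and update factors are all made-up for the sketch. Note what it shows: the learner only tracks which outputs draw a “that’s wrong” signal, and never represents the arithmetic itself.

```python
import random

random.seed(1)

# Toy "Johnny": picks an answer in proportion to its weight.
# Answers the teacher rejects get their weight pushed down;
# the accepted answer gets nudged up. (All values are arbitrary
# illustration choices, not real training hyperparameters.)
candidates = [3, 4, 5]
weights = {a: 1.0 for a in candidates}
correct = 4  # the teacher's ground truth

def answer():
    total = sum(weights.values())
    r = random.uniform(0, total)
    for a, w in weights.items():
        r -= w
        if r <= 0:
            return a
    return candidates[-1]

for _ in range(200):
    a = answer()
    if a != correct:
        weights[a] = max(0.05, weights[a] * 0.8)  # "that's wrong"
    else:
        weights[a] *= 1.05                        # approval

print(weights)  # weight on 4 dominates; 3 and 5 are suppressed
```

After a couple hundred rounds the learner almost always says “4”, yet it has learned nothing about addition, only which token avoids the negative response, which is also roughly vrighter’s objection below.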
vrighter@discuss.tchncs.de 1 month ago
Turning Johnny into an LLM does not work, because that’s not how the kid learns. Kids don’t learn math by mimicking the answers; they learn math by learning the concept of numbers. What you just taught the LLM is simply the answer to 2+2. Also, with LLMs there is no “next time”: it’s a completely static model.
Rivalarrival@lemmy.today 1 month ago
It’s only a completely static model if it is not allowed to use its own interactions as training data. If it is allowed to use the data acquired from those interactions, it stops being a static model.
vrighter@discuss.tchncs.de 1 month ago
If it’s allowed to use its own interactions as training data, it will collapse. This has been studied. Stuff just does not work the way you think it does. Try coding one yourself.
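The collapse vrighter is pointing at can be demonstrated with a toy model rather than a real LLM: repeatedly fit a distribution to samples drawn from the previous fit, and the estimated spread drifts toward zero as finite-sample noise compounds. This is only a minimal sketch of the studied phenomenon (a Gaussian stands in for the model, and the sample size and generation count are arbitrary choices):

```python
import random
import statistics

random.seed(0)

def fit_and_sample(data, n):
    """Fit a Gaussian to data, then generate n fresh samples from the fit."""
    mu = statistics.mean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

n = 50
# Generation 0: "real" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(n)]
stds = [statistics.pstdev(data)]

# Each generation trains only on the previous generation's output.
for _ in range(1000):
    data = fit_and_sample(data, n)
    stds.append(statistics.pstdev(data))

print(f"std at gen 0: {stds[0]:.3f}, std at gen 1000: {stds[-1]:.3f}")
```

Each refit slightly underestimates the true spread on average, and with no fresh real data those errors accumulate instead of washing out, so the distribution narrows generation after generation, a toy analogue of the collapse reported for models trained on their own outputs.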