Comment on A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.
Rivalarrival@lemmy.today 1 month ago
It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner’s responses.
It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite “you’re wrong” feedback from its partners, and it is instructed to minimize such feedback.
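Roughly what that could look like (a toy sketch only, not how any actual vendor’s pipeline works): scan old sessions for replies like “you’re wrong” and weight the outputs that drew them negatively in the next fine-tuning pass. The cue list and weighting scheme here are made up for illustration.

```python
# Toy sketch (illustrative, not any real training pipeline): mine past
# sessions for "you're wrong"-style feedback and turn them into weighted
# fine-tuning examples, so outputs that drew corrections are down-weighted.

NEGATIVE_CUES = ("you're wrong", "that's not right", "incorrect")

def label_turns(session):
    """session: list of (speaker, text) tuples alternating model/user."""
    examples = []
    for i in range(len(session) - 1):
        speaker, text = session[i]
        next_speaker, next_text = session[i + 1]
        if speaker == "model" and next_speaker == "user":
            drew_correction = any(cue in next_text.lower() for cue in NEGATIVE_CUES)
            # Negative weight discourages the output; positive weight keeps it.
            examples.append({"output": text, "weight": -1.0 if drew_correction else 1.0})
    return examples

session = [
    ("user", "Why is X?"),
    ("model", "Because Y!"),
    ("user", "You're wrong."),
    ("model", "Apologies, it is actually because Z."),
    ("user", "Thanks, that's right."),
]
for ex in label_turns(session):
    print(ex)
```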
vrighter@discuss.tchncs.de 1 month ago
Yeah, that implies that the other network(s) can tell right from wrong. Which they can’t. Because if they could, the problem wouldn’t need solving.
Rivalarrival@lemmy.today 1 month ago
What other networks?
It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn’t need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.
LillyPip@lemmy.ca 1 month ago
Have you tried doing this? I have, for 6 months, on the more ‘advanced’ pro versions. Yes, it will apologise and try again – and it gets progressively worse over time. There’s been a marked degradation as it progresses, and all the models are worse now at maintaining context and not hallucinating than they were several months ago.
LLMs aren’t the kind of AI that can evaluate themselves and improve like you’re suggesting. Their logic just doesn’t work like that. A true AI will come from an entirely different type of model, not from LLMs.
vrighter@discuss.tchncs.de 1 month ago
here’s that same conversation with a human:
“why is X?” “because Y!” “you’re wrong” “then why the hell did you ask me if you already know the answer?”
What you’re describing will train the network to get the wrong answer and then apologize better. It won’t train it to get the right answer.
Rivalarrival@lemmy.today 1 month ago
I can see why you would think that, but to see how it actually goes with a human, look at the interaction between a parent and child, or a teacher and student.
“Johnny, what’s 2+2?”
“5?”
“No, Johnny, try again.”
“Oh, it’s 4.”
Turning Johnny into an LLM:
The next time someone asks, he might not remember 4, but he does remember that “5” consistently gets him a “that’s wrong” response. So does “3”.
But the only way he knows that 5 and 3 get a negative reaction is by training on his own data, learning from his own mistakes.
He becomes a better and better mimic, which gets him up to about a 5th grade level of intelligence instead of a toddler’s.
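To make the analogy concrete, here’s a toy sketch of that feedback loop. It uses a lookup table rather than a neural network, so it only illustrates “remembering which of your own answers got a negative reaction,” not how an LLM is actually trained; all the names are invented for the example.

```python
# Toy sketch of "learning from his own mistakes": the learner tallies which
# of its past answers drew a "that's wrong" reaction and stops offering them.
# (Purely illustrative; real LLM training is gradient-based, not a table.)
import random
from collections import defaultdict

class Johnny:
    def __init__(self, candidates):
        self.candidates = list(candidates)
        self.bad = defaultdict(int)  # answer -> times it got "that's wrong"

    def answer(self, question):
        # Prefer answers that haven't been marked wrong yet.
        untried = [c for c in self.candidates if self.bad[c] == 0]
        return random.choice(untried or self.candidates)

    def feedback(self, answer, was_wrong):
        if was_wrong:
            self.bad[answer] += 1

johnny = Johnny(candidates=["3", "4", "5"])
for _ in range(10):
    a = johnny.answer("What's 2+2?")
    johnny.feedback(a, was_wrong=(a != "4"))
print(johnny.answer("What's 2+2?"))  # usually "4" by now, once "3" and "5" have drawn negative reactions
```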