The Dolphin models and Microsoft’s Phi models have used this successfully, and there’s some evidence that all the newer models use big LLMs to produce synthetic data (e.g. when asked what they are, some will answer that they’re ChatGPT or Claude, hinting that at least part of their dataset comes from those models).
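To make the idea concrete: “using a big LLM to produce synthetic data” usually means querying a stronger teacher model for responses to a set of prompts and fine-tuning a smaller model on the resulting pairs. Here’s a minimal, hypothetical sketch of that distillation step, assuming the OpenAI Python SDK; the teacher model name and seed prompts are placeholders, not any particular lab’s actual pipeline:

```python
# Hypothetical sketch: generate synthetic instruction/response pairs from a
# "teacher" model, to be used later as fine-tuning data for a smaller model.
# The model name and seed prompts are placeholders, not a real pipeline.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

seed_instructions = [
    "Explain how a binary search works.",
    "Summarize the causes of the French Revolution.",
]

records = []
for instruction in seed_instructions:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder teacher model
        messages=[{"role": "user", "content": instruction}],
    )
    records.append({
        "instruction": instruction,
        "output": response.choices[0].message.content,
    })

# Write the pairs out as JSONL, the format most fine-tuning tools accept.
with open("synthetic_train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

Distillation like this is also why a student model can end up claiming to be ChatGPT: unless such self-references are filtered out of the synthetic set, the teacher’s identity leaks into the student’s training data.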
Comment on: “A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.”
vrighter@discuss.tchncs.de 1 month ago
Yes it is, and it doesn’t work.
theterrasque@infosec.pub 1 month ago
Rivalarrival@lemmy.today 1 month ago
It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner’s responses.
It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite “you’re wrong” feedback from its partners, and it is instructed to minimize such feedback.
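One concrete way to read that proposal: mine past conversations for assistant turns that drew corrective replies, and build a labelled set that a later fine-tuning pass (RLHF, DPO, or similar) could down-weight. A purely illustrative sketch, with made-up conversation data and naive phrase matching standing in for a real feedback signal:

```python
# Illustrative sketch only: flag assistant turns that drew "you're wrong"-style
# replies, producing labelled examples a later fine-tuning step could down-weight.
# The conversation data and the phrase list are invented for the example.
CORRECTION_PHRASES = ("you're wrong", "that's incorrect", "no, that's not right")

conversation = [
    {"role": "assistant", "content": "The capital of Australia is Sydney."},
    {"role": "user", "content": "You're wrong, it's Canberra."},
    {"role": "assistant", "content": "Apologies, the capital is Canberra."},
    {"role": "user", "content": "Thanks."},
]

def label_assistant_turns(turns):
    """Pair each assistant turn with whether the next user turn corrected it."""
    labeled = []
    for i, turn in enumerate(turns):
        if turn["role"] != "assistant":
            continue
        followup = turns[i + 1]["content"].lower() if i + 1 < len(turns) else ""
        corrected = any(p in followup for p in CORRECTION_PHRASES)
        labeled.append({"output": turn["content"], "drew_correction": corrected})
    return labeled

for example in label_assistant_turns(conversation):
    print(example)
```

Note that the label only records that the partner pushed back, not whether the pushback was correct, which is the weakness the replies below point at.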
vrighter@discuss.tchncs.de 1 month ago
Yeah, that implies that the other network(s) can tell right from wrong. Which they can’t. Because if they could, the problem wouldn’t need solving.
Rivalarrival@lemmy.today 1 month ago
What other networks?
It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn’t need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.
LillyPip@lemmy.ca 1 month ago
Have you tried doing this? I have, for 6 months, on the more ‘advanced’ pro versions. Yes, it will apologise and try again – and it gets progressively worse over time. There’s been a marked degradation as it progresses, and all the models are worse now at maintaining context and not hallucinating than they were several months ago.
LLMs aren’t the kind of AI that can evaluate themselves and improve like you’re suggesting. Their logic just doesn’t work like that. A true AI will come from an entirely different type of model, not from LLMs.
vrighter@discuss.tchncs.de 1 month ago
Here’s that same conversation with a human:
“Why is X?” “Because Y!” “You’re wrong.” “Then why the hell did you ask me, if you already knew the answer?”
What you’re describing will train the network to get the wrong answer and then apologize better. It won’t train it to get the right answer.
linearchaos@lemmy.world 1 month ago
Alpaca is doing this successfully, no?
vrighter@discuss.tchncs.de 1 month ago
From their own site:
Alpaca also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003.
linearchaos@lemmy.world 1 month ago
So do GPT-3 and GPT-4; they’re still in use, and Alpaca is cheaper.
vrighter@discuss.tchncs.de 1 month ago
Yeah, what’s your point? I said hallucinations are not a solvable problem with LLMs. You mentioned that Alpaca used synthetic data successfully. By their own admission, all the problems are still there, and some are worse.