Comment on: A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.
vrighter@discuss.tchncs.de 3 months ago
Also, what you described has already been studied. Training an LLM on its own output completely destroys it; it doesn't make it better.
linearchaos@lemmy.world 3 months ago
This is incorrect, or perhaps outdated. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing, roughly as sketched below.
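A minimal sketch of that pipeline (the generator and judge models, the acceptance threshold, and the prompts here are placeholder assumptions, not any specific published setup):

```python
# Sketch of "generate new data, tag it with a different model, train on the result".
# The models and threshold below are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # produces candidate data
judge = pipeline(                                       # a *different* model tags/filters it
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

prompts = ["Explain how court reporting works.", "Summarize a trial verdict."]
synthetic = []

for prompt in prompts:
    candidate = generator(prompt, max_new_tokens=64)[0]["generated_text"]
    verdict = judge(candidate)[0]                       # tag the generated text
    if verdict["label"] == "POSITIVE" and verdict["score"] > 0.9:
        synthetic.append({"prompt": prompt, "response": candidate})

# `synthetic` would then feed an ordinary fine-tuning run on the target model.
```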
vrighter@discuss.tchncs.de 3 months ago
Yes, it is, and it doesn't work.
linearchaos@lemmy.world 3 months ago
Alpaca is doing this successfully, no?
vrighter@discuss.tchncs.de 3 months ago
from their own site:
Alpaca also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003.
theterrasque@infosec.pub 3 months ago
Microsoft’s Phi models and the Dolphin models have used this successfully, and there’s some evidence that all newer models use big LLMs to produce synthetic data (like, when asked, answering that they’re ChatGPT or Claude, hinting that at least some of the dataset comes from those models).
Rivalarrival@lemmy.today 3 months ago
It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner’s responses.
It recognizes when it is told that it is wrong: it is fed data showing that certain outputs often invite “you’re wrong” feedback from its partners, and it is instructed to minimize such feedback. A rough sketch of that signal is below.
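As a toy illustration of that feedback signal (the marker phrases, example exchanges, and data layout are assumptions for the sketch, not how any real system encodes it):

```python
# Toy sketch: score each model output by whether the conversation partner's
# reply pushes back, and keep only uncorrected exchanges for retraining.
CORRECTION_MARKERS = ("you're wrong", "that's incorrect", "no, actually")

def reward(partner_reply: str) -> int:
    """Return 1 if the partner accepted the output, 0 if they pushed back."""
    reply = partner_reply.lower()
    return 0 if any(marker in reply for marker in CORRECTION_MARKERS) else 1

conversation = [
    {"model_output": "The reporter was the defendant.",
     "partner_reply": "You're wrong, he only covered the trials."},
    {"model_output": "The reporter covered the trials as a journalist.",
     "partner_reply": "Right, exactly."},
]

# Exchanges that earned reward 1 would be folded back into the training set;
# reward 0 exchanges would be down-weighted or used as negative examples.
retraining_set = [turn for turn in conversation if reward(turn["partner_reply"]) == 1]
print(retraining_set)
```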
vrighter@discuss.tchncs.de 3 months ago
Yeah, that implies that the other network(s) can tell right from wrong. Which they can’t. Because if they could, the problem wouldn’t need solving.