Comment on A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.
linearchaos@lemmy.world 3 months ago
This is incorrect, or perhaps outdated. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing.
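For illustration, here is a minimal sketch of the kind of pipeline being described, with stand-in Hugging Face models and an arbitrary confidence threshold; it is not any particular lab's actual recipe, just one model generating text, a second model tagging it, and the kept examples becoming training data.

```python
# Sketch of a synthetic-data loop: a "teacher" model generates candidates,
# a second model tags/filters them, and the surviving examples are kept as
# training data for a separate student model. Model names, prompts, and the
# 0.8 threshold are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")          # stand-in generator
tagger = pipeline("text-classification",                        # stand-in labeler
                  model="distilbert-base-uncased-finetuned-sst-2-english")

prompts = [
    "Summarize the verdict in one sentence:",
    "Explain the appeal process in plain language:",
]

synthetic = []
for prompt in prompts:
    for cand in generator(prompt, max_new_tokens=40, do_sample=True,
                          num_return_sequences=4):
        text = cand["generated_text"]
        tag = tagger(text)[0]            # e.g. {"label": "POSITIVE", "score": 0.93}
        if tag["score"] > 0.8:           # keep only confidently tagged samples
            synthetic.append({"text": text, "label": tag["label"]})

# `synthetic` would then feed a normal fine-tuning run (e.g. with the
# transformers Trainer) for a separate student model.
print(f"kept {len(synthetic)} synthetic examples")
```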
vrighter@discuss.tchncs.de 3 months ago
yes it is, and it doesn’t work
linearchaos@lemmy.world 3 months ago
Alpaca is successfully doing this, no?
vrighter@discuss.tchncs.de 3 months ago
from their own site:
Alpaca also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003.
linearchaos@lemmy.world 3 months ago
So do GPT-3 and GPT-4; it’s still in use, and it’s cheaper.
theterrasque@infosec.pub 3 months ago
Microsoft’s Phi models and the community Dolphin models have used this successfully, and there’s some evidence that all newer models use big LLMs to produce synthetic data (like, when asked, answering that they’re ChatGPT or Claude, hinting that at least some of the dataset comes from those models).
Rivalarrival@lemmy.today 3 months ago
It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner’s responses.
It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite “you’re wrong” feedback from its partners, and it is instructed to minimize such feedback.
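As a rough illustration of that feedback signal, here is a toy Python sketch (with made-up transcripts, correction phrases, and reward values, not an actual training setup) that scans past conversations, flags assistant turns whose next human turn reads like a correction, and collects reward-weighted examples a later fine-tuning pass could use.

```python
# Toy version of "minimize the responses that drew 'you're wrong' feedback":
# walk a transcript, find assistant turns, and penalize any turn whose
# following human turn contains a correction cue.
CORRECTION_CUES = ("you're wrong", "that's incorrect", "no, actually")

def score_transcript(turns):
    """turns: ordered list of (speaker, text) tuples from one conversation."""
    examples = []
    for i in range(1, len(turns) - 1):
        speaker, text = turns[i]
        if speaker != "assistant":
            continue
        prompt = turns[i - 1][1]
        followup = turns[i + 1][1].lower()
        corrected = any(cue in followup for cue in CORRECTION_CUES)
        examples.append({
            "prompt": prompt,
            "response": text,
            "reward": -1.0 if corrected else 1.0,  # penalize corrected answers
        })
    return examples

transcript = [
    ("user", "Who covered the trial?"),
    ("assistant", "The reporter was the culprit."),
    ("user", "You're wrong, he only wrote about it."),
]
print(score_transcript(transcript))
```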
vrighter@discuss.tchncs.de 3 months ago
Yeah, that implies that the other network(s) can tell right from wrong. Which they can’t, because if they could, the problem wouldn’t need solving.
Rivalarrival@lemmy.today 3 months ago
What other networks?
It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn’t need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.