Comment on "A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit."
linearchaos@lemmy.world 1 month ago
This is incorrect, or perhaps out of date. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing.
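Roughly, the loop is: generate candidate samples with one model, have a second model tag or score them, filter on that tag, then train on what survives. A minimal Python sketch of the idea; the GeneratorModel/TaggerModel classes and the quality threshold are placeholders, not any particular project's pipeline:

```python
class GeneratorModel:
    """Placeholder for whatever model generates the new data."""
    def generate(self, prompt: str) -> str:
        return f"response to: {prompt}"  # stand-in for a real LLM call

class TaggerModel:
    """Placeholder for a second model that tags/scores the generated data."""
    def score(self, prompt: str, response: str) -> float:
        return 0.9  # stand-in for a real quality/label score

def build_synthetic_dataset(generator, tagger, prompts, threshold=0.8):
    """Generate samples with one model, tag them with another, keep the good ones."""
    dataset = []
    for prompt in prompts:
        response = generator.generate(prompt)     # 1. generate new data
        quality = tagger.score(prompt, response)  # 2. a different model tags it
        if quality >= threshold:                  # 3. filter on that tag
            dataset.append({"prompt": prompt, "response": response})
    return dataset                                # 4. fine-tune on this like ordinary data

if __name__ == "__main__":
    data = build_synthetic_dataset(GeneratorModel(), TaggerModel(),
                                   ["Summarize the trial coverage."])
    print(data)
```

The point is that the tagging model does the curation, so the generated data isn't trained on blindly.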
vrighter@discuss.tchncs.de 1 month ago
Yes it is, and it doesn't work.
linearchaos@lemmy.world 1 month ago
Alpaca is doing this successfully, no?
vrighter@discuss.tchncs.de 1 month ago
from their own site:
Alpaca also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003.
linearchaos@lemmy.world 1 month ago
So do GPT-3 and GPT-4, yet they're still in use, and it's cheaper.
theterrasque@infosec.pub 1 month ago
The Dolphin models and Microsoft's Phi models have used this successfully, and there's some evidence that all newer models use big LLMs to produce synthetic data (like answering, when asked, that they are ChatGPT or Claude, hinting that at least some of their dataset comes from those models).
Rivalarrival@lemmy.today 1 month ago
It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner's responses.
It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite "you're wrong" feedback from its partners, and it is instructed to minimize such feedback.
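As a toy illustration of what "minimize such feedback" could mean in practice: mine past transcripts, mark each of the model's replies by whether the partner's next message objected, and use that mark as a training signal. The objection phrases and the +/-1 rewards below are illustrative assumptions, not how any particular system actually does it:

```python
# Objection phrases and rewards are made up for illustration only.
OBJECTIONS = ("you're wrong", "that's incorrect", "that's not right")

def label_replies(turns):
    """turns: list of (speaker, text) pairs in order.
    Returns (model_reply, reward) pairs: a reply whose partner follow-up
    contains an objection gets a negative reward, otherwise a positive one."""
    examples = []
    for i in range(len(turns) - 1):
        speaker, text = turns[i]
        if speaker != "model":
            continue
        partner_reply = turns[i + 1][1].lower()
        objected = any(phrase in partner_reply for phrase in OBJECTIONS)
        examples.append((text, -1.0 if objected else 1.0))
    return examples  # pairs like these would feed a preference/reward-style update

if __name__ == "__main__":
    transcript = [
        ("model", "The reporter was the culprit in those trials."),
        ("human", "You're wrong, he only covered them as a journalist."),
        ("model", "Apologies, he reported on the trials but was not involved in them."),
        ("human", "Right."),
    ]
    print(label_replies(transcript))
```

Those (reply, reward) pairs are the sort of signal a later fine-tuning pass could use to make objection-drawing replies less likely.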
vrighter@discuss.tchncs.de 1 month ago
Yeah, that implies the other network(s) can tell right from wrong. Which they can't, because if they could, the problem wouldn't need solving.
Rivalarrival@lemmy.today 1 month ago
What other networks?
It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn't need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.