Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds
ZoteTheMighty@lemmy.zip 1 week ago
This is why I think GPT-4 will be the best “most human-like” model we’ll ever get. After that, we live in a post-GPT-4 internet and all future models are polluted. Other models after that will be more optimized for things we know how to test for, but the general-purpose “it just works” experience will get worse from here.
jaykrown@lemmy.world 1 week ago
That’s not how this works at all. The people training these models are fully aware of bad data. There are entire careers dedicated to preserving high-quality data. GPT-4 is terrible compared to something like Gemini 3 Pro or Claude Opus 4.5.
krooklochurm@lemmy.ca 1 week ago
Most human LLM anyway.
Word on the street is LLMs are a dead end anyway.
Maybe the next big model won’t even need stupid amounts of training data.
BangCrash@lemmy.world 1 week ago
That would make it an SLM
MadPsyentist@lemmy.nz 1 week ago
Will the real SLM Shady please stand up!