Comment on Elon Musk wants to rewrite "the entire corpus of human knowledge" with Grok
brucethemoose@lemmy.world · 20 hours ago

There’s some nuance.
Using LLMs to augment data, especially for fine-tuning (not training the base model), is a sound method. The DeepSeek paper, for instance, is famous for training on generated reasoning traces.
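A minimal sketch of what that augmentation loop might look like, assuming a hypothetical `teacher_generate` stand-in for a frontier-model API; the (prompt, reasoning trace) record format is illustrative, not DeepSeek’s actual pipeline:

```python
import json

def teacher_generate(prompt: str) -> str:
    # Hypothetical stand-in: in practice, call a strong "teacher" model's API here.
    return "Step 1: ... Step 2: ... Final answer: ..."

def build_augmented_dataset(prompts: list[str], out_path: str) -> None:
    """Write one JSONL record per prompt, pairing it with a generated reasoning trace."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            completion = teacher_generate(
                f"Solve step by step, then give a final answer:\n{prompt}"
            )
            # The student fine-tunes on the full reasoning trace, not just the answer.
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

build_augmented_dataset(["What is 17 * 24?"], "traces.jsonl")
```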
Another is using LLMs to generate logprobs over text, training not just on the text itself but on the probability a frontier LLM assigns to every ‘word’ (token). This is called distillation, though there’s some variation and complication in practice.
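Concretely, distillation of this kind usually minimizes the KL divergence between the teacher’s and the student’s per-token distributions rather than plain cross-entropy on the text. A minimal PyTorch sketch (shapes and temperature are illustrative, not any lab’s exact recipe):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL(teacher || student) over the vocabulary at each position.

    student_logits, teacher_logits: (batch, seq_len, vocab_size)
    """
    t_logprobs = F.log_softmax(teacher_logits / temperature, dim=-1)
    s_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    # kl_div takes the student's log-probs as input and, with log_target=True,
    # the teacher's log-probs as target; T^2 rescales the gradient magnitude.
    return F.kl_div(s_logprobs, t_logprobs,
                    log_target=True, reduction="batchmean") * temperature ** 2
```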
But yes, the “dumb” way, aka putting data into a text box and asking an LLM to correct it, is dumb and dumber, because:

- You introduce some combination of sampling errors and repetition/overused-word issues, depending on the sampling settings (see the sketch after this list)
- You risk polluting your dataset with “filler”
- In Musk’s specific proposal, it doesn’t even fill the knowledge gaps the old Grok has
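To illustrate the first point: every rewrite pass re-samples tokens, so the output quality swings with the sampler. A toy numpy example over a fake 5-token vocabulary (the logits are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 0.5, 0.1, -1.0])  # fake next-token scores

def sample(logits: np.ndarray, temperature: float) -> int:
    """Sample a token index; low T is near-greedy, high T is noisy."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# High temperature: low-probability (wrong) tokens occasionally slip in.
print([sample(logits, 1.5) for _ in range(10)])
# Low temperature: the same few tokens over and over (overused words).
print([sample(logits, 0.2) for _ in range(10)])
```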
In other words, Musk has no idea WTF he’s talking about.