Comment on [Research] At least 80 million inconsistent facts on Wikipedia – can AI help find them?

Derpenheim@lemmy.zip 1 week ago
My knee jerk is no, because fuck AI, but LLMs are literally made to parse vast amounts of data quickly. The analysis and corrections need to be done manually, but finding these errors is exactly what they were originally made to do.

fodor@lemmy.zip 6 days ago
That isn’t what most LLMs were designed for, though. It’s just one possible use case.
CptBread@lemmy.world 6 days ago
Well, it could have the problem of overloading volunteers with reported issues. Especially bad if the false positive rate is high enough.