Comment on AI finds errors in 90% of Wikipedia's best articles

AcesFullOfKings@feddit.uk 1 day ago

If you read the post, it’s actually quite a good method: having an LLM flag potential errors, then reviewing them manually as a human, is genuinely productive.

I’ve done exactly that on a project that relies on user-submitted content: moderating submissions at even a moderate scale is hard, but having an LLM look through them for me is easy. I can then check anything it flags and moderate manually. Neither the accuracy nor the precision is particularly high, but it’s a low-effort way to find a decent number of the things you’re looking for. In my case I was looking for abusive submissions from untrusted users; in the OP author’s case they were looking for errors. I’m quite sure this method would never find all errors, and as the article notes, the “errors” it flags aren’t always real either. But the reward-to-effort ratio is high.
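For anyone curious what that workflow looks like, here’s a minimal sketch. It assumes the OpenAI Python SDK purely for illustration; the model name, prompt, and the example submissions are all hypothetical, and any real setup would swap in its own provider and storage for the review queue:

```python
# Minimal sketch of LLM-assisted moderation: the model flags, a human decides.
# Assumes the OpenAI Python SDK; model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are a content moderator. Reply with exactly one word: "
    "FLAG if the submission may be abusive, OK otherwise.\n\nSubmission:\n"
)

def flag_submission(text: str) -> bool:
    """Return True if the LLM thinks a human should look at this."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any cheap model will do here
        messages=[{"role": "user", "content": PROMPT + text}],
    )
    reply = response.choices[0].message.content or ""
    return "FLAG" in reply.upper()

# The key point: flagged items go to a human review queue;
# nothing is auto-removed on the model's say-so.
submissions = ["great post, thanks!", "you are an idiot and should leave"]
for s in submissions:
    if flag_submission(s):
        print(f"needs human review: {s!r}")
```

The design choice is the same one the comment describes: low precision and recall are acceptable because the LLM only narrows the pile, and the human makes every final call.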
