Comment on AI insiders seek to poison the data that feeds them
sobchak@programming.dev 1 day ago
I once saw an old lecture where a guy working on Yahoo spam filters noticed that spammers would create accounts to mark their own spam messages as not spam (an attempt to trick the spam filters; I guess a kind of Sybil attack), and because of the way the spam-filtering models were created and used, it actually made the spam filtering more effective. It’s possible that a wider variety of “poisoned” data can actually help improve models.
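Roughly, the trick of turning those fake “not spam” votes into a spam signal could look something like this (a toy sketch; the accounts, data, and scoring rule are all made up for illustration, not Yahoo’s actual system):

```python
from collections import defaultdict

# Toy sketch: "not spam" reports from accounts that only ever vouch for a
# single sender are treated as a *spam* signal rather than a ham signal.
# All names and data here are hypothetical.

reports = [
    # (reporting_account, sender, label)
    ("sock1", "spammer@example.com", "not_spam"),
    ("sock2", "spammer@example.com", "not_spam"),
    ("sock3", "spammer@example.com", "not_spam"),
    ("alice", "newsletter@example.org", "not_spam"),
    ("alice", "friend@example.net", "not_spam"),
    ("bob",   "spammer@example.com", "spam"),
]

# Which senders has each account vouched for?
senders_vouched = defaultdict(set)
for account, sender, label in reports:
    if label == "not_spam":
        senders_vouched[account].add(sender)

def sybil_score(sender):
    """Count 'not spam' votes coming from single-purpose accounts."""
    return sum(
        1
        for vouched in senders_vouched.values()
        if vouched == {sender}  # account exists only to vouch for this sender
    )

# A burst of single-purpose vouches makes the sender *more* suspicious:
print(sybil_score("spammer@example.com"))    # 3 -> strong spam signal
print(sybil_score("newsletter@example.org")) # 0 -> no Sybil pattern
```

So the very behavior meant to game the filter becomes an extra feature the filter can learn from.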
algernon@lemmy.ml 19 hours ago
I… have my doubts. I don’t doubt that a wider variety of poisoned data can improve training, by forcing the development of new ways to filter out unusable training data. In itself, that would indeed improve the model.
But in many cases, the point of poisoning is not to corrupt the training data, but to deny the crawlers access to the real work (and to poison their URL queue, which is something I can demonstrate works). If poison is served instead of the real content, that will hurt the model: even if the trainers filter out the junk, they end up with less new data to train on.
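To illustrate the URL-queue angle, here is a minimal sketch of a tarpit endpoint (hypothetical paths and page sizes; it assumes the site routes trap or unknown paths to it, and it is not any particular tool’s implementation): every generated page links to ten more generated pages, so a naive crawler’s frontier fills with junk URLs instead of real content.

```python
import hashlib
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

class Tarpit(BaseHTTPRequestHandler):
    """Serve an endless maze of junk pages to poison a crawler's URL queue."""

    def do_GET(self):
        # Seed the RNG from the request path so revisiting a URL yields the
        # same page, which makes the maze look like stable content.
        seed = int.from_bytes(hashlib.sha256(self.path.encode()).digest()[:8], "big")
        rng = random.Random(seed)

        # Ten fresh junk links per page: the crawler's queue grows faster
        # than it can be drained.
        links = "".join(
            f'<p><a href="/maze/{rng.getrandbits(64):x}">more</a></p>'
            for _ in range(10)
        )
        body = f"<html><body>{links}</body></html>".encode()

        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Tarpit).serve_forever()
```

Because each page fans out to ten more, a breadth-first crawler that wanders in spends its budget enqueueing and fetching pages that contain nothing real, which is exactly the denial-of-access effect, separate from whatever the junk text does to training.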