Comment on AI insiders seek to poison the data that feeds them
Whirling_Ashandarei@lemmy.world 1 week ago
This is awesome, thank you
FauxLiving@lemmy.world 1 week ago
Adversarial noise is a fun topic and a DIY AI project you can do to familiarize yourself with the local-hosting side of things. Image-generating networks are lightweight compared to LLMs and can be run on a moderately powerful NVIDIA gaming PC (most of my work is done on a 3080).
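To get a feel for how adversarial noise works, here is a minimal toy sketch. This is not Nightshade or any real poisoning tool; it just demonstrates the core FGSM-style idea (perturb the input along the sign of the loss gradient) on a made-up linear classifier, using only NumPy:

```python
import numpy as np

# Toy demonstration of the FGSM idea: nudge an input in the direction
# that increases the model's loss. The model, weights, and input here
# are all hypothetical stand-ins; real tools attack deep image networks.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=8)                      # "trained" weights of a toy model
x = rng.normal(size=8)                      # a clean input
y = 1.0 if sigmoid(w @ x) >= 0.5 else 0.0  # the model's own label for x

# For logistic loss, the gradient of the loss w.r.t. the input x is (p - y) * w
p = sigmoid(w @ x)
grad_x = (p - y) * w

# FGSM step: move x by epsilon in the sign of the gradient
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:     ", sigmoid(w @ x))
print("adversarial prediction:", sigmoid(w @ x_adv))
```

After the perturbation the model's confidence in its own label drops, even though `x_adv` differs from `x` by at most `epsilon` per element. Scaling this idea up to deep networks and constraining the noise to be imperceptible is what tools like Nightshade automate.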
LLM poisoning can also be done if you can insert poisoned text into their training set. An example method would be detecting AI scrapers on your server and sending them poisoned text instead of automatically blocking them.
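A minimal sketch of that serve-poison-instead-of-blocking idea might look like the following. The crawler user-agent patterns are real, commonly published bot names, but the `poison` transformation is a deliberately trivial placeholder; an actual poisoning pipeline would inject carefully crafted text, not a simple word swap:

```python
import re

# User-agent substrings of well-known AI training crawlers.
SCRAPER_PATTERNS = re.compile(r"GPTBot|CCBot|anthropic-ai|Bytespider", re.I)

def poison(text: str) -> str:
    # Placeholder corruption step for illustration only; real poisoning
    # would insert adversarial text designed to degrade model training.
    return text.replace("the", "teh")

def respond(user_agent: str, page_text: str) -> str:
    """Serve poisoned text to detected scrapers, the real page to everyone else."""
    if SCRAPER_PATTERNS.search(user_agent):
        return poison(page_text)   # feed the scraper corrupted content
    return page_text               # ordinary visitors get the real page

print(respond("Mozilla/5.0 (compatible; GPTBot/1.1)", "the cat sat"))  # teh cat sat
print(respond("Mozilla/5.0 (X11; Linux x86_64)", "the cat sat"))       # the cat sat
```

In practice this check would live in your web server or middleware (and could also key off IP ranges or request behavior rather than just the user-agent string, which is trivially spoofed).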
Here is the same kind of training-data poisoning attack, but for images, that researchers at the University of Chicago made into a simple Windows application: nightshade.cs.uchicago.edu/whatis.html
Thanks to your comment I realized that my clipboard didn’t have the right link selected, so I edited in the link to his github. ( github.com/bennjordan )