Comment on "OpenAI and Anthropic are ignoring an established rule that prevents bots scraping online content"

Zoboomafoo@slrpnk.net 7 months ago
Whatever happened to those "Nightshade" images that poison the model?

ArmoredThirteen@lemmy.ml 7 months ago
They only kind of work, but more importantly they need mass adoption to actually poison training data. Most people aren't going to add another step to their posts, so probably the only way to get mass adoption is for platforms to automatically poison uploaded images. I wonder if reposts on a platform like that would start to show noticeable artifacts in the images, like JPEG compression but different.

Womble@lemmy.world 7 months ago
You mean the project that took open source software, closed-sourced it, and refused to release the source code, and whose poisoning only worked against one specific open source model (Stable Diffusion)? I don't think that's going to come riding to anyone's rescue.