Comment on This new data poisoning tool lets artists fight back against generative AI

vidarh@lemmy.stad.social 8 months ago

They don’t even need to detect them. Once poisoned images are common enough in training datasets, the training process will “just” learn that the noise they introduce is not a feature relevant to the desired output. And if there are enough images like that, the model might eventually start generating images with the same features. A toy illustration of the first point is sketched below.
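For intuition only (this is not how any specific diffusion model or the Nightshade attack actually works, just an assumption that the perturbation carries no information about the label): a fixed “poison” pattern is added to half the training samples without changing their targets, and a plain logistic regression trained with gradient descent ends up with essentially no weight along that direction.

```python
# Toy sketch: a label-uncorrelated perturbation is not learned as a feature.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 50

# Clean data: the label depends only on feature 0.
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)

# Add a fixed "poison" pattern to a random half of the samples,
# regardless of their label (orthogonal to the predictive feature).
poison = rng.normal(size=d)
poison[0] = 0.0
mask = rng.random(n) < 0.5
X[mask] += poison

# Train logistic regression by plain gradient descent.
w = np.zeros(d)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

# The weight along the poison direction stays near zero, while the
# weight on the genuinely predictive feature is large.
unit = poison / np.linalg.norm(poison)
print("weight on predictive feature :", w[0])
print("weight along poison direction:", unit @ w)
```

Whether a real poisoning scheme survives training depends on whether its perturbations are correlated with the targets, which is exactly what the second half of the comment is about.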
