Comment on This new data poisoning tool lets artists fight back against generative AI

RubberElectrons@lemmy.world ⁨1⁩ ⁨year⁩ ago

To start with: I'm not very experienced with neural networks at all, beyond messing with OpenCV for my graduation project.

Anyway, the fact that these countermeasures expose "failure modes" in training isn't a good reason to stop using them: scammers come up with a new technique, and we collectively respond with countermeasures of our own. From what I understand, the training process is vulnerable to poisoned inputs precisely because it takes so long to train on multiple variations of the same datasets.
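The core idea behind perturbation-based poisoning can be shown with a toy sketch. This is not how any real tool works; the fixed linear scorer below is a hypothetical stand-in for a trained model's feature extractor, just to show that a small pixel change can flip what the model "sees" while the image barely changes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained model: a fixed linear scorer over
# 64 "pixel" features (real poisoning targets deep feature extractors).
w = rng.normal(size=64)

def score(img: np.ndarray) -> float:
    return float(img @ w)

clean = rng.normal(size=64)
s = score(clean)

# Poison: step against the model's own weight vector just far enough to
# flip the score's sign, while the pixel change stays small relative to
# the image itself.
step = (abs(s) + 0.1) / (w @ w)
poisoned = clean - np.sign(s) * step * w

flipped = np.sign(score(poisoned)) != np.sign(s)
perturbation = np.linalg.norm(poisoned - clean) / np.linalg.norm(clean)
print(flipped, round(perturbation, 3))
```

Scaled up to millions of images, the point is that the training pipeline ingests these near-invisible shifts without noticing, which is exactly the "vulnerability" described above.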

If the network ends up feeding its own output back into training, then cool! It has developed its own style, which is fine. The goal is to stop people from outright copying existing artists' styles.
