Comment on This new data poisoning tool lets artists fight back against generative AI

realharo@lemm.ee 1 year ago

Now you’re just cherry-picking some surface-level similarities.

You can see the difference between the processes in the results. For example, some generated pictures contain something resembling a signature in the corner, simply because signatures appear in the training data. And it is at least possible to get a model to output something extremely close to its training data - gizmodo.com/ai-art-generators-ai-copyright-stable….

That at least shows the process is quite different from human learning.

The question is how much those differences matter, and which similarities you want to focus on.
