Comment on This new data poisoning tool lets artists fight back against generative AI
penix@sh.itjust.works 1 year ago
There is probably a trivial workaround to this.
Yes: Train on more images processed by this.
In other words: If the tool becomes popular it will be self-defeating by producing a large corpus of images teaching future models to ignore the noise it introduces.
It doesn’t even need a workaround; it’s not going to affect anything when training a model.
It might make style transfer harder when using them as reference images on some models, but even that’s fairly doubtful. It’s just noise on an image, and everything is already full of all sorts of different types of noise.
The problem is identifying it. If it’s necessary to preprocess every image used for training instead of just feeding it to a model, that already makes it much more resource-costly.
You wouldn’t want to. If you just feed it to the models, then if there are enough of these images to matter, the model will learn to ignore the differences. You very specifically don’t want to prevent the model from learning to overcome these things: if you do, you’re stuck with workarounds like that forever, but if you don’t, the model just becomes more robust to noisy data like this.
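The robustness argument above can be sketched with a toy experiment. This is a hypothetical illustration, not the actual tool or any real diffusion model: the "poisoned" images are simulated as ordinary feature vectors with a fixed perturbation added, and the "model" is a plain logistic regression. The point it demonstrates is only the mechanism being claimed: when enough perturbed examples appear in the training corpus, the learned model largely ignores the perturbation.

```python
# Hypothetical sketch: a model trained on a corpus that contains many
# perturbed ("poisoned") examples learns to be robust to that perturbation.
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 20-dim vectors, class determined by the first feature.
n, d = 500, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)

# Simulated poisoning: a fixed small noise pattern added to half the corpus.
perturb = 0.3 * rng.normal(size=d)
X_train = X.copy()
X_train[: n // 2] += perturb  # half the training data is "poisoned"

# Train a simple logistic regression by gradient descent on the mixed corpus.
w = np.zeros(d)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X_train @ w))
    w -= 0.1 * X_train.T @ (p - y) / n

# Evaluate on fresh, fully perturbed examples: the noise barely matters.
X_test = rng.normal(size=(200, d))
y_test = (X_test[:, 0] > 0).astype(float)
acc = np.mean(((X_test + perturb) @ w > 0) == y_test)
print(f"accuracy on perturbed test set: {acc:.2f}")
```

Under these toy assumptions the classifier still separates the perturbed test examples well, which is the commenter's point: exposure to the noise during training is exactly what makes the model shrug it off.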
FaceDeer@kbin.social 1 year ago
There are trivial workarounds for Glaze, which this is based on, so I wouldn't be surprised.