Comment on Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images
General_Effort@lemmy.world 6 months ago
No.
I feel I should explain this, but I got nothing. An image is an image. Whether it’s good or bad is a matter of personal preference.
bionicjoey@lemmy.ca 6 months ago
When you process an image through the same pipeline multiple times, artifacts will appear and become amplified.
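The effect described above is classic generation loss from repeated lossy re-encoding. A minimal sketch of the idea, assuming Pillow is installed and a local file named input.jpg exists (both are illustrative assumptions, not details from the thread):

```python
# Minimal sketch of "generation loss": push the same image through a lossy
# pipeline repeatedly and watch compression artifacts accumulate.
# Assumes Pillow is installed and "input.jpg" exists; both are illustrative.
from PIL import Image
import io

img = Image.open("input.jpg").convert("RGB")

for generation in range(50):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=60)   # the lossy step of the "pipeline"
    buf.seek(0)
    img = Image.open(buf).convert("RGB")       # feed the output back in as input

img.save("generation_50.jpg")  # blocky JPEG artifacts are clearly visible by now
```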
General_Effort@lemmy.world 6 months ago
What’s happening here is just nothing like that. There is no amplifier. Images aren’t run through a pipeline.
bionicjoey@lemmy.ca 6 months ago
The process of training is itself a pipeline
General_Effort@lemmy.world 6 months ago
Yes, but the model is the end of that pipeline. The image is not supposed to come out again. A model can “memorize” an image, but then you wouldn’t necessarily expect an amplification of artifacts. Image generators are not supposed to do lossy compression, though the tech could be used for that.
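A toy sketch of the point being made here: training consumes images and emits only updated weights, so no image comes out the other end of the pipeline. The PyTorch layers and shapes below are invented purely for illustration and are not any real generator’s training code:

```python
# Toy illustration of why training is a one-way "pipeline": images flow in,
# gradients update the weights, and only the weights persist afterwards.
# The tiny denoising objective and layer sizes are made up for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 3, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images: torch.Tensor) -> float:
    noisy = images + 0.3 * torch.randn_like(images)
    pred = model(noisy)                          # model tries to predict the clean image
    loss = nn.functional.mse_loss(pred, images)
    opt.zero_grad()
    loss.backward()
    opt.step()                                   # only the weights change
    return loss.item()

batch = torch.rand(8, 3, 64, 64)                 # stand-in for a batch of training images
training_step(batch)
# The training images are discarded after the step; what remains is model.state_dict().
```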
hyper@lemmy.zip 6 months ago
I’m not so sure about that… if you train an AI on images with disfigured anatomy, which it then takes to be the “right” way, it will generate new images with messed-up anatomy. That creates a feedback loop, like when a mic picks up its own signal.
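That feedback loop can be illustrated with a toy 1-D stand-in for an image distribution: fit a model to data, sample from it, fit the next model only to those samples, and repeat. Everything below (the Gaussian, the sample sizes) is an arbitrary illustration, not anything from the thread:

```python
# Toy illustration of the feedback loop: repeatedly fit a "model" to samples
# drawn from the previous model, a 1-D analogue of training a generator on
# its own output. All numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=0.0, scale=1.0, size=10_000)   # the "real" images

mean, std = real.mean(), real.std()
for generation in range(1, 11):
    samples = rng.normal(mean, std, size=200)        # a small batch of "generated images"
    mean, std = samples.mean(), samples.std()        # retrain only on the generated data
    print(f"gen {generation}: mean={mean:+.3f} std={std:.3f}")
# With nothing anchoring it to real data, the fitted distribution drifts further
# from the original with every generation; the model reinforces its own quirks.
```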
General_Effort@lemmy.world 6 months ago
Well, you wouldn’t train on images that you consider bad, or rather you’d use them as examples for what not to do.
Yes, you have to be careful when training a model on its own output. It already has a tendency to produce that kind of output, so it’s easy to “overshoot”, so to speak. But it’s not a problem in principle. It’s also not what’s happening here. Adobe doesn’t use the same model as Midjourney.
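Continuing the toy sketch above, one simple way to avoid that kind of overshoot is to keep real data in the mix whenever generated data is reused for training. The 80/20 split below is an arbitrary choice for illustration:

```python
# Same toy setup as before, but each "generation" trains on a mix of
# generated and real data, which keeps the fit anchored. Ratio is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=10_000)

mean, std = real.mean(), real.std()
for generation in range(1, 11):
    generated = rng.normal(mean, std, size=200)
    mixed = np.concatenate([generated, rng.choice(real, size=800)])  # 80% real data
    mean, std = mixed.mean(), mixed.std()
print(f"after 10 generations: mean={mean:+.3f} std={std:.3f}")  # stays close to (0, 1)
```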
abhibeckert@lemmy.world 6 months ago
Midjourney doesn’t generate disfigured anatomy. You’re thinking of Stable Diffusion, which is a smaller model that can generate an image in 30 seconds on my laptop GPU.