Comment on Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images
alexdeathway@programming.dev 7 months ago
Why would they do this? Doesn’t that reduce the quality of the training dataset?
Even_Adder@lemmy.dbzer0.com 7 months ago
Supplementary synthetic data increases the quality of the model.
SomeGuy69@lemmy.world 7 months ago
Correct. To a certain extent you can feed AI-generated data back into an AI, but too much and you add noise, making the result worse, like a copy of a copy.
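A toy sketch of that copy-of-a-copy drift, assuming you refit a simple Gaussian on its own samples each generation (nothing vendor-specific, just an illustration):

```python
import numpy as np

# Each "generation" is fitted only to samples from the previous generation.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # the "real" data

for generation in range(10):
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation never sees the real data, only the current fit.
    data = rng.normal(loc=mu, scale=sigma, size=10_000)
```

The small estimation error in each refit feeds into the next one, so the fitted distribution slowly drifts away from the original.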
General_Effort@lemmy.world 7 months ago
Yes, though that’s not what they’re doing. They train on images uploaded to their marketplace and, of course, some of these are AI-generated.
Even_Adder@lemmy.dbzer0.com 7 months ago
It’s fine as long as it’s not the majority.
General_Effort@lemmy.world 7 months ago
It doesn’t really matter how much it is. An image is an image.
General_Effort@lemmy.world 7 months ago
No.
I feel I should explain this but I got nothing. An image is an image. Whether it’s good or bad is a matter of personal preference.
hyper@lemmy.zip 7 months ago
I’m not so sure about that… if you train an AI on images with disfigured anatomy, which it thinks is the “right” way, it will generate new images with messed-up anatomy. It creates a feedback loop, like when a mic picks up its own signal.
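The mic analogy in numbers (purely illustrative; the 2% figure is made up):

```python
# Any bias that survives one pass through the loop is fed back in on the
# next pass and compounds, like a mic picking up its own speaker.
signal = 1.0
gain = 1.02  # hypothetical 2% bias per round trip

for round_trip in range(1, 11):
    signal *= gain
    print(f"pass {round_trip}: amplitude = {signal:.3f}")
```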
General_Effort@lemmy.world 7 months ago
Well, you wouldn’t train on images that you consider bad, or rather you’d use them as examples for what not to do.
Yes, you have to be careful when training a model on its own output. A model already tends to reproduce its own characteristic output, so it’s easy to “overshoot”, so to speak. But it’s not a problem in principle. It’s also not what’s happening here. Adobe doesn’t use the same model as Midjourney.
abhibeckert@lemmy.world 7 months ago
Midjourney doesn’t generate disfigured anatomy. You’re thinking of Stable Diffusion, which is a smaller model that can generate an image in 30 seconds on my laptop GPU.
bionicjoey@lemmy.ca 7 months ago
When you process an image through the same pipeline multiple times, artifacts will appear and become amplified.
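A minimal version of that, using repeated JPEG re-encoding as a stand-in for the pipeline (assumes Pillow is installed; this isn’t how any model trains, just the artifact-accumulation effect itself):

```python
from io import BytesIO
from PIL import Image

# Push the same image through the same lossy step repeatedly and measure
# how far each generation has drifted from the original pixels.
img = Image.new("RGB", (64, 64))
for x in range(64):
    for y in range(64):
        img.putpixel((x, y), (x * 4, y * 4, (x + y) * 2))
original = img.tobytes()

for generation in range(1, 6):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=60)  # the lossy pipeline step
    img = Image.open(BytesIO(buf.getvalue())).convert("RGB")
    drift = sum(abs(a - b) for a, b in zip(img.tobytes(), original))
    print(f"generation {generation}: pixel drift from original = {drift}")
```

Each pass bakes the previous pass’s artifacts into the next pass’s input.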
General_Effort@lemmy.world 7 months ago
What’s happening here is just nothing like that. There is no amplifier. Images aren’t run through a pipeline.
bionicjoey@lemmy.ca 7 months ago
The process of training is itself a pipeline
cynar@lemmy.world 7 months ago
Depends how it’s done.
Training purely on generated images would definitely start creating a copying-error type of problem.
However, it’s not quite that simple. An AI system can be used to distort an image, and the derivatives force the learning AI to notice different things. This can vastly extend the pool of data to learn from, and so improve the end AI.
Adobe obviously decided that the copying errors were worth it for the extended dataset.
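A rough sketch of that derivative idea with ordinary distortions (hypothetical transforms; Adobe’s actual augmentation recipe isn’t public):

```python
import random
from PIL import Image, ImageEnhance, ImageOps  # assumes Pillow is installed

def derive(img: Image.Image, seed: int) -> Image.Image:
    """Make a distorted derivative: rotate, maybe mirror, re-light."""
    rng = random.Random(seed)
    out = img.rotate(rng.uniform(-15, 15))  # small random rotation
    if rng.random() < 0.5:
        out = ImageOps.mirror(out)  # horizontal flip
    return ImageEnhance.Brightness(out).enhance(rng.uniform(0.8, 1.2))

# One original becomes many slightly different training examples.
source = Image.new("RGB", (64, 64), (120, 80, 200))
variants = [derive(source, seed) for seed in range(8)]
print(f"{len(variants)} derivatives from 1 original")
```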