Comment on Adobe’s ‘Ethical’ Firefly AI Was Trained on Midjourney Images
TheGiantKorean@lemmy.world 7 months ago
AI ingesting the output of AI ingesting the output of AI…
phoenixz@lemmy.ca 7 months ago
This actually leads to more conformist images with more errors over time. Basically, if an AI takes images from us, it ingests loads of creativity but outputs less creativity and more errors. Do that for a couple of rounds and you indeed end up with utter crap.
DarkThoughts@fedia.io 7 months ago
Isn't this causing a huge degradation in quality? It's like compressing an image over and over again. Those "AI" models can only generate things based on what they know, and they already have a very real issue of looking samey because of it. So if we train models on that output, and then another model on the new model, and repeat this over and over again, we'd end up with less and less quality & variety with each model, no?
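The recompression analogy can be sketched as a toy simulation: fit a simple distribution to some data, keep only the "typical"-looking samples (a stand-in for human curation), refit, and repeat. The Gaussian setup and the one-standard-deviation cutoff are illustrative assumptions, not how real image models are trained, but they show how variety can shrink generation over generation.

```python
import random
import statistics

random.seed(0)

# "Real" data: lots of variety (standard deviation ~1).
data = [random.gauss(0, 1) for _ in range(2000)]
print("generation 0 spread:", round(statistics.stdev(data), 3))

for generation in range(1, 6):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # Curation step: only "typical"-looking outputs survive,
    # here modelled as samples within one standard deviation of the mean.
    kept = [x for x in data if abs(x - mu) <= sigma]
    # The next model is fit only to the curated previous-generation output...
    mu2 = statistics.fmean(kept)
    sigma2 = statistics.stdev(kept)
    # ...and its samples become the training data for the round after that.
    data = [random.gauss(mu2, sigma2) for _ in range(2000)]
    print(f"generation {generation} spread:", round(statistics.stdev(data), 3))
```

Each round the spread collapses toward the mean, which is the "samey" effect described above; without the curation/selection step the drift is much slower, which is the point made in the replies below about human selection changing the dynamics.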
balder1991@lemmy.world 7 months ago
I suppose the AI images submitted are submitted because they turned out good, so there’s still a human selection process there. It’s not as bad as automatically feeding randomly generated images into the training.
PapstJL4U@lemmy.world [bot] 7 months ago
But are they? The amount must be minuscule, as searching and selecting costs time. What impact can thoughtfully selected images have?
General_Effort@lemmy.world 7 months ago
Adobe trains on images submitted to their stock image marketplace. Deciding to submit is the first selection step. Then there is some quality control by Adobe; mainly AI powered, I’d guess. Adobe also has the sales data (again, human selection) and additional tracking data; how many people clicked a thumbnail and so on.
What people imagine here about quality loss is completely divorced from reality.
Drewelite@lemmynsfw.com 7 months ago
Well that’s what human knowledge is lol. This is the AI Internet 😂 My guess is they will begin to diverge from human interest/comprehension if they don’t have enough of their training data be human created.
General_Effort@lemmy.world 7 months ago
That’s not what anyone would do in reality, though. In reality, when you train an AI model on AI output you get a quality increase, because the model learns to be better at doing the things it’s supposed to do, while forgetting the irrelevant. Where output looks samey, it’s because different people are chasing the same mainstream taste.
DarkThoughts@fedia.io 7 months ago
How do you get a quality increase if you by definition cut down on the variety of the generative aspects? That doesn't make any sense.
General_Effort@lemmy.world 7 months ago
Put it like this: too much variety is the biggest problem in terms of quality. People don’t want variety in terms of, say, number of limbs or fingers. People have something specific in mind when they prompt an AI. They only want very limited and specific variability.
In a sense, limiting variety is the whole point of the AI. There is a vast number of possible images. Most of them would be simply indistinguishable noise to us. The proportion we would consider a sensible picture is tiny. We want to constrain the variety to within this tiny segment.