Comment on [Survey] Can you tell which images are AI generated?
Sekoia@lemmy.blahaj.zone 1 year ago
Sure, it’s not proof, but it gives a good starting point. Non-overfitted images would still have this effect (to a lesser extent), and this would never happen to a human. And it’s not like the prompts were the image labels; the model just decided to use the stock image as a template (obvious in the case with the painting).
Even_Adder@lemmy.dbzer0.com 1 year ago
This is a bold claim to make with no evidence, given that every trained image accounts for less than one byte of data in the model. Even the tiniest image files contain many thousands of bytes, and one byte isn’t even enough to store a single character of text in many cases: in UTF-8, accented letters in most Latin-based alphabets, and some symbols, take two bytes each.
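To make that concrete, here’s a rough back-of-the-envelope calculation in Python. The figures are assumptions, not exact numbers: a ~2 GB Stable Diffusion v1-style checkpoint and the roughly 2.3 billion image-text pairs of LAION-2B it was reportedly trained on:

```python
# Back-of-the-envelope: bytes of model weights per training image.
# Both figures below are rough public estimates, not exact values.
checkpoint_bytes = 2 * 1024**3   # ~2 GB fp16 checkpoint (assumed)
training_images = 2.3e9          # ~2.3 billion LAION-2B images (assumed)

bytes_per_image = checkpoint_bytes / training_images
print(f"~{bytes_per_image:.2f} bytes of weights per training image")
# -> ~0.93 bytes, i.e. less than one byte per image
```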
There are plenty of artists that get stuck with same-face, like Sam Yang, for instance. Then there are others who can’t draw disabled people or people of color; if it isn’t a beautiful white female character, they can’t do it. It can take a lot of additional training for people to break out of their rut, and some never do.
I’m not going to tell you that latent diffusion models learn like humans, but they are still learning. Here’s a source: arxiv.org/pdf/2306.05720.pdf
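For intuition, here’s a toy sketch of the kind of denoising objective diffusion models are trained on. It’s a deliberately minimal stand-in, with a tiny MLP and a simplistic linear noising schedule rather than Stable Diffusion’s actual UNet or scheduler; the point is just that training updates shared weights to predict noise, and no training image is stored anywhere:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a denoising network (not Stable Diffusion's UNet).
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

latents = torch.randn(16, 64)           # stand-in for encoded training images
noise = torch.randn_like(latents)       # noise to be injected
t = torch.rand(16, 1)                   # random "timesteps" in [0, 1]
noisy = (1 - t) * latents + t * noise   # simplistic linear noising schedule

# One training step: the model learns to predict the injected noise.
# Only the shared weights change; the images themselves are discarded.
opt.zero_grad()
loss = F.mse_loss(model(noisy), noise)
loss.backward()
opt.step()
print(f"denoising loss: {loss.item():.4f}")
```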
I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, if you haven’t already. The EFF is a digital rights group that recently won a historic case: border guards in the US now need a warrant to search your phone.
This guy also does a pretty good job of explaining how latent diffusion models work. You should give it a watch too.