LLM is the wrong term. That’s Large Language Model. These are generative image models / text-to-image models.
Truthfully though, while it will be there when the image is used for training, the model won't 'notice' it unless you distort it significantly (enough for humans to notice as well). Otherwise it won't make much of a difference, because these models are often trained on a compressed, downsized version of the image (in what's called latent space).
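For anyone curious what that compression looks like in practice, here's a rough sketch using the diffusers library's VAE. The model ID (stabilityai/sd-vae-ft-mse), image size, and file name are just assumptions for illustration, but the point stands: the latent the model actually trains on is 8x smaller per side, so fine pixel-level tweaks mostly get washed out.

```python
# Rough sketch: encode an image into the latent space a diffusion model
# actually trains on. Model ID and sizes are assumptions for illustration.
import torch
from PIL import Image
from torchvision import transforms
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

img = Image.open("photo.png").convert("RGB").resize((512, 512))
x = transforms.ToTensor()(img).unsqueeze(0) * 2 - 1  # scale pixels to [-1, 1]

with torch.no_grad():
    latents = vae.encode(x).latent_dist.sample()

print(x.shape)        # torch.Size([1, 3, 512, 512]) -- original pixels
print(latents.shape)  # torch.Size([1, 4, 64, 64])   -- 8x smaller per side
```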
vidarh@lemmy.stad.social 1 year ago
An AI model will 'notice' them, but it will ignore them if it's trained on enough copies containing them to learn that they're not significant.