Comment on *Doesn't look like anything to me.*
LarmyOfLone@lemm.ee 3 weeks ago
And if it could distinguish better, it could also generate better.
Natanael@infosec.pub 3 weeks ago
Not necessarily, but the errors would become less obvious or weirder, since the model would spend more time in training.
LarmyOfLone@lemm.ee 3 weeks ago
Weirder? Interesting, like how for example?
Natanael@infosec.pub 3 weeks ago
Weirder in that it gets better at “photorealism” (textures, etc.) while the subjects might still be nonsensical. Teaching it only how to avoid automated detection will not teach it to understand what scenes mean.
LarmyOfLone@lemm.ee 2 weeks ago
I believe most image-generating models are too small (on the order of 4 GB of RAM for the weights). DeepSeek R1 needs roughly 1.5 TB of RAM (or half or a quarter of that at reduced precision) to get some semblance of “general knowledge”. So to get the “semantics” of an image right, not just the “syntax”, you’d need bigger models and probably more data describing images. Of course, do we really want that?
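That size gap can be sketched with a back-of-the-envelope calculation: weight memory is just parameter count times bits per parameter. A minimal sketch; the parameter counts below are illustrative assumptions chosen to match the rough figures in the comment, not official numbers.

```python
def weight_memory_gb(params: float, bits_per_param: int) -> float:
    """Approximate RAM needed just to hold the weights, in GB (10^9 bytes)."""
    return params * bits_per_param / 8 / 1e9

# Assumed sizes for illustration:
# a ~2B-parameter image model at fp16 -> ~4 GB
print(weight_memory_gb(2e9, 16))

# a ~670B-parameter LLM at fp16, fp8, and 4-bit quantization
for bits, label in [(16, "fp16"), (8, "fp8"), (4, "int4")]:
    print(label, weight_memory_gb(670e9, bits), "GB")
```

At fp16 the large model comes out around 1.3 TB, and halving the precision roughly halves the footprint, which is where the “half or quarter” figure comes from.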