Comment on *Doesn't look like anything to me.*
thisbenzingring@lemmy.sdf.org 11 months ago
did you actually get it to generate a picture and then ask if the same picture is generated and it said no?
This is a LemmyShitpost, so, yes absolutely.
Indeed.
brucethemoose@lemmy.world 11 months ago
It’s a plausible trap. Depending on the architecture, the image encoder (the part that “sees”) is bolted onto the main model as a fairly discrete component, and the image generator could be a totally different model. So internally, if the model never ingests the “response” image, it may have no clue the two are the same.
Of course, we have no idea, because OpenAI is super closed :/
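The split architecture brucethemoose describes can be sketched roughly like this. Everything here is hypothetical (made-up names, toy stand-ins for the encoder and generator); since OpenAI's actual design is closed, this only illustrates why a chat model that never re-ingests its own generated image could fail to recognize it:

```python
# Hypothetical sketch: the image generator is a separate model "bolted on"
# to the chat model, and generated pixels never enter the chat context.
# All names and behavior are invented for illustration.

def generate_image(prompt: str) -> bytes:
    """Stand-in for a separate image-generator model."""
    return f"<image for: {prompt}>".encode()

def encode_image(image: bytes) -> str:
    """Stand-in for the vision encoder (the part that 'sees' uploads)."""
    return f"caption of {len(image)}-byte image"

class ChatModel:
    def __init__(self) -> None:
        self.context: list[str] = []  # text-only memory

    def ask_to_draw(self, prompt: str) -> bytes:
        image = generate_image(prompt)
        # Key point: only a text stub enters the context, not the pixels.
        self.context.append(f"[generated an image for: {prompt!r}]")
        return image

    def ask_if_same(self, image: bytes) -> bool:
        # The uploaded image is captioned by the encoder, then compared
        # against text-only memory; the model never saw its own output.
        return encode_image(image) in self.context

model = ChatModel()
img = model.ask_to_draw("a cat")
print(model.ask_if_same(img))  # the model has "no clue" they match
```

In this toy version the mismatch is guaranteed by construction: the generator's output bypasses the encoder entirely, so the "same picture?" check can only compare a fresh caption against a text stub.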
Scrollone@feddit.it 11 months ago
“Open” AI. Yeah, open my ass.