It’s a plausible trap. Depending on the architecture, the image encoder (the part that “sees”) is bolted onto the main model as a separate component, and the image generator can be an entirely different model. So unless the generated “response” image is fed back through the encoder, the model internally has no clue the two images are the same.
Of course, we have no idea, because OpenAI is super closed :/
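For anyone curious what that split would look like, here’s a rough sketch. Every class and method name here is a placeholder I made up to illustrate the point, not anything from OpenAI’s actual stack:

```python
# Hypothetical sketch of the split architecture described above.
# The key point: the input path (encoder) and the output path
# (generator) are separate models, and the generated image is
# never routed back through the encoder.

from dataclasses import dataclass


@dataclass
class ImageEmbedding:
    vector: list[float]  # what the language model actually "sees"


class VisionEncoder:
    """Maps pixels -> embeddings the language model can attend to."""

    def encode(self, image_bytes: bytes) -> ImageEmbedding:
        return ImageEmbedding(vector=[0.0] * 1024)  # stub


class ImageGenerator:
    """A separate model: maps a text prompt -> pixels."""

    def generate(self, prompt: str) -> bytes:
        return b"...png bytes..."  # stub


class MultimodalAssistant:
    def __init__(self) -> None:
        self.encoder = VisionEncoder()
        self.generator = ImageGenerator()

    def respond(self, user_image: bytes, request: str) -> bytes:
        seen = self.encoder.encode(user_image)  # input path: image -> embedding
        out = self.generator.generate(request)  # output path: text -> image
        # `out` never passes through self.encoder, so the model
        # holds no internal representation of its own output and
        # can't compare it to `seen`.
        return out
```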
Monstrosity@lemm.ee 2 days ago
This is a LemmyShitpost, so yes, absolutely.
The_Picard_Maneuver@lemmy.world 2 days ago
Indeed.