It wouldn’t be surprising to me if they’ve had this implemented for a while.
There’s still some question about why their 3.5 model had an apparent sudden drop-off in quality about a year ago. One plausible explanation is that they were fucking with the weights to watermark the outputs in exactly the way you’re describing. They were also fighting prompt-injection methods and censoring disapproved uses at the time, so who the fuck knows.
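To make the mechanism concrete: the best-known published approach biases logits rather than retraining weights — the "green list" scheme from Kirchenbauer et al. Nobody outside OpenAI knows what they actually run (if anything), so this is a toy sketch with a made-up vocabulary, bias strength, and seeding rule, not anyone's real implementation:

```python
# Toy "green list" logit-bias watermark (Kirchenbauer et al. style).
# All parameters here (vocab size, bias, seeding on the previous token)
# are illustrative assumptions, not any vendor's actual scheme.
import math
import random

VOCAB_SIZE = 1000
GREEN_FRACTION = 0.5
BIAS = 4.0  # logit boost added to green-list tokens during sampling


def green_list(prev_token: int) -> set[int]:
    """Pseudorandomly partition the vocab, seeded on the previous token."""
    rng = random.Random(prev_token)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))


def sample_token(logits, prev_token, watermark, rng):
    if watermark:
        greens = green_list(prev_token)
        logits = [l + BIAS if i in greens else l for i, l in enumerate(logits)]
    # softmax sampling over the (possibly biased) logits
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]
    return rng.choices(range(VOCAB_SIZE), weights=weights, k=1)[0]


def detect_z_score(tokens) -> float:
    """z-score of the green-token count vs. the unwatermarked expectation."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(var)


def generate(n, watermark, seed):
    rng = random.Random(seed)
    tokens = [0]
    for _ in range(n):
        logits = [rng.gauss(0, 1) for _ in range(VOCAB_SIZE)]  # stand-in for model logits
        tokens.append(sample_token(logits, tokens[-1], watermark, rng))
    return tokens


marked = generate(200, watermark=True, seed=1)
plain = generate(200, watermark=False, seed=1)
print(detect_z_score(marked), detect_z_score(plain))
```

The detector only needs the seeding rule, not the model: watermarked text shows a statistically improbable excess of green-list tokens (z-score well above any normal-text fluctuation), while unwatermarked text hovers near zero.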
PenisDuckCuck9001@lemmynsfw.com 2 months ago
So if you’re cheating on homework, stick to self-hosted models then. Cool.
brucethemoose@lemmy.world 2 months ago
You have full control of your logit outputs with local LLMs, so theoretically you could “unscramble” them.
OpenAI (IIRC) very notably stopped exposing the logprobs of their models. They did this for many reasons, most of which boil down to “profits” and “they’re anticompetitive jerks.”
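For anyone unfamiliar with the term: with a local model you see the raw logit vector at every generation step, and "logprobs" are just those logits normalized into log-probabilities — the per-token confidence data a hosted API can choose to withhold. A stdlib-only sketch with toy logits (no real model involved):

```python
# Converting a logit vector into log-probabilities and reading the top-k --
# the kind of per-token data you always have locally. Toy logits only.
import math


def logprobs(logits):
    """Log-softmax: logit minus log of the partition function (max-shifted for stability)."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return [l - log_z for l in logits]


def top_k(logits, k):
    """Return the k most likely (token_index, logprob) pairs."""
    lp = logprobs(logits)
    return sorted(enumerate(lp), key=lambda pair: -pair[1])[:k]


# A fake 4-token vocabulary: token 0 is strongly favored.
print(top_k([2.0, 1.0, 0.1, -1.0], 2))
```

This is exactly the view you'd need to notice a watermark-style bias in the distribution, which is why withholding it matters.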