Interesting stuff, but probably not huge news.
The AI-guessing portion is currently playing on ultra-easy mode, because there are only a few useful LLMs to compare against, and none of them has yet given serious thought to strongly securing the communication channels of their tech demos.
This is important work though, since someday someone might use an AI for something important and actually want to prevent eavesdropping.
I’m being a little unfair, but it’s early days, so let’s try not to take ourselves too seriously just yet. If I’m trusting one of these LLMs with something deeply sensitive, it’s not eavesdropping that’s going to get me into deep shit, it’s the LLM’s hallucinations. Still: someday the AI will get good, and I’ll want to chat with it securely.
Thankfully, I suspect mitigation will be quite straightforward. Carefully designed small random changes to token handling should throw this approach way off, while not even being noticed by end users.
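For instance (an illustrative sketch of my own, not something from the research): instead of emitting one token per network message, a server could group tokens into randomly sized batches, so that message sizes no longer map one-to-one onto token lengths. The function name and batch bounds here are arbitrary assumptions.

```python
import random

def batch_tokens(tokens, min_batch=1, max_batch=4, rng=random):
    """Group streamed tokens into randomly sized batches so the size of
    each network message no longer reveals individual token lengths
    (illustrative; batch bounds are arbitrary assumptions)."""
    batches = []
    i = 0
    while i < len(tokens):
        n = rng.randint(min_batch, max_batch)
        batches.append("".join(tokens[i:i + n]))
        i += n
    return batches

# The full text is unchanged; only the chunking the network sees differs.
print(batch_tokens(["I am", " ", "the", " ", "walrus"], rng=random.Random(0)))
```

The user still sees the same streamed text, just in slightly lumpier chunks.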
The habit of sending each token the instant it is generated is a dumb sales gimmick that needs to go away anyway. So if security folks manage to kill it, good riddance.
kevincox@lemmy.ml 8 months ago
This is pretty clever. As I understand it: because responses are streamed one token at a time, the size of each encrypted message reveals the length of the token inside it, and an attacker who records that sequence of lengths can use their own language model to guess the most likely response.
This is a good reminder: any time you are sending content in small chunks over an encrypted channel, remember that many encrypted channels don’t provide protection against size leaks by default.
It seems there are a few easy solutions to this:

- Pad each streamed chunk to a fixed size.
- Batch several tokens per message instead of sending one token at a time.
- Buffer the whole response and send it at once.

These still all leak the approximate length of the response, but that is probably acceptable.
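One such mitigation can be sketched as padding every streamed chunk to a fixed size, so each encrypted record is the same length regardless of the token inside it. This is a hypothetical sketch; the 64-byte block size and the one-byte length-prefix framing are my own assumptions, not from the comment.

```python
import os

BLOCK = 64  # assumed fixed record size; real values depend on the protocol

def pad_chunk(token_bytes: bytes, block: int = BLOCK) -> bytes:
    """Pad a token's bytes to a fixed block: 1-byte length prefix +
    payload + random filler, so every record has the same size."""
    if len(token_bytes) > block - 1:
        raise ValueError("token longer than block payload")
    filler = os.urandom(block - 1 - len(token_bytes))
    return bytes([len(token_bytes)]) + token_bytes + filler

def unpad_chunk(padded: bytes) -> bytes:
    """Recover the original token bytes from a padded record."""
    return padded[1:1 + padded[0]]

# A 1-character token and a 4-character token produce identical record sizes.
print(len(pad_chunk(b"~")), len(pad_chunk(b"I am")))
```

Only the total number of records (roughly, the token count) remains visible, which is the approximate-length leak mentioned above.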
PlexSheep@feddit.de 8 months ago
That actually is really really interesting. Thanks for giving the tldr. Do token lengths vary that much?
kevincox@lemmy.ml 8 months ago
Absolutely. They are sort of a compression scheme, so tokens contain different numbers of characters based on how frequent that string is. Common words like “the” will typically be one token, and maybe even common phrases like “I am”. On the other hand, rare punctuation such as “~” may be its own token. There will also be tokens for many common prefixes and suffixes, such as “non” and “n’t”. Each model has its own token set, but token lengths definitely vary.
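To make that concrete, here is a toy greedy longest-match tokenizer over a made-up vocabulary (not any real model’s), just to show tokens spanning very different numbers of characters:

```python
# Made-up vocabulary mixing a phrase, words, affixes, and punctuation.
VOCAB = ["I am", "the", "non", "n't", "sense", "~", " "]

def tokenize(text, vocab=VOCAB):
    """At each position, take the longest vocabulary entry that matches;
    fall back to a single character if nothing in the vocabulary does."""
    tokens, i = [], 0
    while i < len(text):
        match = max((v for v in vocab if text.startswith(v, i)),
                    key=len, default=text[i])
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("I am the nonsense~"))
# → ['I am', ' ', 'the', ' ', 'non', 'sense', '~']
```

The token lengths in that one short sentence already range from 1 to 5 characters, which is exactly the variation the size side channel exploits.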