Comment on Google Translate is vulnerable to prompt injection

<- View Parent
TheBlackLounge@lemmy.zip ⁨17⁩ ⁨hours⁩ ago

It’s only an issue with LLMs. And it’s because they’re generative, text completion engines. That is the actual learned task, and it’s a fixed task.

It’s not actually a chat bot. It’s completing a chat log. This can make it do a whole bunch of tasks, but there’s no separation of task description and input.

source
Sort:hotnewtop