Comment on Google Translate is vulnerable to prompt injection

Tar_alcaran@sh.itjust.works ⁨23⁩ ⁨hours⁩ ago

task-specific fine-tuning (or whatever Google did instead) does not create robust boundaries between “content to process” and “instructions to follow,”

Duh. No LLM can do that. There is no seperate input to create a boundary. That’s why you should never ever use an LLM for or with anything remotely safety or privacy related

source
Sort:hotnewtop