Comment on Google Translate is vulnerable to prompt injection

<- View Parent
FauxLiving@lemmy.world ⁨15⁩ ⁨hours⁩ ago

In my testing, by copying the claimed ‘prompt’ from the article into Google Translate, it simply translated the command. You can try it yourself.

So, the source of everything that kicked off the entire article, is ‘Some guy on Tumblr’ vouching for an experiment, which we can all easily try and fail to replicate.

Seems like a huge waste of everyone’s time. If someone is interested in LLMs, then consuming content like in the OP feels like knowledge but it often isn’t grounded in reality or is framed in a very misleading manner.

On social media, AI is a topic that is heavily loaded with misinformation.

source
Sort:hotnewtop