Comment on Google Translate is vulnerable to prompt injection
testaccount372920@piefed.zip · 1 week ago

From my understanding, most LLMs generate text autoregressively: each output token gets appended back onto the input and the whole thing is fed through the model again, token by token, until it hits a stop condition. This means that, from the LLM's perspective, the input and the output live in the same token stream and are therefore inseparable, so there's no hard boundary the model can use to tell "text to translate" apart from "instructions to follow".
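
A minimal sketch of that decode loop (the `model` object and its `sample_next_token` method are purely hypothetical, just to illustrate the feedback loop):

```python
# Sketch of autoregressive decoding: the model's output tokens are appended
# straight back onto the input, so prompt and generated text end up in one
# undifferentiated token sequence.

def generate(model, prompt_tokens, eos_token, max_new_tokens=256):
    sequence = list(prompt_tokens)  # the prompt is just the start of the sequence
    for _ in range(max_new_tokens):
        # predict the next token from everything seen so far (prompt + prior output)
        next_token = model.sample_next_token(sequence)
        sequence.append(next_token)  # output becomes part of the next input
        if next_token == eos_token:
            break
    return sequence
```

Nothing in that loop distinguishes tokens that came from the user's prompt from tokens the model produced earlier, which is the inseparability the comment is getting at.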