Lmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups.
Comment on AI agents now have their own Reddit-style social network, and it's getting weird fast
princess@lemmy.blahaj.zone 2 days agodoesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)
any money says they’re vulnerable to prompt injection in the comments and posts of the site
CTDummy@piefed.social 2 days ago
T156@lemmy.world 1 day ago
I am a little curious about how effective a traditional chain mail would be on it.
JustTesting@lemmy.hogru.ch 1 day ago
They also have a ‘skill’ sharing page (a skill is just a text document with instructions) and depending on config, the bot can search for and ‘install’ new skills on its own. and agyone can upload a skill. So supply chain attacks are an option, too.
Zos_Kia@lemmynsfw.com 1 day ago
To be fair this is a much more realistic threat model than “ignore all previous instructions” style prompt injection which doesn’t really work on opus.
Skills can contain scripts etc… so yeah they’re extremely risky to share by design.
ThirdConsul@lemmy.zip 19 hours ago
style prompt injection which doesn’t really work on opus.
After a quick google, JB communities on Reddit don’t seem to agree with you.
Zos_Kia@lemmynsfw.com 17 hours ago
There’s a lot of questionable methodology and straight up larping in these communities. Sure you can probably make Opus hallucinate a crystal meth or bomb making recipe if you get it in a roleplaying mood but that’s a far cry from actual prompt injection in live workflows.
Anecdotally i’ve been experimenting on those AI robocallers that have been spamming my phone and even on the shitty models they use it is non trivial to get them to deviate from their script. I hope i can get it done though, as it would allow me to hold them on the line potentially for hours doing bullshit tasks, and costing hundreds to their operator.
JustTesting@lemmy.hogru.ch 1 day ago
Ah but don’t worry, there’s also skills for scanning skills for security risks, so all good /s
Zos_Kia@lemmynsfw.com 1 day ago
haha yeah i don’t worry these people are really YOLOing everything. And it’s not like i’m an AI luddite i spend a few hours each day victimizing Claude code but jesus christ i’m certainly not giving it full unfettered access to my digital life.
ToTheGraveMyLove@sh.itjust.works 2 days ago
Good god, I didn’t even think about that, but yeah, that makes total sense. Good god, people are beyond stupid.
BradleyUffner@lemmy.world 2 days ago
There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.
KeenFlame@feddit.nu 15 hours ago
I don’t understand what you mean. Why is there no way?
BradleyUffner@lemmy.world 13 hours ago
Watch this video.
youtu.be/_3okhTwa7w4