No progress as of late.
Comment on [deleted]
BCOVertigo@lemmy.world 1 day agoGenuine question, how confident are we that an LLM can actually be patched like a deterministic system through prompt and weight manipulation? Has the 95% adversarial success rate that was reported actually moved in the past year? I don’t feel like any meaningful progress has been made but I’m admittedly biased so I know I’m not looking in the places that would report success if there was any.
outhouseperilous@lemmy.dbzer0.com 1 day ago
rizzothesmall@sh.itjust.works 1 day ago
They probably can’t completely patched in their training, but using a pipeline which reviews the prompt and response for specific malicious attack vectors has proved very successful if adding some latency and processing expense.
You can, however, only run these when you detect a potentially malicious known exploit. If the prompt contains any semantic similarity to grandma telling a story or how would my grandma have done x, for example, you can add the extra pipeline step to mitigate against the attack.
Ziglin@lemmy.world 1 day ago
One could also completely fix it by knowing what data gets used for training it and removing the instructions for building bombs. If it’s as bad at chemistry as it is programming that should at least make it be wrong about anything it does end up spitting out.
outhouseperilous@lemmy.dbzer0.com 1 day ago
Unless they have improved since january, which i doubt; we can actually confirm that; if you remember how the year started.
Kowowow@lemmy.ca 1 day ago
I’ve been wanting to try and see if t a l k i n g l i k e t h i s gets past any filters