Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds

supersquirrel@sopuli.xyz 19 hours ago

In the realm of LLMs, sabotage is multilayered, multidimensional, and not something that can be identified quickly or easily in a dataset. There will be no easy place to draw a line of “data is contaminated after this point and only established AIs are now trustable,” because every dataset is going to require continual updating to stay relevant.

I am not suggesting we need to sabotage all future efforts to create valid datasets for LLMs; I am saying sabotage the ones that are stealing and using things you have made and written without your consent.
