Comment on A Project to Poison LLM Crawlers
chunes@lemmy.world 6 days ago
Small quantities of poisoned training data can significantly damage a language model.
Source: trust me bro.
Nightshade tried the same thing and it never worked.
Nightshade did work on older models. Neural models adapted to prevent poisoning.
This is a new approach.
Ye, Nightshade was defeated by a blur-and-sharpen pass iirc lol. Still, it was a good first step.
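For anyone curious how a blur defeats that kind of poisoning: the adversarial perturbation lives in high-frequency pixel detail, so a low-pass filter smears it out, and a sharpen step restores apparent detail afterwards. Here's a toy pure-Python sketch of the idea on a 1-D "pixel row" (the function names and parameters are illustrative, not from any real defense implementation):

```python
def box_blur(signal, k=3):
    """Moving-average low-pass filter (edge-clamped window).
    High-frequency components, like an alternating perturbation,
    largely cancel inside the averaging window."""
    n = len(signal)
    half = k // 2
    out = []
    for i in range(n):
        window = signal[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(window) / len(window))
    return out

def sharpen(signal, amount=1.0):
    """Unsharp mask: add back the difference between the signal
    and a blurred copy to restore apparent detail."""
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

# A smooth "clean" row, then a high-frequency +/-0.5 perturbation on top.
clean = [float(i) for i in range(16)]
poisoned = [c + (0.5 if i % 2 == 0 else -0.5) for i, c in enumerate(clean)]

# Blur mostly cancels the alternating noise; sharpen restores edges.
cleaned = sharpen(box_blur(poisoned))
```

At interior points the blur shrinks the alternating error from 0.5 to about 0.17, which is the whole trick: the perturbation is fragile to exactly the preprocessing any training pipeline might apply anyway.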
ExLisper@lemmy.curiana.net 5 days ago
Here’s your source: www.anthropic.com/research/small-samples-poison