I only learned about CNN models back in uni (transformers were just coming into popularity at the end of my last semesters), but CNN models learn more and more complex features from a pic depending on how many layers you add, and with each layer the image size usually gets downsampled by a multiple of 2 (usually it’s just 2) as far as I remember, and each pixel location ends up with some sort of feature data, which I’ve completely forgotten the details of tbf.
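Roughly what that looks like in code — a minimal sketch, assuming PyTorch; the layer counts and channel sizes here are made up purely to show the halving of the image and the per-location feature vectors:

```python
# Minimal sketch of "each layer halves the image and builds richer features".
# Assumes PyTorch; layer sizes are arbitrary, just for illustration.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # 3-channel 64x64 image in -> 16 feature maps at 32x32 (stride 2 halves the size)
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            # 16 -> 32 feature maps, 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            # 32 -> 64 feature maps, 16x16 -> 8x8
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

x = torch.randn(1, 3, 64, 64)   # one fake RGB image
feats = TinyCNN()(x)
print(feats.shape)              # torch.Size([1, 64, 8, 8]):
# each of the 8x8 spatial locations now carries a 64-dim feature vector
```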
Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds
_cryptagion@anarchist.nexus 2 days ago
Ah, yes, the large limage model.
Even if some random pixels have totally nonsensical / erratic colors, and assuming you could poison a model enough for it to produce this, then it would just also produce occasional random pixels that you would also not notice.
PrivateNoob@sopuli.xyz 2 days ago
waterSticksToMyBalls@lemmy.world 2 days ago
That’s not how it works; you poison the image by tweaking some random pixels that are basically imperceptible to a human viewer. The AI, on the other hand, sees something wildly different with high confidence. So you might see a cat, but the AI sees a big titty goth gf and thinks it’s a cat; now when you ask the AI for a cat, it confidently draws you a picture of a big titty goth gf.
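To make the idea concrete, here is a minimal sketch of an imperceptible targeted perturbation, assuming PyTorch and torchvision. This is not the method from the article (real poisoning attacks like Nightshade are more involved); it just shows how tiny, bounded pixel tweaks can push a classifier toward a wrong class while the image looks unchanged to a human:

```python
# Sketch only: nudge pixels within a tiny budget (epsilon) so a pretrained
# classifier leans toward a chosen wrong class. Weights download on first use.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)   # stand-in for a real cat photo
target = torch.tensor([0])            # some wrong class we want the model to "see"
epsilon = 4 / 255                     # max per-pixel change: invisible to a person
delta = torch.zeros_like(image, requires_grad=True)

for _ in range(20):                   # projected gradient descent steps
    loss = torch.nn.functional.cross_entropy(model(image + delta), target)
    loss.backward()
    with torch.no_grad():
        delta -= 0.01 * delta.grad.sign()                  # step toward the target class
        delta.clamp_(-epsilon, epsilon)                    # keep each pixel change tiny
        delta.copy_((image + delta).clamp(0, 1) - image)   # keep pixel values valid
    delta.grad.zero_()

poisoned = image + delta   # looks like the original to a human,
                           # but the model's prediction has been pushed toward `target`
```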
Lost_My_Mind@lemmy.world 2 days ago
…what if I WANT a big titty goth gf?
phutatorius@lemmy.zip 3 hours ago
You better stay away from mine, Romeo.
TheBat@lemmy.world 2 days ago
Get in line.
waterSticksToMyBalls@lemmy.world 2 days ago
Step 1: poison the ai
_cryptagion@anarchist.nexus 2 days ago
Ok well I fail to see how that’s a problem.
Cherry@piefed.social 2 days ago
Good use for my creativity. I might get on this over Christmas.