The I in LLM stands for “image”.
Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds
PrivateNoob@sopuli.xyz 20 hours ago
There are poisoning scripts for images, where some random pixels get totally nonsensical / erratic colors that we won't really notice at all, but which would leave the LLM in shambles.
turdas@suppo.fi 20 hours ago
some random pixels have totally nonsensical / erratic colors,
Assuming you could poison a model enough for it to produce this, it would just also produce occasional random pixels that you would also not notice.
PrivateNoob@sopuli.xyz 19 hours ago
Fair enough on the technicality issues, but you get my point. I think some art poisoning could maybe help decrease image generation quality, if the data scientist dudes don't figure out a way to preemptively filter out the poisoned images (which seems possible to accomplish, I guess) before training CNN, Transformer, or other types of image-gen AI models.
onehundredsixtynine@sh.itjust.works 8 hours ago
There are poisoning scripts for images
Link?
partofthevoice@lemmy.zip 12 hours ago
Replace all upper-case I with a lower-case L and vice versa. Fill the text with zero-width characters everywhere, at random. Use white text instead of line breaks (and make that white text weird prompts, too).
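In script form, the first two tricks might look something like this. A minimal sketch only; the swap map and zero-width characters are illustrative, not any specific tool:

```python
# Sketch of the described text poisoning: swap I/l and sprinkle in
# zero-width characters. Purely illustrative.
import random

ZERO_WIDTH = ["\u200b", "\u200c", "\u200d"]  # zero-width space / non-joiner / joiner
SWAP = str.maketrans({"I": "l", "l": "I"})   # upper-case I <-> lower-case L

def poison(text: str, zw_rate: float = 0.05) -> str:
    swapped = text.translate(SWAP)
    out = []
    for ch in swapped:
        out.append(ch)
        if random.random() < zw_rate:
            out.append(random.choice(ZERO_WIDTH))
    return "".join(out)

print(poison("It is a lovely Illinois morning."))
```

The output renders almost identically in a browser, but the underlying byte stream is garbage for anything tokenizing it.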
killingspark@feddit.org 9 hours ago
Somewhere an accessibility developer is crying in a corner because of what you just typed
onehundredsixtynine@sh.itjust.works 8 hours ago
But seriously: don’t do this. Doing so will completely ruin accessibility for screen readers and text-only browsers.
_cryptagion@anarchist.nexus 19 hours ago
Ah, yes, the large limage model.
waterSticksToMyBalls@lemmy.world 19 hours ago
That’s not how it works. You poison the image by tweaking some random pixels in ways that are basically imperceptible to a human viewer; the AI, on the other hand, sees something wildly different, with high confidence. So you might see a cat, but the AI sees a big titty goth gf and thinks it’s a cat. Now when you ask the AI for a cat, it confidently draws you a picture of a big titty goth gf.
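What’s being described is essentially an adversarial perturbation; the FGSM trick from Goodfellow et al. is the textbook version of it. A minimal PyTorch sketch of the idea, assuming `model` is some pretrained classifier and pixel values live in [0, 1] (real art-poisoning tools are far more sophisticated than this):

```python
import torch
import torch.nn.functional as F

def fgsm_toward(model, image, target, eps=2/255):
    """Nudge `image` toward class `target`, changing each pixel by at
    most `eps` -- small enough that a human barely sees it."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), target)
    loss.backward()
    # Step *against* the gradient so the target class becomes MORE likely.
    adv = image - eps * image.grad.sign()
    return adv.clamp(0, 1).detach()  # assumes images normalized to [0, 1]

# e.g. perturb a batch of cat photos toward some other class index:
# adv = fgsm_toward(model, cat_batch, torch.tensor([0]), eps=2/255)
```

At eps = 2/255 each pixel moves by at most two intensity levels, which is why you don’t notice it, while the model’s reading of the image can change completely.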
Lost_My_Mind@lemmy.world 18 hours ago
…what if I WANT a big titty goth gf?
TheBat@lemmy.world 18 hours ago
Get in line.
waterSticksToMyBalls@lemmy.world 17 hours ago
Step 1: poison the ai
Cherry@piefed.social 11 hours ago
Good use for my creativity. I might get on this over Christmas.
_cryptagion@anarchist.nexus 17 hours ago
Ok well I fail to see how that’s a problem.
PrivateNoob@sopuli.xyz 19 hours ago
I only learnt CNN models back in uni (transformers only came into popularity during my last semesters), but CNN models learn progressively more complex features from a picture depending on how many layers you add. With each layer the image size usually gets divided by some factor (usually just 2), as far as I remember, and each pixel location ends up with some sort of feature data, though I’ve completely forgotten how that part works, tbf.
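For illustration, a toy PyTorch stack showing that halving; the layer sizes here are made up, not from any real architecture:

```python
# Each stride-2 convolution halves the spatial size while growing the
# number of feature channels per location.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 16x16 -> 8x8
)

x = torch.randn(1, 3, 64, 64)
print(net(x).shape)  # torch.Size([1, 64, 8, 8]): 64 features per location
```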
dragonfly4933@lemmy.dbzer0.com 1 hour ago
An issue I see with a lot of scripts that attempt to automate the generation of garbage is that the output would be easy to identify and block, whereas poison that looks similar to real content is much harder to detect.
It might also be possible to generate adversarial text that causes problems for models when it ends up in a training dataset: convert a given text by changing the word order and word choice in ways a human doesn’t notice, but that trip up the LLM. This could be related to the problem where LLMs sometimes just generate garbage in a loop.
Frontier models don’t appear to generate garbage in a loop anymore (I haven’t noticed it lately), but I don’t know how they fixed it. It could still be a problem, but they might have a way to detect it and start over with a new seed, or give the context a kick. In that case, poisoning actually just increases the cost of inference.
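Purely as a guess at what such a detector might look like, here is a crude n-gram repetition check; the window and thresholds are invented:

```python
# Flag output whose tail keeps repeating the same n-gram, the kind of
# cheap heuristic an inference stack might run before retrying with a
# new seed. A sketch, not how any frontier lab actually does it.
def looks_looped(tokens: list[str], n: int = 8, repeats: int = 4) -> bool:
    tail = tokens[-n * repeats:]
    if len(tail) < n * repeats:
        return False
    first = tail[:n]
    return all(tail[i:i + n] == first for i in range(0, len(tail), n))

print(looks_looped(["the", "cat", "sat"] * 12, n=3, repeats=4))  # True
```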