If you’re interested in this line of attack, you can also use similar techniques, namely adversarial noise attacks, to defeat models trained to do object detection (for example, the ones that locate your license plate).
The short version: if you have a network that does detection, you can feed it images that have been altered by a second network whose loss function is built from the detection network’s confidence scores. That second network is trained to generate noise that looks innocuous to human eyes but maximally disrupts the segmentation/object detection process.
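If you want to see what that looks like in code, here’s a minimal PyTorch sketch of the idea (not Benn’s actual implementation): a frozen, pretrained detector stands in for the ALPR model, and a tiny generator network learns to produce bounded noise that drives the detector’s confidence toward zero. The detector choice, generator architecture, and hyperparameters are all illustrative assumptions.

```python
# Sketch only: a small "noise generator" is trained to produce a bounded
# perturbation that pushes a frozen detector's confidence scores toward zero.
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
for p in detector.parameters():        # the detector itself stays frozen
    p.requires_grad_(False)

class NoiseGenerator(nn.Module):
    """Tiny conv net that maps an image to a small, human-invisible perturbation."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps                 # max per-pixel change keeps the noise subtle
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return (x + self.eps * self.net(x)).clamp(0, 1)

gen = NoiseGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

def train_step(images):                # images: list of [3,H,W] tensors in [0,1]
    adv = [gen(img.unsqueeze(0)).squeeze(0) for img in images]
    outputs = detector(adv)            # eval-mode outputs include per-box 'scores'
    # Loss = total detection confidence on the perturbed images; minimizing it
    # trains the generator to make the detector find nothing.
    loss = sum(o["scores"].sum() for o in outputs)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```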
You could then print this noise on, say, a transparent overlay and put it on your license plate. Note: Flock is aware of this technique and has lobbied state lawmakers in some places to make it illegal to put anything on your plate that disrupts automated reading; check your local laws.
Benn Jordan has actually created and trained such a network; video here: www.youtube.com/watch?v=Pp9MwZkHiMQ
And also uploaded his code to GitHub: github.com/bennjordan
In states where you cannot cover your license plate, you’re not restricted from decorating the rest of your car. You could use a similar technique to create bumper stickers that are detected as license plates and place them all over your vehicle. Or even, as Benn suggested, print them with UV ink so they’re invisible to humans but very visible to AI cameras, which often use UV lamps for night vision/additional illumination.
You could also, if you were so inclined, generate bumper stickers or a vinyl wrap that makes the detector unable to recognize your car as a car at all.
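Here’s a rough sketch of how that kind of decoy patch could be optimized, again in PyTorch. `plate_detector` is a placeholder for whatever ALPR model you’re targeting (assumed to return torchvision-style outputs with a `scores` field), and the patch size and placement are arbitrary; everything here is illustrative, not a working plate-spoofer.

```python
# Sketch only: optimize one printable patch so the detector reports a
# high-confidence detection wherever the patch appears.
import torch

patch = torch.rand(3, 64, 128, requires_grad=True)   # sticker-shaped, values in [0,1]
opt = torch.optim.Adam([patch], lr=0.01)

def paste(img, patch, y=200, x=300):
    """Drop the patch onto a copy of the image at a fixed location."""
    out = img.clone()
    out[:, y:y + patch.shape[1], x:x + patch.shape[2]] = patch.clamp(0, 1)
    return out

def train_step(images, plate_detector):
    stickered = [paste(img, patch) for img in images]
    outputs = plate_detector(stickered)   # assumed frozen, eval-mode detector
    # Maximize detection confidence on the patched images (decoy stickers);
    # flip the sign of the loss to train a patch that suppresses detections instead.
    loss = -sum(o["scores"].sum() for o in outputs)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```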
Adversarial noise attacks are one of the bigger vulnerabilities of AI-based systems; they come in many flavors and can affect anything that uses a neural network.
Another example (also from the video): you can encode voice commands in ordinary audio that sounds completely innocuous to a human listener, but that a device like Alexa or Siri will hear as a specific command (“Hey Siri, unlock the front door”). Any user-generated audio you encounter online could have this kind of attack embedded in it. The potential damage is pretty limited for now, because AI assistants don’t really control critical functions in your life yet… but you should probably not let your assistant listen to TikTok if it can do more than control your home lighting.
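For the curious, the audio version looks roughly like this in code (in the spirit of published adversarial-ASR research, not any specific assistant): you optimize a quiet additive perturbation so that a speech recognizer’s CTC loss prefers an attacker-chosen transcript. `asr_model` (assumed to return per-frame log-probabilities) and `target_tokens` are placeholders, and the bound/steps are arbitrary.

```python
# Sketch only: hide a target command inside a normal-sounding clip by keeping
# the perturbation under a small L-infinity bound while minimizing CTC loss
# against the attacker's target transcript.
import torch
import torch.nn.functional as F

def hide_command(waveform, asr_model, target_tokens, eps=0.002, steps=500, lr=1e-3):
    """waveform: [1, T] audio in [-1, 1]; target_tokens: [L] token ids."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (waveform + delta.clamp(-eps, eps)).clamp(-1, 1)
        log_probs = asr_model(adv)               # assumed shape: [frames, 1, vocab]
        loss = F.ctc_loss(                       # steer transcription toward the target
            log_probs,
            target_tokens.unsqueeze(0),
            input_lengths=torch.tensor([log_probs.shape[0]]),
            target_lengths=torch.tensor([len(target_tokens)]),
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (waveform + delta.detach().clamp(-eps, eps)).clamp(-1, 1)
```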
Whirling_Ashandarei@lemmy.world 2 weeks ago
This is awesome, thank you
FauxLiving@lemmy.world 1 week ago
Adversarial noise is a fun topic and a DIY AI thing you can do to familiarize yourself with the local-hosting side of things. Image-generating networks are lightweight compared to LLMs and can be run on a moderately powerful NVIDIA gaming PC (most of my work is done on a 3080).
LLM poisoning can also be done if you can insert poisoned text into their training set. An example method would be detecting AI scrapers on your server and sending them poisoned text instead of just blocking them.
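A minimal sketch of what that could look like, assuming a Flask app and a crude User-Agent check (real bot detection would use more signals like IP ranges and request behavior, and the poisoned corpus is up to you):

```python
# Sketch only: serve poisoned pages to suspected AI crawlers, real pages to everyone else.
from flask import Flask, request, send_file

app = Flask(__name__)

# Substrings seen in common AI-crawler User-Agent headers (illustrative list).
SCRAPER_UA = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

def looks_like_ai_scraper(req):
    ua = req.headers.get("User-Agent", "")
    return any(token in ua for token in SCRAPER_UA)

@app.route("/articles/<slug>")
def article(slug):
    # No path sanitization here; this is just a sketch.
    if looks_like_ai_scraper(request):
        # Serve the poisoned version instead of blocking, so the text
        # has a chance of ending up in the scraper's training set.
        return send_file(f"poisoned/{slug}.html")
    return send_file(f"content/{slug}.html")
```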
Here is the same kind of training-data poisoning attack, but for images, which researchers at the University of Chicago have packaged into a simple Windows application: nightshade.cs.uchicago.edu/whatis.html
Thanks to your comment I realized that my clipboard didn’t have the right link selected, so I edited in the link to his GitHub. ( github.com/bennjordan )