Comment on TikTok ran a deepfake ad of an AI MrBeast hawking iPhones for $2 — and it's the 'tip of the iceberg'
Asudox@lemmy.world 1 year ago
And that is why we need a pixel poisoner but for videos.
KairuByte@lemmy.dbzer0.com 1 year ago
I’m not familiar with the term, and Google shows nothing that makes sense in context. Can you explain the concept?
Omniraptor@lemm.ee 1 year ago
It’s a technique for altering images so that they become distorted in the “perception” of generative neural networks, and therefore unusable as training data, while still looking normal to a human.
en.wikipedia.org/…/Adversarial_machine_learning#D…
One example of a tool that does this is glaze.cs.uchicago.edu, but I have doubts about its imperceptibility.
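At its simplest, the underlying trick is an adversarial perturbation. Here is a minimal FGSM-style sketch in PyTorch, using an off-the-shelf classifier as a stand-in for the real target model (Glaze’s actual method is more involved; the names and numbers here are illustrative):

```python
# Minimal adversarial-perturbation sketch (FGSM). Assumes PyTorch and
# torchvision are installed; the model and tensors are illustrative.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image, label, epsilon=0.03):
    """Return a copy of `image` nudged against the model's gradient.

    The change is bounded by `epsilon` per pixel, so it stays hard to
    see, but it pushes the model's features away from the true class.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that *increases* the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage: x is a (1, 3, 224, 224) tensor in [0, 1], y the true class index.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([208])
x_poisoned = fgsm_perturb(x, y)
```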
SoaringDE@feddit.de 1 year ago
Yeah, I’m at a loss as well. Is it a way to prove the source of a video?
wildginger@lemmy.myserv.one 1 year ago
It’s AI poison. You alter the data in such a way that the image looks unchanged to the human eye, but when an imaging AI ingests it as training data, the alterations ruin its ability to make correlations and recognize patterns.
It’s toxic to the entire data set, too, so it can degrade the AI’s output broadly as long as the poisoned images are among those used to train it.
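A rough sketch of that data-poisoning idea, assuming PyTorch and using a ResNet feature extractor as a stand-in for a real training pipeline (this is illustrative, not any actual tool’s implementation): the image is optimized so its features drift toward an unrelated “decoy” concept while the pixels barely change, which is what corrupts the correlations a model learns from it.

```python
# Hedged sketch of feature-space poisoning: optimize a small, bounded
# perturbation so the image's *features* resemble a decoy concept while
# the pixels stay nearly unchanged. All names are illustrative.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # expose the penultimate features
backbone.eval()

def poison(image, decoy, steps=50, epsilon=0.03, lr=0.01):
    """Nudge `image` so its embedding drifts toward `decoy`'s embedding."""
    delta = torch.zeros_like(image, requires_grad=True)
    target = backbone(decoy).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(backbone(image + delta), target)
        loss.backward()
        opt.step()
        # Keep the change imperceptible: clamp to an epsilon ball.
        delta.data.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0, 1).detach()

# Usage: both tensors are (1, 3, 224, 224) in [0, 1].
x = torch.rand(1, 3, 224, 224)      # image to protect
decoy = torch.rand(1, 3, 224, 224)  # image of an unrelated concept
x_poisoned = poison(x, decoy)
```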
p03locke@lemmy.dbzer0.com 1 year ago
That seems about as effective as those No-AI pictures artists like to pretend will poison AI data sets. A few altered pixels isn’t going to fool AI, and anything more aggressive is going to make a real image look AI-generated, ironically.