Honestly, I can’t tell if this is sarcasm or not.
Digitally watermark the image for identification purposes. Hash the creator's hardware ID, MAC address, and IP address, and embed that via steganography or similar means. Just like printers use a MIC (Machine Identification Code) for documents printed on that printer, AI-generated imagery should do the same at this point. It's not perfect and probably has some undesirable consequences, but it's better than nothing when trying to track down deepfake creators.
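The hash-and-embed idea above could be sketched roughly like this: hash some creator-identifying strings, then hide the digest in the least significant bits of the pixel data. This is a minimal illustration, not how any real AI service does it (the identifiers, field separator, and LSB scheme here are all made up for the example, and naive LSB embedding is destroyed by any re-encode):

```python
import hashlib

def make_fingerprint(hardware_id: str, mac: str, ip: str) -> bytes:
    # Hypothetical creator fingerprint: SHA-256 over identifying strings.
    return hashlib.sha256(f"{hardware_id}|{mac}|{ip}".encode()).digest()

def embed_lsb(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least significant bit of each pixel byte."""
    out = bytearray(pixels)
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small for payload")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit
    return out

def extract_lsb(pixels: bytearray, n_bytes: int) -> bytes:
    """Read the payload back out of the low bits."""
    bits = [pixels[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8:(b + 1) * 8]))
        for b in range(n_bytes)
    )

# Demo on fake pixel data (512 "pixel" bytes).
pixels = bytearray(range(256)) * 2
fp = make_fingerprint("serial-123", "aa:bb:cc:dd:ee:ff", "203.0.113.5")
stamped = embed_lsb(pixels, fp)
assert extract_lsb(stamped, 32) == fp
```

Which also shows the weakness people point out below: whoever controls the generation software controls whether this step runs at all.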
yamanii@lemmy.world 8 months ago
Samsung’s AI does watermark its images in the EXIF metadata. Yes, it’s trivial for us to strip the EXIF data, but it’s enough to catch these low-effort bad actors.
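To illustrate just how trivial stripping is: EXIF lives in a JPEG's APP1 segment, and a few lines of stdlib Python can walk the segment markers and drop it. A rough sketch (it ignores standalone markers and other edge cases a real JPEG parser must handle, so treat it as a demonstration, not a robust tool):

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1 (EXIF) segments from a JPEG byte stream."""
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(jpeg[:2])
    i = 2
    while i < len(jpeg) - 1:
        marker = jpeg[i + 1]
        if marker == 0xDA:          # SOS: entropy-coded data follows,
            out += jpeg[i:]         # copy the rest verbatim
            break
        seg_len = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:          # keep every segment except APP1 (EXIF)
            out += jpeg[i:i + 2 + seg_len]
        i += 2 + seg_len
    return bytes(out)

# Synthetic JPEG-like stream: SOI + APP1("Exif") + DQT stub + SOS stub.
sample = (b"\xff\xd8"
          + b"\xff\xe1\x00\x08Exif\x00\x00"
          + b"\xff\xdb\x00\x04\x01\x02"
          + b"\xff\xda\x00\x04data")
cleaned = strip_exif(sample)
assert b"Exif" not in cleaned
```

Which is exactly why metadata-only watermarks only stop the laziest actors.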
General_Effort@lemmy.world 8 months ago
I think all the main AI services watermark their images (invisibly, not in the metadata). A nudify service might not, I imagine.
I was rather wondering about the support for extensive surveillance.
abhibeckert@lemmy.world 8 months ago
The difference is it costs billions of dollars to run a company manufacturing printers and it’s easy for law enforcement to pressure them into not printing money.
It costs nothing to produce an AI image; you can run this stuff on a cheap gaming PC or even some laptops.
And you can do it with open source software. If the software has restrictions on creating abusive material, you can find a fork with that restriction removed. If it has steganographic watermarking, you can find one with that disabled too.