Google's DeepMind unit is unveiling today a new method it says can invisibly and permanently label images that have been generated by artificial intelligence.
Its invisibility should really help all the laypeople see it clearly.
Submitted 1 year ago by stopthatgirl7@kbin.social to technology@lemmy.world
https://www.axios.com/2023/08/29/google-watermark-ai-generated-images
Says so in the article:
The watermark is part of a larger effort by Google and other tech giants to develop ways to verify the authenticity of AI-generated images.
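DeepMind hasn't published how its watermark actually works, but a classic least-significant-bit scheme gives the flavor of "invisible" pixel-level labeling. This is purely illustrative, not Google's method, and unlike the real thing it would not survive cropping or re-encoding; `embed`, `extract`, and the fake grayscale pixel list are all made up for the sketch:

```python
# Toy least-significant-bit watermark: NOT Google's (unpublished) scheme,
# just an illustration of invisibly labeling an image in its pixels.

def embed(pixels, tag):
    """Hide the bits of `tag` in the low bit of each pixel value."""
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # each pixel changes by at most 1
    return out

def extract(pixels, length):
    """Read `length` characters back out of the low bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    chars = []
    for i in range(0, len(bits), 8):
        chars.append(chr(sum(b << j for j, b in enumerate(bits[i : i + 8]))))
    return "".join(chars)

image = [120, 121, 119, 200] * 20            # fake 8-bit grayscale pixels
marked = embed(image, "AI")
print(extract(marked, 2))                    # -> AI
print(max(abs(a - b) for a, b in zip(image, marked)))  # -> 1
```

A one-bit-per-pixel change is invisible to the eye, which is the point of the joke above: the label is for software, not for laypeople.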
Spoiler - they will secretly give all humans in AI-generated art slightly messed-up hands.
Mind blown!
cybirdman@lemmy.ca 1 year ago
TBF, I don’t think the purpose of this watermark is to prevent bad people from passing AI off as real. That would be a welcome side effect, but it’s not why Google wants this. Ultimately this is supposed to prevent AI training data from being contaminated with other AI-generated content. You could imagine that if the training data set contains a million images with bad quality or mangled fingers and such, it would be hard to train a good AI on it. Garbage in, garbage out.
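The filtering idea in that comment could be sketched as a pass over the training set; `detect_watermark` here is a hypothetical stand-in for whatever detector Google might ship, reduced to checking a fake flag:

```python
# Sketch of filtering AI-generated images out of a training set.
# `detect_watermark` is hypothetical: a real detector would analyze
# pixel statistics, not read a dict key.

def detect_watermark(image):
    return image.get("synthid", False)  # made-up flag for the sketch

def clean_training_set(images):
    """Keep only images the detector does not flag as AI-generated."""
    return [img for img in images if not detect_watermark(img)]

dataset = [
    {"name": "photo1.png", "synthid": False},
    {"name": "genart.png", "synthid": True},
    {"name": "photo2.png", "synthid": False},
]
print([img["name"] for img in clean_training_set(dataset)])
# -> ['photo1.png', 'photo2.png']
```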
Echo71Niner@lemm.ee 1 year ago
AI-generated images are becoming so realistic that even AI can’t tell them apart anymore.
CheeseNoodle@lemmy.world 1 year ago
iirc AI models getting worse after being trained on AI-generated data is an actual issue right now. Even if we (or the AI) can’t distinguish them from real images, there are subtle differences that compound into quite large ones if the AI is fed its own work over several generations, leading to degraded output.
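That compounding effect can be shown with a deterministic toy model. Assume each self-training generation slightly under-represents rare outputs (the 0.5 tail penalty and the probability floor are made-up parameters, not measured from any real model); small per-step errors snowball until only the dominant mode survives:

```python
# Toy model of compounding degradation when a model trains on its own
# output. Assumption: each generation halves the probability of any
# "rare" mode (p < 0.1) and drops modes below a tiny floor.

def train_on_own_output(dist, tail_penalty=0.5, floor=1e-6):
    """One self-training generation: rare modes lose probability mass."""
    new = {k: (p if p >= 0.1 else p * tail_penalty) for k, p in dist.items()}
    new = {k: p for k, p in new.items() if p > floor}  # modes can vanish
    total = sum(new.values())
    return {k: p / total for k, p in new.items()}

dist = {"common": 0.90, "uncommon": 0.08, "rare": 0.02}
for gen in range(20):
    dist = train_on_own_output(dist)

print({k: round(p, 4) for k, p in dist.items()})
# -> {'common': 1.0}
```

After 20 generations the "uncommon" and "rare" modes are gone entirely: each step's error was small, but feeding the output back in compounded it into total collapse, which is the degradation the comment describes.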