YouTube now requires creators to disclose when AI-generated content is used in videos
Submitted 8 months ago by Dnew10@lemmy.world to technology@lemmy.world
Comments
RustyNova@lemmy.world 8 months ago
That’s a win, but it would need to be enforced… which is harder to do.
Uvine_Umbra@discuss.tchncs.de 8 months ago
Harder, but in an era with multiple generations of people trained to question every link and image on screen? Not necessarily impossible.
There will definitely be false accusations, though.
Vilian@lemmy.ca 8 months ago
this is only gonna train AI to not look like AI
Sabata11792@kbin.social 8 months ago
I'm waiting for the constant big drama when Big Popular Youtuber of the Week gets accused of using/not using AI and it turns out the opposite is true.
jeena@jemmy.jeena.net 8 months ago
That’s good, but soon every video will be partially AI because it’ll be built into the tools. Just like every photo out there is retouched with Lightroom/Photoshop.
FaceDeer@fedia.io 8 months ago
None of this is AI-specific. Youtube wants you to label your videos if you use "altered or synthetic content" that could mislead people about real people or events. 99% of what Corridor Crew puts out would probably need to be labeled, for example, and they mostly use traditional digital effects.
redcalcium@lemmy.institute 8 months ago
Creators must disclose content that:
- Makes a real person appear to say or do something they didn’t do
- Alters footage of a real event or place
- Generates a realistic-looking scene that didn’t actually occur
So, they want deepfakes to be clearly labeled, but if the entire video was scripted by ChatGPT, the AI label is not required?
GamingChairModel@lemmy.world 8 months ago
Generates a realistic-looking scene that didn’t actually occur
Doesn’t this describe, like, every mainstream live action film or television show?
Hexagon@feddit.it 8 months ago
Technically, yes… but if it’s in a movie/show, you already know it’s fiction
FaceDeer@kbin.social 8 months ago
Yeah, but this doesn't put any restrictions on anything, it just adds a label.
affiliate@lemmy.world 8 months ago
this is going to be devastating for all the prank youtube channels
HarkMahlberg@kbin.social 8 months ago
Wouldn't this enable, for example, Trump claiming he didn't make the "bloodbath" comment, calling it a deepfake, and telling YouTube to remove all the news coverage of it? More generally, what stops someone from abusing this system?
CluckN@lemmy.world 8 months ago
It’s a good first step. If claiming your AI video is real gets more views, I’m curious whether the extra views outweigh the cost of being caught.
canis_majoris@lemmy.ca 8 months ago
You can only really pull that off with older people and children. Most of us millennials can spot the patterns AI generation produces, but I’ve seen my dad just consume the content, largely unaware that it was artificially generated. He constantly complains those videos say nothing, but watches tons of them anyway, mostly non-news about sports.
AmidFuror@fedia.io 8 months ago
Will this apply to advertisers, too? They don't block outright scams, so probably not. Money absolves all sins.
Thorny_Insight@lemm.ee 8 months ago
Your YouTube is not working optimally if you’re seeing ads there
AmidFuror@fedia.io 8 months ago
My point was that ads are a big part of the typical user's experience, and it is hypocritical to believe AI needs to be disclosed but not apply that to paid content.
Dudewitbow@lemmy.zip 8 months ago
tbf, a lot of ads are already misleading as it is, so pointing out AI isn't going to change their perception much.