It’s not; people know it’s a deepfake most of the time and don’t claim it’s real.
Comment on Teen deepfake victim pushes for federal law targeting AI-generated explicit content
General_Effort@lemmy.world 8 months ago
I would have thought that deepfakes are defamation per se. The push to criminalize this is quite the break with American first amendment traditions.
If I understand correctly, this would put any image hoster, including Lemmy, in hot water because 230 immunity is only for civil suits and not federal criminal prosecution.
Mubelotix@jlai.lu 8 months ago
General_Effort@lemmy.world 8 months ago
It might also be harassment.
If it’s not defamation or harassment, then I’m not sure what the problem is. As broad as this is, it looks unconstitutional to me.
TORFdot0@lemmy.world 8 months ago
The text of the bill exempts service providers from liability as long as they make a good-faith attempt to remove the content as soon as they are aware of its existence. So if someone posts AI-generated revenge porn on your instance, as long as you take it down when notified, you won’t be in trouble.
General_Effort@lemmy.world 8 months ago
Which part says that?
TORFdot0@lemmy.world 8 months ago
So the law requires intent and carves out exceptions for service providers that try to remove it.
You can read the whole text here
General_Effort@lemmy.world 8 months ago
The lower part just says that overeager removal of depictions does not create liability. Say OnlyFans bans a creator’s account because some face-recognition AI thought their porn depicted a celebrity. The creator has no recourse for the lost income.
As to the upper part, I am not sure what “reckless disregard” means in this context. I don’t think it means you only have to act if you happen to receive a complaint. If you see nudes of some non-porn celebrity, it’s most likely a fake, so it seems reckless not to remove it immediately. What if there aren’t enough mods to look at every image? Is it reckless to keep operating?