Comments on “This new data poisoning tool lets artists fight back against generative AI”
AnonTwo@kbin.social 1 year ago
Obviously this is using some bug and/or weakness in the existing training process, so couldn’t they just patch the mechanism being exploited?
I'd assume the issue is that if someone tried to patch it out, it could legally be shown that they were knowingly disregarding people's copyright.
FaceDeer@kbin.social 1 year ago
It isn't against copyright to train models on published art.
AnonTwo@kbin.social 1 year ago
The general legal argument is that the AI retains no exact copy of the copyrighted material.
But if that's the case, then these poisoned pixels shouldn't need to be patched out, because the model wouldn't remember the material that spawned them.
That's just the argument I assume would be used.
Maven@lemmy.sdf.org 1 year ago
It’s like training an artist who’s never seen a banana or a fire hydrant, by passing them pictures of fire hydrants labelled “this is a banana”. When you ask for a banana, you’ll get a fire hydrant. Correcting that mistake doesn’t mean “undoing pixels”, it means teaching the AI what bananas and fire hydrants are.
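The banana/fire-hydrant analogy above can be sketched as a toy "model" that simply memorizes label-to-feature associations. This is purely illustrative (real generative models don't train this way, and the function names and data here are invented for the sketch), but it shows how poisoned labels corrupt what the model produces for a prompt:

```python
# Toy sketch of label poisoning: a trivial "model" that memorizes
# which image features it saw under each label. Illustrative only;
# not how real diffusion models or the tool in the article work.
from collections import defaultdict

def train(model, image_features, label):
    """Associate a label with the features seen alongside it."""
    model[label].append(image_features)

def generate(model, label):
    """Return the most recently learned features for a label."""
    examples = model.get(label)
    return examples[-1] if examples else None

model = defaultdict(list)

# Poisoned training example: fire-hydrant features labelled "banana".
train(model, {"shape": "cylinder", "color": "red"}, "banana")

# Asking for a banana now yields fire-hydrant features.
print(generate(model, "banana"))  # {'shape': 'cylinder', 'color': 'red'}
```

As the comment notes, fixing this isn't a matter of "undoing pixels": the bad association has to be overwritten with correctly labelled training data.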
FaceDeer@kbin.social 1 year ago
Well, I guess we'll see how that argument plays in court. I don't see how it follows, myself.
KeenFlame@feddit.nu 1 year ago
What is “patching pixels” and who would do it?
AnonTwo@kbin.social 1 year ago
Is that not answered in the original article?
Jagger2097@lemmy.world 1 year ago
Explain
FaceDeer@kbin.social 1 year ago
In order to violate copyright you need to copy the copyrighted material. Training an AI model doesn't do that.