Beware of an incoming hot take - I don’t see the concept of training AI on published works as much different than a human learning from published works as long as they both go on to make their own original works.
The fact that this is considered a “hot take” is depressing.
Armok_the_bunny@lemmy.world 1 year ago
A standard I could see being applied is one that I think has some precedent, where if the work it is supposed to be similar to is anywhere in the training set then it’s a copyright violation. One of the valid defenses against copyright claims in court is that the defendant reasonably could have been unaware of the original work, and that seems to me like a reasonable equivalent.
AccidentalLemming@lemmy.world 1 year ago
“Similar” is a very hard concept to define, and has previously led to silly lawsuits. youtu.be/0ytoUuO-qvg
p03locke@lemmy.dbzer0.com 1 year ago
You can’t copyright a style.
ericisshort@lemmy.world 1 year ago
But humans make works that are similar to other works all the time. I just hope that we set the same standards for AI violating copyright as we have for humans. There is a big difference between derivative works and those that violate copyright.
lemmyvore@feddit.nl 1 year ago
Doesn’t this argument assume that AIs are human? That’s a pretty huge reach if you ask me. It’s not even clear if LLMs are AI, never mind giving them human rights.
ericisshort@lemmy.world 1 year ago
No, I’m not assuming that. It’s not about concluding that AIs are human. It’s about having concrete standards on which to design laws. Setting a lower bar for copyright violation by LLMs would be like setting a lower speed limit for a self-driving car, and I don’t think it makes any logical sense. To me that would be a disappointingly protectionist and luddite perspective to apply to this new technology.
Saganastic@kbin.social 1 year ago
Machine learning falls under the category of AI. I agree that works produced by LLMs should count as derivative works, as long as they're not too similar.