Comment on The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates
mm_maybe@sh.itjust.works 2 months ago
Y’all should really stop expecting people to buy into the analogy between human learning and machine learning, i.e. “humans do it, so it’s okay if a computer does it too”. First of all, there are vast differences between how humans learn and how machines “learn”, and second, it doesn’t matter anyway, because there is plenty of legal/moral precedent for not granting machines the rights normally assigned to humans (for example, no intellectual property rights have been granted to any synthetic media yet, as far as I’m aware).
That said, I agree that “the model contains a copy of the training data” is not a very good critique. A much stronger one would be to simply point to all of the works with a Creative Commons “No Derivatives” license in the training data, since it is hard to argue that the model checkpoint isn’t derived from the training data.
VoterFrog@lemmy.world 2 months ago
Not really. First of all, Creative Commons loosens the copyright restrictions on a work. The strongest protection is actually no explicit license at all, i.e. “All Rights Reserved.” “No Derivatives” is already included under full, default copyright.
Second, “derivative” has a pretty strict legal definition. It’s not enough to say that the derived work was created using a protected work, or even that the derived work couldn’t exist without the protected work. Some examples: create a word cloud of your favorite book, analyze the tone of news articles to help you trade stocks, produce an image containing the most prominent color in every frame of a movie, or create a search index of the words found on all websites on the internet. All of that is absolutely allowed under even the strictest of copyright protections.
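To make the word-cloud example concrete, here’s a minimal sketch of the kind of output such an analysis produces (assuming a local plain-text copy of the book at `book.txt`, a hypothetical path). The point is that what comes out is word counts, not the book’s actual expression:

```python
from collections import Counter
import re

# Read a locally stored plain-text copy of a book (hypothetical path).
with open("book.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Tokenize into words and count occurrences -- pure statistics about the text.
words = re.findall(r"[a-z']+", text)
counts = Counter(words)

# The "word cloud" data: the 20 most frequent words and their counts.
# None of the book's sentences, plot, or structure survives in this output.
for word, count in counts.most_common(20):
    print(f"{word}: {count}")
```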
Statistical analysis of copyrighted materials, as in training AI, easily clears that same bar.