I think there’s a blurry line here: you can easily train an LLM to just regurgitate the source material by overfitting, so at what point is it “transformative enough”? There’s little doubt that current flagship models usually are, but that doesn’t apply to everything built on the same technology - even though this case will be used as precedent for all of it.
There’s also another issue: while safeguards are generally in place, without them LLMs would be quite capable of quoting entire pages of popular books - and jailbreaking LLMs isn’t exactly unheard of. They also, at least in the past, really liked to repeat news articles on obscure topics verbatim.
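To make the overfitting point concrete, here’s a toy illustration - not an actual LLM, just a character-level lookup model built on a single passage. When the context window is long enough that every context in the training text is unique, the “model” is fully overfit: generation can only regurgitate the source verbatim. All names here are made up for the sketch.

```python
# Toy illustration (not an LLM): a character-level model "trained" on one
# passage. With k=12, every 12-character context in this passage is unique,
# so each context maps to exactly one next character and generation simply
# reproduces the training text verbatim.
from collections import defaultdict

passage = "It was the best of times, it was the worst of times."
k = 12  # context length; longer than the longest repeated substring here

model = defaultdict(list)
for i in range(len(passage) - k):
    model[passage[i:i + k]].append(passage[i + k])

def generate(seed, length):
    out = seed
    while len(out) < length and model[out[-k:]]:
        out += model[out[-k:]][0]  # deterministic: only one continuation
    return out

print(generate(passage[:k], len(passage)))  # prints the passage verbatim
```

Real LLMs are vastly larger and usually generalize, but the same failure mode - memorizing rather than transforming - is exactly what the “transformative enough” question turns on.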
What I’m mainly getting at is that LLMs can be transformative, but they can also plagiarize - much like any human could. The question then is: if training LLMs on copyrighted data is allowed, will the company be held accountable when their LLM does plagiarize, the same way a person would be? Or would the better decision be to prohibit training on copyrighted data, because meaningful transformation cannot be guaranteed and copyright holders actually finding these violations is very hard?
Though I don’t know the case details, if the argument focused purely on using the material to produce the model, rather than including the final step of outputting text to anyone who asks, it was probably doomed from the start, and the decision makes perfect sense. And that doesn’t seem too unlikely, because spotting the distinction requires the lawyer making the case to actually understand what training an LLM does.
FatCrab@slrpnk.net 16 hours ago
You are agreeing with the post you responded to. This ruling is only about training a model on legally obtained training data. It does not say it is ok to pirate works - if you pirate a work, no matter what you do with the infringing copy you’ve made, you’ve committed copyright infringement. It does not talk about model outputs, which is a very nuanced issue and likely to fall along similar analyses as music copyright, imo. It only talks about whether training a model is intrinsically an infringement of copyright. And it isn’t, because any other conclusion is insane and would be functionally impossible to differentiate from learning a writing technique by reading a book you bought from an author. Even in a model that has overfit its training data, the model itself is in no way recognizable as any particular training datum. It’s a high-dimensional matrix of numbers defining relationships between features, and relationships between relationships.