Comment on The New York Times sues OpenAI and Microsoft for copyright infringement
kromem@lemmy.world 10 months ago
What’s the value of old journalism?
It’s a product where the value curve is heavily weighted towards recency.
In theory, the greatest value theft happens when the AP writes a piece and two dozen other ‘journalists’ copy it, changing the text just enough not to get sued. That’s completely legal, but it’s what effectively killed investigative journalism.
An LLM taking years-old articles and predicting their text until it effectively learns the relationships between language itself and the events those articles describe isn’t some inherent value theft.
It’s not the training that’s the problem, it’s the application of the models that needs policing.
Like if someone took an LLM, fed it recently published news stories, and had it rewrite them just differently enough that no one needed to visit the original publisher.
Even if we keep it legal for humans to do that (which we really might want to revisit, or at least create a special industry-specific restriction around), maybe we should have different rules for the models.
But trying to claim that an LLM allowing coma patients to communicate, or improving self-driving algorithms, or diagnosing medical issues is stealing the value of old NYT articles in doing so is not an argument I see much value in.
ChucklesMacLeroy@lemmy.world 10 months ago
jacksilver@lemmy.world 10 months ago
Except no one is claiming that LLMs are the problem; they’re claiming GPT, or more specifically GPT’s training data, is the problem. Transformer models still have a lot of potential, but the question the NYT is asking is “can you just take anyone else’s work to train them?”
kromem@lemmy.world 10 months ago
There’s a similar suit against Meta for Llama.
And yes, as the dust settles we’ll see in case law whether training an LLM counts as fair use.