If I want to be able to argue that having any copyleft stuff in the training dataset makes all the output copyleft – and I do – then I necessarily have to also side with the rich chumps as a matter of consistency. It’s not ideal, but it can’t be helped. ¯\_(ツ)_/¯
Comment on Microsoft, OpenAI sued for copyright infringement by nonfiction book authors in class action claim
bassomitron@lemmy.world 11 months ago
I’m not a huge fan of Microsoft or even OpenAI by any means, but all these lawsuits just seem so… greedy?
It isn’t like ChatGPT is just spewing out the entirety of their works in a single chat. In that context, I fail to see how seeing snippets of said work returned in a Google summary is any different than ChatGPT or any other LLM doing the same.
Should OpenAI and other LLM creators use ethically sourced data in the future? Absolutely. But to me, rich chumps like George R. R. Martin complaining that their data was stolen and profited from without their knowledge just feels a little ironic.
Welcome to the rest of the 6+ billion people on the Internet who’ve been spied on, data mined, and profited off of by large corps for the last two decades. Maybe regulators should’ve put tougher laws and regulations in place long ago to protect all of us against this sort of shit. It’s not like deep learning models are anything new.
grue@lemmy.world 11 months ago
General_Effort@lemmy.world 11 months ago
Wait. I first thought this was sarcasm. Is this sarcasm?
grue@lemmy.world 11 months ago
No. I really do think that all AI output is copyleft if there’s any copyleft in the training dataset.
General_Effort@lemmy.world 11 months ago
Huh. Obviously, you don’t believe that a copyleft license should trump other licenses (or the lack thereof). So, what are you hoping to achieve with this?
wewbull@feddit.uk 11 months ago
In your mind are the publishers the rich chumps, or Microsoft?
For copyleft to work, copyright needs to be strong.
grue@lemmy.world 11 months ago
I was just repeating the language the parent commenter used (probably should’ve quoted it in retrospect). In this case, “rich chumps” are George R.R. Martin and other authors suing Microsoft.
LWD@lemm.ee 11 months ago
these rich chumps like George R. R. Martin complaining that they felt their data was stolen without their knowledge and profited off of just feels a little ironic.
I welcome a lawsuit from any content creator who has enough money to put into it. That protects all content creators, especially the ones who can’t afford lawyers, from being exploited by giant corporations.
Does anybody think, for a moment, that the average person who creates art as a side job, who lives paycheck to paycheck, should be the one to fight massive plagiaristic megacorporations like OpenAI? That the battle between those who create and those who take should be fought on the most uneven grounds possible?
Womble@lemmy.world 11 months ago
It’s wild to me how so many people seem to have gotten it into their heads that cheering for the IP laws corporations fought so hard for is somehow left-wing and sticking up for the little guy.
LWD@lemm.ee 11 months ago
Can you bring an actual argument to the table, instead of gesturing towards some perceived hypocrisy?
If your ideology brings you to the same conclusions as libertarian techbros who support the theft of content from the powerless and giving it to the powerful, such as is the case with OpenAI shills, I would say you are not, in fact, a leftist. And if all you can do is indirectly play defense for them, there is no difference between a devil’s advocate and a full-throated techbro evangelist.
General_Effort@lemmy.world 11 months ago
Just a heads-up: in the American sense, “libertarian” is usually understood to mean right-libertarian, including so-called anarcho-capitalists. It describes people who believe that the right to own property is absolutely fundamental. Many don’t believe in intellectual property, but some do. Which is to say that in American parlance, the label “libertarian” would probably include you. Just FYI.
Also, I don’t know what definition of “left” you are using, but it’s not a common one. Left ideologies typically favor progress, including technological progress. They also tend to be critical of property, and (AFAIK universally) reject forms of property that allow people to draw unearned rents. They tend to side with the wider interests of the public over an individual’s right to property. The grandfather comment is perfectly consistent with left ideology.
LWD@lemm.ee 11 months ago
[deleted]
Womble@lemmy.world 11 months ago
And your argument boils down to “Hitler was a vegetarian, therefore all vegetarians are fascists.” IP laws are a huge stifle on human creativity, designed to let corporate entities capture, control, and milk innate human culture for profit. The fact that corporate interests sometimes end up opposing those laws when it suits them doesn’t change that.
General_Effort@lemmy.world 11 months ago
Sure. Trickle-down FTW.
CosmoNova@lemmy.world 11 months ago
I hear those kinds of arguments a lot, though usually from the exact same people who claimed nobody would be convicted of fraud for NFT and crypto scams when those were at their peak. The days of the wild west internet are long over.
Theft in the digital space is a very real thing in the eyes of the law, especially when it comes to copyright infringement. It’s wild to me how many people seem to think Microsoft will just get a freebie here because they helped pioneer a new technology for personal gain. Copyright holders have a very real case here, and I’d argue even a strong one.
Even using user data they legally own for machine learning could get them into trouble in some parts of the developed world, because users 10 years ago couldn’t have anticipated it would be used that way and so never gave their full consent.
LWD@lemm.ee 11 months ago
Bit odd how openly hostile to consent all the fans of OpenAI and other mega-corporations are.
General_Effort@lemmy.world 11 months ago
Even using user data they legally own for machine learning could get them into trouble in some parts of the developed world, because users 10 years ago couldn’t have anticipated it would be used that way and so never gave their full consent.
Where, for example?
FreeFacts@sopuli.xyz 11 months ago
I fail to see how seeing snippets of said work returned in a Google summary is any different than ChatGPT or any other LLM doing the same.
Just because it was available on the public internet doesn’t mean it was there legally. Google has a way to remove content from its index when asked, while OpenAI seems to have no way to do so (or no will to).
LWD@lemm.ee 11 months ago
The SFWA has actually talked about this: when they made their books more accessible, they became easier to scrape.
“Our authors and readers have been asking for this for a long time,” president and publisher Tom Doherty explained at the time. “They’re a technically sophisticated bunch, and DRM is a constant annoyance to them. It prevents them from using legitimately-purchased e-books in perfectly legal ways, like moving them from one kind of e-reader to another.”
But DRM-free e-books that circulate online are easy for scrapers to ingest.
The SFWA submission suggests “Authors who have made their work available in forms free of restrictive technology such as DRM for the benefit of their readers may have especially been taken advantage of.”
patatahooligan@lemmy.world 11 months ago
You are misrepresenting the issue. The issue is not whether a tool merely happens to be usable for copyright infringement in the hands of a malicious entity. The issue is whether LLM outputs are derivative works of their training data. You can’t compare this to tools like pencils and PCs, which are far more general-purpose and which are not built on stolen copyrighted works.
Notice also how AI companies bring up “fair use” in their arguments. This means they are not arguing that they avoided using copyrighted works without permission, nor that LLM output contains no copyrighted parts of its training data (they can’t argue that, because you can’t trace the flow of data through an LLM), but rather that their use of the works is novel enough to qualify as an exception. And that is a really shaky argument when their services are not actually novel at all. In fact, they are designing services that are as close as possible to the services provided by the original creators.
bassomitron@lemmy.world 11 months ago
I disagree, and I feel like you’re misrepresenting the issue just as much as you claim I am. LLMs can do far more than simply write stories; storytelling is just one capability among many.
I’m not a lawyer or legal expert, I’m just giving a layman’s opinion on a topic. I hope Sam Altman and his merry band get nailed to the wall, I really do. It’s going to be a clusterfuck of endless legal battles for the foreseeable future, especially now that OpenAI isn’t even pretending to be nonprofit anymore.
wewbull@feddit.uk 11 months ago
This story is about a non-fiction work.
What is the purpose of a non-fiction work? It’s to give the reader further knowledge on a subject.
Why does an LLM manufacturer train their model on a non-fiction work? To act as a substitute source of that knowledge.
End result is that readers consult the model instead of the original work, and the author sells nothing.
So, not only have they stolen their work, they’ve stolen their income and reputation.
bassomitron@lemmy.world 11 months ago
If you’re using an LLM as any form of authoritative source (and literally every LLM specifically warns NOT to do that), then you’re going to have a bad time. No one is using them to learn in any serious capacity. Ideally, the AI should absolutely cite its sources, and if someone figures out how to do that reliably, they’ll be made quite rich, I’d imagine.
SlopppyEngineer@lemmy.world 11 months ago
There’s a big difference between borrowing inspiration and using entire paragraphs of text or images wholesale. If GRRM used entire paragraphs of JK Rowling with just the names changed, and the same cover with a few different colors, you’d have the same fight. An LLM can do the first, but it also does the second.
“In the style of” is a different issue that’s being debated, since style isn’t protected by law. But apparently, if you ask for something in the style of an author, the LLM can get lazy and produce parts of the (copyrighted) source material instead of something original.
Blue_Morpho@lemmy.world 11 months ago
Just as the right query can get an LLM to output a paragraph of copyrighted material, the right query can get Google to give you a link to copyrighted material. Does that make all search engines illegal?