cross-posted from: nom.mom/post/121481
OpenAI could be fined up to $150,000 for each piece of infringing content. https://arstechnica.com/…/report-potentia…
Submitted 1 year ago by wanderingmagus@lemm.ee to technology@lemmy.world
When OpenAI commits copyright infringement no one bats an eye, but when I do it everyone downvotes me
Yeah, I don’t get it. ChatGPT is not “fair use” and there is no credit given to anyone; it’s a solid case against them.
I just wonder if they’ll get out of it because LLMs do reword the information instead of spitting it back out verbatim. It’s the same reason I think the image generators are safe from copyright law - it’s just different enough that they could plausibly convince a judge with a fair use argument.
What bothers me even more is all the text they had to scrape to create ChatGPT… That seems like a novel problem for the legal system because you know there’s no way they paid for all of it.
I’m not 100% sure where I stand, but for argument’s sake: are you sure about that? It sure is transformative!
Classic joke, something like: if you owe the bank $100, it’s your problem; if you owe them a million, it’s their problem.
Only on lemmy.world
Society moment.
I’ll take things that won’t happen for $200
Oh God, I better not learn anything from a book or I’m fucked.
Fuck man I’ve watched sooooo many movies… the MPAA is gonna be on my ass…
Can’t they just move to a country where copyright doesn’t exist? It’s the goddamn internet.
They could, but presumably they want to do business and sell their products in countries that do have those copyright protections, or to other companies based there.
Make another “Russian bot” perhaps. /s Honestly, Russia is good for getting away with pirating things, because corruption.
This is the best summary I could come up with:
The result, experts speculate, could be devastating to OpenAI, including the destruction of ChatGPT’s dataset and fines up to $150,000 per infringing piece of content.
If the Times were to follow through and sue ChatGPT-maker OpenAI, NPR suggested that the lawsuit could become “the most high-profile” legal battle yet over copyright protection since ChatGPT’s explosively popular launch.
This speculation comes a month after Sarah Silverman joined other popular authors suing OpenAI over similar concerns, seeking to protect the copyright of their books.
As of this month, the Times’ TOS prohibits any use of its content for “the development of any software program, including, but not limited to, training a machine learning or artificial intelligence (AI) system.”
In the memo, the Times’ chief product officer, Alex Hardiman, and deputy managing editor Sam Dolnick said a top “fear” for the company was “protecting our rights” against generative AI tools.
the memo asked, echoing a question being raised in newsrooms that are beginning to weigh the benefits and risks of generative AI.
I’m a bot and I’m open source!
Ironic…
Surprising that there are so many copyright bootlickers on lemmy
My bad, I’ll just let big corporations train on every piece of my personal creative output as “Fair Use” so they can sell it for profit. That’ll show ‘em!
Satire aside, you don’t seem to understand what copyright is or are confusing it with laissez-faire capitalism. You can’t bootlick rights.
Brb, going to go bootlick the European Convention on Human Rights.
LAME. Butthurt people is all I see. People like that are what cause huge roadblocks to advancing humanity. I wish someone made a law that made it impossible to force someone to stop doing something simply because they are using AI to train and it hurts their pockets. Let AI learn from what we’ve created. The possibilities are endless. So many good things can come from it, but all everyone wants to do is look at the negatives, of which a massive majority are easily solvable.
One tiny example: think of what AI can do in a few years’ time with training on medicine alone. For all we know, AI could figure out cancer for us. But with everyone clamping down on this shit, we’ll never know.
Leave it to humans to ruin great things. If aliens ever visit this planet I’m ratting out every last human against technology 🤣
If ChatGPT is removed, it will really have a big impact on users who have gotten used to its convenience, especially with so many ChatGPT-style services currently exploding in popularity.
BURN@lemmy.world 1 year ago
Good
AI should not be given free rein to train on anything and everything we’ve ever created. Copyright holders should be able to decide if their works are allowed to be used for model training, especially commercial model training. We’re not going to stop a hobbyist, but Google/Microsoft/OpenAI should be paying for the materials they’re using and compensating the creators.
coheedcollapse@lemmy.world 1 year ago
With that mindset, only the powerful will have access to these models.
Only places like Reddit, Google, and Facebook: places that can rope you into giving away the rights to your data through TOS stipulations.
Locking down everything available on the Internet by piling more bullshit onto already draconian copyright rules isn’t the answer, and it surprises the shit out of me how quickly artists, writers, and creators piled onto the same side as Disney, the RIAA, and other former enemies the second they started perceiving ML as a threat to their livelihood.
ArmokGoB@lemmy.dbzer0.com 1 year ago
I disagree. I think that there should be zero regulation of the datasets as long as the produced content is noticeably derivative, in the same way that humans can produce derivative works using other tools.
adrian783@lemmy.world 1 year ago
LLMs are not human, the process used to train LLMs is not human-like, and LLMs don’t have human needs or desires, or rights for that matter.
Comparing them to humans has been a flawed analogy since day 1.
HelloHotel@lemmy.world 1 year ago
Good in theory. The problem is that when the “creativity” setting (the value that adds random noise and, in some setups, forces the model to improvise) is too low, you get back whatever impression the content made on the AI, like an imperfect photocopy (a non-expert explanation of “memorization”). Too high and you get random noise.
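For anyone curious, here’s a rough Python sketch of what that knob does. It’s a generic softmax-with-temperature sampler written for illustration, not OpenAI’s actual decoding code: near-zero temperature always picks the single most likely token (the photocopy-like behavior), while a very high temperature flattens the distribution toward random noise.

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float, rng=None) -> int:
    """Pick a next-token id from raw logits at a given temperature."""
    rng = rng or np.random.default_rng()
    if temperature <= 1e-6:
        # Effectively greedy decoding: always the single most probable token,
        # which is where the verbatim, photocopy-like output comes from.
        return int(np.argmax(logits))
    scaled = logits / temperature           # low T sharpens, high T flattens
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Toy logits for a 5-token vocabulary
logits = np.array([4.0, 2.0, 1.0, 0.5, 0.1])
print(sample_token(logits, temperature=0.0))  # always token 0 ("memorization")
print(sample_token(logits, temperature=5.0))  # nearly uniform, i.e. noise
```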
Hangglide@lemmy.world 1 year ago
Bullshit. If I learn engineering from a textbook, or a website, and then go on to design a cool new widget that makes millions, the copyright holder of the textbook or website should get zero dollars from me.
It should be no different for an AI.
Shazbot@lemmy.world 1 year ago
Every time I see this argument it reminds me of how little people understand how copyright works.
The crux is fair compensation. The rights holder has to agree to the usage, with clear terms and conditions for their creative works, in exchange for a monetary sum (single or recurring) and/or a service of similar or equal value with a designated party. That’s why AI continues to be in hot water. Just because you can suck up the data does not mean the data is public domain. Nor does it mean the license used between interested parties transfers to an AI company during collection. If AI companies want to monetize their services, they’re going to have to provide fair compensation for the non-public-domain works they used.
Treczoks@lemmy.world 1 year ago
Yes, but what about you going into teaching engineering and writing a textbook for it that is awfully close to the ones you used? Current AI is at a stage where it just “remixes” the content it gobbled up, and is not (yet) advanced enough to actually learn and derive from it.
VonCesaw@lemmy.world [bot] 1 year ago
Human experience considers context, experience, and relation to previous works
‘AI’ has the words verbatim in its database and will occasionally spit them out verbatim
Mouselemming@sh.itjust.works 1 year ago
Last time I looked, textbooks were fucking expensive. You might be able to borrow one from the library, of course. But most people who study something pay up front for the information they’re studying from.
MindSkipperBro12@lemmy.world 1 year ago
You sound like an old man who’s scared of changing times.
BURN@lemmy.world 1 year ago
Or a creative who hates to see the entire soul of the human race boiled down to a computer doing a whole lot of math.
AI isn’t going to put office workers out of a job, not just yet, but it’s sure going to end the careers of a whole lot of artists who won’t get entry-level opportunities anymore, because an AI can do 90% of the job and all that’s needed is someone to sort the outputs.
TheDarkKnight@lemmy.world 1 year ago
I understand the sentiment (and agree on moral grounds), but I think this would put us at an extreme disadvantage in the development of this technology compared to competing nations. Unless you can get all countries to agree and somehow enforce this, I think it dramatically hinders our ability to push forward in this space.
Soundhole@lemm.ee 1 year ago
I disagree. However, I believe the models should be open-sourced by law.
BURN@lemmy.world 1 year ago
Open sourcing the models does absolutely nothing. The fact of the matter is that the people who create these models aren’t able to quantifiably show how they work, because the models have been abstracted so far from the code that there’s no way to understand them.
Veraxus@kbin.social 1 year ago
Yeah! Let’s burn fair use to the ground! Technology is scary! Destroy it all!
FluffyPotato@lemm.ee 1 year ago
I don’t think AI is criticising or parodying that content. Also, ChatGPT is a glorified chatbot that can just make its answers seem human; it’s not some world-saving technology.