Comment on Report: Potential NYT lawsuit could force OpenAI to wipe ChatGPT and start over
BURN@lemmy.world 1 year ago
Good
AI should not be given free rein to train on anything and everything we’ve ever created. Copyright holders should be able to decide if their works are allowed to be used for model training, especially commercial model training. We’re not going to stop a hobbyist, but Google/Microsoft/OpenAI should be paying for the materials they’re using and compensating the creators.
ArmokGoB@lemmy.dbzer0.com 1 year ago
I disagree. I think there should be zero regulation of the datasets as long as the produced content is noticeably derivative, in the same way that humans can produce derivative works using other tools.
adrian783@lemmy.world 1 year ago
LLM are not human, the process to train LLM is not human-like, LLM don’t have human needs or desires, or rights for that matter.
comparing it to humans has been a flawed analogy since day 1.
synceDD@lemmy.world 1 year ago
LLMs have no desires = no derivative works? Let an LLM handle your comments; they would make more sense.
HelloHotel@lemmy.world 1 year ago
Good in theory. The problem is that when the “creativity” value (the one that adds random noise, and in some setups forces the model to improvise) is too low, you get back whatever impression the content made on the AI, like an imperfect photocopy (a non-expert’s explanation of “memorization”). Too high and you get random noise.
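The knob being described is usually called the sampling temperature. A minimal toy sketch of how it trades memorization against noise (the candidate continuations and their scores here are hypothetical, not from a real model):

```python
import math
import random

# Hypothetical next-token scores (logits); a real model has tens of thousands.
logits = {"it was the worst of times.": 4.0, "verbatim photocopy": 3.5, "random noise": 1.0}

def sample(logits, temperature):
    # Subtract the max score for numerical stability, then apply temperature.
    top = max(logits.values())
    weights = {tok: math.exp((score - top) / temperature) for tok, score in logits.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok

# Near-zero temperature: (almost) always the top-scoring continuation,
# which is how near-verbatim "memorized" text can come back out.
print(sample(logits, temperature=0.01))

# High temperature: the distribution flattens toward uniform, i.e. noise.
print(sample(logits, temperature=100.0))
```

With the temperature near zero the weights collapse onto the highest-scoring option; as it grows, all options approach equal probability.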
Hangglide@lemmy.world 1 year ago
Bullshit. If I learn engineering from a textbook, or a website, and then go on to design a cool new widget that makes millions, the copyright holder of the textbook or website should get zero dollars from me.
It should be no different for an AI.
Shazbot@lemmy.world 1 year ago
Every time I see this argument it reminds me of how little people understand how copyright works.
- When you buy that book the monetary amount is fair compensation for the contents inside. What you do afterwards is your own business so long as it does not violate the terms within the fine print of the book (no unauthorized reproductions, etc.)
- When someone is contracted for an ad campaign there will be usage rights in the contract detailing the time frame and scope for fair compensation (the creative fee + expenses). If the campaign does well, they can negotiate residuals (if not already included) because the scope now exceeds the initial offer of fair compensation.
- When you watch a movie on TV, the copyright holder(s) of that movie are given fair compensation for the number of times played. From the copyright holders, every artist is paid a royalty. Jackie Chan and Chris Tucker still get royalty checks whenever Rush Hour 2 airs or is streamed, as do all the other obscure actors and contributing artists.
- Deviant Art and ArtStation provide free hosting for artists in exchange for a license that lets them distribute images to visitors. The artists have agreed to fair compensation in the form of free hosting and potential promotion should their work start trending, reaching all front page visitors of the site. Similarly, when the artists use the printing services of these sites they provide a license to reproduce and ship their works, as fair compensation the sites receive a portion of the artists’ asking price.
The crux is fair compensation. The rights holder has to agree to the usage, with clear terms and conditions for their creative works, in exchange for a monetary sum (single or recurring) and/or a service of similar or equal value with a designated party. That’s why AI continues to be in hot water. Just because you can suck up the data does not mean the data is public domain. Nor does it mean the license used between interested parties transfers to an AI company during collection. If AI companies want to monetize their services, they’re going to have to provide fair compensation for the non-public domain works used.
Treczoks@lemmy.world 1 year ago
Yes, but what about you going into teaching engineering and writing a textbook for it that is awfully close to the ones you used? Current AI is at a stage where it just “remixes” the content it has gobbled up, and is not (yet) advanced enough to actually learn and derive from it.
VonCesaw@lemmy.world [bot] 1 year ago
Human experience considers context, experience, and relation to previous works
‘AI’ has the words verbatim in its database and will occasionally spit them out verbatim
Maven@lemmy.sdf.org 1 year ago
It doesn’t. The original data is nowhere in its dataset. Words are nowhere in its dataset. It stores how often certain tokens (numbers computationally equivalent to language fragments; not even words, but just a few letters or punctuation, often chunks of words) are found together in sentences written by humans, and uses that to generate human-sounding sentences. The sentences it returns are thereby a massaged average of what it predicts a human would say in that situation.
If you say “It was the best of times,” and it returns “it was the worst of times.”, it’s not because “it was the best of times, it was the worst of times.” is literally in its dataset, it’s because after converting what you said to tokens, its dataset shows that the latter almost always follows the former. From the AI’s perspective, it’s like you said the token string (03)(153)(3181)(359)(939)(3)(10)(108), and it found that the most common response to that by far is (03)(153)(3181)(359)(61013)(12)(10)(108).
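The co-occurrence idea above can be sketched with a toy word-level bigram model. This is purely illustrative: real LLMs use subword tokens and a neural network, not raw counts, and the corpus here is just the Dickens line from the example.

```python
from collections import Counter, defaultdict

# Treat each word as a "token" and count, for every token,
# which token follows it in the training text.
corpus = "it was the best of times it was the worst of times".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often after `token` in training."""
    return follows[token].most_common(1)[0][0]

# Generate a continuation from a prompt, one token at a time.
token = "best"
out = [token]
for _ in range(3):
    token = predict_next(token)
    out.append(token)

print(" ".join(out))  # prints "best of times it"
```

The model never stores the sentence itself, only the counts; yet because “of” almost always follows “best” in its data, the familiar phrase is regenerated, which is the sense in which statistics can reproduce training text.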
HelloHotel@lemmy.world 1 year ago
Impressioning and memorization: it memorized the impression (“sensation”) of what it’s like to have the text “It was the best of times,” in the buffer, and “instinctively” outputs its impression, “it was the worst of times.”, knowing that’s “correct”.
Mouselemming@sh.itjust.works 1 year ago
Last time I looked, textbooks were fucking expensive. You might be able to borrow one from the library, of course. But most people who study something pay up front for the information they’re studying.
MindSkipperBro12@lemmy.world 1 year ago
You sound like an old man who’s scared of changing times.
BURN@lemmy.world 1 year ago
Or a creative who hates to see the entire soul of the human race boiled down to a computer doing a whole lot of math.
AI isn’t going to put office workers out of a job, not just yet, but it’s sure going to end the careers of a whole lot of artists, who won’t get entry-level opportunities anymore because an AI can do 90% of the job and all that’s needed is someone to sort the outputs.
TheDarkKnight@lemmy.world 1 year ago
I understand the sentiment (and agree on moral grounds), but I think this would put us at an extreme disadvantage in the development of this technology compared to competing nations. Unless you can get all countries to agree, and somehow enforce it, I think it dramatically hinders our ability to push forward in this space.
Soundhole@lemm.ee 1 year ago
I disagree. However, I believe the models should be open-sourced by law.
BURN@lemmy.world 1 year ago
Open sourcing the models does absolutely nothing. The fact of the matter is that the people who create these models can’t quantifiably show how they work, because those layers have been abstracted so far from the code that there’s no way to understand them.
Veraxus@kbin.social 1 year ago
Yeah! Let’s burn fair use to the ground! Technology is scary! Destroy it all!
FluffyPotato@lemm.ee 1 year ago
I don’t think AI is criticising or parodying that content. Also, ChatGPT is a glorified chatbot that can just make its answers seem human; it’s not some world-saving technology.
coheedcollapse@lemmy.world 1 year ago
With that mindset, only the powerful will have access to these models.
Places like Reddit, Google, Facebook, etc.: places that can rope you into giving away rights to your data with TOS stipulations.
Locking down everything available on the Internet by piling more bullshit onto already draconian copyright rules isn’t the answer. It surprises the shit out of me how quickly artists, writers, and creators piled onto the side of Disney, the RIAA, and other former enemies the second they started perceiving ML as a threat to their livelihood.