ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.
And just the other day I had people arguing to me that it simply wasn’t possible for ChatGPT to contain significant portions of copyrighted work in its model.
d3Xt3r@lemmy.nz 11 months ago
“private”. If it’s on the public-facing internet, it’s not private.
perviouslyiner@lemm.ee 11 months ago
“We don’t infringe copyright; the model output is an emergent new thing and not just a recital of its inputs”
“so these questions won’t reveal any copyrighted information then?”
(padme stare)
“right?”
QuaternionsRock@lemmy.world 11 months ago
This argument always seemed silly to me. LLMs, being a rough approximation of a human, appear to be capable of both generating original works and committing copyright infringement, just like a human. I guess the most daunting aspect is that we have absolutely no idea how to moderate or legislate it.
This isn’t even a particularly surprising result. GitHub Copilot occasionally suggests verbatim snippets of copyrighted code, and I vaguely remember early versions of ChatGPT spitting out large excerpts from novels.
Making statistical inferences from copyrighted data has long been considered fair use, but it’s obviously a problem that the results can be nearly identical to the source material. It’s like those “think of a number” tricks from when we were kids: I am allowed to analyze Twilight and publish information on the types of adjectives that tend to be used to describe the main characters, but if I apply an impossibly complex function to the text and the output happens to almost exactly match the input… yeah, I can’t publish that.
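A minimal sketch, in Python, of the kind of check that separates those two cases (the function names and the 8-word window are illustrative assumptions on my part, not anything from this thread or any real detection system):

```python
# Minimal sketch: measure how much of a generated text is verbatim
# reuse of a source work. The 8-word window is an arbitrary choice,
# not a legal threshold.
def ngrams(text, n=8):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(generated, source, n=8):
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)

# Publishing adjective statistics would score near 0.0; an "analysis"
# whose output nearly matches the input would score near 1.0.
```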
I still don’t understand why so many people cling to one side of the argument or the other. We’re clearly gonna have to reconcile AI with copyright law at some point, and polarized takes on the issue are only making everyone angrier.
FaceDeer@kbin.social 11 months ago
Indeed. People put that stuff up on the Internet explicitly so that it can be read. OpenAI's AI read it during training, which is exactly what it was made available for.
Overfitting is a flaw in AI training that developers have been working to solve for a long time, and they will keep working on it for reasons entirely divorced from copyright. An AI that simply spits out verbatim copies of its training data is a failure as an AI. Why would anyone spend millions of dollars and massive computing resources to replicate the functionality of a copy/paste operation?
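To make the memorization point concrete, here is a toy sketch (the polynomial setup is my own illustration; real LLM training is nothing like this in scale or machinery). A model with enough capacity to fit its training points exactly does great on them and badly everywhere else:

```python
import numpy as np

# Toy sketch of overfitting: a degree-9 polynomial through 10 noisy
# samples "memorizes" the training points perfectly but fails on
# unseen inputs. In miniature, this is the same failure mode as a
# model regurgitating its training text verbatim.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, size=10)

coeffs = np.polyfit(x_train, y_train, deg=9)  # exact fit: memorization

x_test = np.linspace(0, 1, 200)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - np.sin(2 * np.pi * x_test)) ** 2)
print(train_mse, test_mse)  # near-zero on training data, large on new data
```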
lemmyvore@feddit.nl 11 months ago
Storing a verbatim copy and using it for commercial purposes already breaks a lot of copyright terms, even if you don’t distribute the text further.
The exceptions you’re thinking about are usually made for personal use, or for limited use, like your browser obtaining a copy of the text on a page temporarily so you can read it. The licensing on most websites doesn’t grant you any additional rights beyond that — never mind the licensing of books and other stuff they’ve got in there.
NeoNachtwaechter@lemmy.world 11 months ago
A very short-sighted idea.
Copyrighted texts exist.
Maybe some text didn’t exactly fit your definition of public, but it has been used anyway.
null@slrpnk.net 11 months ago
What does copyright have to do with privacy?
Papergeist@lemmy.world 11 months ago
Perhaps this person didn’t present their opinion in the best way. I believe I agree with the sentiment they were trying to convey: you should assume anything you post on the Internet is going to be public.
If you post some pictures of yourself getting trashed at a club, you should know those pictures might resurface when you’re 40-something and working in a stuffy corporate environment. I doubt I am alone in saying I made the wrong decision because I never saw myself in that sort of workplace. I still might escape it, but it could go either way at this point.
To your point, though, there are instances where privacy is absolutely required, and I agree with you there too. We obviously need some set of unambiguous rules in place at this point.
pntha@lemmy.world 11 months ago
How do we know the ChatGPT models haven’t crawled the publicly accessible breach forums where private data is known to leak? I imagine the crawlers would have some “follow webpage attachments and then crawl” function, so surely they have crawled all sorts of leaked data online. But it’s also a genuine question, because I haven’t done any previous research.
d3Xt3r@lemmy.nz 11 months ago
We don’t, but from what I’ve seen, those forums either require registration or payment to access the data, and/or some special means to download it (e.g. BitTorrent). A simple web crawler wouldn’t be able to access it.
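For what it’s worth, a naive crawler in the spirit pntha imagines might look something like this hypothetical sketch (nothing here is OpenAI’s actual pipeline; the names and structure are mine). The point is that everything behind a login, payment, or torrent never gets past the response check:

```python
import urllib.parse

import requests
from bs4 import BeautifulSoup

# Hypothetical sketch of a naive "follow links and crawl" loop.
# Data behind registration, paywalls, or torrents is never reached:
# those requests come back as errors or login pages, not the data.
def crawl(url, seen=None, depth=2):
    seen = set() if seen is None else seen
    if depth == 0 or url in seen:
        return
    seen.add(url)
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        return
    # Protected pages typically return 401/403, or a login form
    # instead of the leaked data itself.
    if not resp.ok or "text/html" not in resp.headers.get("Content-Type", ""):
        return
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.find_all("a", href=True):
        crawl(urllib.parse.urljoin(url, a["href"]), seen, depth - 1)
```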