Not sure what other people were claiming, but normally the point being made is that it’s not possible for a network to memorize a significant portion of its training data. It can definitely memorize significant portions of individual copyrighted works (like shown here), but the whole dataset is far too large compared to the model’s weights to be memorized.
Comment on Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data
TWeaK@lemm.ee 11 months ago
And just the other day I had people arguing to me that it simply wasn’t possible for ChatGPT to contain significant portions of copyrighted work in its database.
KingRandomGuy@lemmy.world 11 months ago
ayaya@lemdro.id 11 months ago
Even then there is no “database” that contains portions of works. The network is only storing the weights between tokens so if it is able to replicate anything verbatim it is just overfitted. Ironically the solution is to feed it even more works so it is less likely to be able to reproduce any single one.
Kbin_space_program@kbin.social 11 months ago
That's a bald faced lie.
E.g. and it can produce copyrighted works.
E.g. I can ask it what a Mindflayer is and it gives a verbatim description from copyrighted material.I can ask Dall-E "Angua Von Uberwald" and it gives a drawing of a blonde female werewolf. Oops, that's a copyrighted character.
KingRandomGuy@lemmy.world 11 months ago
I think what they mean is that ML models generally don’t directly store their training data, but that they instead use it to form a compressed latent space. Some elements of the training data may be perfectly recoverable from the latent space, but most won’t be. It’s not very surprising as a result that you can get it to reproduce copyrighted material word for word.
ayaya@lemdro.id 11 months ago
I think you are confused, how does any of that make what I said a lie?
TimeSquirrel@kbin.social 11 months ago
I can do that too. It doesn't mean I directly copied it from the source material. I can draw a crude picture of Mickey Mouse without having a reference in front of me. What's the difference there?
5BC2E7@lemmy.world 11 months ago
yea this “attack” could potentially sink closedAI with lawsuits.
NevermindNoMind@lemmy.world 11 months ago
This isn’t just an OpenAI problem:
We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT…
If a model uses copyrighten work for training without permission, and the model memorized it, that could be a problem for whoever created it, open, semi open, or closed source.
NaibofTabr@infosec.pub 11 months ago
Well of course not… it contains entire copies of copyrighted works in its database, not just portions.
ayaya@lemdro.id 11 months ago
The important distinction is that this “database” would be the training data, which it only has access to during training. It does not have access once it is actually deployed and running.
It is easy to think of it like a human taking a test. You are allowed to read your textbooks as much as you want while you study, but once you actually start the test you can only go off of what you remember. If would require you to have a perfectly photographic memory (or in the case of ChatGPT, terabytes upon terabytes of RAM) to be able to perfectly remember the entirety of your textbooks.
ignirtoq@kbin.social 11 months ago
It doesn't have to have a copy of all copyrighted works it trained from in order to violate copyright law, just a single one.
However, this does bring up a very interesting question that I'm not sure the law (either textual or common law) is established enough to answer: how easily accessible does a copy of a copyrighted work have to be from an otherwise openly accessible data store in order to violate copyright?
In this case, you can view the weights of a neural network model as that data store. As the network trains on a data set, some human-inscrutable portion of that data is encoded in those weights. The argument has been that because it's only a "portion" of the data covered by copyright being encoded in the weights, and because the weights are some irreversible combination of all of such "portions" from all of the training data, that you cannot use the trained model to recreate a pristine chunk of the copyrighted training data of sufficient size to be protected under copyright law. Attacks like this show that not to be the case.
However, attacks like this seem only able to recover random chunks of training data. So someone can't take a body of training data, insert a specific copyrighted work in the training data, train the model, distribute the trained model (or access to the model through some interface), and expect someone to be able to craft an attack to get that specific work back out. In other words, it's really hard to orchestrate a way to violate someone's copyright on a specific work using LLMs in this way. So the courts will need to decide if that difficulty has any bearing, or if even just a non-zero possibility of it happening is enough to restrict someone's distribution of a pre-trained model or access to a pre-trained model.
fubo@lemmy.world 11 months ago
Sure, which would create liability to that one work’s copyright owner; not to every author. Each violation has to be independently shown: it’s not enough to say “well, it recited Harry Potter so therefore it knows Star Wars too;” it has to be separately shown to recite Star Wars.
It’s not surprising that some works can be recited; just as it’s not surprising for a person to remember the full text of some poem they read in school. However, it would be very surprising if all works from the training data can be recited this way, just as it’s surprising if someone remembers every poem they ever read.
TWeaK@lemm.ee 11 months ago
I don’t think it really matters how accessible it is, what matters is the purpose of use. In a nutshell, fair use covers education, news and criticism. After that, the first consideration is whether the use is commercial in nature.
ChatGPT’s use isn’t education (research), they’re developing a commercial product - even the early versions were not so much prototypes but a part of the same product they have today. Even if it were considered as a research fair use exception, the product absolutely is commercial in nature.
Whether or not data was openly accessible doesn’t really matter - more than likely the accessible data itself is a copyright violation. That would be a separate violation, but it absolutely does not excuse ChatGPT’s subsequent violation. ChatGPT also isn’t just reading the data at its source, it’s copying it into its training dataset, and that copying is unlicensed.
NaibofTabr@infosec.pub 11 months ago
ChatGPT is a large language model. The model contains word relationships - a nebulous collection of rules for stringing word together. The model does not contain information. In order for ChatGPT to answer flexibly answer questions, it must have access to information for reference - information that it can index, tag and sort for keywords.
TWeaK@lemm.ee 11 months ago
The dataset ChatGPT uses to train on contains data copied unlawfully. They’re not just reading the data at its source, they’re copying the data into a training database without sufficient license.
Whether ChatGPT itself contains all the works is debatable - is it just word relationships when the system can reproduce significant chunks of copyrighted data from those relationships? - but the process of training inherently requires unlicensed copying.
In terms of fair use, they could argue a research exemption, but this isn’t really research, it’s product development. The database isn’t available as part of scientific research, it’s protected as a trade secret. Even if it was considered research, it absolutely is commercial in nature.
In my opinion, there is a stronger argument that OpenAI have broken copyright for commercial gain than that they are legitimately performing fair use copying for the benefit of society.
ayaya@lemdro.id 11 months ago
I’m honestly not sure what you’re trying to say here. If by “it must have access to information for reference” you mean while it is running, it doesn’t. Like I said that information is only available during training.
MxM111@kbin.social 11 months ago
That's not true. ChatGPT does not have database - it does not have any memory at all. All it "remembers" is what you type on the screen.
EpicFailGuy@kbin.social 11 months ago
@MxM111
@stopthatgirl7 @TWeaK @NaibofTabr
if it remembers it has to be stored somwhere, if it has to be stored ther's some type of memory with information saved in it. .... call it what you will.
tabarnaski@sh.itjust.works 11 months ago
You remember some dialogue from your favorite movie. Does this mean your neurons store copyrighted work?
NaibofTabr@infosec.pub 11 months ago
OK, so if I ask it a question for reference information, where is it that ChatGPT draws the answer from? Information is not stored in the model itself.
MxM111@kbin.social 11 months ago
There is a memory, a storage, that would not be called a database, which encodes interaction "weights" of neurons. Those parameters where modified during training process and in some sense the information is somehow encoded there. But it is not possible to decode the whole book word to word. It is very similar to our memory in this sense. Do you remember any book word to word?