Comment on Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

ignirtoq@kbin.social ⁨10⁩ ⁨months⁩ ago

A model doesn't have to contain copies of all the copyrighted works it trained on in order to violate copyright law; a single one is enough.

However, this does bring up a very interesting question that I'm not sure the law (either statutory or common law) is established enough to answer: how easily accessible does a copy of a copyrighted work have to be from an otherwise openly accessible data store in order to violate copyright?

In this case, you can view the weights of a neural network model as that data store. As the network trains on a data set, some human-inscrutable portion of that data is encoded in those weights. The argument has been that because only a "portion" of each copyrighted work is encoded in the weights, and because the weights are an irreversible combination of all such "portions" from all of the training data, you cannot use the trained model to recreate a pristine chunk of the copyrighted training data large enough to be protected under copyright law. Attacks like this show that not to be the case.
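That "pristine chunk of sufficient size" claim is something you can check mechanically: compare model output against a candidate source document and flag any sufficiently long verbatim overlap. A minimal sketch (the function names and the 50-character threshold are illustrative, not from the paper):

```python
def longest_verbatim_overlap(output: str, document: str) -> int:
    """Length of the longest substring of `output` that appears verbatim in `document`."""
    best = 0
    n = len(output)
    for i in range(n):
        # Only try to beat the best match found so far, extending one char at a time.
        length = best + 1
        while i + length <= n and output[i:i + length] in document:
            best = length
            length += 1
    return best


def looks_memorized(output: str, document: str, threshold: int = 50) -> bool:
    """Flag output that reproduces at least `threshold` consecutive characters of the document."""
    return longest_verbatim_overlap(output, document) >= threshold
```

A real study would run this (or a faster suffix-array equivalent) over the whole training corpus rather than one document, but the principle is the same: memorization is measured as verbatim overlap above some length cutoff.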

However, attacks like this seem able to recover only random chunks of training data. Someone can't insert a specific copyrighted work into the training data, train the model, distribute the trained model (or access to it through some interface), and expect an attacker to craft a prompt that gets that specific work back out. In other words, it's really hard to orchestrate a violation of someone's copyright on a specific work using LLMs in this way. So the courts will need to decide whether that difficulty has any bearing, or whether even a non-zero possibility of it happening is enough to restrict someone's distribution of, or access to, a pre-trained model.
