Comment on Why are people seemingly against AI chatbots aiding in writing code?
tabular@lemmy.world 1 month ago
If the AI was trained on code that people permitted to be freely shared, then go ahead. Taking code and ignoring the software license is widely considered a dick move, even by people who use AI.
Some people choose a copyleft software license to ensure users have software freedom, and this AI (a math process) circumvents that. [A copyleft license lets you use the code as long as you release the rest of the program under the same license - so downstream users get the same rights you did.]
simplymath@lemmy.world 1 month ago
I hate big tech too, but I’m not really sure how the GPL or MIT licenses (for example) would apply. LLMs don’t really memorize stuff the way a database would, and there are certain (academic/research) domains that would almost certainly fall under fair use. LLMs aren’t really capable of storing the entire training set, though I admit there are almost certainly edge cases where stuff is reproduced verbatim.
I’m not advocating for OpenAI by any means, but I’m genuinely skeptical that most copyleft licenses have any stake in this. There’s no static linking or source code distribution happening. Many basic algorithms don’t fall under copyright at all, and, in practice, Stack Overflow code is copy/pasted all the time without being released under any special license.
If your code is on GitHub, it really doesn’t matter what license you put in the repository – you’ve already agreed to let any user “fork” it for any reason whatsoever.
tabular@lemmy.world 1 month ago
Whether it’s a complicated neural network or a database matters not. By design, it outputs portions of the code that was used as input.
If you can take GPL code and “not” distribute it via complicated maths, then that circumvents the license. That won’t do, friendo.
simplymath@lemmy.world 1 month ago
For example, if I ask it to produce Python code for addition, which GPL’d library is it drawing from?
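To make that concrete, here’s the kind of trivially generic output I mean (my own toy snippet, not actual output from any particular model):

```python
# Toy illustration: code this generic appears in countless tutorials
# and codebases, so it can't be meaningfully traced back to any
# single GPL'd library.
def add(a, b):
    """Return the sum of two numbers."""
    return a + b

print(add(2, 3))  # prints 5
```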
I think it’s clear that the fair use doctrine no longer applies when OpenAI turns it into a commercial code assistant, but then it gets a bit trickier when used for research or education purposes, right?
I’m not trying to be obtuse-- I’m an AI researcher who is highly skeptical of AI. I just think the imperfect compression that neural networks use to “store” data is a bit less clear than copy/pasting code wholesale.
Would you agree that somebody reading source code and then reimplementing it (assuming no reverse engineering or proprietary source code) would not violate the GPL?
If so, then the argument that these models infringe on rights holders seems to hinge on the claim that their exact work was reproduced verbatim without attribution or license compliance. That surely happens sometimes, but it is not, in general, something these models are capable of, since they’re using lossy compression to “learn” the model parameters. As an additional point, it would be straightforward to comply with DMCA requests using any number of published “forced forgetting” methods.
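To illustrate what I mean by the verbatim argument, here’s a rough sketch of how you might check a generated snippet against a known source file. The filenames and the 80-character threshold are arbitrary assumptions for illustration, not an established tool:

```python
import difflib

# Rough sketch: flag long verbatim overlaps between a generated snippet
# and a known GPL'd source file. Filenames and the 80-character threshold
# are made-up assumptions for the example.
generated = open("model_output.py").read()
original = open("gpl_source.py").read()

matcher = difflib.SequenceMatcher(None, generated, original)
match = matcher.find_longest_match(0, len(generated), 0, len(original))

if match.size >= 80:
    print("long verbatim overlap found -- worth a closer look")
    print(generated[match.a : match.a + match.size])
else:
    print("no long verbatim overlap; only generic idioms shared")
```

Short overlaps are just the boilerplate any two codebases share; it’s the long exact runs that would actually raise a license question.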
Then, that raises a further question.
If I as an academic researcher wanted to make a model that writes code using GPL’d training data, would I be in compliance if I listed the training data and licensed my resulting model under the GPL?
I work for a university and hate big tech as much as anyone on Lemmy. I am just not entirely sure the GPL makes sense here. GPL 3 was written because GPL 2 had loopholes that Microsoft exploited, and I suspect their lawyers are pretty well informed on the topic.
tabular@lemmy.world 1 month ago
The corresponding training data is the best bet for seeing which code an output might have been copied from. This applies to humans too: to avoid lawsuits, reverse engineering projects use a clean room strategy, requiring contributors to have never seen the original code. That way they can argue the contributors can’t possibly be copying, even from memory (an imperfect form of compression, too).
If the training data doesn’t include GPL code then it can’t violate the GPL. However, OpenAI argue they have to use copyrighted works to make certain AIs (if I recall correctly). That’s a problem for me.
My understanding is that AI-generated media can’t be copyrighted, as it wasn’t a person being creative - like the monkey selfie copyright dispute.