I would like to take a crack at this. There is a recent trend going around of "Ghiblifying" one's picture. It's basically converting a photo into a Studio Ghibli-style image. If the model had been trained only on free sources, this would not be possible.
Internally, an LLM works through networks that activate based on certain signals. When you ask it a question, it assembles a network of similar-looking words and gives that back to you. When you convert an image, you are doing something similar. You cannot form these networks, or the thresholds at which they activate, without seeing copyrighted images from Studio Ghibli. There is no way in hell or heaven for that to happen.
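The "activates above a threshold" idea can be sketched as a single artificial neuron. The weights and threshold below are made-up illustration values; in a real model they are learned from the training data, which is the crux of the argument: without the training images, there are no weights.

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) only if the weighted sum of inputs crosses the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A toy detector that only activates for a strong combined signal.
# Weights and threshold are arbitrary here; training is what sets them.
print(neuron([0.9, 0.8], [0.5, 0.5], threshold=0.8))  # fires: 0.85 >= 0.8
print(neuron([0.2, 0.1], [0.5, 0.5], threshold=0.8))  # silent: 0.15 < 0.8
```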
OpenAI trained their models on pirated material, just like Meta did. So when an AI produces an image in the style of something, it should attribute the person it actually took that style from. That's not what's happening. Instead it just makes more money for the thief.
Carrot@lemmy.today 11 months ago
I think your understanding of generative AI is incorrect. It's not just "logic and RNG". It is using training data (read as both copyrighted and uncopyrighted material) to come up with a model of "correctness" or "expectedness". If you then give it a pattern (read as a question or prompt), it checks its "expectedness" model for whatever should come next. If you ask it "how many cups in a pint", it will check the most common thing it has seen after that exact string of words in its training data: 2. If you ask for a picture of something "in the style of Van Gogh", it will spit out something with thick paint and swirls, as those are the characteristics of the pictures in its training data that have been tagged with "Van Gogh". These responses are not brand new; they are merely a representation of the training data that would most work as a response to your request. In this case, if any of the training data is copyrighted, then attribution must be given, or at the very least permission to use this data must be given by the current copyright holder.
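The "most common thing it has seen after that exact string of words" idea can be sketched as a literal lookup table of continuations. Real models use learned neural weights rather than a table, and the training strings below are invented for illustration, but the principle of predicting the most expected continuation from training data is the same:

```python
from collections import Counter, defaultdict

# Made-up "training data" for illustration only.
training_data = [
    "how many cups in a pint ? 2",
    "how many cups in a pint ? 2",
    "how many pints in a quart ? 2",
    "how many cups in a quart ? 4",
]

# Build a table: context (all words so far) -> counts of each next word.
next_word_counts = defaultdict(Counter)
for line in training_data:
    words = line.split()
    for i in range(1, len(words)):
        next_word_counts[tuple(words[:i])][words[i]] += 1

def predict(prompt):
    """Return the most common continuation seen after this exact prompt."""
    counts = next_word_counts.get(tuple(prompt.split()))
    if not counts:
        return None  # never saw this context in the training data
    return counts.most_common(1)[0][0]

print(predict("how many cups in a pint ?"))  # -> '2'
```

Note that `predict` can only echo what was in its table; nothing comes out that didn't, in some statistical sense, go in.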
riskable@programming.dev 11 months ago
If it runs on a computer, it’s literally “just logic and RNG”. It’s all transistors, memory, and an RNG.
The data used to train an AI model is copyrighted. It’s impossible for something to exist without copyright (in the past 100 years). Even public domain works had copyright at some point.
This is not correct. Every artist ever has been trained with copyrighted works, yet they don’t have to recite every single picture they’ve seen or book they’ve ever read whenever they produce something.
Carrot@lemmy.today 11 months ago
Sure, but this is a bad faith argument. You can say this about anything. Everything is made up of other stuff, it’s what someone has done to combine or use those elements that matters. You could extend this to anything proprietary. Manufacturing equipment is just a handful of metals, rubbers, and plastics. However, the context in which someone uses those materials is what matters when determining if copyright laws have been broken.
If the data used to train the model was copyrighted data acquired without explicit permission from the data owners, it itself cannot be copyrighted. You can’t take something copyrighted by someone else, put it in a group of stuff that is also copyrighted by others, and claim you have some form of ownership over that collection of works.
You speak confidently, but I don’t think you understand the problem area enough to act as an authority on the topic.
Laws can be different for individuals and companies. Hell, terms of use can be different for two different individuals, and the copyright owner actually gets a say in how their work can be used by different groups of people. For instance, with some 3D art software, students can use it for free, but their use agreement says they cannot profit off of anything they make. Non-students have to pay, but can sell their work without consequences. Companies have to pay even more, but oftentimes get bulk discounts if they are buying licenses for their whole team.
Artists have something of value: AI training data. We know it is valuable to AI companies, because those companies are reaching out to artists and asking to buy the rights to train their models on their work. If an AI company just uses an artist's work as training data without permission, it is stealing the potential revenue the artist could have made selling it to a different AI company. Taking away the revenue potential of someone's work is the basis for a copyright/fair-use violation.