An LLM can essentially reproduce a work, and the whole point is to generate derivative works. So by its very nature, it runs into copyright issues.
Being derivative isn’t automatically copyright infringement. If LLMs are spitting out exact copies, or near-enough-to-exact copies, that’s one thing. But as you said, the whole point is to generate derivative works.
Halosheep@lemm.ee 4 months ago
My brain also takes information and creates derivative works from it.
Shit, am I also a data thief?
sugar_in_your_tea@sh.itjust.works 4 months ago
That depends, do you copy verbatim? Or do you process and understand concepts, and then create new works based on that understanding? If you copy verbatim, that’s plagiarism and you’re a thief. If you create your own answer, it’s not.
Current AI doesn’t actually “understand” anything, and “learning” is just ingesting input data. Ask it a question and it isn’t comprehending it: it matches your prompt against the parts of its training data that fit, regurgitates a remix of them, and usually omits the sources. That’s it.
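To make the “regurgitates a mix” claim concrete, here’s a minimal sketch in Python: a first-order Markov chain that can only recombine word sequences it has literally seen. Real LLMs are far more complex than this toy, but it shows what recombination without understanding looks like; the corpus and every name here are made up for illustration.

```python
import random
from collections import defaultdict

# Hypothetical toy corpus, invented purely for illustration.
corpus = (
    "the court found the use transformative "
    "the court found the copying verbatim "
    "fair use requires a transformative purpose"
).split()

# "Training" here is just recording which word follows which.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Remix the corpus word by word; no notion of meaning is involved."""
    word, out = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])  # pick any continuation ever seen
        out.append(word)
    return " ".join(out)

print(generate("the"))
# e.g. "the court found the copying verbatim" -- stitched-together
# fragments of the training text, with no attribution to sources.
```

Every output is assembled entirely from fragments of the training text, which is the behavior the comment above is describing.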
It’s a tricky line in journalism, since so much of it is borrowed, and it’s likewise tricky w/ AI. The main difference IMO is attribution: good journalists cite their sources; AI rarely does.