The issue isn’t that you can coax an AI into handing unaltered copyrighted books out of the trunk; the issue is that if you were to open the hood, you’d see that the entire engine is made of unaltered copyrighted books.
All those “anti-hacking” measures are just there to obfuscate the fact that the unaltered works are in use and recallable at all times.
mm_maybe@sh.itjust.works 2 months ago
Excuse me, what? You think Huggingface is hosting hundreds of checkpoints, each of which is a multiple of its training data, which is on the order of terabytes or petabytes of disk space? I don’t know if I agree with the compression argument myself, but for other reasons; your retort is objectively false.
Hackworth@lemmy.world 2 months ago
Just taking GPT-3 as an example: its raw training set was 45 terabytes, yes. But that set was filtered and processed down to about 570 GB, and GPT-3 was only actually trained on that 570 GB. The model itself is about 700 GB. Much of an LLM’s generalized intelligence comes from abstraction to other contexts.
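A quick back-of-envelope check of those numbers (a sketch; the 175B parameter count and 4-byte fp32 weights come from the published GPT-3 figures, not from this thread):

```python
# Sanity-check the sizes quoted above.
# Assumptions (not stated in the thread): GPT-3 has ~175 billion
# parameters stored as 4-byte fp32 floats.

params = 175e9            # GPT-3 parameter count (assumed)
bytes_per_param = 4       # fp32 weight width (assumed)

model_gb = params * bytes_per_param / 1e9
print(f"model size: {model_gb:.0f} GB")   # ~700 GB, matching the comment

raw_gb = 45 * 1000        # 45 TB raw crawl, in GB
filtered_gb = 570         # filtered training text, in GB
print(f"kept after filtering: {filtered_gb / raw_gb:.1%}")  # ~1.3%
```

So the 700 GB model is roughly the same order of size as the 570 GB of filtered text it was trained on, which is why neither side of this argument can settle it on disk-space grounds alone.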