The court’s ruling explicitly depended on the fact that Anthropic does not allow users to retrieve significant chunks of copyrighted text. Anthropic used entire copyrighted works to train the LLMs’ weights, but the models are configured not to actually copy those works out to the public user. The ruling says that if the copyright holders later develop evidence that it is possible to retrieve entire copyrighted works, or significant portions of a work, then they will have the right to sue over those facts.
But the facts before the court were that Anthropic’s LLMs have safeguards against distributing copies of identifiable copyrighted works to its users.
nodiratime@lemmy.world 4 days ago
Does it “generate” a 1:1 copy?
S_H_K@lemmy.dbzer0.com 4 days ago
Gives you versions like this
S_H_K@lemmy.dbzer0.com 4 days ago
Machine peepin’ is tha study of programs dat can improve they performizzle on a given task automatically.[41] It has been a part of AI from tha beginning.[e] In supervised peepin’, tha hustlin data is labelled wit tha expected lyrics, while up in unsupervised peepin’, tha model identifies patterns or structures up in unlabelled data.
There is nuff muthafuckin kindz of machine peepin’.
😗👌
kazerniel@lemmy.world 3 days ago
thanks I hate it xD
MTK@lemmy.world 3 days ago
You can train an LLM to generate 1:1 copies
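A toy illustration of that claim (not an actual LLM, and not anything from the thread): a character-level n-gram model "trained" on a single text memorizes it completely and, decoded greedily, replays it as a 1:1 copy. The text, context length, and function names below are all made up for the sketch; the point is just that a model overfit to its training data can emit that data verbatim.

```python
from collections import defaultdict

# Hypothetical example: a character-level model trained on one text.
text = "the quick brown fox jumps over the lazy dog"
K = 5  # context length; chosen so every K-char context in this text is unique

# "Training": record which character follows each K-character context.
model = defaultdict(list)
for i in range(len(text) - K):
    model[text[i:i + K]].append(text[i + K])

def generate(seed: str, length: int) -> str:
    """Greedy decoding: always take the first recorded continuation."""
    out = seed
    while len(out) < length:
        continuations = model.get(out[-K:])
        if not continuations:
            break  # no known continuation for this context
        out += continuations[0]
    return out

# Because the model memorized its only training text, greedy generation
# reproduces it exactly:
print(generate(text[:K], len(text)) == text)
```

Real LLMs train on vastly more data, so memorization is diluted rather than total, but the same mechanism is why verbatim regurgitation of training text is possible at all.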