Comment on Report: Potential NYT lawsuit could force OpenAI to wipe ChatGPT and start over

ArmokGoB@lemmy.dbzer0.com 1 year ago
I disagree. I think there should be zero regulation of the datasets as long as the produced content is noticeably derivative, in the same way that humans can produce derivative works using other tools.

HelloHotel@lemmy.world 1 year ago
Good in theory. The problem is that when the “creativity” value (which adds random noise, and in some setups forces the model to improvise) is set too low, you get whatever impression the content made on the AI, like an imperfect photocopy (a non-expert’s explanation of “memorization”). Too high and you get random noise.
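The “creativity” value described above is usually the sampling temperature (an assumption; the comment doesn’t name it). A minimal sketch of how temperature-scaled sampling produces both failure modes: near zero it collapses to picking the most likely token (regurgitation), while very high values flatten the distribution toward uniform noise.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Sample a token index from logits scaled by temperature.

    temperature -> 0: nearly deterministic; always picks the highest-scoring
    token, so output drifts toward memorized/most-reinforced continuations.
    temperature -> large: the distribution flattens toward uniform noise.
    """
    if temperature <= 0:
        # Greedy decoding: degenerate zero-temperature case.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```

This is the standard softmax-with-temperature trick, not OpenAI’s actual decoding code; real deployments combine it with extras like top-p/top-k filtering.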
adrian783@lemmy.world 1 year ago
LLMs are not human, the process of training an LLM is not human-like, and LLMs don’t have human needs or desires, or rights for that matter.
Comparing them to humans has been a flawed analogy since day 1.
synceDD@lemmy.world 1 year ago
LLM has no desires = no derivative works? Let an LLM handle your comments, they’ll make more sense.