Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not
nednobbins@lemmy.zip 1 week ago
> Human learning requires understanding, which AI is not capable of.
How could anyone know this?
Is there some test of understanding that humans can pass and AIs can’t? And if there are humans who can’t pass it, do we consider them unintelligent?
We don’t even need to set the bar that high. Is there some definition of “understanding” that humans meet and AIs don’t?
gaja@lemm.ee 1 week ago
It’s literally in the phrase “statistically optimized.” This is like arguing for your preferred deity. It’ll never be proven, but we have evidence to draw our own conclusions. As it is now, AI doesn’t learn or understand the same way humans do.
nednobbins@lemmy.zip 1 week ago
So you’re confident that human learning involves “understanding” which is distinct from “statistical optimization”. Is this something you feel in your soul or can you define the difference?
gaja@lemm.ee 1 week ago
Yes. You learned not to touch a hot stove either from experience or from a warning. That fear was cemented by your understanding that it would hurt. An AI will tell you not to touch a hot stove (most of the time) because the words “hot”, “stove”, “pain”, etc… pop up in its dataset together millions of times. As things are, they’re barely comparable. The only reason people keep arguing is because the output is very convincing. Go and download pytorch and read some stuff, or Google it. I’ve even asked deepseek for you:
Can AI learn and understand like people?
1. Learning: Similar in Some Ways, Different in Others
2. Understanding: AI vs. Human Cognition
3. Strengths & Weaknesses
✔ AI Excels At:
❌ AI Falls Short At:
4. Current AI (Like ChatGPT) is a "Stochastic Parrot"
5. Future Possibilities (AGI)
Conclusion:
AI can simulate learning and understanding impressively, but it doesn’t experience them like humans do. It’s a powerful tool, not a mind.
Would you like examples of where AI mimics vs. truly understands?
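To make the “statistical optimization” point above concrete, here is a minimal toy sketch in PyTorch (made-up sentences, hypothetical variable names, not any real model’s training code): a tiny bigram next-word predictor trained by gradient descent. It ends up assigning “pain” a higher probability after “causes” purely because those words co-occur more often in its data, which is the kind of co-occurrence learning described above.

```python
# Toy sketch: a bigram next-word model trained on made-up sentences.
# It only minimizes a statistical loss over co-occurrences; nothing here
# resembles "understanding" in the human sense.
import torch
import torch.nn as nn

sentences = [
    "the hot stove causes pain",
    "touching the hot stove causes pain",
    "the cold water causes relief",
]
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Build (context word, next word) training pairs -- a plain bigram model.
pairs = [(idx[a], idx[b]) for s in sentences
         for a, b in zip(s.split(), s.split()[1:])]
xs = torch.tensor([p[0] for p in pairs])
ys = torch.tensor([p[1] for p in pairs])

model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):  # gradient descent on co-occurrence statistics
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    opt.step()

# After "causes", the data contains "pain" twice and "relief" once,
# so the model's probabilities roughly track that 2:1 ratio.
probs = torch.softmax(model(torch.tensor([idx["causes"]])), dim=-1)
print({w: round(probs[0, idx[w]].item(), 2) for w in ("pain", "relief")})
```

The point of the sketch is only that the model’s “knowledge” about hot stoves reduces to which words tend to follow which; whether that is the same thing humans do is exactly what the rest of the thread is arguing about.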
nednobbins@lemmy.zip 1 week ago
That’s a very emphatic restatement of your initial claim.
I can’t help but notice that, for all the fancy formatting, that wall of text doesn’t contain a single line which actually defines the difference between “learning” and “statistical optimization”. It just repeats the claim that they are different without supporting that claim in any way.
Nothing in there precludes the alternative hypothesis: that human learning is entirely (or almost entirely) an emergent property of “statistical optimization”. Without some definition of what the difference would be, we can’t even theorize a test.