Comment on Zuckerberg hailed AI ‘superintelligence’. Then his smart glasses failed on stage | Matthew Cantor
UnderpantsWeevil@lemmy.world 18 hours ago
I don’t need LLMs to count letters.
If I can’t rely on a system to perform simple tasks I can easily validate, I’m not sure why I’d trust it to perform complex tasks I would struggle to verify.
Imagine a calculator that reported “1+1=3”. It seems silly to use such a machine to do long division.
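For illustration, the letter-counting task is exactly the kind of check that is a one-liner in, say, Python; the word and letter below are arbitrary examples, not anything from the thread:

```python
# A minimal sketch of a "simple task I can easily validate":
# counting letters deterministically.
word = "strawberry"   # arbitrary example word
letter = "r"          # arbitrary example letter

count = word.count(letter)                       # built-in, exact
manual = sum(1 for ch in word if ch == letter)   # trivially auditable by hand

print(f"'{letter}' appears {count} times in '{word}'")
assert count == manual == 3
```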
iopq@lemmy.world 18 hours ago
A Math PhD will eventually make a simple arithmetic mistake if you ask them to do enough problems. That doesn’t invalidate more difficult proofs they have published in papers.
UnderpantsWeevil@lemmy.world 18 hours ago
A Math PhD will eventually make a simple arithmetic mistake if you ask them to do enough problems.
Which is why we don’t designate a single Math PhD as a definitive source for all mathematical wisdom.
That doesn’t invalidate more difficult proofs
If I’m handed a proof with a simple arithmetic mistake in the logic, that absolutely invalidates it
iopq@lemmy.world 10 hours ago
But you didn’t say that. You said you can’t trust something that makes basic mistakes. Humans make them all the time. You can’t trust any human?
UnderpantsWeevil@lemmy.world 2 hours ago
Beginning to think I’m arguing with a bot
circuscritic@lemmy.ca 18 hours ago
That’s my point: I don’t use LLMs for those operations, and I’m aware of their faults, but that doesn’t mean they’re useless.
So yeah, I look forward to the AI bubble popping, but I’m still going to use LLMs for the types of tasks they’re actually suited for.
I don’t think many people on Lemmy are under the spell of AI hype, but plenty of people here are knowledgeable enough to know when, and when not, to leverage this useful, but dangerously overhyped and oversold, piece of technology.