Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies
Zozano@aussie.zone 1 day ago
What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. ChatGPT is more consistent, less biased, and applies reasoning patterns that outperform the average human by miles.
Epistemology isn’t some mystical art; it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure, it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.
So yes, it can evaluate truth. Not perfectly, but often better than the average person.
dzso@lemmy.world 20 hours ago
I’m not saying humans are infallible at recognizing truth either. That’s why so many of us fall for the untruths that AI tells us. But we have access to many tools that help us evaluate truth. AI is emphatically NOT the right tool for that job. Period.
Zozano@aussie.zone 16 hours ago
Right now, the capabilities of LLMs are the worst they’ll ever be. It could literally be tomorrow that someone drops an LLM that would be perfectly calibrated to evaluate truth claims. But right now, we’re at least 90% of the way there.
The reason people fall for the untruths of AI is the same reason people hurt themselves with power tools, or use a calculator wrong.
You don’t blame the tool, you blame the user. LLMs are no different. You can prompt GPT to intentionally give you bad info, or lead it to give you bad info by posting increasingly deranged statements. If you stay coherent, well-read, and make an attempt at structuring arguments to the best of your ability, the pool of data GPT pulls from narrows enough to be more useful than anything else I know.
I’m curious as to what you regard as a better tool for evaluating truth?
Period.
dzso@lemmy.world 16 hours ago
You don’t understand what an LLM is, or how it works. They do not think, they are not intelligent, they do not evaluate truth.
Zozano@aussie.zone 16 hours ago
I do understand what an LLM is. It’s a probabilistic model trained on massive corpora to predict the most likely next token given a context window. I know it’s not sentient, doesn’t “think,” and doesn’t have beliefs. That’s not in dispute.
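The next-token framing above can be sketched with a toy bigram model. This is a deliberately simplified stand-in: real LLMs learn their distribution with a neural network over huge corpora, but the core idea, predicting the most likely next token from observed context, looks like this:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (hypothetical, chosen just for this sketch).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: for each token, how often each next token follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next token after `token`, or None if unseen."""
    counts = bigrams[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" most often here
```

The model has no notion of truth; it only outputs whatever most often followed the context in its training data, which is exactly the point both sides are arguing about.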
But none of that disqualifies it from being useful in evaluating truth claims. Evaluating truth isn’t about thinking in the human sense, it’s about pattern-matching valid reasoning, sourcing relevant evidence, and identifying contradictions or unsupported claims. LLMs do that very well, especially when prompted properly.
Your insistence that this is “dangerous naïveté” confuses two very different things: trusting an LLM blindly, versus leveraging it with informed oversight. I’m not saying GPT magically knows truth, I’m saying it can be used as a tool in a truth-seeking process, just like search engines, logic textbooks, or scientific journals. None of those are conscious either, yet we use them to get closer to truth.
If you’re worried about misuse, so am I. But claiming the tool is inherently useless because it lacks consciousness is like saying microscopes can’t discover bacteria because they don’t know what they’re looking at.
So again: if you believe GPT is inherently incapable of aiding in truth evaluation, the burden’s on you to propose a more effective tool that’s publicly accessible, scalable, and consistent. I’ll wait.