Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

Zozano@aussie.zone 4 days ago

No, I’m specifically describing what an LLM is. It’s a statistical model trained on token sequences to generate contextually appropriate outputs. That’s not “tools it uses”; that is the model. When I said it pattern-matches reasoning and identifies contradictions, I wasn’t talking about external plug-ins or retrieval tools; I meant the LLM’s own internal learned representation of language, logic, and discourse.
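To make “statistical model trained on token sequences” concrete, here’s a deliberately tiny sketch. This is not how any real LLM is implemented (real models use neural networks trained on billions of sequences); it’s a toy bigram counter, purely to show the shape of the idea: assign probabilities to the next token given the tokens so far.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (an assumption for
# illustration only; real training data is vastly larger).
corpus = "the model predicts the next token given the context".split()

# "Training": count which token follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Return the most frequently observed token after `prev`."""
    return counts[prev].most_common(1)[0][0]

print(next_token("model"))  # prints "predicts"
```

A real LLM generalizes the same next-token objective across long contexts, which is where the appearance of tracking coherence and spotting contradictions comes from.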

You’re drawing a false distinction. When GPT flags contradictions, weighs claims, or mirrors structured reasoning, it’s not outsourcing that to some other tool; it’s doing what it was trained to do. It doesn’t need to understand truth the way a human does to model the structure of truthful argumentation, especially if the prompt constrains it toward epistemic rigor.

Now, if you’re talking about things like code execution, search, or retrieval-augmented generation, then sure, those are tools it can use. But none of that was part of my argument. The ability to track coherence, cite counterexamples, or spot logical fallacies sits entirely within the base LLM. That’s just weights and training.

So unless your point is that LLMs aren’t humans, which is obvious and irrelevant, all you’ve done is attack your own straw man.
