SuspciousCarrot78@lemmy.world 3 weeks ago
Well…no. But also yes :)
Mostly, what I’ve shown is that if you hold a gun to its head (“argue from ONLY these facts or I shoot”), certain classes of LLMs (like the Qwen 3 series I tested; I’m going to try IBM’s Granite next) are actually pretty good at NOT hallucinating, so long as 1) you keep the context small (probably 16K or less? Someone please buy me a better PC) and 2) you have strict guard-rails. And - as a bonus - I think (no evidence; gut feel) it has to do with how well the model does on strict tool-calling benchmarks. Further, I think abliteration makes that even better.
If that’s true (big IF), then we can reasonably quickly figure out (by proxy) which LLMs are going to be less bullshitty when properly shackled.
I’ll keep squeezing the stone until blood pours out. Stubbornness opens a lot of doors.
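If it helps to picture the “argue from ONLY these facts” setup, here’s a minimal Python sketch. Everything in it is hypothetical - the fact list, the prompt wording, and the helper names are mine, not the actual harness from this thread:

```python
# Hypothetical fact store the model is allowed to argue from.
FACTS = {
    "F1": "The Quadro P1000 has 4GB of VRAM.",
    "F2": "Qwen 3 is a family of open-weight LLMs.",
}

def build_prompt(question: str) -> str:
    """Force the model to argue ONLY from the numbered facts."""
    fact_lines = "\n".join(f"[{k}] {v}" for k, v in FACTS.items())
    return (
        "Answer using ONLY the facts below. Cite each fact ID you use.\n"
        "If the facts are insufficient, reply exactly: INSUFFICIENT.\n\n"
        f"{fact_lines}\n\nQuestion: {question}"
    )

def passes_guardrail(reply: str) -> bool:
    """Accept only replies that cite a known fact or explicitly refuse."""
    if reply.strip() == "INSUFFICIENT":
        return True
    return any(f"[{k}]" in reply for k in FACTS)
```

The point of the `passes_guardrail` check is that an answer with no fact citation gets thrown out before the user ever sees it, which is where most of the “NOT hallucinating” comes from.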
how_we_burned@lemmy.zip 5 days ago
Are all outputs hallucinations? It’s just that some happen to be correct and some don’t. It doesn’t know and can’t tell unless it’s specifically told (hence the guard rails).
But if I’ve gotta build so many guard rails (instructions), then is it really “AI”?
SuspciousCarrot78@lemmy.world 5 days ago
Point 1 - no. LLM outputs are not always hallucinations (generally speaking - some are worse than others), but where they might veer off into fantasy, I’ve reinforced with programming. Think of it like giving your 8-year-old a calculator instead of expecting them to work out 7532x565 in their head. And a dictionary. And an encyclopedia. And Cliff’s Notes. And a watch. And a compass. And a … you get the idea.
The role of the footer is to show you which tool it used (its own internal priors, what you taught it, calculator etc) and what ratio the answer is based on those. Those are router assigned. That’s just one part of it though.
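For a rough idea of what such a footer could report, here’s a toy sketch. The source labels and the format are invented for illustration - the actual router and footer in my setup aren’t shown here:

```python
from collections import Counter

def footer(fragments: list[tuple[str, str]]) -> str:
    """fragments: (source, text) pairs, as the router assigned them.

    Returns a one-line summary of what ratio of the answer came
    from each source (model priors, user-taught facts, tools, etc).
    """
    counts = Counter(src for src, _ in fragments)
    total = sum(counts.values())
    mix = ", ".join(f"{src}: {n}/{total}" for src, n in counts.most_common())
    return f"[sources -> {mix}]"
```

So an answer built from one calculator result, one user-taught fact, and one model prior would get a footer like `[sources -> calculator: 1/3, user_docs: 1/3, model_prior: 1/3]` - the reader can see at a glance how much of the answer is leaning on the model’s own guesses.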
Point 2 is a misread. These aren’t instructions or system prompts telling the model “don’t make things up” - that works about as well as telling a fat kid not to eat cake.
Instead, what happens is the deterministic elements fire first. The model gets the answer, which the model then builds context on. That’s not guardrails on AI, that’s just not using AI where AI is the wrong tool. Whether that’s “real AI” is a philosophy question - what I do know and can prove is that it leads to fewer wrong answers.
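Something like this, as a toy Python sketch of “deterministic first, model second” (hypothetical names; the multiplication regex is just for the 7532x565 example above):

```python
import re

def deterministic_answer(question: str):
    """Try to resolve the question without the model (calculator-style)."""
    m = re.fullmatch(r"\s*(\d+)\s*[x*]\s*(\d+)\s*", question)
    if m:
        return int(m.group(1)) * int(m.group(2))
    return None  # nothing deterministic matched; fall through to the model

def answer(question: str, llm=None) -> str:
    exact = deterministic_answer(question)
    if exact is not None:
        # The model only ever phrases an answer it was handed - it never
        # gets the chance to invent the number itself.
        return f"{question.strip()} = {exact}"
    return llm(question) if llm else "INSUFFICIENT"
```

The LLM is the last resort, not the first - for anything the deterministic layer can handle, it never even runs.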
andallthat@lemmy.world 2 weeks ago
is “potato frontier” an auto-correct fail for Pareto or a real term? Because if it’s not a real term, I’m 100% going to make it one!
SuspciousCarrot78@lemmy.world 2 weeks ago
No, it’s real. I’m running on a Quadro P1000 with 4GB vram (or a Tesla P4 with 8GB). My entire raison d’être is making potato tier computing a thing.
openwebui.com/…/vodka_when_life_gives_you_a_potat…
That, and because fuck ChatGPT.
I refuse to believe in no win scenarios.
Giblet2708@lemmy.sdf.org 2 weeks ago
Obligatory reference since you mention AI and no win scenarios: www.msn.com/en-in/news/India/…/ar-AA1YqyEY