Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids
thatsnothowyoudoit@lemmy.ca 1 year ago
It’s all hallucinations.
Some (many) just happen to be very close to factual.
It’s sad to see that the marketing of these tools has been so effective that few realize how they work and what they do.
melpomenesclevage@lemmy.dbzer0.com 1 year ago
yeah it’s sick. it’s not AI, but it will destroy the world. I kinda think that’s the point of it.
rottingleaf@lemmy.world 1 year ago
State propaganda works by gaslighting you into thinking that everyone around you (or at least some select set of people) thinks a certain way, and that you should adjust your behavior accordingly. It’s more complicated than that - some people are conformists, some are contrarians - but it works: for everyone there’s their own kind of trap that works.
But its efficiency can still be improved.
With LLMs, all your interactions are by default subject to such influence. They average the bullshit, and the information they produce is fed to us all. That’s the opposite of what any talented or even just useful person does: useful people try to increase the entropy; LLMs kill it.
It’s a dream of thieves, bullies, useless people, politicians, that kind of crap.
Basically “We”, “1984” and whatever else has been written is being attempted via this tool. I don’t think it’s misdirected, but I also think it’ll fail, because evolution works in shorter feedback loops: those doing such things succeed inside those loops, but fail in other directions which could use that energy.
OK, I should stop writing such texts, they repeat, don’t help with migraines, they are obvious and probably wrong.
melpomenesclevage@lemmy.dbzer0.com 1 year ago
to clarify: shannon entropy not thermodynamic entropy, which is kind of the opposite?
i hate language sometimes.
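for anyone who hasn’t run into the term: shannon entropy is just a measure of how spread out / surprising a probability distribution is - roughly, how much information it carries. a quick toy calculation (made-up numbers, just to illustrate the idea):

```python
import math

def shannon_entropy(probs: list[float]) -> float:
    """shannon entropy in bits: H = -sum(p * log2(p))"""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# four equally likely answers: 2.0 bits of information
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))
# one dominant "averaged" answer: ~0.85 bits
print(shannon_entropy([0.85, 0.05, 0.05, 0.05]))
```

a flat distribution carries more bits than one collapsed onto a single dominant answer - that’s the sense in which averaging everything toward one “most likely” output kills entropy.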
I think it will succeed at buying them time to build their doom bunkers without us doing a revolution, and then retreat into them.
they’ll die in there. closed systems don’t work and these people cannot cope with, much less manage, ecology, but we’ll die first.
zipzoopaboop@lemmynsfw.com 1 year ago
It doesn’t matter how it works. Is the output acceptable?
Sounds like no, and it’s the company’s problem to fix it
thatsnothowyoudoit@lemmy.ca 1 year ago
Ok hear me out: the output is all made up. In that context everything is acceptable as it’s just a reflection of the whole of the inputs.
Again, I think this stems from a misunderstanding of these systems. They’re not like a search engine (though, again, the companies would like you to believe that).
We can find the output offensive, off-putting, gross, etc., but there is no real right and wrong with LLMs the way they are now. There is only the statistical probability that a) we’ll understand the output and b) it approximates some currently held truth.
Put another way: LLMs convincingly imitate language - and therefore also convincingly imitate facts. But it’s all facsimile.
AwesomeLowlander@sh.itjust.works 1 year ago
Yes, the problem lies in companies marketing it as more than that, hence the company being sued right now
Vegeta@lemmy.ca 1 year ago
It really is sad. I often hear, “I even asked ChatGPT and it said…” as if that means their response is valid. I’ve heard people say it who I thought would know better, too.
pewgar_seemsimandroid@lemmy.blahaj.zone 1 year ago
😎👉👉 zoop!
pogmommy@lemmy.ml 1 year ago
The number of times I’ve heard that from people expecting it to win them arguments is incredibly discouraging.
rottingleaf@lemmy.world 1 year ago
Infuriating. It’s like an oracle. Except in late antique literature you can see that nobody believed that firmly in what oracles said (the answers would be disciples making notes according to some procedure kept secret, probably involving mind-affecting substances, but also mathematics - you can already see how this is similar to LLMs). It was more like visiting a known attraction: interesting - wow, I’ve been to the Delphi oracle, I received advice there.
And today those herds of unbelievable fools are less sane than that antique public.
pyre@lemmy.world 1 year ago
hallucinations
It’s called libel.
thatsnothowyoudoit@lemmy.ca 1 year ago
Surely you jest, because it’s so clearly not libel if you understand how LLMs work (at the core it’s a statistical model - and therefore everything it outputs is an approximation to a varying degree).
But something great can come out of this case.
Imagine the ilk of OpenAI, Google, Anthropic, XAI, etc. being forced to admit that an LLM can’t actually do anything but generate approximations of language. That these models (again LLMs in particular) produce approximations of language that are so good they’re often indistinguishable from the versions our brains approximate.
But at the core they cannot produce facts, because the way they are made includes artificially injected randomness layered on top of mathematically encoded values that merely get expressed as tiny pieces of language (tokens) - ones that happen to be close to each other in a massively multidimensional vector space.
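To make that concrete, here’s a minimal toy sketch of what that injected randomness looks like (made-up scores and token strings, not any vendor’s actual code): the model scores candidate tokens, a temperature setting reshapes those scores into probabilities, and the next token is drawn at random.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    """Toy illustration: turn raw token scores into probabilities and sample one.

    `logits` are made-up scores for a handful of candidate tokens; real models
    score tens of thousands of tokens, but the mechanism is the same.
    """
    # Temperature rescales the scores: lower = more deterministic, higher = more random.
    scaled = {tok: score / temperature for tok, score in logits.items()}

    # Softmax: exponentiate and normalize so the scores sum to 1.
    max_score = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    # Draw one token at random according to those probabilities.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical scores for the next token after "The capital of Australia is"
print(sample_next_token({" Canberra": 5.1, " Sydney": 4.7, " Melbourne": 3.9}))
```

Run it a few times and you’ll get “Canberra” most often, but “Sydney” or “Melbourne” some of the time - the point being that nothing in that loop checks anything against reality.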
TLDR - they’d be forced to admit the emperor has no clothes and that’s a win for everyone (except maybe this one guy).
Also it’s worth noting I use LLMs for work almost daily and have studied them quite a bit. I’m not a hater on the tech. Only the capitalists trying to force it down everyone’s throat in such a way that we blindly adopt it for everything.
redwattlebird@lemmings.world 1 year ago
Could we move away from calling it hallucinations, as that would imply thinking? We should call it what it is - bullshit.
eleitl@lemm.ee 1 year ago
Confabulation is a more appropriate term.
pyre@lemmy.world 1 year ago
this is confusing. did you think I meant you’re engaging in libel against llms or something? that’s the only way I can make sense of your reply.
thatsnothowyoudoit@lemmy.ca 1 year ago
Really? I read your reply as saying the output is libellous - which it cannot be because it is not based in fact.
ameancow@lemmy.world 1 year ago
Seriously, you have no idea. I have spent some time delving into the current models, human psychology, neurology and evolution, and how people engage with each other or with other entities. The problem is already worse than we realize, and it’s going to get so, so much worse, because our species has major vulnerabilities in our entire conscious experience. These things are going to reshape the way people engage with reality itself at some point, and we should all be a lot more concerned… and I’m an old man yelling on the street corner with a cardboard sign, huh.