Submitted 5 months ago by jeffw@lemmy.world to technology@lemmy.world
https://www.theatlantic.com/technology/archive/2024/07/searchgpt-openai-error/679248/
And yet people still use those bullshit generators and call their bullshit „hallucinations”.
We broke the internet for this.
The internet was broke before.
me when paywalls
👍
what does a web browser have to do with a search engine?
Edge comes pre-enabled with a ton of Microsoft's crappy AI - Bing Chat, Copilot, etc.
Bell@lemmy.world 5 months ago
The hallucinations will continue until the training data is absolutely perfect
hendrik@palaver.p3x.de 5 months ago
That's not correct btw. AI is supposed to be creative and come up with new text/images/ideas, even with perfect training data. That creativity means making things up: we want it to come up with new text out of thin air. Perfect training data is not going to change anything about that. We'd need to remove the ability to generate fictional stories and lots of other answers, too, or come up with an entirely different approach.
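A minimal sketch of that point (not any real model; the prompt, tokens, and probabilities below are invented for illustration): generation is sampling from a probability distribution over next tokens, so the model routinely produces text that never appeared verbatim in its training data, including fluent nonsense.

```python
# Toy next-token sampling. Everything here is made up; the point is only that
# generation = sampling, so "new" (and sometimes wrong) text is expected
# behaviour, not a data-quality bug.
import random

# Hypothetical model output: probabilities for the token after
# "The capital of Freedonia is"
next_token_probs = {
    "Fredville": 0.55,   # plausible continuation
    "Paris": 0.30,       # fluent but wrong
    "Atlantis": 0.15,    # fluent nonsense -- a "hallucination"
}

def sample(probs, temperature=1.0):
    """Pick one token. Higher temperature flattens the distribution,
    making low-probability ('creative') tokens more likely."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

for t in (0.5, 1.0, 1.5):
    print(t, sample(next_token_probs, temperature=t))
```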
bjorney@lemmy.ca 5 months ago
AI isn’t supposed to be creative, it isn’t even capable of that. It’s meant to min/max its evaluation criterion against a test dataset
It does this by regurgitating the training data associated with a given input as closely as possible
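For what the comment above calls min/maxing an evaluation criterion, here is a generic supervised-learning sketch (plain least-squares, nothing LLM-specific; the data and hyperparameters are invented): the parameters are adjusted solely to make predictions on the training set match the targets as closely as possible.

```python
# Generic "minimize the training criterion" loop: gradient descent on mean
# squared error. Nothing here is specific to LLMs; it just shows that training
# only ever pushes the model to reproduce the training data more closely.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # 100 examples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)   # noisy targets

w = np.zeros(3)                                    # model parameters
lr = 0.1
for _ in range(200):
    residual = X @ w - y
    grad = 2 * X.T @ residual / len(y)             # gradient of the MSE loss
    w -= lr * grad                                 # step that lowers the loss

print("learned:", w.round(2), "true:", true_w)     # learned ends up near true_w
```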
NeoNachtwaechter@lemmy.world 5 months ago
In order to get perfect training data, they cannot use any human output.
I’m afraid it is not going to happen anytime soon :)
kokesh@lemmy.world 5 months ago
I’ve started editing my answers/questions on StackExchange. A few characters at a time. I’m doing my part.
Orbituary@lemmy.world 5 months ago
What other output do you propose?
hedgehog@ttrpg.network 5 months ago
Hallucinations are an unavoidable part of LLMs, and are just as present in the human mind. Training data isn’t the issue. The issue is that the systems built around LLMs are designed to use them for more than they should be doing.
I don’t think anything short of being able to validate an LLM’s output without running it through another LLM will fully prevent hallucinations.
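One hedged sketch of the kind of non-LLM validation that comment gestures at (the function and example strings are purely hypothetical): deterministically checking that a quoted passage actually appears in the cited source. This catches fabricated quotes and citations without asking another model, though it is nowhere near a full fix.

```python
# A deliberately simple, non-LLM check: does the quote the model attributed to
# a source actually appear in that source's text? This only catches fabricated
# quotes/citations, not every hallucination, and the sample data is made up.
def quote_supported(quote: str, source_text: str) -> bool:
    """True if the (whitespace- and case-normalized) quote occurs verbatim in the source."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(quote) in norm(source_text)

source = "SearchGPT was announced by OpenAI in July 2024 as a search prototype."
claimed = "SearchGPT was announced by OpenAI in July 2024"
fabricated = "SearchGPT was released to all users in 2023"

print(quote_supported(claimed, source))     # True  -> supported by the source
print(quote_supported(fabricated, source))  # False -> flag for review
```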