Grok’s “white genocide” obsession came from “unauthorized” prompt edit, xAI says
DreamAccountant@lemmy.world 3 weeks ago
Yeah, billionaires are just going to randomly change AI around whenever they feel like it.
That AI you’ve been using for 5 years? Wake up one day, and it’s been lobotomized into a Trump asshole. Now it gives you bad information constantly.
Maybe the AI was taken over by religious assholes, now telling people that gods exist, manufacturing false evidence?
Who knows who is controlling these AI. Billionaires, tech assholes, some random evil corporation?
otacon239@lemmy.world 3 weeks ago
I currently treat any positive interaction with an LLM as a “while the getting’s good” experience. It probably won’t be this good forever, just like Google’s search.
SpaceNoodle@lemmy.world 3 weeks ago
Pretty sad that the current state would be considered “good”
spankmonkey@lemmy.world 3 weeks ago
With accuracy rates declining over time, we are at the ‘as good as it gets’ phase!
SpaceNoodle@lemmy.world 3 weeks ago
If that’s the case, where’s Jack Nicholson?
applemao@lemmy.world 3 weeks ago
Yep, I knew this from the very beginning. Sadly the hype consumed the stupid, as it always will. And we will suffer for it, even though we knew better. Sometimes I hate humanity.
SpaceNoodle@lemmy.world 3 weeks ago
Joke’s on you, LLMs really give us bad information
ilinamorato@lemmy.world 3 weeks ago
Sure, but unintentionally. I heard about a guy whose small business (which is just him) recently got a call from a furious customer because ChatGPT had told her about a sale that she couldn’t find. She didn’t believe him when he said the promotion didn’t exist. Once someone decides to leverage that and make a sufficiently popular AI model give bad information on purpose, things will escalate.
Even now, I think Elon could put a small company out of business if he wanted to, just by making Grok claim that its owner was a pedophile or something.
knightly@pawb.social 3 weeks ago
“Unintentionally” is the wrong word, because it attributes the intent to the model rather than the people who designed it.
Hallucinations are not an accidental side effect; they are the inevitable result of building a multidimensional map of human language use. People hallucinate, lie, dissemble, write fiction, misrepresent reality, etc. Obviously a system designed to map out a human-sounding path from a given system prompt to a particular query is going to take the same shortcuts people took in its training data.
spankmonkey@lemmy.world 3 weeks ago
Unintentionally is the right word, because the people who designed it did not intend for it to produce bad information. They chose an approach that results in bad information because of the data they chose to train it on and the steps they took throughout the process.
ilinamorato@lemmy.world 2 weeks ago
You misunderstand me. I don’t mean that the model has any intent at all. Model designers have no intent to misinform: they designed a machine that produces answers.
True answer or false, a neural network is designed to produce an output. Because a null result (“there is no answer to that question”) is very, very rare online, the training data barely includes it, which means a GPT will almost invariably produce some answer; if a true answer does not exist in its training data, it will simply make one up.
But the designers didn’t intend for it to reproduce misinformation. They intended it to give answers. If a model were trained with the intent to misinform, it would be very, very good at it indeed, because the only training data it would need is literally everything except the correct answer.
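To make that point concrete, here is a toy sketch (purely illustrative, with a made-up vocabulary and made-up scores, not any real model’s code): a language model’s output layer is a softmax over its vocabulary, so the probabilities always sum to 1 and some token always wins. Unless abstention is explicitly trained in, there is no built-in “no answer” outcome.

```python
# Toy illustration (hypothetical vocabulary and scores): the softmax output
# layer of a language model always produces a distribution over tokens,
# so *some* continuation is always emitted, whether or not it is true.
import math

vocab = ["the", "sale", "ends", "Friday", "sorry", "unknown"]
logits = [2.1, 3.5, 1.0, 2.8, 0.2, 0.1]  # scores the network assigns each token

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
# The distribution always sums to 1 — there is no "no answer" outcome.
best = max(zip(vocab, probs), key=lambda p: p[1])
print(best)  # roughly ('sale', 0.53) — the model emits something regardless
```

If “I don’t know” responses are rare in the training data, nothing steers the scores toward them, and the model will confidently continue with made-up specifics instead.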