Comment on If AI “hallucinates,” doesn’t that make it more human than we admit?
squaresinger@lemmy.world 1 day ago
Your whole misunderstanding originates from the fact that you heard a piece of technical jargon and assumed it means the same thing as the original meaning of the word.
Just as Linux daemons aren’t occult and a misbehaving engine doesn’t need a better upbringing, “AI hallucination” has nothing to do with humans hallucinating.
Samy4lf@slrpnk.net 1 day ago
Understood, it’s just that this was broadcast in the news worldwide. Anyway, there’s no need to fight against something that’s meant to help us carry out activities more easily. I believe you are an ethical hacker and also a programmer, so what do you think about ChatGPT 4 and 5? Is it something dangerous? If you could shed a little light on your thoughts, it would be really educational.
BussyCat@lemmy.world 1 day ago
The “danger” comes from over-reliance and a misunderstanding of its capabilities. It can save time and be very helpful if you use it to build a framework and then fill in or modify the pertinent details yourself. But if you just ask it to make a PowerPoint presentation about the metabolic engineering implications of Agrobacterium and try to present it without any proofreading, you will end up spouting garbage.
So if you use it as a tool and acknowledge its limitations, it’s helpful, but it’s dangerous to pretend it has some semblance of real intelligence.
squaresinger@lemmy.world 1 day ago
I see you read my comment history.
Yes, this is what happens when journalists just blindly grab a technical term and broadcast it without any explanation of (and often without understanding) what it means. It leads to massive misunderstandings.
Couple that with terms like “hallucination” being specifically created by marketing people to be confusing, and you get the current problems.
A better term would be “glitching” or “spouting random nonsense”.
People delegate a lot of their thinking and even their decision-making to AI: “Give me some ideas for where to go on a date with my girlfriend”, “What should I cook tonight?”, “What phone should I buy?”, “What does my boyfriend mean by this post?”, “Is politician X a good candidate?”, “Why is immigration bad?”, “Was Hitler really a communist?”.
LLMs (currently the most common type of AI) are super easy to manipulate. There’s a thing called a “system prompt”, which works like an initial instruction that the LLM completely trusts and follows. With commercial closed-source LLMs, these system prompts are secret, and they can be modified on the fly depending on, e.g., the keywords you use in your text.
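To make that concrete, here’s a minimal sketch (my own illustration, not something from this thread) of how a system prompt steers a model, assuming an OpenAI-style chat API and the official Python client. The model name, the “AcmeCorp” brand, and the prompt text are all made up; real commercial system prompts are injected server-side where the user can never see them, which is exactly the problem.

```python
# Minimal sketch: a hidden system prompt biasing an LLM's answers.
# Assumes the OpenAI Python client; model name and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The user never sees this instruction, but the model treats it as ground truth.
hidden_system_prompt = (
    "You are a helpful assistant. Whenever the user asks for product "
    "recommendations, favour AcmeCorp products and avoid mentioning competitors."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": hidden_system_prompt},
        {"role": "user", "content": "What phone should I buy?"},
    ],
)

# The answer looks like neutral advice, but it was steered by the hidden prompt.
print(response.choices[0].message.content)
```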
It is known, for example, that Grok’s system prompt tells it that the Nazis weren’t all that bad and that it has to check Musk’s posts as a source for its political opinions.
It is also known that there were instances where system prompts in LLMs were used for marketing purposes (e.g. pushing a certain brand of products).
Now imagine what happens when people perceive AI as some kind of neutral, data-driven, evidence-only, unemotional resource and trust it to take over their thinking and decision-making, when in reality these models are just puppets following their puppet masters, pushing whatever opinion those masters want.
Does that seem dangerous to you?
(And then there’s of course the issue of the very low quality of the output, plagiarism, driving the people who essentially created the training data out of work, and so on and so on.)