It definitely has specific blocks on what it will output.
Ulrich@feddit.org 1 week ago
Pretty sure you can get it to say just about anything with the right prompts.
People need to stop acting like AI is a sentient being.
lepinkainen@lemmy.world 1 week ago
silence7@slrpnk.net 1 week ago
It was modified to make these kinds of responses much more likely
notarobot@lemmy.zip 1 week ago
I do not think so. Answers are statistical; anything can come up. Yet whenever someone gets something odd, it gets reported like the AI is the president.
Biyoo@lemmy.blahaj.zone 1 week ago
Statistical doesn’t mean it can’t spit out what they want.
They can train or fine tune the AI for praising Hitler, they can alter the default prompt to hit a more right wing dataset, they can have filters that retry when the answers are not what Musk expects…
There are a ton of ways to get fascist output from an AI.
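One of the mechanisms mentioned above, a filter that retries when the answer isn't what's wanted, can be sketched in a few lines. This is a hypothetical illustration, not anything known about Grok's internals; `generate` and `passes_filter` are made-up placeholder functions.

```python
def generate_with_filter(generate, passes_filter, max_retries=3):
    """Hypothetical 'retry filter': call generate() repeatedly until
    passes_filter accepts the draft, or give up after max_retries
    attempts and return whatever was produced last."""
    draft = generate()
    for _ in range(max_retries):
        if passes_filter(draft):
            return draft
        draft = generate()
    return draft  # fall back to the last draft
```

The point is that a wrapper like this never has to touch the model's weights; it just resamples until the output lands where the operator wants it.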
notarobot@lemmy.zip 1 week ago
Look at that. It was you who didn’t understand the word. So much so that what you just said does not contradict what I said.
Yes, an AI can be tuned to praise Hitler. But I think it’s more likely that someone got a fascist output by chance, or that they purposely prompted it to produce a fascist output and then went “OMG, I can’t believe it produced a fascist output.”
I’m not defending Musk nor Grok. My basis for that statement is that “let’s report an AI output” is a pattern you see with every AI.
EvilBit@lemmy.world 1 week ago
You literally don’t know what the word “statistical” means.
notarobot@lemmy.zip 1 week ago
Really? Please. Enlighten me.
96VXb9ktTjFnRi@feddit.nl 1 week ago
I am all in favor of hating Musk and all his products, but I think you’re right. It seems rather unlikely that they would instruct their LLM to go out of its way to give fascist replies. That’s not to say that it shouldn’t be instructed to not give fascist output. Sadly, increasingly people form their view of the world based on the output of LLMs, so it would be helpful if these LLMs would help create worldviews that are beneficial to humanity at large.

Which raises the question: who is to decide what is helpful and what is not? Musk’s answer is probably “freedom of speech; if I want to spoonfeed hate to little children, that is my freedom,” which seems to me an example of ideas of freedom turning into nihilism. But where they’re right is that government should also not be the one who tells people how to view the world. It’s people who should tell government, and the reverse, though perhaps well intended, is itself rather dangerous.

I think the solution, as per usual, is to free it all up, make FOSS LLMs, and let people choose the limitations which they deem proper. I would certainly not want my kids on “freedom of speech” unrestricted AI.