notarobot@lemmy.zip 2 weeks ago
I do not think so. Answers are statistical. Anything can come up. Except that whenever someone gets something odd, it gets reported as if the AI were the president.
Biyoo@lemmy.blahaj.zone 2 weeks ago
Statistical doesn’t mean it can’t spit out what they want.
They can train or fine-tune the AI to praise Hitler, they can alter the default prompt to hit a more right-wing dataset, they can have filters that retry when the answers are not what Musk expects…
There are a ton of ways to get fascist output from an AI.
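The retry-filter idea from that list can be sketched in a few lines. This is a toy illustration only — `sample_completion` and `is_acceptable` are hypothetical stand-ins, not anything from Grok or any real model:

```python
import random

def sample_completion(prompt, rng):
    """Hypothetical stand-in for an LLM call: returns one sampled completion.
    Sampling is statistical, so either answer can come up."""
    completions = ["neutral answer", "slanted answer"]
    return rng.choice(completions)

def is_acceptable(text):
    """Placeholder acceptance check; a real filter would be a classifier.
    Here it only 'accepts' the slanted output, to model a biased filter."""
    return "slanted" in text

def generate_with_retries(prompt, max_retries=5, seed=0):
    """Resample until the output passes the filter. The base model is
    unchanged, but the filter skews what users actually see."""
    rng = random.Random(seed)
    for _ in range(max_retries):
        text = sample_completion(prompt, rng)
        if is_acceptable(text):
            return text
    return text  # give up after max_retries
```

The point of the sketch: no retraining is needed to bias output — a wrapper that rejects and resamples is enough.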
notarobot@lemmy.zip 2 weeks ago
Look at that. It was you who didn’t understand the word. So much so that what you just said does not contradict what I said.
Yes, an AI can be tuned to praise Hitler. But I think it’s more likely that someone by chance got a fascist output, or that they purposely prompted it to produce a fascist output and then went “OMG, I can’t believe it produced a fascist output.”
I’m not defending Musk nor Grok. My basis for that statement is that the “let’s report an AI output” pattern is one you see with every AI.
Biyoo@lemmy.blahaj.zone 2 weeks ago
I see, I did misread your comment.
You meant something like: it’s not more racist than before, it’s just a random fascist output that got blown out of proportion.
And there has to be fascist output since it’s statistical and there is fascism in the training data.
So I have no idea. I don’t use Grok, so I’m not sure whether they edited their AI further for that. Elon seems to say yes, but he lies all the time.
EvilBit@lemmy.world 2 weeks ago
You literally don’t know what the word “statistical” means.
notarobot@lemmy.zip 2 weeks ago
Really? Please. Enlighten me.
EvilBit@lemmy.world 1 week ago
You replied to “It was modified to make these kinds of responses much more likely” with the rebuttal that it’s “statistical”. When something has a varying degree of likelihood, that’s exactly what statistical means.
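That distinction — "statistical" and "made more likely" are compatible — can be shown with a toy softmax over two next-token scores. The numbers are made up for illustration and come from no real model:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token scores: index 0 = "benign", index 1 = "extreme".
base_logits = [2.0, 0.0]
p_base = softmax(base_logits)      # "extreme" is possible but unlikely

# A "modified" model: adding a bias to one logit raises that output's
# probability without making it certain -- sampling is still statistical.
biased_logits = [2.0, 0.0 + 3.0]
p_biased = softmax(biased_logits)  # "extreme" is now much more likely
```

Both distributions still sum to 1; the modification only shifts likelihoods, which is exactly what "modified to make these responses much more likely" means.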
96VXb9ktTjFnRi@feddit.nl 2 weeks ago
I am all in favor of hating Musk and all his products, but I think you’re right. It seems rather unlikely that they would instruct their LLM to go out of its way to give fascist replies. That’s not to say that it shouldn’t be instructed not to give fascist output.

Sadly, people increasingly form their view of the world based on the output of LLMs, so it would be helpful if these LLMs would help create worldviews that are beneficial to humanity at large. Which raises the question: who is to decide what is helpful and what is not? Musk’s answer is probably ‘freedom of speech, if I want to spoonfeed hate to little children, that is my freedom’. Which seems to me to be an example of ideas of freedom turning into nihilism.

But where they’re right is that government should also not be the one who tells people how to view the world. It’s people who should tell government, and the reverse, though perhaps well intended, is itself rather dangerous.

I think the solution, as per usual, is to free it all up, make FOSS LLMs, and let people choose the limitations which they deem proper. I would certainly not want my kids on ‘freedom of speech’ unrestricted AI.