Comment on Just a little... why not?
Zetta@mander.xyz 2 days ago
Wow, that next-word guesser picks the next words it looks like you want based on your message when it's not censored. This is not unexpected behavior.
MTK@lemmy.world 2 days ago
That’s the point though…
Without censorship it just does what it thinks fits best. That means if the AI thinks that encouraging drug use, suicide, murder, etc. would fit best, then it will do that.
Electricd@lemmybefree.net 1 day ago
Yea, without safeguards, LLMs just tell you what you want to hear, but they get "dumber" with safeguards as well.
Zetta@mander.xyz 2 days ago
Brother, I'm aware of how it works. Most uncensored models made by the community, like the one you used, are made for sexual role playing, or at least that's the largest crowd of home users of uncensored LLMs, IMO. I'm not arguing with you about why or what the model does; I'm saying it's the intended design of these models. No, it's probably not great for wackos to play around with, but freedom is scary.
MTK@lemmy.world 2 days ago
I agree. I guess my point was that people need to be aware of how crazy AI models can be and always be careful about sensitive topics with them.
If I were to use an LLM as a therapist, I would be extremely sceptical of anything it says, and doubly so when it confirms my own beliefs.
Zetta@mander.xyz 2 days ago
Fair enough. I wouldn't even consider seeing a therapist that used an LLM in any capacity, let alone letting an LLM be the therapist. Sadly, I think the people who would make the mistake of doing just that probably won't be swayed, but fair enough to raise awareness.