Comment on On Tuesday afternoon, ChatGPT encouraged me to cut my wrists
TheGrandNagus@lemmy.world 3 days ago
I’m a very vocal critic of LLMs; I think they’re so overhyped and overused it’s hard to believe.
But I’m also getting really tired of people purposely putting extreme effort into tricking the LLM into saying something that would be harmful if followed blindly, just so they can make a clickbait headline out of it.
And what the hell is up with the major “ChatGPT is Satanist [if you instruct it to be]” angle? Are we really doing the Satanist moral panic again?
ffs, criticise OpenAI for being closed af, for being wasteful, for being strong political lobbyists, for stealing work, etc. You don’t need to push disingenuous stuff like this.
backgroundcow@lemmy.world 3 days ago
And, the thing is, LLMs are quite well protected. Look what I got MS Paint to say with almost no effort! Don’t get me started on plain pen and paper! Which we put in the hands of TODDLERS!
koper@feddit.nl 3 days ago
MS Paint isn’t marketed or treated as a source of truth. LLMs are.
backgroundcow@lemmy.world 3 days ago
Does the marketing matter when the reason for the offending output is that the user spent significant deliberate effort in coaxing the LLM to output what it did? It still seems like MS Paint with extra steps to me.
I get not wanting LLMs to output “offensive content” unprompted, just like it would be noteworthy if “Clear canvas” in MS Paint sometimes yielded a violent, bloody photograph. But that isn’t what is going on in OP’s clickbait.
Passerby6497@lemmy.world 2 days ago
when the reason for the offending output is that the user spent significant deliberate effort in coaxing the LLM to output what it did?
What about all the mentally unstable people who aren’t trying to get it to say crazy things, but end up getting it to say crazy things just by the very nature of the conversations they’re having with it? We’re talking about a stochastic yes-man that can take any input and turn it into psychosis under the right circumstances, and we already have plenty of examples of it sending unstable people over the edge.
The only reason this is “clickbait” is that someone chose to do it deliberately, rather than their own mental instability bringing it out organically. The fact that this can, and does, happen when someone is trying to do it should make you really consider the sort of things it will tell someone who may be in a state where they legitimately consider crazy shit to be good advice.
Melvin_Ferd@lemmy.world 2 days ago
I’ve been saying this for a bit. The issue I’m finding isn’t ChatGPT. It’s journalists. All these articles are such copy-paste yellow-journalism bullshit, but ratcheted up because a lot of journalists feel threatened by AI.
overload@sopuli.xyz 3 days ago
100%.
unexposedhazard@discuss.tchncs.de 3 days ago
I don’t think you understand how many mentally unstable people out there are using LLMs as therapists.
There are fuckloads of cases now of people that were genuinely misled by LLMs into doing horrible things.
I agree with your sentiment, but the thing is that the companies are selling their shit as gold and not as shit. If they were honest about their stuff being shit, then people wouldn’t be able to capitalize off of malicious prompt engineering. If you claim “we made it super safe and stuff”, then you are inviting people to test that.
Dojan@pawb.social 3 days ago
I do understand. I know there are people out there thinking that LLMs are literally Jesus returning. But in that case, write about that.
There are a lot more reasonable critiques of LLMs to be had.
h3rmit@lemmy.dbzer0.com 3 days ago
You can make a car super safe, but if someone cuts the brake cable it will not slow down.
Spuddlesv2@lemmy.ca 2 days ago
More like SAYING you’ve made a car super safe when actually it’s only safe if you never drive it above 5 km/h, and never downhill, uphill, in the rain, etc.