ganryuu
@ganryuu@lemmy.ca
- Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims 3 days ago:
Very fair. Thank you!
- Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims 3 days ago:
I agree with the part about unintended use; an LLM is not and should never act as a therapist. However, concerning your example with search engines: they catch suicide-related keywords and put help resources before any search result. Google does it, and so does DuckDuckGo. I believe ChatGPT also surfaces such resources on the first mention, but as OpenAI themselves admit, the safety features degrade as the conversation grows longer.
About this specific case I need to find out more, but other comments in this thread say not only that the kid was in therapy, suggesting the parents were not passive about it, but also that ChatGPT actually encouraged him to hide what he was going through. Considering what I was able to hide from my parents as a teenager, without such a tool available, I can only imagine how much harder it would be to notice the depth of what this kid was going through.
In the end, I strongly believe the company should implement much stronger safety features, and if it is unable to do so correctly, then the product simply should not be available to the public. People will misuse tools, especially a tool touted as AI when it is actually a glorified autocomplete.
(Yes, I know that AI is a much broader term that also encompasses LLMs, but the actual limitations of LLMs are not well enough known by the public, nor communicated clearly enough by the companies to end users.)
- Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims 3 days ago:
I’m honestly at a loss here; I didn’t intend to argue in bad faith, so I don’t see how I moved any goalposts.
- Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims 3 days ago:
Can’t argue there…
- Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims 3 days ago:
That seems much more like an argument against LLMs in general, don’t you think? If you cannot make one that doesn’t encourage suicide without ruining its other uses, maybe it wasn’t ready for general use?
- Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims 3 days ago:
I get the feeling that you’re missing one very important point about GenAI: it does not, and cannot (by design), know right from wrong. The only thing it knows is which word is statistically most likely to appear after the previous ones.
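To make that concrete, here is a minimal sketch of the "statistically most likely next word" idea, using a toy bigram model over a made-up corpus (all names and data here are illustrative; real LLMs condition on far more context with neural networks, but the principle is the same):

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # No understanding involved: just return the highest-frequency continuation.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The model has no concept of truth, right, or wrong; it only reproduces whatever pattern dominated its training data.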
- Comment on when ur higher than sagan 5 days ago:
Are you telling me that I should have diluted some bullet material, instead of trying to start by shooting myself with a small caliber and work up my immunity from that? All this work, wasted!
- Comment on when ur higher than sagan 5 days ago:
To add to your point, it used to be that the village idiot was just that: known for it, and shamed or shunned. Now that they can connect to other village idiots, they can find a community of like-minded idiots that reinforces their beliefs.
- Comment on leading ai company 6 days ago:
I’ll be the first to admit that I initially fell for the near-perfect PR that crafted the industry-genius image he’s still coasting on to this day. That took a nosedive when he called a rescuer a “pedo” for pointing out the stupidity of his rescue-submarine idea. But it wasn’t until he started talking about IT that I finally understood he wasn’t an average CEO manipulating public opinion to his advantage, but an absolute moron who never had any idea what he was talking about. Yes, the dude is that stupid, but good PR is genuinely hard to take down completely.
- Comment on Why LLMs can't really build software 2 weeks ago:
Probably why they talked about looking at a stack trace: you’ll see immediately that you made a typo in a variable name or a language keyword when compiling or executing.
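For instance, a quick hypothetical sketch of how a typo'd variable name surfaces instantly at runtime (the names here are made up for illustration):

```python
user_count = 10

try:
    print(user_cout + 1)  # typo: "user_cout" instead of "user_count"
except NameError as err:
    # The traceback/error names the exact misspelled identifier.
    message = str(err)
    print(message)
```

The error message points straight at `user_cout`, so the typo is obvious without any detective work.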
- Comment on Lemmy be like 2 weeks ago:
Yes! Will people stop with their sloppy criticisms?
- Comment on Schools are using AI to spy on students and some are getting arrested for misinterpreted jokes and private conversations 3 weeks ago:
Even when we go per capita, the US stays a shithole; it’s not like they were actively trying to misinform people.
- Comment on Google Gemini struggles to write code, calls itself “a disgrace to my species” 3 weeks ago:
I’d say it’s simply because most people on the internet (the dataset LLMs are trained on) say a lot of things with absolute confidence, whether or not they actually know what they’re talking about. So AIs talk confidently because most people do. It could also be something about how they’re configured.
Again, they don’t know whether they know the answer; they just say the most statistically probable thing given your message and their prompt.
- Comment on Google Gemini struggles to write code, calls itself “a disgrace to my species” 3 weeks ago:
You’re giving way too much credit to LLMs. AIs don’t “know” things, like “humans lie”. They are basically a very complex autocomplete backed by a huge amount of computing power. They cannot “lie” because they do not even understand what they are writing.