narr1
@narr1@lemmy.ml
I have a trauma-based personality disorder, which sometimes manifests itself in episodes of often uncontrollable bouts of verbal violence. I prefer to direct this to people on the internet (as opposed to actual people), as I don’t wish to be violent towards people I actually care about.
- Comment on [deleted] 15 hours ago:
Now now, don’t get so triggered. Just tried to be funny, that’s all. Also your analogy about cars doesn’t hold water for shit :-D
But uh, maybe there are some power management options micro$oft allows you to fiddle with? Or ask the onboard AI, maybe that can hallucinate some answers for you?
- Comment on [deleted] 16 hours ago:
Have you tried switching to Linux?
- Comment on Scientists in Japan develop plastic that dissolves in seawater within hours 22 hours ago:
huh. happy to know we’ll never hear of this again! thanks capitalism!
- Comment on Reddit sues Anthropic for allegedly not paying for training data | TechCrunch 1 day ago:
No such thing exists.
- Comment on The american crypto mafia 1 day ago:
oh hey, a clear list of targets for some Mario brother!
- Comment on What could go wrong? 2 weeks ago:
Yeah, I realize the most important part of the point I was trying to make kinda got glossed over in my own reply (whoops): these LLMs nowadays are programmed to sound empathetic, more than any human can continuously achieve, because we get tired and annoyed and other human stuff. Combined with the point that not every “emergency” is really an actual emergency, this leads me to believe that the idea of “therapeutic” AI chatbots could work, but I wouldn’t advocate using any of those that exist nowadays for this, at least if the user has any regard for online privacy. But having a hotline to a being that has all the resources to help you calm yourself down; a being that is always available, never tired, never annoyed, never sick or otherwise out of office; that seems to know you and remembers things you have told it before - that sounds compelling as an idea. Then again, part of that appeal probably comes from the abysmal state of psychiatric healthcare that I and many others have witnessed, and this hotline should be integrated into that care. So I don’t know, maybe it’s just wishful thinking on my part. Sorry I came across as needlessly hostile.
- Comment on What could go wrong? 2 weeks ago:
I’m not saying that’s ok, did you even read my reply or are you just being needlessly contrarian? Or was I just being unclear in my message, because if so I’m sorry. It tends to happen to me.
- Comment on What could go wrong? 2 weeks ago:
Yeah, well, that’s just, like, your opinion, man. And if you remove the very concept of capital gain from your “important point”, I think you’ll find your point to be moot.
I’m also going to assume you haven’t been in such a situation as I described with the whole mental health emergency? Because I have. At best I went to the emergency room and calmed down before ever seeing a doctor, and at worst I was committed to inpatient care (or “the ward,” as it’s also known) before I calmed down, taking resources from the treatment of people who weren’t as unstable as I was - a problem which could’ve been solved with a chatbot. And I can assure you there are people who live outside the major metropolitan areas of North America; it isn’t an extremely rare case, as you claim.
Anyway, my point stands.
- Comment on What could go wrong? 2 weeks ago:
There are ways that LLMs can be used to better one’s life (apparently in some software dev circles they can be and are used to make workflows more efficient), and this can also be one of them, because the part that sucks most about therapy (after the whole monetary thing) is trying to find the form of therapy that works for you, and finding a therapist that you can work with. Every human is different, and that includes both the patient and the therapist, and not everyone can just start working together right off the bat. Not to mention how long it takes for a new therapist to actually get to know you well enough to improve the odds of the cooperation working.
Obviously I’m not saying “replace all therapists with AIs controlled by racist capitalist pigs with ulterior motives,” but I have witnessed people in my own life who have had some immediate help from a fucking chatbot, which is kinda ridiculous. So in times of distress (say a borderline having such an anxiety attack that they can’t calm themselves, because they don’t know how to break the vicious cycle of thought and emotional response), for immediate help a well-developed, non-capitalist LLM might be of invaluable help, especially when an actual human can’t be reached - if, for example (as in this case), the borderline lives in a remote area and it is the middle of the night, which I can tell from personal experience it very often is. And though not every mental health emergency requires first responders on the scene or even a trip to the hospital, there is still a possibility of both being needed eventually. So a chatbot with access to necessary general information (like techniques for self-soothing, e.g. breathing exercises and so forth), and possibly even personal information (like diagnostic and medication history, though this would raise more privacy concerns to be assessed), plus the capability to parse and convey them in a non-belittling way (as some doctors and nurses can be real fucking assholes at times), could possibly save lives.
So the problem here is capitalism, surprising no-one.
- Comment on Palantir CEO Alex Karp praises Saudi engineers and takes a swipe at Europe, saying it has 'given up' on AI 3 weeks ago:
despite what my lightning fast intellect, general tendency to hallucinate, and inability to work properly because of capitalism may lead you to believe, i am not chatgpt
- Comment on Palantir CEO Alex Karp praises Saudi engineers and takes a swipe at Europe, saying it has 'given up' on AI 3 weeks ago:
right on, thanks!
- Comment on Palantir CEO Alex Karp praises Saudi engineers and takes a swipe at Europe, saying it has 'given up' on AI 3 weeks ago:
there was this microblogmeme where the joke was like “u need the ai to do your work because you’re too shit? do u need it to do your art for you because you can’t? u need it to fuck your wife for u also? fucking loser” and i think it’d fit here but couldn’t find it :(
- Comment on Palantir CEO Alex Karp praises Saudi engineers and takes a swipe at Europe, saying it has 'given up' on AI 3 weeks ago:
this dude should really choke on it.