Comment on [deleted]
Semester3383@lemmy.world 8 months ago
It’s already a problem. People are outsourcing their thinking to LLMs, and LLMs aren’t capable of thinking.
I would say it’s already a problem because these responses are already what you get on the internet.
Gen Z therapy speak is often exactly this type of advice
LLMs make the problem even worse because of the instant responses and the opportunity to ask follow-up questions about how right you are and how wrong everyone else is.
I’m surprised you know what an LLM is but don’t seem to acknowledge that not all LLMs are alike. In this case the person is using ChatGPT, which is a bit like you saying “computers are so crap, I don’t know how anyone uses them! They all get viruses and crash all day”… because we all know everyone uses Windows.
You can get completely different answers from any of them depending on how they are trained and how their base settings are set up.
This is base Mistral Small 3.2 in LM Studio:
I’m really sorry that you’re going through this, but cheating is not the solution to feeling sad or alone. It’s important to communicate openly with your wife about how you’re feeling instead of acting out in a way that will hurt both of you.
Here are some steps you can take:
It’s also important to recognize that your wife is human and deserves understanding and support, especially after a long day of work. Building trust and mutual respect is key to any healthy relationship.
Would you like help finding resources on how to rebuild trust or improve communication in your marriage?
What’s the market share of Mistral Small 3.2 in LM Studio compared to ChatGPT?
Once AI starts training on its own created data, then the dead internet theory will become even more true. Humanity is going to suffer for opening this Pandora’s box.
I wouldn’t mind that. I would rather accept some inconvenience to have more social interaction.
AI has been training on its own data for a while now? Might need to touch grass on the doomism :P
I’m going to play devil’s advocate and say it’s not like there’s a big switch that is going to switch us from the barely decent Internet we’ve had to a complete shit show.
The time to talk about looming disaster is before it’s too late to do anything.
However, from my point of view, the Internet has been steadily turning to garbage for a long time. We’ve reached the point where it’s starting to take human society down with it. LLMs are just the latest turd in the pile of shit.
Of course, my point of view is from someone who’s been online since the early 90’s. It’s more a “get off my lawn” attitude. Some of the younger whippersnappers might not realize how far it had fallen by the time they got online.
It isn’t LLMs destroying it, it’s capitalism. LLMs and the hype surrounding them are just the latest symptom.
Why are you focusing on the fact that different models exist rather than the fact that people are using LLMs (which can’t think) to do their thinking for them?
Yeah, I’ve been seeing this fallacy over and over again ever since ChatGPT was first released: this model gave them the answer they expected, so everything will be OK if you just use this model for all your questions from now on.
I have a feeling they typed “how to respond to accusation that I outsource my thinking to an LLM” (into Claude to mix it up a bit) and it gave that response.
They’re in the picture and don’t like it.
Because I’m not negative on LLMs by default, I can think for myself. It might actually be an improvement for some people if they interact with LLMs instead of Sky News, for example (or the Fox News equivalent in America).
It won’t be an improvement, just another way for people to fall in line and not think for themselves.
BananaIsABerry@lemmy.zip 8 months ago
No more than you’ve allowed the groupthink™ to dictate your opinion on the subject.
Mr_Fish@lemmy.world 8 months ago
It’s way worse than groupthink. Groupthink is a natural thing that comes from humans being inherently social. Using LLMs for thinking is basically offloading your brain and letting a corporation turn it into a product for you.
javiwhite@feddit.uk 8 months ago
No more than? I’d say it’s way more.
Groupthink is the passive acceptance of beliefs you have read or interacted with, without any desire to challenge those beliefs even if you feel they might not be accurate.
Using LLMs to answer basic questions is the active delegation of thinking to another entity. You’re delegating the entire task, and relieving yourself of all thought.
It’s like comparing someone using multiplication tables to work out a product vs using a calculator; except in this instance, the calculator is only accurate a third of the time.