Because I’m not negative on LLMs by default (I can think for myself), it might actually be an improvement for people if they interact with LLMs instead of, say, Sky News (or its American equivalent, Fox News).
AmbiguousProps@lemmy.today 1 day ago
Why are you focusing on the fact that different models exist rather than the fact that people are using LLMs (which can’t think) to do their thinking for them?
Eyekaytee@aussie.zone 1 day ago
AmbiguousProps@lemmy.today 1 day ago
It won’t be an improvement, just another way for people to fall in line and not think for themselves.
Eyekaytee@aussie.zone 1 day ago
AmbiguousProps@lemmy.today 1 day ago
My point is that it’s just going to add to the slop. Even when LLMs “source” data, they can be very wrong. Just because slop exists elsewhere doesn’t mean adding more will suddenly create a positive; it just creates more slop.
skisnow@lemmy.ca 1 day ago
Yeah, I’ve been seeing this fallacy over and over again ever since ChatGPT was first released: this model gave them the answer they expected, so everything will be okay if you just use this model for all your questions from now on.
criss_cross@lemmy.world 1 day ago
I have a feeling they typed “how to respond to accusation that I outsource my thinking to an LLM” (into Claude to mix it up a bit) and it gave that response.
They’re in the picture and don’t like it.