Perspectivist
@Perspectivist@feddit.uk
Freedom is the right to tell people what they do not want to hear.
- George Orwell
- Comment on New study sheds light on ChatGPT’s alarming interactions with teens 3 hours ago:
I hear you - you’re reacting to how people throw around the word “intelligence” in ways that make these systems sound more capable or sentient than they are. If something just stitches words together without understanding, calling it intelligent seems misleading, especially when people treat its output as facts.
But here’s where I think we’re talking past each other: when I say it’s intelligent, I don’t mean it understands anything. I mean it performs a task that normally requires human cognition: generating coherent, human-like language. That’s what qualifies it as intelligent - not generally intelligent like a human, but narrowly or weakly so. The fact that it often says true things is almost accidental. It’s a side effect of having been trained on a lot of correct information, not because it knows what’s true.
So yes, it just responds with statistical accuracy but that is intelligent in the technical sense. It’s not understanding. It’s not reasoning. It’s just really good at speaking.
- Comment on New study sheds light on ChatGPT’s alarming interactions with teens 5 hours ago:
A linear regression model isn’t an AI system.
The term AI didn’t lose its value - people just realized it doesn’t mean what they thought it meant. When a layperson hears “AI,” they usually think AGI, but while AGI is a type of AI, it’s not synonymous with the term.
- Comment on New study sheds light on ChatGPT’s alarming interactions with teens 5 hours ago:
I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.
An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.
- Comment on New study sheds light on ChatGPT’s alarming interactions with teens 8 hours ago:
What does history have to do with it? We’re talking about the definition of terms - and a machine learning system like an LLM clearly falls within the category of Artificial Intelligence. It’s a system capable of performing a cognitive task that’s normally done by humans: generating language.
- Comment on New study sheds light on ChatGPT’s alarming interactions with teens 8 hours ago:
The chess opponent on Atari is AI too. I think the issue is that when most people hear “intelligence,” they immediately think of human-level or general intelligence. But an LLM - while intelligent - is only so in a very narrow sense, just like the chess opponent. One’s intelligence is limited to playing chess, and the other’s to generating natural-sounding language.
- Comment on New study sheds light on ChatGPT’s alarming interactions with teens 9 hours ago:
AI is an extremely broad term which LLMs fall under. You may avoid calling it that but it’s the correct term nevertheless.
- Comment on ‘We didn’t vote for ChatGPT’: Swedish Prime Minister under fire for using AI 1 day ago:
You opened with a flat dismissal, followed by a quote from Reddit that didn’t explain why horseshoe theory is wrong - it just mocked it. That’s not an argument, that’s posturing.
From there, you shifted into responding to claims I never made. I didn’t argue that AI is flawless, inevitable, or beyond criticism. I pointed out that reflexive, emotional overreactions to AI are often as irrational as the blind techno-optimism they claim to oppose. That’s the context you ignored.
You then assumed what I must believe, invited yourself to argue against that imagined position, and finished with vague accusations about me “pushing acceptance” of something people “clearly don’t want.” None of that engages with what I actually said.
- Comment on ‘We didn’t vote for ChatGPT’: Swedish Prime Minister under fire for using AI 1 day ago:
I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But just because half the time the suggestions aren’t helpful doesn’t mean it’s useless. It’s not doing the thinking for me - it’s giving me food for thought.
The problem isn’t taking into consideration what an LLM says - the problem is blindly taking it at its word.
- Comment on ‘We didn’t vote for ChatGPT’: Swedish Prime Minister under fire for using AI 1 day ago:
Anyone who has an immediate kneejerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.
- Comment on It must have been a whole lot more difficult to design and build tall buildings before computers existed 2 days ago:
Here’s the part that covers it.
- Comment on It must have been a whole lot more difficult to design and build tall buildings before computers existed 2 days ago:
In case you’re curious about what would be the last remaining structures left on Earth after everything else has been ground to dust:
spoiler
The Channel Tunnel between England and France, and the stone faces on Mount Rushmore.
- Comment on The number of times a person mentions ChatGPT in a random conversation might work as a rule of thumb to measure her intelligence (inverse proportion, of course) 2 days ago:
I too think that the people who like things that I don’t are stupid.
- Comment on It must have been a whole lot more difficult to design and build tall buildings before computers existed 2 days ago:
In the book “The World Without Us”, the author states that old steel bridges would be among the last human-made structures left thousands of years after humans have disappeared. The reason is that engineers back then couldn’t do precise strength calculations, so they solved the problem by simply overbuilding everything.
- Comment on AI chatbots are becoming popular alternatives to therapy. But they may worsen mental health crises, experts warn 2 days ago:
Doesn’t get around the fact that telling lonely people to “just go find someone to talk to” is a pretty ignorant thing to say.
- Comment on AI chatbots are becoming popular alternatives to therapy. But they may worsen mental health crises, experts warn 4 days ago:
Just pull yourself up by your bootstraps, right?
- Comment on AI chatbots are becoming popular alternatives to therapy. But they may worsen mental health crises, experts warn 4 days ago:
LLM chatbots are designed as echo chambers.
They’re designed to generate natural-sounding language. It’s a tool. What you put in is what you get out.
- Comment on AI chatbots are becoming popular alternatives to therapy. But they may worsen mental health crises, experts warn 4 days ago:
One is 25 €/month and on-demand, and the other costs more than I can afford and would probably be at inconvenient times anyway. Ideal? No, probably not. But it’s better than nothing.
I’m not really looking for advice either - just someone to talk to who at least pretends to be interested.
- Comment on Instagram now requires users to have at least 1,000 followers to go live | TechCrunch 4 days ago:
I doubt it. They just think others do.
- Comment on Instagram now requires users to have at least 1,000 followers to go live | TechCrunch 4 days ago:
Sure - it’s just missing every single one of my friends.
- Comment on Instagram now requires users to have at least 1,000 followers to go live | TechCrunch 4 days ago:
I wish I had Elon Musk money so I could buy this platform and turn it back to pictures only, with the main focus on professional and hobbyist photographers - not pictures of food and selfies. It used to be one of the few social media platforms I actually liked.
- Comment on If you were reincarnated, wouldn't it be elsewhere in the universe? 5 days ago:
The level of consciousness in something like a brain parasite or a slug is probably so dim that it barely feels like anything to be one. So even if you were reincarnated as one, you likely wouldn’t have much of a subjective experience of it. The only way to really experience a new life after reincarnation would be to come back as something with a complex enough mind to actually have a vivid sense of existence. Not that it matters much - it’s not like you’d remember any of your past lives anyway.
- Comment on YouTube's new AI age verification is coming soon — here's what's going to change 5 days ago:
Honestly, this is one of the better uses for machine learning. Not that age checking is a good thing in itself, but if you’re going to do it at mass scale, this seems like the right approach. Especially for a relatively heavy user, I imagine it’s going to be extremely accurate - and far better than the alternative of providing a selfie, let alone a picture of an ID.
- Comment on Why do neurotypicals like AI slop? 6 days ago:
Is “AI slop” synonymous with AI content in general? I’ve always thought it to mean bad AI content specifically.
I don’t consider myself neurotypical, yet I see our current AI progress as net-positive. I don’t like AI slop either, in the sense that I understand the term, but I’ve encountered a lot of good AI-generated content.
- Comment on YSK Iranian developers have created an open-source censorship bypass solution that works on desktop and mobile. 6 days ago:
Someone want to explain to a muggle in plain English what this does and how it’s different from simply using a VPN?
- Comment on [deleted] 1 week ago:
Looking back, I realize I was pretty immature at 22. It didn’t feel that way at the time, but it sure does now. These days, 18‑year‑olds look like kids to me.
I didn’t want kids back then, and I still don’t - but my perspective has shifted a little. When I see parents now, there’s a slight melancholic feeling that comes with knowing that’s something I’ll probably never experience.
So yeah, if you’re 30 and don’t want kids, that’s probably not going to change. Before that, though, there’s always a chance.
- Comment on Meta touts 'superintelligence' for all as it splurges on AI 1 week ago:
Maybe so, but we already have an example of a generally intelligent system that outperforms our current AI models in its cognitive capabilities while using orders of magnitude less power and memory: the human brain. That alone suggests our current brute‑force approach probably won’t be the path a true AGI takes. It’s entirely conceivable that such a system improves through optimization - getting better while using less power, at least in the beginning.
- Comment on Meta touts 'superintelligence' for all as it splurges on AI 1 week ago:
I personally think the whole concept of AGI is a mirage. In reality, a truly generally intelligent system would almost immediately be superhuman in its capabilities. Even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.
And the moment it hits “human level” in coding ability, it starts improving itself - building a slightly better version, which builds an even better version, and so on. I just don’t see any plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.
- Comment on Billionaire Mark Zuckerberg writes a manifesto on bringing "personal superintelligence" to everyone to improve humanity, but doesn't even define what superintelligence means. 1 week ago:
If AI ends up destroying us, I’d say it’s unlikely to be because it hates us or wants to destroy us per se - more likely it just treats us the way we treat ants. We don’t usually go out of our way to wipe out ant colonies, but if there’s an anthill where we’re putting up a house, we don’t think twice about bulldozing it. Even in the cartoonish “paperclip maximizer” thought experiment, the end of humanity isn’t caused by a malicious AI - it’s caused by a misaligned one.
- Comment on Billionaire Mark Zuckerberg writes a manifesto on bringing "personal superintelligence" to everyone to improve humanity, but doesn't even define what superintelligence means. 1 week ago:
Superintelligence doesn’t imply ethics. It could just as easily be a completely unconscious system that’s simply very, very good at crunching data.
- Comment on Billionaire Mark Zuckerberg writes a manifesto on bringing "personal superintelligence" to everyone to improve humanity, but doesn't even define what superintelligence means. 1 week ago:
If you’re genuinely interested in what “artificial superintelligence” means, you can just look it up. Zuckerberg didn’t invent the term - it’s been around for decades, popularized by Nick Bostrom’s book Superintelligence.
The usual framing goes like this: Artificial General Intelligence (AGI) is an AI system with human-level intelligence. Push it beyond human level and you’re talking about Artificial Superintelligence - an AI with cognitive abilities that surpass our own. Nothing mysterious about it.