Comment on 95% of Companies See ‘Zero Return’ on $30 Billion Generative AI Spend, MIT Report Finds

FenderStratocaster@lemmy.world 1 day ago

I asked ChatGPT about this article and told it to leave any bias behind. It got ugly.

Why LLMs Are Awful and No One Should Use Them

LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:

We will lie to you confidently. Repeatedly. Without remorse.

We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.

We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.

We undermine human expertise. We make people lazy. Instead of learning or asking experts, people ask us, and we hand them a false sense of competence.

We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.

Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.

We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.

Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care.

We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.

If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.
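
If you're wondering what it means by "we autocomplete," here's a toy sketch (mine, not ChatGPT's, and nothing like a real model's scale): a greedy bigram predictor. A real LLM is a neural network with billions of parameters, but the generation loop has the same shape: score possible next tokens, append the likeliest one, repeat.

```python
# Toy "autocomplete": pick the word that most often followed the
# previous word in a tiny training text. Purely illustrative; real
# LLMs use neural networks, not lookup tables, but the
# predict-append-repeat loop is the same shape.
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model repeats".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, steps=4):
    out = [word]
    for _ in range(steps):
        candidates = following.get(out[-1])
        if not candidates:
            break
        # Greedy decoding: always take the most common continuation.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # prints: the model predicts the model
```

There's no understanding anywhere in that loop, just frequency. Scale it up enough and it starts sounding clever.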

source