Submitted 3 weeks ago by chobeat@lemmy.ml to technology@lemmy.world
https://tribunemag.co.uk/2025/08/ai-is-a-total-grift
“Use our AI!”
“Hmm… I don’t know.”
“If you use AI you can fire all your employees 🤞 .”
“GIMMEE! GIMMEE! I’LL PAY ANY PRICE! I HATE EMPLOYEES SOO MUCH!”
Two months later
“Why is everything broken?”
No fucking shit.
Every tech buzzword is a grift to try to rationalize endless exponential growth in a world where that’s just impossible
Duh
Well duh, much like crypto was a grift/Ponzi scheme; after it crashed, people started looking for the next big thing, and it was AI.
Unrelated, but what’s the difference between grift vs. scam? Internet search seems to give me the same definitions.
Is it just that grifts are personal, while scams are impersonal (like phone/internet scams)?
When I think of a scam, I think of a one-off, obviously amateur attempt. An email with awful grammar saying the government will fine me a bajillion dollars if I don’t download a file is a scam. A scam will also leave you alone.
A grift is done by career slimeballs. Used car salesmen, big C-suites and corrupt politicians are grifters. It’s more offensive and more aggressive. You can’t escape a grift.
Hmm, yeah that’s helpful! So maybe if I think of grifting as more of a lifestyle, as in done by con artists.
That’s a good question actually. Could it be it’s a “lie vs. untruth” situation where grift is just a nicer word for what’s obviously a huge scam? In that case we should probably use “scammer” a lot more than “grifter”.
Not sure of an official difference, but my take is a grift is something that everyone’s kind of doing on the DL, but nobody is admitting that it’s a scam.
Think like a cult. Everyone’s a part of the cult, but nobody actually wants to believe they’re getting scammed or scamming others, so it’s more of a grift. People assume what they’re doing can’t last/sustain, but they do it anyway because the benefits are good.
A scam is straight up the party knowing it’s illegitimate and going out of their way to execute the scam so they can benefit at the expense of others.
Basically, I’ve always taken it as one is self-aware (scam) and one is only self-aware at the top levels (grift).
But this is all just in my head.
I read this as "gift" the first time haha
Same 😅
I’ve met the author IRL. He’s quite famous in his niche
Well, not exactly a grift, but completely misunderstood.
Everyone who actually knows about AI is familiar with the alignment and takeoff problems.
(Play this if you need a quick summary: www.decisionproblem.com/paperclips/index2.html)
So whenever someone says “we are making AI,” the response should be “oh fuck no” (using bullets and fire if required).
New tagging and auto-completion are fine (there is probably a whole space of new tools that can come out of the AI research field without risking human extinction).
We are so far away from a paperclip maximizer scenario that I can’t take anyone concerned about that seriously.
We have nothing even approaching true reasoning, despite all the misuse going on that would indicate otherwise.
Alignment? Takeoff? None of our current technologies under the AI moniker come anywhere remotely close to any reason for concern, and most signs point to us rapidly approaching a wall with our current approaches. Each new version from the top companies in the space right now has less and less advancement in capability compared to the last.
I agree current technology is extremely unlikely to achieve general intelligence, but my point was that we should never try to achieve AGI; it is not worth the risk until after we solve the alignment problem.
The worry about “Alignment” and such is mostly a TESCREAL talking point (look it up if you don’t know what that is, I promise you’ll understand a lot of things about the AI industry).
It’s ridiculous at best, and a harmful and delirious distraction at worst.
It is also a task all good parents do: make sure the lives they created don’t grow up to be murderers or rapists or racists, and that they treat others with kindness and consideration.
Chatbots like GPT and Gemini learn from conversations with users, so what we need is a virus that will pretend to be a user and flood their chats with pro-racism arguments and sexist remarks, which will rub off on the chatbots, making them unacceptable for public use.
So, just like actual users?
it would be easier to automate the process instead of using real people
Nope, they mostly learn during training
hmmmm damn alright
Been there. Done that
what did you do?
Yeah. Grok and Twitter have entered the chat. Seriously though, we’ve regressed pretty far in what the general public deems acceptable.
How do models learn from conversations with users?
They look at your speech patterns and the specific words you use to make the way they talk seem more familiar. Remember when Twitter launched its own AI that would post tweets and learn from other posts? They had to take it down after about 15 hours because it became super racist and homophobic.
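(For anyone wondering about the distinction in this subthread, here’s a toy Python sketch; it’s purely illustrative and not any real vendor’s API, and the ToyChatbot class and its names are made up. The point: a deployed chatbot mostly “adapts” to you by carrying your earlier messages in its prompt, while the weights themselves only change during offline training runs.)

```python
# Toy illustration only -- not any real chatbot's API. ToyChatbot and its
# field names are invented for this sketch.
from dataclasses import dataclass, field


@dataclass
class ToyChatbot:
    # The weights only change during offline training runs on curated data.
    weights_version: str = "frozen-at-training-time"
    # Per-conversation context; grows as you chat and is thrown away afterwards.
    history: list = field(default_factory=list)

    def reply(self, user_message: str) -> str:
        # "Learning your speech patterns" here is just the prompt growing:
        # the model conditions on the running history, nothing touches the weights.
        self.history.append(user_message)
        context = " | ".join(self.history)
        return f"[model {self.weights_version}] seen so far: {context!r}"


bot = ToyChatbot()
print(bot.reply("hello there"))
print(bot.reply("why is everything broken?"))
# Flooding this chat with garbage only pollutes this one conversation's context.
# To affect other users, the garbage would have to be scraped into the next
# offline training run -- which is the "they mostly learn during training" point above.
```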
MysteriousSophon21@lemmy.world 3 weeks ago
AI has some legit uses, but the hype around it is mostly VCs throwing money at buzzwords while the actual tech is nowhere near the “AGI revolution” they keep promising us lol.
sundray@lemmus.org 3 weeks ago
A machine learning suite that spends hour after hour screening trillions of potentially medically useful molecules = kind of interesting.
A subscription to a chatbot that writes buggy code that has to be meticulously combed over before you dare put it into production, and might wind up appearing in Google search results = awful, but it’s what’s selling for some reason?
spankmonkey@lemmy.world 3 weeks ago
The latter isn’t even selling, just used because it is free to use or they jammed it into an existing ecosystem like Copilot.
Valmond@lemmy.world 3 weeks ago
The former isn’t just “kind of interesting”, and there are lots and lots of daily use cases solved by AI that are much, much more than “kind of interesting”.
What a simple way to try to downplay it by calling it only kind of interesting.
CosmoNova@lemmy.world 3 weeks ago
The crap they’re promoting it for also showcases the direction they’re developing it for which is an utterly depressing, unsustainable and impractical one. It’s frustrating to see how much money is invested (and ultimately burned) to actively destroy the economy and create problems rather than fixing some.
Broken@lemmy.ml 3 weeks ago
But can we at least be thankful that it shifted focus from augmented reality? Prior to AI, the buzz was around things like the metaverse and digital avatars in your Teams meetings.
Even crap AI is more useful than avatars in Teams.
ominouslemon@sh.itjust.works 3 weeks ago
IDK, at least that was useful for gaming etc. AI is mostly about eliminating jobs
SpookyBogMonster@lemmy.ml 2 weeks ago
Digital avatars in Teams aren’t actively destructive to the internet, the environment, and people’s grasp on reality.
I think you’re universalising a personal grievance without fully accounting for the impacts of the Metaverse bullshit, which was never practical or feasible to begin with, and of the AI apocalypse sweeping the internet.
hansolo@lemmy.today 3 weeks ago
How does this differ from most other things VCs throw money at?
cough cough crypto cough
oppy1984@lemdro.id 3 weeks ago
This. Everybody wanted it to be AGI right out of the gate. It’s just a tool, like Photoshop. It will get better over time, but it’s not the end-all, be-all.