AI Companies And Advocates Are Becoming More Cult-Like::How one writer’s trip to the annual tech conference CES left him with a sinking feeling about the future.
Becoming? 🤯
Submitted 9 months ago by L4s@lemmy.world [bot] to technology@lemmy.world
https://www.rollingstone.com/culture/culture-features/ai-companies-advocates-cult-1234954528/
Accelerationism is “F U I’ve got AI” combined with “you’ve got to burn the world down to rebuild it, so let’s start that fire”
Singularitarianism is basically the Christian Rapture, but with superintelligent AI.
These ideas have been around for some time in tech circles.
I regularly see people on Lemmy talk about AGI-run countries and governments as though it’s only a couple of years away. Bruh, it still struggles with fingers. You really think that’s where it will be in a couple of years?
Fashion with passion
This is the best summary I could come up with:
I was watching a video of a keynote speech at the Consumer Electronics Show for the Rabbit R1, an AI gadget that promises to act as a sort of personal assistant, when a feeling of doom took hold of me.
Specifically, about a term first defined by psychologist Robert Lifton in his early writing on cult dynamics: “voluntary self-surrender.” This is what happens when people hand over their agency and the power to make decisions about their own lives to a guru.
At Davos, just days ago, Sam Altman was much more subdued, saying, “I don’t think anybody agrees anymore what AGI means.” A consummate businessman, Altman is happy to lean into that old-time religion when he wants to gin up buzz in the media, but among his fellow plutocrats, he treats AI like any other profitable technology.
As I listened to PR people try to sell me on an AI-powered fake vagina, I thought back to Andreessen’s claims that AI will fix car crashes and pandemics and myriad other terrors.
In an article published by Frontiers in Ecology and Evolution, a research journal, Dr. Andreas Roli and colleagues argue that “AGI is not achievable in the current algorithmic frame of AI research.” One point they make is that intelligent organisms can both want things and improvise, capabilities no model yet extant has generated.
What we call AI lacks agency, the ability to make dynamic decisions of its own accord, choices that are “not purely reactive, not entirely determined by environmental conditions.” Midjourney can read a prompt and return with art it calculates will fit the criteria.
The original article contains 3,929 words, the summary contains 266 words. Saved 93%. I’m a bot and I’m open source!
Rabbit could order pizza for you, telling it “the most-ordered option is fine,” leaving his choice of dinner up to the Pizza Hut website.
I feel like we wouldn’t need the language model as a translation layer between 2 machines, if there were proper APIs everywhere…
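A minimal sketch of what that could look like, assuming a vendor exposed a plain ordering API (the endpoint, fields, and token below are entirely made up for illustration): the “most-ordered option is fine” task becomes a couple of HTTP calls, with no language model sitting between the two machines.

```python
# Hypothetical sketch: order the most-popular item directly via a plain REST API,
# instead of having an LLM drive a website on the user's behalf.
# The endpoint, fields, and token are invented for this example.
import requests

API = "https://api.example-pizza.test/v1"  # hypothetical vendor API
TOKEN = "user-api-token"                   # hypothetical credential
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def order_most_popular() -> str:
    # Ask the (imaginary) API which item is ordered most often...
    menu = requests.get(f"{API}/menu/popular", headers=HEADERS).json()
    top_item = menu["items"][0]["id"]
    # ...then place the order with a second, explicit call.
    resp = requests.post(
        f"{API}/orders",
        headers=HEADERS,
        json={"item_id": top_item, "quantity": 1},
    )
    resp.raise_for_status()
    return resp.json()["order_id"]

if __name__ == "__main__":
    print("Placed order:", order_most_popular())
```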
Oh I see we’ve been watching Behind the Bastards and Steve’s rants.
They joke and speculate a lot, for sure. But what did he make up that has any bearing on his argument?
A few areas, but the one that sticks in my mind is largely around the use cases for something like Rabbit.
Yes, the current iteration is garbage, but then he (and I forget the guest’s name atm) goes off into a rant about how nobody would want a device to plan a vacation for them.
I find his comments often lack a wide perspective. I like him, but he gets crap wrong often.
How to tell when someone hasn’t opened the article
👌
Cagi@lemmy.ca 9 months ago
Tech VCs did the same with blockchain, and the cloud before that. It’s an industry that loves its fads and fashions.