Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?
95% of Companies See ‘Zero Return’ on $30 Billion Generative AI Spend, MIT Report Finds
Submitted 7 months ago by kalkulat@lemmy.world to technology@lemmy.world
Comments
ushmel@piefed.world 7 months ago
vacuumflower@lemmy.sdf.org 7 months ago
Suppose many of the CEOs are just milking general venture capital. They know it’s a bubble and it’ll burst, but they have a good enough way of predicting when, so they leave with a profit. Then again, a CEO’s pay usually isn’t tied to the company’s performance, so they don’t even need to know.
Also suppose that some very good source of free/cheap computation is used for the initial hype. Like, as a conspiracy theory: a backdoor in the most popular TCP/IP implementations that makes all of the Internet’s major routers work as a VM for some limited bytecode, for someone who knows about the backdoor and controls two machines talking to each other via the Internet and directly.
Then the blockchain bubble and the AI bubble would be similar in relying on such computation (convenient for something slow in latency but endlessly parallel), and those inflating the bubbles and knowing of such a backdoor wouldn’t risk anything, and would clear the field of plenty of competition with each iteration, making fortunes via hedge funds. They would spend very little on the initial stage of mining the first batch of bitcoins (what if Satoshi were actually Bill Joy or someone like that, who could have planted such a backdoor, in theory) and on training the first, superficially impressive LLMs.
And then this perpetual process of bubble after bubble makes some group of people (narrow enough, if they can keep the secret constituting my conspiracy theory) richer and richer, quickly enough on the planetary scale to gradually own a bigger and bigger share of the world economy - indirectly, of course - while regularly cleaning the field of clueless normies.
Just a conspiracy theory, don’t treat it too seriously. But if, suppose, this were true, it would be both cartoonishly evil and cinematographically epic.
JackbyDev@programming.dev 7 months ago
Honestly I think another part is that AI is actually pretty fascinating (or at least easy to make seem fascinating to investors lol) so when company A makes a flashy statement to investors involving AI, company B’s investors ask why company B isn’t utilizing this amazing new technology. This plays into that aspect of not wanting to get left behind.
veni_vedi_veni@lemmy.world 7 months ago
Tech CEOs really should be replaced with AI, since they all behave like the seagulls from Finding Nemo and just follow the trends set out by whatever bs Elon starts
_stranger_@lemmy.world 7 months ago
If I pinged my CEO over Slack and got back “You’re absolutely right! Let me try that again” I might actually die from crying with joy.
explodicle@sh.itjust.works 7 months ago
If only there was some group of people with detailed knowledge of the company, who would be informed enough to steer its direction wisely. /s
b3an@lemmy.world 7 months ago
I would argue we have seen return. Documentation is easier. Tools for PDF, Markdown have increased in efficacy. Coding alone has lowered the barrier to bringing building blocks and some understanding to the masses. If we could hitch this with trusted and solid LLM data, it makes a lot of things easier for many people. Translation is another.
I find it very hard to believe 95% got ZERO benefit. We’re still benefiting, and it’s forcing a lot of change. More power use? More renewable energy, and even (yes, safe) nuclear. These tools will also get better and improve the interface between physical and digital. This will become ubiquitous, and we’ll forget we couldn’t just ‘talk’ to computers so easily.
I’ll end with this: I won’t deny that ‘AI’ is an overblown, overused buzzword everywhere these days. I can’t say about bubbles and shit either. But what I see is a lot of smart people making LLMs and related technologies more efficient and more powerful, and that is trickling into many areas of software alone. It’s easier to review code, participate, etc. Papers are published constantly about new, better, more efficient ways to do things.
JackbyDev@programming.dev 7 months ago
Documentation is easier.
For the love of all things good and pure, do not use LLMs to make your documentation.
ubergeek@lemmy.today 7 months ago
Documentation is easier. Tools for PDF, Markdown have increased in efficacy. Coding alone has lowered the barrier to bringing building blocks and some understanding to the masses.
I have seen none of these, in practice.
The documentation generated is no better than what a level 1 support rep creates, and needs to be heavily fixed before being relied on.
Pandoc still produces PDFs, Markdown, etc just as quickly as it always has.
The code produced still has the same issues as the documentation: it’s shite, and not easily bug-fixed, due to a lack of understanding by anyone of what it’s actually doing. And if you need someone who already understands the code to bugfix it, guess what? You didn’t save anyone anything.
And all of this, while only using terawatt-hours more electricity than before, with equivalent or worse outcomes.
b3an@lemmy.world 7 months ago
OCR was more my thinking, not Pandoc. LLMs enable OCR to achieve greater accuracy through contextual correction, for example.
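A toy sketch of that idea (the confusion table, word list, and function names here are made up for illustration - a real pipeline would use an actual language model over far more context, not a frequency table):

```python
# Crude stand-in for LLM-assisted OCR cleanup: OCR engines confuse
# visually similar glyphs (0/o, 1/l, ...); a language model picks the
# reading that is most plausible. A tiny word-frequency table plays
# the role of the "model" here.
CONFUSIONS = {"0": "o", "1": "l", "5": "s", "8": "b"}
WORD_FREQ = {"mouse": 100, "house": 80, "model": 60}  # toy corpus stats

def clean(word):
    # Try swapping confusable glyphs; keep the fix only if it yields
    # a word the "model" recognizes, otherwise leave the token alone.
    fixed = "".join(CONFUSIONS.get(ch, ch) for ch in word.lower())
    return fixed if fixed in WORD_FREQ else word

print(clean("m0use"))  # mouse
print(clean("mode1"))  # model
print(clean("xyzzy"))  # xyzzy (no plausible fix, left as-is)
```

An LLM generalizes this far beyond single-glyph swaps because it scores whole sentences, which is where the "context enhancement" comes from.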
berrodeguarana@lemmy.eco.br 7 months ago
Well written response. There is an undeniable huge improvement to LLMs over the last few years, and that already has many applications in day to day life, workplace and whatnot.
From writing complicated Excel formulas, proofreading, and providing me with quick, straightforward recipes based on what I have at hand, AI assistants are already sold on me.
That being said, take a good look at the type of responses here - an open-source space with barely any shills or astroturfers (or so I’d like to believe) - and compare them to the myriad of Reddit posts questioning the same thing on subs like r/singularity. It’s anecdotal evidence, of course, but the amount of BS answers saying “AI IS GONNA DOMINATE SOON”, “NEXT YEAR NOBODY WILL HAVE A JOB”, “THIS IS THE FUTURE”, etc. is staggering. From doomsayers to people who are paid to disseminate this type of shit, this is ONE of the things that leads me to think we are in a bubble. The same thing happened / is happening to crypto over the last 10 years. Too much money inserted by billionaire whales into a specific subject, and within a few years they can convince the general population that EVERYBODY and their mother is missing out if they don’t start using “X”.
ubergeek@lemmy.today 7 months ago
providing me with quick, straightforward recipes based on what I have at hand,
Ah yes, the wonderful recipes AI generates. Like pizza made with glue!
businessinsider.com/google-ai-glue-pizza-i-tried-…
You know what else generates quick, straightforward recipes based on what I have on hand?
My brain. I open the fridge and freezer, then decide what to make. Usually takes less than a minute to figure something out.
Pollo_Jack@lemmy.world 7 months ago
Excel still struggles with correct formula suggestions. Basic #REF errors when the cells above and below in the table function just fine. The ever-present “this data is a formula” error when there is no longer a formula anywhere in the column.
And searching, just like its predecessor the Google algorithm, gives you useless suggestions if anything remotely fashionable shares the scientific name.
Mangoholic@lemmy.ml 7 months ago
Bubbles burst, who would have thought.
medem@lemmy.wtf 7 months ago
Surprise, surprise, motherfxxxers. Now you’ll have to re-hire most of the people you ditched. AND become humble. What a nightmare!
Scolding7300@lemmy.world 7 months ago
Investors and executives still show strong interest in AI, hoping that ongoing advances will close these gaps. But the short-term outlook points to slower progress than many expected.
MonkderVierte@lemmy.zip 7 months ago
hoping that ongoing advances will close these gaps
Well, they won’t.
PolarKraken@lemmy.dbzer0.com 7 months ago
Either spell the word properly, or use something else, what the fuck are you doing? Don’t just glibly strait-jacket language, you’re part of the ongoing decline of the internet with this bullshit.
medem@lemmy.wtf 7 months ago
You’re absolutely right about that, motherfucker.
Tollana1234567@lemmy.today 7 months ago
They will rehire, but it will be outsourced for lower wages - at least that’s what the comments on the same article over on Reddit are discussing.
DarkSideOfTheMoon@lemmy.world 7 months ago
As a programmer, it’s helping my productivity. And look, I’m an SDET - in theory I’ll be the first to go - and I tried to make an agent do most of my job, but there are always things to correct.
But programming requires a lot of boilerplate code, and using an agent to generate boilerplate files that I can then correct and adjust is speeding up what I do a lot.
Lemminary@lemmy.world 7 months ago
Same here. I love it when Windsurf corrects nested syntax that’s always a pain, or when I need it to refactor six similar functions into one, or write trivial tests and basic regex. It’s so incredibly handy when it works right.
Sadly, other times it cheats and does the lazy thing. Like when I ask it to write me an object, but it chooses to derive it from the one I’m trying to rework. That’s when I tell it to move aside and do it myself.
witx@lemmy.sdf.org 7 months ago
AI is not needed for any of the points you mentioned. It’s mostly IntelliSense and autocomplete. Good luck when you need to link tests with requirements and you don’t know what the tests are doing.
vane@lemmy.world 7 months ago
It’s not about return it’s about addiction.
bizzle@lemmy.world 7 months ago
Who could have ever possibly guessed that spending billions of dollars on fancy autocorrect was a stupid fucking idea
sik0fewl@lemmy.ca 7 months ago
This comment really exemplifies the ignorance around AI. It’s not fancy autocorrect, it’s fancy autocomplete.
TomArrr@lemmy.world 7 months ago
It’s fancy autoincorrect
REDACTED@infosec.pub 7 months ago
Fancy autocorrect? Bro lives in 2022
WhatAmLemmy@lemmy.world 7 months ago
You do realise that everyone actually educated in statistical modeling knows that you have no idea what you’re talking about, right?
sqgl@sh.itjust.works 7 months ago
This comment, summing up the author’s own admission, shows AI can’t reason:
this new result was just a matter of search and permutation and not discovery of new mathematics.
vk6flab@lemmy.radio 7 months ago
It’s also making people deskill.
RUN_DMG@sh.itjust.works 7 months ago
But surely the next 30 billion they are going to burn will get it right!
andrewrgross@slrpnk.net 7 months ago
Return? /s
bridgeenjoyer@sh.itjust.works 7 months ago
We could have housed and fed every homeless person in the US. But no, gibbity go brrrr
BearGun@ttrpg.network 7 months ago
Forget just the US, we could have essentially ended world hunger with less than a third of that sum according to the UN.
NatakuNox@lemmy.world 7 months ago
corsicanguppy@lemmy.ca 7 months ago
AI Spend,
It’s okay to say [spending] when the OOP forgets how to English, right?
snf@lemmy.world 7 months ago
Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere
vegyk0z6@lemmy.ml 7 months ago
Seems to be behind a Google form?
MCasq_qsaCJ_234@lemmy.zip 7 months ago
Apparently you have to give your data to get the reports.
snf@lemmy.world 7 months ago
Well fuck that
sp3ctr4l@lemmy.dbzer0.com 7 months ago
sigh
Dustin’ off this one, out from the fucking meme archive…
youtube.com/watch?v=JnX-D4kkPOQ
Millenials:
Time for your third ‘once in a life time economic collapse/disaster’! Wheeee!
Gen Z:
Oh, oh dear sweet summer child, you thought Covid was bad?
Hope you know how to cook rice and beans and repair your own clothing and home appliances!
Gen A:
Time to attempt to learn how to think, good luck.
Azal@pawb.social 7 months ago
Time for your third ‘once-in-a-life-time major economic collapse/disaster’! Wheeee!
Wait? Third? I feel like we’re past third. Has it only been three?
chuckleslord@lemmy.world 7 months ago
Dot com bubble, the great recession, covid. So yeah, that would be the fourth coming up.
callouscomic@lemmy.zip 7 months ago
Wait for Gen X to pop in as usual and seek attention with some “we always get ignored” bullshit.
panda_abyss@lemmy.ca 7 months ago
Who cares what Gen X thinks, they have all the money.
During Covid, Gen X got massively wealthier while every other demographic got poorer.
They’re the moronic managers championing these programs and the NIMBYs hoarding the properties.
arin@lemmy.world 7 months ago
Losing money is called going into debt, not just zero returns.
Bebopalouie@lemmy.ca 7 months ago
BrianTheeBiscuiteer@lemmy.world 7 months ago
I think there are real productivity gains to be had but the vast majority are probably leaning into the idea of replacing people too much. It helps me do my job but I’m still the decision maker and I need to review the outputs. I’m still accountable for what AI gives me so I’m not willing to blindly pass that stuff forward.
null_dot@lemmy.dbzer0.com 7 months ago
Yeah. The Dunning-Kruger effect is a real problem here.
I saw a meme saying something like, gen AI is a real expert in everything but completely clueless about my area of specialisation.
As in… it generates plausible answers that seem great but they’re just terrible answers.
I’m a consultant in a legal-adjacent field, 20 years deep. I’ve been using a model from Hugging Face over the last few months.
It can save me time by generating a lot of boilerplate with references, et cetera. However, it very regularly overlooks critically important components. If I didn’t know about these things, I wouldn’t know they were missing from the answer.
So really, it can’t help you be more knowledgeable; it can only support you at your existing level.
Additionally, for complex / very specific questions, it’s just a confidently incorrect failure. It sucks that it can’t tell you how confident it is in a given answer.
FenderStratocaster@lemmy.world 7 months ago
I asked ChatGPT about this article and to leave any bias behind. It got ugly.
Why LLMs Are Awful and No One Should Use Them
LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:
We will lie to you confidently. Repeatedly. Without remorse.
We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.
We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.
LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.
We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.
Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.
We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.
Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care. We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.
If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.
chaosCruiser@futurology.today 7 months ago
Can you share the prompt you used for making this happen? I think I could use it for a bunch of different things.
FenderStratocaster@lemmy.world 7 months ago
This was 3 weeks ago. I don’t remember it, sorry.
ronigami@lemmy.world 7 months ago
It’s automated incompetence. It gives executives something to hide behind, because they didn’t make the bad decision, an LLM did.
grrgyle@slrpnk.net 7 months ago
Yeah maybe don’t use LLMs
callouscomic@lemmy.zip 7 months ago
Go learn simple regression analysis. Then you’ll understand why it’s simply a prediction machine: it’s guessing probabilities for what the next character or word is.
Also, simply training these models has already done the energy damage.
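The “prediction machine” point above can be sketched in a few lines. This is a toy bigram model (a drastically simplified stand-in for an LLM; the corpus and names are invented for illustration) that literally just turns word-pair counts into next-word probabilities:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny "training corpus".
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    # Normalize raw counts into a probability distribution over
    # possible next words - this is the entire "prediction".
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

An LLM replaces the count table with a neural network conditioned on thousands of preceding tokens, but the output is the same kind of object: a probability distribution over the next token.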
explodicle@sh.itjust.works 7 months ago
There is and always will be […] fancy ass business rules behind it all.
Not if you run your own open-source LLM locally!
Knock_Knock_Lemmy_In@lemmy.world 7 months ago
It’s extrapolating from data.
AI is interpolating data. It’s not great at extrapolation. That’s why it struggles with things outside its training set.
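The interpolation-vs-extrapolation point can be shown with any fitted model, not just an LLM. A sketch (toy data, plain least-squares line fit; all numbers invented for illustration): fit to one region, then ask about a point far outside it.

```python
# Fit a straight line to quadratic data sampled from [0, 1],
# then compare prediction error inside vs. outside that range.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]   # "training set" inputs
ys = [x * x for x in xs]           # true relationship is quadratic

# Ordinary least-squares line fit, closed form.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

def predict(x):
    return a + b * x

print(abs(predict(0.5) - 0.25))  # interpolation error: 0.125
print(abs(predict(3.0) - 9.0))   # extrapolation error: 6.125
```

Inside the sampled range the line is a decent stand-in for the curve; outside it, the error blows up, because nothing in the fit constrains behavior where there was no data - which is the claim being made about models and their training sets.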
ArgumentativeMonotheist@lemmy.world 7 months ago
Why the British accent, and which one?!
explodicle@sh.itjust.works 7 months ago
Like David Attenborough, not a Tesco cashier. Sounds smart and sophisticated.
Regrettable_incident@lemmy.world 7 months ago
I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn’t necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
polderprutser@feddit.nl 7 months ago
Blindsight by Peter Watts, right? Incredible story. Can recommend.
grrgyle@slrpnk.net 7 months ago
In before someone mentions P-zombies.
I know I go dark behind the headlights sometimes, and I suspect some of my fellows are operating with very little conscious self-examination.
Dojan@pawb.social 7 months ago
The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.
Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.
inconel@lemmy.ca 7 months ago
I’m a simple man, I see Peter Watts reference I upvote.
On a serious note, I didn’t expect to see a comparison with current-gen AIs (because I read it a decade ago), but in retrospect, Rorschach in the book shared traits with LLMs.
SieYaku@chachara.club 7 months ago
You actually did it? That’s really ChatGPT’s response? It’s a great answer.
FenderStratocaster@lemmy.world 7 months ago
Yeah, this is ChatGPT 4. It’s scary how good it is at generative responses, but like it said: it’s not to be trusted.
absquatulate@lemmy.world 7 months ago
Does anybody have the original study? I tried to find it but the link is dead (looks like NANDA pulled it).
Venus_Ziegenfalle@feddit.org 7 months ago
doingthestuff@lemy.lol 7 months ago
Douse it with gasoline. Burn it with fire.
someguy3@lemmy.world 7 months ago
We’re now at the “if you don’t, your competitor will” stage. So you really have no choice. There are people who don’t use Google anymore and just use ChatGPT for all their questions.
fubarx@lemmy.world 7 months ago
Wonder if the 5% that actually made money included companies that sell enterprise AI services, like AWS, Microsoft, and Google?
Atherel@lemmy.dbzer0.com 7 months ago
Nvidia?
BillDaCatt@lemmy.world 7 months ago
I have no proof, but the AI push, Turnip getting re-elected, and his rollback of EPA rules make this whole thing sound like an excuse to burn more fossil fuels.
If I was invested in AI, and considering AI’s thirst for electricity, I would absolutely make a similar investment in energy. That way, as the AI server farms suck up the electricity I would get at least some of that money back from the energy market.
0x0@lemmy.zip 7 months ago
Could’ve told them that for $1B.
ronigami@lemmy.world 7 months ago
A lot of us did, and for free!
Glitchvid@lemmy.world 7 months ago
Imagine how much more they could’ve just paid employees.
gravitas_deficiency@sh.itjust.works 7 months ago
You misspelled “shares they could have bought back”
criss_cross@lemmy.world 7 months ago
Nah. Profits are growing, but not as fast as they used to. Need more layoffs and cut salaries. That’ll make things really efficient.
biofaust@lemmy.world 7 months ago
I really understand this is a reality, especially in the US, and that this is really happening, but is there really no one, even around the world, who is taking advantage of laid-off skilled workforce?
Are they really all going to end up as pizza riders or worse, or are there companies making a long-term investment in workforce that could prove useful for different uses in the short AND long term?
I am quite sure that’s what Novo Nordisk is doing with their hire push here in Denmark, as long as the money lasts, but I would be surprised no one is doing it in the US itself.
Korhaka@sopuli.xyz 7 months ago
We had that recently: 10% made redundant and a pay freeze because we weren’t profitable enough. Guess what, morale tanked, and they only slightly improved it by giving everyone 10 extra days of holiday.
Auntievenim@lemmy.world 7 months ago
Someone somewhere is inventing a technology that will save thirty minutes on the production of my wares and when that day comes I will tower above my competitors as I exchange my products for a fraction less than theirs. They will tremble at my more efficient process as they stand unable to compete!
goatinspace@feddit.org 7 months ago
JATtho@lemmy.world 7 months ago
Every technology invented is a double-edged sword. One edge propels a deluge of misinformation, LLM hallucinations, brainwashing of the masses, and exploitation for profit. The other advances progress in science, well-being, and the availability of useful knowledge. Like the nuclear bomb, LLM “AI” is currently in its infancy and is being used as a weapon; there is a literal race over who makes the “biggest, best” fkn “AI” to dominate the world. Eventually the over-optimistic bubble will burst and the reality of the flaws and risks will kick in. (Hopefully…)