WanderingThoughts@europe.pub 2 weeks ago
Only until AI investor money dries up and vibe coding gets very expensive very quickly. Kinda like how Uber isn’t way cheaper than taxis now.
until AI investor money dries up
Is that the latest term for “when hell freezes over”?
Microsoft steeply lowered sales expectations for its AI team, though they’ve denied it since they got pummelled in their quarterly earnings, and there’s been a lot of news about investors being unhappy with all the circular AI investments pumping those stocks. When the bubble pops (and all signs point to that), investors will flee. You’ll see consolidation, buy-outs, hell, maybe even some bullshit bailouts, but ultimately it has to become a sustainable model, and that means it will either cost developers money or they’ll be pummelled with ads (probably both).
A majority of CEOs are saying their AI spend has not paid off. Those are the primary customers, not your average Joe. MIT reports a 95% failure rate for generative AI projects at companies. Altman still hasn’t turned a profit. There are serious power build-out problems for new AI data centers (let alone the chips needed). It’s an overheated, reactionary market. It’s the dot-com bubble all over again.
There will be some more spending so a good chunk of CEOs can “add value” (FOMO), and then a critical juncture where AI spending contracts sharply as they continue to see no returns, accelerated if the US economy goes tits up. Then the dominoes fall.
Hah, they wish. It’s a business, and they need a return on investment eventually. Maybe if we were in a zero interest rate world again, but even that didn’t last.
Unless I misunderstood, it will eventually dry up? Investors aren’t going to be willing to give money with no returns indefinitely.
I wouldn’t be surprised if that’s only a temporary problem - if it becomes one at all. People are quickly discovering ways to use LLMs more effectively, and open source models are starting to become competitive with commercial models. If we can continue finding ways to get more out of smaller, open-source models, then maybe we’ll be able to run them on consumer or prosumer-grade hardware.
GPUs and TPUs have also been improving their energy efficiency. There seems to be a big commercial focus on that too, as energy availability is quickly becoming a bottleneck.
So far, there’s a serious cognitive step that LLMs just can’t make to be truly productive. They can output code, but they don’t understand what’s going on. They don’t grasp architecture. Large projects don’t fit in their token windows. Debugging anything vague doesn’t work, and fact-checking isn’t something they do well.
They don’t need the entire project to fit in their token windows. There are ways to make them work effectively in large projects. It takes some learning and effort, but I see it regularly in multiple large, complex monorepos.
I still feel somewhat new-ish to using LLMs for code (I was kinda forced to start learning), but when I first jumped into a big codebase with AI configs/docs from people who have been using LLMs for a while, I was kinda shocked. The LLM worked far better than I had ever experienced.
It actually takes a bit of skill to set up a decent workflow/configuration for these things. If you just jump into a big repo that doesn’t have configs/docs/optimizations for LLMs, and/or you haven’t figured out a decent workflow, then they’ll be underwhelming and significantly less productive.
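For context, those repo-level “configs/docs” are often just an instructions file that coding agents read on startup (e.g. an `AGENTS.md` or `CLAUDE.md`, depending on the tool). A minimal sketch of what one might look like, with every path, command, and rule invented purely for illustration:

```markdown
# AGENTS.md — guidance for coding agents (illustrative sketch; all paths hypothetical)

## Architecture
- `services/api/` — HTTP gateway; route handlers live in `services/api/routes/`
- `services/worker/` — background jobs; queue definitions in `services/worker/queues.py`
- Shared domain types live in `libs/core/` — never duplicate them inside a service

## Conventions
- Run `make test` before proposing a change; `make lint` must pass
- New endpoints need an entry in `docs/api.md`
- Do not edit generated files under `gen/`

## Where to look first
- `docs/architecture.md` — high-level component map
- `docs/decisions/` — ADRs explaining why things are the way they are
```

The point is to hand the model the architecture summary and house rules it can’t infer from a token-window-sized slice of the repo.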
It actually takes a bit of skill to set up a decent workflow/configuration for these things
Exactly this. You can’t just replace experienced people with it, and that’s basically how it’s sold.
This sounds a lot like every framework, 20 years ago you could have written that about rails.
Which IMO makes sense because if code isn’t solving anything interesting then you can dynamically generate it relatively easily, and it’s easy to get demos up and running, but neither can help you solve interesting problems.
So far, there’s a serious cognitive step that LLMs just can’t make to be truly productive. They can output code, but they don’t understand what’s going on. They don’t grasp architecture. Large projects don’t fit in their token windows.
There’s a remarkably effective solution for this, that helps both humans and models alike - write documentation.
It’s actually kind of funny how the LLM wave has sparked a renaissance of high-quality documentation. Who would have thought?
High-quality documentation assumes there’s someone with experience working on this. That’s not the vibe coding they’re selling.
Funnily enough, AI itself is a great tool for creating that high-quality documentation fairly efficiently, though obviously not autonomously.
Even complex systems can be documented to a level that makes it easy and much less laborious for the subject experts and architects to comb through for the final version.
Can you cite some sources on the increased efficiency? Also, can you link to these lower priced, efficient (implied consumer grade) GPUs and TPUs?
Oh, sorry, I didn’t mean to imply that consumer-grade hardware has gotten more efficient. I wouldn’t really know about that, but I assume most of the focus is on data centers.
Those were two separate thoughts:
Can you provide evidence that the “more efficient” models are actually more efficient for vibe coding? Results would be the best measure.
It also seems like costs for these models are increasing, and companies like Cursor had to stoop to offering people services below cost (before pulling the rug out from them).
They’ve thought of that as well: soon nobody will be able to afford consumer-grade hardware.
Yeah, true. I’m assuming (and hoping) that consumer-grade hardware becoming less accessible is only a temporary problem.
I have wristwatches with significantly higher CPU, memory, and storage specs than my first few computers, while consuming significantly less energy. I think the current state of LLMs is pretty rough but will continue to improve.
It’s not going to be enough to spend thirty thousand dollars a year per person on it, though, so the current first mover corps are still fucked. I agree that the tech itself has huge possibilities, just not the pets.com ass bullshit that is currently being pushed.
You say “dries up” like that wasn’t always the end goal for rideshare apps. Disrupt, overtake, starve out, hike prices.
With Uber that was indeed the plan and it worked. The same plan was there for AI, but AI isn’t doing so well on the whole overtake and starve out thing. They’ll have to jump directly to hiking prices. So it’s only kinda like Uber.
You sure?
They are targeting the next generation, which will not know how to search the internet without an AI chatbot.
We all know that most parents will just let the “Digital Natives” do their thing, and/or that kids of a certain age definitely know better than their parents, who fear this “technological leap” and don’t get it anyway.
blaggle42@lemmy.today 2 weeks ago
This.