WanderingThoughts@europe.pub 8 hours ago
Only until AI investor money dries up and vibe coding gets very expensive very quickly. Kinda like how Uber isn’t way cheaper than taxis now.
until AI investor money dries up
Is that the latest term for “when hell freezes over”?
Hah, they wish. It’s a business, and they need a return on investment eventually. Maybe if we were in a zero interest rate world again, but even that didn’t last.
Microsoft steeply lowered expectations for its AI sales team, though they have denied this since they got pummelled in their quarterly results, and there’s been a lot of news about how investors are unhappy with all the circular AI investments pumping those stocks. When the bubble pops (and all signs point to that), investors will flee. You’ll see consolidation, buy-outs, hell, maybe even some bullshit bailouts, but ultimately it has to be a sustainable model, and that means it will cost developers or they will be pummelled with ads (probably both).
A majority of CEOs are saying their AI spend has not paid off. Those are the primary customers, not your average joe. MIT reports a 95% failure rate for generative AI projects at companies. Altman still hasn’t turned a profit. There are serious power build-out problems for new AI data centers (let alone the chips needed). It’s an overheated, reactionary market. It’s the Dot Com bubble all over again.
There will be some more spending to make sure a good chunk of CEOs “add value” (FOMO), and then a critical juncture where AI spending contracts sharply as they continue to see no returns, accelerated if the US economy goes tits up. Then the dominoes fall.
I wouldn’t be surprised if that’s only a temporary problem - if it becomes one at all. People are quickly discovering ways to use LLMs more effectively, and open source models are starting to become competitive with commercial models. If we can continue finding ways to get more out of smaller, open-source models, then maybe we’ll be able to run them on consumer or prosumer-grade hardware.
GPUs and TPUs have also been improving their energy efficiency. There seems to be a big commercial focus on that too, as energy availability is quickly becoming a bottleneck.
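To make the “run it yourself” idea concrete, here’s a minimal sketch of calling a small open-weight model on your own machine. It assumes a local Ollama server is running and uses an example model name; both are assumptions on my part, not a specific recommendation.

```python
import requests

# Minimal sketch: query a small open-weight model served locally by Ollama.
# Assumes `ollama serve` is running on the default port and that a model like
# "llama3.2:3b" has already been pulled -- swap in whatever model you have.
def ask_local_model(prompt: str, model: str = "llama3.2:3b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Write a Python function that reverses a string."))
```

Whether this is actually cheap enough (in hardware and power) for day-to-day coding work is exactly the open question being argued here.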
So far, there’s a serious cognitive step that LLMs just can’t make to be productive. They can output code, but they don’t understand what’s going on. They don’t grasp architecture. Large projects don’t fit in their token window. Debugging something vague doesn’t work. Fact-checking isn’t something they do well.
So far, there’s a serious cognitive step that LLMs just can’t make to be productive. They can output code, but they don’t understand what’s going on. They don’t grasp architecture. Large projects don’t fit in their token window.
There’s a remarkably effective solution for this that helps humans and models alike: write documentation.
It’s actually kind of funny how the LLM wave has sparked a renaissance of high-quality documentation. Who would have thought?
High-quality documentation assumes there’s someone with experience working on this. That’s not the vibe coding they’re selling.
They don’t need the entire project to fit in their token windows. There are ways to make them work effectively in large projects. It takes some learning and effort, but I see it regularly in multiple large, complex monorepos.
I still feel somewhat new-ish to using LLMs for code (I was kinda forced to start learning), but when I first jumped into a big codebase with AI configs/docs from people who have been using LLMs for a while, I was kinda shocked. The LLM worked far better than I had ever experienced.
It actually takes a bit of skill to set up a decent workflow/configuration for these things. If you just jump into a big repo that doesn’t have configs/docs/optimizations for LLMs, and/or you haven’t figured out a decent workflow, then they’ll be underwhelming and significantly less productive.
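As a rough illustration of what those repo configs/docs are doing (not any particular tool’s actual setup), the idea is that the model never sees the whole project at once: you feed it a curated slice, like the architecture doc plus only the files relevant to the task. The file names and the token estimate below are hypothetical placeholders.

```python
from pathlib import Path

# Rough sketch: build a curated prompt for a large repo instead of pasting
# everything into the context window. "ARCHITECTURE.md" and the
# 4-characters-per-token estimate are hypothetical placeholders.
MAX_TOKENS = 30_000

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # very rough heuristic

def build_context(repo: Path, relevant_files: list[str], task: str) -> str:
    parts = [f"# Task\n{task}"]
    arch = repo / "ARCHITECTURE.md"
    if arch.exists():
        parts.append(f"# Architecture overview\n{arch.read_text()}")
    budget = MAX_TOKENS - sum(estimate_tokens(p) for p in parts)
    for name in relevant_files:
        path = repo / name
        if not path.exists():
            continue
        text = path.read_text()
        cost = estimate_tokens(text)
        if cost > budget:
            break  # stop before blowing the context window
        parts.append(f"# File: {name}\n{text}")
        budget -= cost
    return "\n\n".join(parts)

# Example: build_context(Path("."), ["src/billing/invoice.py"], "Fix rounding bug")
```

Real tools are fancier about picking the relevant files, but the point stands: the project doesn’t have to fit in the token window, only the slice that matters for the task.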
It actually takes a bit of skill to set up a decent workflow/configuration for these things
Exactly this. You can’t just replace experienced people with it, and that’s basically how it’s sold.
This sounds a lot like every framework; 20 years ago you could have written that about Rails.
Which IMO makes sense, because if the code isn’t solving anything interesting, then you can generate it dynamically with relative ease, and it’s easy to get demos up and running, but neither can help you solve interesting problems.
Can you cite some sources on the increased efficiency? Also, can you link to these lower priced, efficient (implied consumer grade) GPUs and TPUs?
Oh, sorry, I didn’t mean to imply that consumer-grade hardware has gotten more efficient. I wouldn’t really know about that, but I assume most of the focus is on data centers.
Those were two separate thoughts:
1. Can you provide evidence that the “more efficient” models are actually more efficient for vibe coding? Results would be the best measure.
2. It also seems like costs for these models are increasing, and companies like Cursor had to stoop to offering people services below cost (before pulling the rug out from under them).
blaggle42@lemmy.today 8 hours ago
This.