Once these companies have to start charging what it really costs to maintain and run these huge models, the number of use cases will shrivel.
vacuumflower@lemmy.sdf.org 1 day ago
Models are becoming more optimized. I’ve recently tried LFM2.5, the small version, and it’s ridiculously close in usefulness to Qwen3.5, for example. Or RNJ-1.
As for maintaining them, meaning keeping datasets up to date: well, that’s sort of expensive, but they were assembling those datasets as a side effect of their main businesses anyway.
So this is not what’ll kill them. Their size will: these are very big companies with lots of internal corruption and inefficiency pulling them down. The newer AI companies that I think are going to survive are centered around specific products. Some will die, but I’d expect LiquidAI or Anthropic or the like to still be around some time after the crash.
The crash might coincide with a bubble burst, but notice how this family of technologies really is delivering results. Instead of using a bunch of specialized applications, people are asking LLMs and often getting good-enough answers. LLM agents can retrieve data from web services, perform operations, and assist in using tools.
You shouldn’t look at the big ones in the cloud, but rather at what value local LLMs give you for the energy spent. Right now it’s not that good, but it’s honestly approaching good, and I don’t feel like they’ve stopped getting better. Human time is still more expensive. The tools are there and are being improved, and humans are slowly gaining experience in using them, which makes them more efficient at various tasks.
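To make “value for energy spent” concrete, here’s a back-of-the-envelope sketch. The wattage, throughput, and electricity price are round numbers I made up for illustration, not measurements of any real setup:

```python
# Rough energy cost per token for a local LLM.
# All numbers below are illustrative assumptions, not measurements.

gpu_watts = 200            # assumed draw while generating
tokens_per_second = 30     # assumed throughput for a small quantized model
kwh_price = 0.30           # assumed electricity price, $/kWh

joules_per_token = gpu_watts / tokens_per_second          # watts = joules/second
kwh_per_million_tokens = joules_per_token * 1e6 / 3.6e6   # 1 kWh = 3.6 MJ
cost_per_million_tokens = kwh_per_million_tokens * kwh_price

print(f"{joules_per_token:.1f} J/token")
print(f"~${cost_per_million_tokens:.2f} of electricity per million tokens")
```

With those assumed numbers it comes out to well under a dollar of electricity per million tokens; whether that is “good value” depends entirely on what the output is worth to you.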
They are to all kinds of reference and knowledge tools what Google was to search.
And there’s one just amazing thing about these models: they are self-contained, even if some can use tools to access external sources. Our corporate overlords spent 20 years building a dependent, networked world, only to break it by popularizing a technology that almost neuters it. They probably thought they were reaping the crops of the web for themselves; instead they taught everyone that you don’t have to eat at the diner, you can take the food home.
CileTheSane@lemmy.ca 1 day ago
Only people who know very little about a field feel like AI “is good enough” for that field. Experts in a field will universally say that AI is shit in their field.
LLMs are the extreme example of “the dumb man’s idea of a smart man.” It sounds like it knows what it’s talking about so people ignorant on the subject don’t know it’s full of shit.
jj4211@lemmy.world 1 day ago
I agree with you, and I consider it similar to the “Hollywood effect”: ask any expert to review typical depictions of their expertise in film and TV, and they will mostly groan at inaccuracies that most people won’t catch.
Problem is, if you compare the works that do it “right” to the ones that do it “wrong”, there’s no correlation between doing it right and being more popular; the horribly wrong depictions get plenty of ratings regardless.
Now one might reasonably argue, “sure, but that’s purely fiction anyway; if it had real consequences, that would actually matter.” Except it constantly happens in real-world situations.
My work colleague picked up his car from some mechanic chain after having it “fixed” and took us to lunch. There was this awful squeal as he started the car, and I asked why it was making that noise right after getting fixed. He said, “Oh, the staff told me that cars just sound like that after a repair until the parts break in,” and that bullshit worked to get him to pay and walk out the door. I asked if I could take a quick look under his hood, and there was a flashlight wedged against a belt. He just laughed it off and said “hey, free flashlight, thanks for figuring that out,” and a few months later he mentioned going back to the exact same place for something else.
A few days ago I went to a hardware store; their site said they had the item, but under location it said “see associate”. The first associate checked his device and didn’t understand what the deal was, so he said, “Oh, go over there and ask John, he knows all this stuff.” OK, so I walk over to John, who takes one glance and confidently says, “Oh yeah, that stuff is locked up in a cage in the back row; just go up to the cage and press the button to get someone to get it.” I think, “OK, good, a guy who really knows his stuff, and the other staff recognize him for it.” I roll up to the cage, look in, and realize, “Uh oh, this is not the type of stuff I’m looking for; he made a pretty amateur mistake,” but I push the button anyway. I show my phone to the guy who comes up and say that “John” told me it would be here, but I can’t see it. At the mention of “John” the guy clearly rolls his eyes, and it’s abundantly clear that John’s “expertise” is a repeated annoyance for him. The actual answer: they keep that stuff in the back, and the employees are all supposed to see the notation in their devices telling them this, but none of them seem to figure it out, and John just keeps sending people to his department instead.
This has also come up with the use of AI. I offered that my group could crank out a quick tool to handle something that could be a problem, and one of the people said, “In this new era, we don’t need you for this quick tool; I just asked Claude and it made me this application.” So I tested it and reported that (a) it didn’t actually work: it produced output that looked right, but the actual tool wouldn’t accept it because it didn’t use the right syntax; and (b) even if it had worked, it faked authentication and had a huge vulnerability. He just laughed it off and said “guess LLMs sometimes aren’t perfect yet.” No consequences for what could have been a disastrous tool, no real change in stance on using LLMs, and I’m pretty sure the audience found the report that it didn’t work to be an annoying buzzkill and were rooting for the LLM to do all the work instead. People who need your expertise are desperate to not need it anymore, are willing to believe anything that enables that, and will accept a lot of badness just to not be dependent on you.
AI produces what is seen as a plausible narrative, and a plausible narrative can win even when the facts are against it. To be very charitable, a quick, usually-correct answer is indeed frequently “good enough” for a lot of purposes, and LLMs’ speed at generating output can’t be beat.
Croquette@sh.itjust.works 1 day ago
The problem is that there are a lot of these people who think LLMs are good enough, and many of them are in decision-making positions, so we’re getting raked no matter what.
ieGod@lemmy.zip 1 day ago
A lot of fields don’t require doctorate-level expertise to render effective business services. I’ve seen firsthand companies replace thousands of employees and shutter divisions because their AI counterpart has been doing the job quantitatively as well, and faster. Perfect is the enemy of good enough in most cases, as they say.
Lemmy is filled to the brim with LLM haters, but you’re not only a minority; you’re probably also closing doors on the future trajectory of tech in business.
CileTheSane@lemmy.ca 1 day ago
“Think of the shareholder value of firing all these people!”
Also, I call bullshit. I’ve seen many cases of companies replacing their staff with AI, then a month later desperately trying to hire staff again, because the AI is good at *looking like* it can do the job but turns out to be complete shit once it’s actually in use.
hanrahan@slrpnk.net 21 hours ago
Perhaps, but one example: Commonwealth Bank (the largest Australian bank, and in the top 10 worldwide AFAIK) said they were dismissing thousands of staff because of AI; it turned out they were just offshoring. The latter is seen positively, apparently; the former not so much.
vacuumflower@lemmy.sdf.org 1 day ago
“A bad craftsman blames his tools” is what I’d answer to this.
CileTheSane@lemmy.ca 1 day ago
I agree anyone using an LLM is a bad craftsman, because they’re using a hammer to drive in a screw.
moto@programming.dev 1 day ago
I like local LLMs as much as the next person, but the issue is that they don’t scale the way companies need them to.
As a personal assistant? Sure, I agree. They’re useful at times. But as soon as you need multiple to run simultaneously you’re gonna hit resource issues.
What Oracle and others were banking on is engineers and others running lots of agents in parallel, composing different things together, or one input that multiple server-side agents take and execute numerous tasks on. That’s something you can’t run on an individual machine right now, and with the way these models currently work, I don’t envision you will anytime soon.
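A rough sketch of why parallel agents blow past a single machine: the model weights can be shared, but each concurrent session needs its own KV cache. The dimensions below are assumptions loosely shaped like a 7B-class model with grouped-query attention, not specs of any real model:

```python
# Back-of-envelope VRAM estimate for N agents sharing one local model.
# All model dimensions here are hypothetical assumptions.

GIB = 1024**3

def kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128,
                   context_len=32_768, bytes_per_elem=2):
    # K and V tensors, per layer, per token, at fp16.
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

weights_bytes = 7e9 * 0.5        # 7B params at ~4-bit quantization, shared once
per_agent = kv_cache_bytes()     # each concurrent session holds its own cache

for n_agents in (1, 4, 16, 64):
    total = weights_bytes + n_agents * per_agent
    print(f"{n_agents:3d} agents -> ~{total / GIB:.1f} GiB")
```

Under these assumptions one session fits a consumer GPU comfortably, but 64 parallel full-context sessions need hundreds of GiB, which is exactly the gap the cloud vendors are selling into.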
vacuumflower@lemmy.sdf.org 18 hours ago
There are lightweight models as good as some heavier ones. It’s a bit like Intel’s advertised tick-tock process: heavy, memory-hungry models are the “tick”, but there’s a “tock”. Say, the light version of the “lfm2.5-thinking” model in the ollama repository seems almost as good as qwen3.5 to me, except it’s very lightweight and lightning-fast by comparison.
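If you want to check this yourself, here’s a minimal sketch against ollama’s local REST API. The endpoint and JSON fields are the documented ollama API; the model tag is just the one named above and may differ in your local repository:

```python
# Minimal sketch: one prompt against a local ollama server.
# Assumes ollama is running on its default port and the model
# tag below has been pulled; adjust it to what you actually have.
import json
import urllib.request

payload = json.dumps({
    "model": "lfm2.5-thinking",   # tag named above; may differ locally
    "prompt": "Summarize the tick-tock analogy in one sentence.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```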
These things are being optimized. It’s just that in the market capture phase nobody bothered.
As for them not being used correctly: yeah, absolutely. My idea of their proper use is a graph-based system where each node is processed by a chosen LLM (or just a piece of logic), with a chosen set of tools, actions, and choices available to each. A bit like ComfyUI, but saner than a zoom-based web UI; more like the macOS Automator application.
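A minimal sketch of that graph idea. Everything here is hypothetical, with stub functions standing in for per-node model calls, just to show the shape of routing each node to its own model and tool set:

```python
# Hedged sketch of a graph-based LLM pipeline: each node gets its own
# model (or plain logic), and edges decide where output flows next.
# The "models" are stubs so the sketch runs standalone; in practice
# each stub would call a specific local model.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    name: str
    run: Callable[[str], str]   # the model or logic for this node

# Stub "models" -- hypothetical placeholders for per-node LLM calls.
def classifier(text: str) -> str:
    return "math" if any(c.isdigit() for c in text) else "chat"

def calculator(text: str) -> str:
    return f"sum of digits: {sum(int(c) for c in text if c.isdigit())}"

def chatter(text: str) -> str:
    return f"(small-talk model would answer: {text!r})"

nodes = {
    "route": Node("route", classifier),
    "math":  Node("math", calculator),
    "chat":  Node("chat", chatter),
}
# Edges map a node's output to the next node, or None to stop.
edges: dict[str, Callable[[str], Optional[str]]] = {
    "route": lambda out: out,      # router's output names the branch
    "math":  lambda out: None,
    "chat":  lambda out: None,
}

def execute(start: str, text: str) -> str:
    current, payload = start, text
    while True:
        out = nodes[current].run(payload)
        nxt = edges[current](out)
        if nxt is None:
            return out
        current = nxt                # input flows on to the chosen branch

print(execute("route", "what is 2 and 3?"))   # -> sum of digits: 5
print(execute("route", "hello there"))        # -> small-talk stub
```

The point of the shape is that each node’s model and tool set is pinned explicitly, instead of one big model improvising with every tool at once.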
anomnom@sh.itjust.works 1 day ago
Even if local models are good, the big companies are making local computing more expensive than cloud tokens by colluding with RAM and storage makers to restrict supply.
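To see what that comparison hinges on, a toy amortization; every number here is a made-up placeholder (hardware price, lifetime, throughput, API rate), since real prices move around, which is rather the point:

```python
# Toy comparison of amortized local hardware cost vs. cloud token pricing.
# Every number is a hypothetical placeholder, not a quoted price.

hardware_cost = 2000.0      # assumed GPU + RAM build, $
lifetime_years = 4
tokens_per_second = 30      # assumed local throughput
utilization = 0.05          # fraction of the day it's actually generating

seconds = lifetime_years * 365 * 24 * 3600 * utilization
lifetime_tokens = tokens_per_second * seconds
local_cost_per_mtok = hardware_cost / lifetime_tokens * 1e6

cloud_cost_per_mtok = 3.0   # assumed API price, $/million output tokens

print(f"local: ~${local_cost_per_mtok:.2f} / Mtok (hardware only)")
print(f"cloud: ~${cloud_cost_per_mtok:.2f} / Mtok")
```

With these placeholders the local box loses; push utilization up or hardware cost down and it flips, which is exactly why restricted RAM and storage supply tilts the comparison.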
vacuumflower@lemmy.sdf.org 1 day ago
More expensive, but still autonomous, which is very precious.