A new survey conducted by the U.S. Census Bureau and reported on by Apollo seems to show that large companies may be tapping the brakes on AI. Large companies (defined as having more than 250 employees) have reduced their AI usage, according to the data. The slowdown started in June, when usage was at roughly 13.5%, slipping to about 12% by the end of August. Most other lines, representing companies with fewer employees, are also in decline, though some are still increasing.
Personal Anecdote
Last week I used the AI coding assistant within JetBrains DataGrip to build a fairly complex PostgreSQL function.
It put together a very well organized, easily readable function, complete with explanatory comments, that failed to execute because it was absolutely littered with errors.
I don’t think it saved me any time but it did help remove my brain block by reorganizing my logic and forcing me to think through it from a different perspective. Then again, I could have accomplished the same thing by knocking off work for the day and going to the driving range.
sj_zero 3 weeks ago
IMO, AI is a really good demo for a lot of people, but once you start using it, the gains you can get from it end up being somewhat minimal without doing some serious work.
Reminds me of 10 other technologies where, if you didn't get in, the world was supposedly going to end, but which ended up more niche than you'd expect.
MagicShel@lemmy.zip 3 weeks ago
As someone who is excited about AI and thinks it’s pretty neat, I agree we’ve needed a level-set around the expectations. Vibe coding isn’t a thing. Replacing skilled humans isn’t a thing. It’s a niche technology that never should’ve been sold as making everything you do with it better.
We’ve got far too many companies who think adoption of AI is a key differentiator. It’s not. The key differentiator is almost always the people, though that’s not as sexy as cutting edge technology.
krunklom@lemmy.zip 3 weeks ago
The technology is fascinating and useful - for specific use cases and with an understanding of what it’s doing and what you can get out of it.
From LLMs to diffusion models to GANs there are really, really interesting use cases, but the technology simply isn’t at the point where it makes any fucking sense to have it plugged into fucking everything.
Leaving aside the questionable ethics many paid models’ creators have used to make their models, the backlash against it is understandable because it’s being shoehorned into places it just doesn’t belong.
I think eventually we may “get there” with models that don’t make so many obvious errors in their output - in fact I think it’s inevitable it will happen eventually - but we are far from that.
I do think that the “fuck ai” stance is shortsighted though, because of this. This is happening, it’s advancing quickly, and while gains on LLMs are diminishing, we as a society really need to be having serious conversations about what things will look like when (and/or if, though I’m more inclined to believe it’s when) we have functional models that are accurate in their output.
When it actually makes sense to replace virtually every profession with AI (it doesn’t right now, not by a long shot), how are we going to deal with that as a society?
floofloof@lemmy.ca 3 weeks ago
Evidently you haven’t worked with me. I’m actually quite sexy.
Damage@feddit.it 3 weeks ago
I’ve got a friend who has to lead a team of apparently terrible developers in a foreign country, he loves AI, because “if I have to deal with shitty code, send back PRs three times then do it myself, I might as well use LLMs”
And he’s like one of the nicest people I know, so if he’s this frustrated, it must be BAD.
Aceticon@lemmy.dbzer0.com 3 weeks ago
I had to do this myself at one point and it can be very frustrating.
It’s basically the “tech makes lots of money” effect, which attracts lots of people who don’t really have any skill at programming and would never have gone into it if it weren’t for the money.
We saw this back in earlier tech booms and see it now in poorer countries to which lots of IT work has been outsourced - they still have the same fraction of natural techies as everywhere else, but the demand is so large that people with no real tech skill join the profession and get given actual work to do.
Also beware of cultural expectations and quirks - the team I had to manage was based in India, and during group meetings on the phone they would never admit if they did not understand something about a task they were given, or if something was missing, so they often ended up doing the wrong things or filling in the blanks with wrong assumptions. I solved this by talking to each member of that outsourced team individually after any such group meeting, and in a very non-judgemental way (pretty much had to frame it as “me, being unsure if I explained things correctly”) teasing out any questions or doubts.
That said, even their shit code (compared to us on the other side, who were all senior developers or above) actually had a consistent underlying logic throughout, with even the bugs being consistent (humans tend to be consistent in the kinds of mistakes they make), all of which helps with figuring out what is wrong. LLMs aren’t as consistent as even incompetent humans.
chaosCruiser@futurology.today 3 weeks ago
Cyberspace, dot com, Web 2.0, cloud computing, SAAS, mobile, big data, blockchain, IoT, VR and so many more. Sure, they can be used for some things, but doing that takes time, effort and money. On top of that, you need to know exactly when to use these things and when to choose something completely different.
paequ2@lemmy.today 3 weeks ago
I’m so sick of “AI demos” at work. Every demo goes like this.
Meanwhile they ignore that zero AI projects have actually stuck around or gotten used in a meaningful way.
setsubyou@lemmy.world 3 weeks ago
As someone who sometimes makes demos of our own AI products at work for internal use, you have no idea how much time I spend on finding demo cases where LLM output isn’t immediately recognizable as bad or wrong…
To be fair, it’s pretty much only the LLM features that are like this. We have some more traditional AI features that work pretty well. I think they just tacked on an LLM because that’s what’s popular right now.