poopkins@lemmy.world 1 day ago
As an engineer, it’s honestly heartbreaking to see how many executives have bought into this snake oil hook, line and sinker.
Blackmist@feddit.uk 1 day ago
Rubbing their chubby little hands together, thinking of all the wages they wouldn’t have to pay.
expr@programming.dev 1 day ago
Honestly, it’s heartbreaking to see so many good engineers fall for the hype, seemingly unable to climb out of the hole. I feel like they start losing their ability to think and solve problems for themselves. Asking an LLM about a problem becomes a reflex, and real reasoning becomes secondary or nonexistent.
Executives are mostly irrelevant as long as they’re not forcing the whole company into the bullshit.
Mniot@programming.dev 7 hours ago
> Executives are mostly irrelevant as long as they’re not forcing the whole company into the bullshit.
I’m seeing a lot of this, though. Like, I’m not technically required to use AI, but the VP will send me a message noting that I’ve only used 2k tokens this month and maybe I could get more done if I was using more…?
expr@programming.dev 7 hours ago
Yeah, while our CTO is giddy like a schoolboy about LLMs, he fortunately hasn’t actually attempted to force them on anyone.
Unfortunately, a number of my peers now seem to have become irreparably LLM-brained.
jj4211@lemmy.world 1 day ago
Based on my experience, I’m skeptical that people who seemingly delegate their reasoning to an LLM were really good engineers in the first place.
Whenever I’ve tried, it’s been so useless that I can’t really develop a reflex, since it would have to actually help for me to get used to just letting it do its thing.
Meanwhile, the very bullish people who are ostensibly the good engineers I’ve worked with are the ones who became pet engineers of executives and have long succeeded by sounding smart to those executives rather than by doing anything or providing concrete technical leadership. It’s like having something akin to Gartner on staff, except without even the data that Gartner at least actually gathers, and Gartner itself is useless when it comes to actual guidance.
auraithx@lemmy.dbzer0.com 1 day ago
I mean, before we’d just ask Google and read Stack Overflow, blogs, support posts, etc. Now it finds them for you instantly so you can click through and read them. The human reasoning part just shifts elsewhere: you solve the problem during debugging, before you commit.
expr@programming.dev 1 day ago
No, good engineers were not constantly googling problems, because for most topics the answer is either trivial enough that an experienced engineer can answer it immediately, or complex and specific enough to the company/architecture/task/whatever that googling it would not be useful. Stack Overflow and the like have only ever really been useful as an occasional memory aid for basic things you don’t use often enough to remember. Good engineers were, and still are, reasoning through problems, reading documentation, and iteratively piecing together system-level comprehension.
The nature of the situation hasn’t changed at all: problems are still either trivial enough that an LLM is pointless, or complex and specific enough that an LLM will get it wrong. The only difference is that an LLM will spit out plausible-sounding bullshit and convince people it’s valuable when it is, in fact, not.
auraithx@lemmy.dbzer0.com 1 day ago
A senior engineer wouldn’t need to worry about the hallucination rate. The LLM is a lot faster than they are, and they can do other tasks while the output is being generated and then review it. If it’s trivial, you’ve saved time; if not, you can pull up the documentation and reason and step through the problem with the LLM. If you actually know what you’re talking about, you can see when it slips up and correct it.
And that hallucination rate is rapidly dropping. We’ve jumped from about 40% accuracy to 90% over the past ~6 months alone (aider polyglot coding benchmark), at about 1/10th the cost (iirc).
Feyd@programming.dev 1 day ago
“Stack Overflow engineer” has been a derogatory term forever lol
pycorax@sh.itjust.works 1 day ago
A tale as old as time…
Feyd@programming.dev 1 day ago
Did you think executives were smart? What’s really heartbreaking is how many engineers did. I even know some who are pretty good who tell me how much more productive they are and all about their crazy agent setups (from my perspective, I don’t see any more productivity).
rozodru@piefed.social 1 day ago
As someone who now does consulting code review focused purely on AI... nah, let them continue drilling holes in their ship. I'm booked solid for the next several months, multiple clients on the go, and I'm making more just being a digital janitor than I was as a regular consultant dev. I charge a premium simply to point said sinking ship back to land.
Make no mistake, though: this is NOT something I want to still be doing in a year or two, and I honestly hope these places figure it out soon. Some have; some of my clients have realized that saving a few bucks by paying for an Anthropic subscription and paying a junior dev to be a prompt monkey while firing the rest of their dev team really wasn't worth it in the long run.
The issue now is they've shot themselves in the foot. The AI bit back. They need devs, and they can't find them, because putting out any sort of hiring ad results in hundreds upon hundreds of bullshit AI-generated resumes from unqualified people while the REAL devs get lost in the shuffle.
MangoCats@feddit.it 1 day ago
That’s the complete mistake right there. AI can help write code, but it can’t replace the organizational knowledge your team has developed.
Some shops may think they don’t have/need organizational knowledge, but they all do. That’s one big reason why new hires take so long to start being productive.