How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can’t manage this consistently with CRUD apps and people think that this number isn’t laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?
…
I don’t believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.
I don’t fear Artificial Intelligence, I fear Administrative Idiocy. The managers are the problem.
IHeartBadCode@kbin.run 4 months ago
This. Many of these tools are good at incredibly basic boilerplate, the kind that's just a step beyond what, say, a wizard would generate for you. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.
There's a reality to these tools. That reality is they're helpful at times, but they are hardly transformative at the levels the grifters go on about.
0x0@programming.dev 4 months ago
I use them like wikipedia: it’s a good starting point and that’s it (and this comparison is a disservice to wikipedia).
SandbagTiara2816@lemmy.dbzer0.com 4 months ago
Yep! It’s a good way to get over the fear of a blank page, but I don’t trust it for more than outlines or summaries
grrgyle@slrpnk.net 4 months ago
I agree with your parenthetical, but Wikipedia itself would agree with your main point: Wikipedia is not a source of truth.
sugar_in_your_tea@sh.itjust.works 4 months ago
I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would, I only care that they get a working answer and that they can explain the code to me.
The problem was fairly basic, something like randomly generate two points and find the distance between them, and we had given them the details (e.g. distance is a straight line). They used AI, which went well until it generated the Manhattan distance instead of the Pythagorean theorem. They didn’t correct it, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). Anyway, they refactored the code and used AI again to make the same mistake, didn’t catch it, and we ended up pointing it out again.
Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they’d need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they’d be ready to ship it.
They didn’t pass the interview.
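For anyone following along, the gap between what we asked for and what the AI produced twice is tiny. A quick Python sketch (made-up points, not the candidate's actual code):

    import math
    import random

    # Two random points, as in the interview prompt.
    x1, y1 = random.uniform(0, 100), random.uniform(0, 100)
    x2, y2 = random.uniform(0, 100), random.uniform(0, 100)

    # What we asked for: straight-line (Euclidean) distance, i.e. Pythagoras.
    euclidean = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

    # What the AI generated twice: Manhattan distance (sum of the axis deltas).
    manhattan = abs(x1 - x2) + abs(y1 - y2)

    print(f"euclidean={euclidean:.2f} manhattan={manhattan:.2f}")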
And that’s my opinion about AI in general: it’s probably making you stupider.
deweydecibel@lemmy.world 4 months ago
I’ve seen people defend using AI this way by comparing it to using a calculator in a math class, i.e. if the technology knows it, I don’t need to.
And I feel like, for the kind of people whose grasp of technology, knowledge, and education are so juvenile that they would believe such a thing, AI isn’t making them dumber. They were already dumb. What the AI does is make code they don’t understand more accessible, which is to say, it’s just enabling dumb people to be more dangerous.
IHeartBadCode@kbin.run 4 months ago
Similar story: I had a junior dev put in a PR for SQL that takes lat and long and gives back distance. The PR used the Haversine formula, but with the coefficient for kilometers rather than the one for miles.
I asked where they got it and they indicated AI. I sighed, pointed out why it was wrong, and noted that we have PostGIS, which has scalar functions that will do these calculations way faster, and that they should use those.
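To make the bug concrete, here's its shape in Python (a sketch, not the actual PR, which was SQL). The only difference between the kilometers and miles versions is the Earth-radius constant, which is exactly the kind of detail that slides through when you paste generated code; in production you'd lean on the PostGIS scalar functions (e.g. ST_DistanceSphere) instead:

    import math

    EARTH_RADIUS_KM = 6371.0      # the coefficient the PR used
    EARTH_RADIUS_MILES = 3958.8   # the coefficient it should have used

    def haversine(lat1, lon1, lat2, lon2, radius=EARTH_RADIUS_MILES):
        # Great-circle distance between two lat/long points.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * radius * math.asin(math.sqrt(a))

    # New York to Boston is roughly 190 miles; with the km radius you'd get ~306.
    print(haversine(40.71, -74.01, 42.36, -71.06))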
There's a clear over-reliance on code generation. That said, it's pretty good for things I can eye-scan and verify are what I would have typed anyway. But I've found it suggesting everything from things I wouldn't remotely permit to things that are "sort of" correct. In the latter case I'll accept the suggestion and go back and clean it up. But yeah, anyone blindly trusting AI shouldn't be allowed to make final commits.
Excrubulent@slrpnk.net 4 months ago
Wait wait wait so… this person forgot the Pythagorean theorem?
Like that is the most basic task. It’s d = sqrt((x1 - x2)^2 + (y1 - y2)^2), right? That was off the top of my head; this person didn’t understand that? Do I get a job now?
I have seen a lot of programmers talk about how much time it saves them. It’s entirely possible it makes them very fast at producing garbage code. One thing I’ve known for a long time is that understanding code is much harder than writing it, so asking an LLM to generate your code sounds like it’s just creating harder work for you, unless you don’t care about getting it right.
xavier666@lemm.ee 4 months ago
I don’t want to believe that coders like these exist and are this confident in an AI’s ability to code.
Zikeji@programming.dev 4 months ago
Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can, but who has no understanding of how to actually code; they're just good at mimicry.
So it’s helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it’s not going to do the job itself.
deweydecibel@lemmy.world 4 months ago
Legitimately, this is the only use I've found for it. If I need something extremely simple and I'm feeling too lazy to type it all out, it'll do the bulk of it, and then I just go through and edit out all the little mistakes.
And what gets me is that anytime I read all of the AI wank about how people are using these things, it kind of just feels like they’re leaving out the part where they have to edit the output too.
At the end of the day, we’ve had this technology for a while, just in the form of suggestions from a keyboard app or code editor. You still had to steer it in the right direction. Now it’s just smart enough to make it from start to finish without going off a cliff, but you still have to go back and fix it, the same way you had to steer it before.
afraid_of_zombies@lemmy.world 4 months ago
I know engineers who make over double what I make solely because of that skill.
AIhasUse@lemmy.world 4 months ago
Yes, and then you take the time to dig a little deeper and use something agent-based like aider or crewai or autogen. It is amazing how many people are stuck in the mindset of “if the simplest tools from over a year ago aren’t very good, then there’s no way there are any good tools now.”
It’s like seeing the original Planet of the Apes and then arguing against how realistic the Apes are in the new movies without ever seeing them. Sure, you can convince people who really want unrealistic Apes to be the reality, and people who only saw the original, but you’ll do nothing for anyone who actually saw the new movies.
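For the unfamiliar, a crew in crewai gets wired up roughly like this. I'm sketching the API from memory, so the names and parameters (Agent, Task, Crew, expected_output, kickoff) should be checked against the current docs:

    from crewai import Agent, Task, Crew

    researcher = Agent(
        role="Researcher",
        goal="Dig up background on the assigned topic",
        backstory="A careful analyst.",
    )
    writer = Agent(
        role="Writer",
        goal="Turn the research into a short summary",
        backstory="A concise technical writer.",
    )

    research = Task(
        description="Collect key facts about topic X",
        expected_output="A bullet list of facts",
        agent=researcher,
    )
    summarize = Task(
        description="Summarize the collected facts",
        expected_output="One tight paragraph",
        agent=writer,
    )

    # Each task's output feeds the next; every step is more LLM calls under the hood.
    crew = Crew(agents=[researcher, writer], tasks=[research, summarize])
    print(crew.kickoff())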
foenix@lemm.ee 4 months ago
I’ve used crewai and autogen in production… And I still agree with the person you’re replying to.
The two main problems with agentic approaches I've discovered thus far:
One mistake or hallucination will propagate to the rest of the agentic task. I've even tried adding a QA agent for this purpose, but those agents end up being unreliable themselves, which also leads into the main issue:
It’s very expensive to run and rerun agents at scale. The scaling factor of each agent being able to call another agent means that you can end up with an exponentially growing number of calls. My colleague at one point ran a job that cost $15 for what could have been a simple task.
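Back-of-envelope, with made-up numbers (a hypothetical branching factor and per-call price, not my colleague's actual job): if every agent can delegate to b sub-agents and the chain runs d levels deep, the call count is a geometric series in b.

    # Purely illustrative; real jobs vary wildly in depth and token cost.
    COST_PER_CALL = 0.05  # assumed dollars per LLM call

    for branching in (1, 2, 3):
        for depth in (3, 5):
            calls = sum(branching ** level for level in range(depth + 1))
            print(f"b={branching}, d={depth}: {calls} calls, ~${calls * COST_PER_CALL:.2f}")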
One last consideration: the current LLM providers are very aware of these issues or they wouldn’t be as concerned with finding “clean” data to scrape from the web vs using agents to train agents.
If you’re using crewai, btw, be aware there is some built-in telemetry in the library. I have a wrapper to remove that telemetry if you’re interested in the code.
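The gist of the wrapper is just monkeypatching the telemetry class to no-ops before building a crew. A minimal sketch (the crewai.telemetry module path is how it's laid out in the versions I've used and may differ in yours):

    from crewai.telemetry import Telemetry  # module path assumed; check your version

    def _noop(*args, **kwargs):
        return None

    # Swap every public Telemetry method for a no-op so nothing phones home.
    for attr in dir(Telemetry):
        if callable(getattr(Telemetry, attr)) and not attr.startswith("_"):
            setattr(Telemetry, attr, _noop)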
Personally, I’m kinda done with LLMs for now and have moved back to my original machine learning pursuits in bioinformatics.
FaceDeer@fedia.io 4 months ago
Also, a lot of people who are using AI have become quiet about it of late exactly because of reactions like this article's. Okay, you'll "piledrive" me if I mention AI? So I won't mention AI. I'll just carry on using it to make whatever I'm making without telling you.
There's some great stuff out there, but of course people aren't going to hear about it broadly if every time it gets mentioned it gets "piledriven."
grrgyle@slrpnk.net 4 months ago
I think we all had that first moment where copilot generates a good snippet, and we were blown away. But having used it for a while now, I find most of what it suggests feels like jokes.
Like it does save some typing / time spent checking docs, but you have to be very careful to check its work.
I’ve definitely seen a lot more impressively voluminous, yet flawed pull requests, since my employer started pushing for everyone to use it.
I foresee a real reckoning of unmaintainable codebases in a couple years.
Shadywack@lemmy.world 4 months ago
Looks like two people suckered by the grifters downvoted your comment (as of this writing). Should they read this: it is a grift, get over it.
TipRing@lemmy.world 4 months ago
If you ask most LLMs questions on a topic you are an expert at, you will quickly notice that they provide surface-level data. They are the AI equivalent of bullshitting your way through a paper.