I tried using GPT-5 to write some code the other day and was quite unimpressed with how lazy it is. It needed nudging for every single thing. I’m going back to Sonnet and Gemini. And even so, you’re right. As it stands, LLMs are useful for refactoring and for writing boilerplate and repetitive code, which does save time. But they’re definitely shit at actually solving non-trivial problems in code and at designing and planning implementations at a high level.
They’re basically a better IntelliSense and automated refactoring tool, but I wouldn’t trust them with proper software engineering tasks. All this vibe coding and especially the agentic development bullshit people (mainly uneducated users and the AI vendors themselves) are shilling these days? I’m going nowhere near it.
I work on a professional software development team in a business that is pushing the AI coding stuff really hard. So many of my coworkers now routinely use agentic development tools to do most (if not all) of their work for them. And guess what: in every other PR that goes in, random features that had been built and were working get removed entirely, so then we have to do extra work to rebuild the things that one of these AI agents ripped out. smh
Pechente@feddit.org 3 days ago
Yeah, right? I tried it yesterday to build a simple form for me. Told it to look at the structure of other forms for reference, which it did, and somehow it used NONE of the UI components and helpers from those forms. It was bafflingly bad.
errer@lemmy.world 3 days ago
Despite the “official” coding score for GPT-5 being higher, Claude Sonnet still seems to blow it out of the water. That suggests they’re training to the test and the test must not be a very good one. Or they’re lying.
elvith@feddit.org 3 days ago
They’d never be lying! Look at these beautiful graphs from their presentation of GPT-5. They’d never!
Source: theverge.com/…/openai-gpt-5-vibe-graphing-chart-c…
errer@lemmy.world 3 days ago
Wut…did GPT-5 evaluate itself?
jj4211@lemmy.world 3 days ago
Problem with the “benchmarks” is Goodhart’s Law: once a measure becomes a target, it ceases to be a good measure.
The AI companies’ obsession with these tests causes them to maniacally train on them, making them better at those tests, but that doesn’t necessarily translate to real-world usefulness. Occasionally you’ll meet a guy who interviews well but is pretty useless on the job. LLMs are basically that guy all the time, but they’re at least somewhat useful because they’re cheap and fast enough to be worth it for the super easy bits.