Comment on I Went All-In on AI. The MIT Study Is Right.
some_designer_dude@lemmy.world 23 hours ago
Untrained dev here, but the trend I’m seeing is spec-driven development, where AI generates the specs with a human, then implements the specs. Humans can modify the specs, and AI can modify the implementation.
This approach seems like it can get us to 99%, maybe.
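For illustration, a purely hypothetical sketch of what I mean by a spec the human owns and the AI implements against (this isn’t any particular tool’s format; every name here is made up):

```typescript
// feature-spec.ts: a hypothetical "spec as source of truth" file.
// Humans edit this; the AI re-derives the implementation from it.

export interface FeatureSpec {
  name: string;
  behavior: string[];    // plain-language acceptance criteria
  constraints: string[]; // things the implementation must never do
}

export const passwordReset: FeatureSpec = {
  name: "password-reset",
  behavior: [
    "POST /api/reset with a known email sends a one-time link valid for 15 minutes",
    "Unknown emails get the same response as known ones (no account enumeration)",
  ],
  constraints: [
    "Never log the reset token",
    "Rate-limit to 3 requests per hour per IP",
  ],
};
```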
Dojan@pawb.social 23 hours ago
Thus you get a piece of software that no one really knows shit about the inner workings of. Sure, you have a bunch of spec sheets, but no one was there doing the grunt work, so when something inevitably breaks during production there’s no one on the team saying “oh, that might be related to this system I set up over here.”
Piatro@programming.dev 23 hours ago
How is what you’re describing different from what the author is talking about? Isn’t it essentially the same as “AI do this thing for me”, “no not like that”, “ok that’s better”? The trouble the author describes, i.e. the solution being difficult to change, or having no confidence that it can be safely changed, is still the same.
some_designer_dude@lemmy.world 20 hours ago
This poster (calckey.world/notes/afzolhb0xk) is more articulate than my post.
The difference with this “spec-driven” approach is that the entire process is repeatable by AI once you’ve gotten the spec sorted. So you no longer work on the code, you just work on the spec, which can be a collection of files, files in folders, whatever — but the goal is some kind of determinism, I think.
I use it on a much smaller scale and haven’t really cared much for the “spec as truth” approach myself, at this level. I also work almost exclusively on NextJS apps with the usual Tailwind + etc stack. I would certainly not trust a developer without experience with that stack to get “correct” code out of an AI, but it’s sort of remarkable how I can slowly document the patterns of my own codebase and just auto-include them as context on every prompt (or however Cursor does it), so that everything the LLMs suggest gets LLM-reviewed against my human-written “specs”. And doubly neat is that the resulting documentation of patterns turns out to be really helpful to developers who join or inherit the codebase.
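To give a made-up example of the kind of rule one of those pattern docs enforces (zod here is an assumption about the stack, and all the names are invented): the doc says all server-side data access goes through typed, validated fetchers, and the LLM’s suggestions get checked against that.

```typescript
// Hypothetical excerpt: the pattern the doc prescribes for data access.
// Rule: no raw fetch() calls scattered inside components; always a typed fetcher.

import { z } from "zod";

const UserSchema = z.object({
  id: z.string(),
  name: z.string(),
});

export type User = z.infer<typeof UserSchema>;

// Validate at the boundary; throw on malformed data instead of passing it along.
export async function getUser(id: string): Promise<User> {
  const res = await fetch(`${process.env.API_URL}/users/${id}`, {
    cache: "no-store", // per-user data: opt out of Next.js static caching
  });
  if (!res.ok) throw new Error(`getUser(${id}) failed: ${res.status}`);
  return UserSchema.parse(await res.json());
}
```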
I think the author / developer in the article might not have been experienced enough to direct the LLMs to build good stuff, but tools like React, NextJS, Tailwind, and so on are all about patterns that help us all build better stuff. The LLMs are like “8-year-olds” (as someone else in this thread put it), except now they’re more like somewhat insightful 14-year-olds, and where they’ll be in another 5 years… Who knows.
Anyway, just saying. They’re here to stay, and they’re going to get much better.
ChunkMcHorkle@lemmy.world 17 hours ago
“They’re here to stay”
Eh, probably. At least for as long as there is corporate will to shove them down the rest of our throats. But right now, in terms of sheer numbers, humans still rule, and LLMs are pissing off more and more of us every day, while their makers find it increasingly hard to forge ahead in spite of us, which they are having to do ever more frequently.
“and they’re going to get much better”
They’re already getting so much worse, with what is essentially the digital equivalent of kuru (models degrading as they’re fed their own output), that I’d be willing to bet they’ve already jumped the shark.
If their makers and funders had been patient, and worked the present nightmares out privately, they’d have a far better chance than they do right now, IMO.
Simply put, LLMs/“AI” were released far too soon, and with far too much “I Have a Dream!” fairy-tale promotion that the reality never came close to living up to, and then shoved with brute corporate force down too many throats.
As a result, now you have more and more people across every walk of society pushed into cleaning up the excesses of a product they never wanted in the first place: being forced to share their communities AND energy bills with datacenters; depleted water reserves; privacy violations; EXCESSIVE copyright violations and theft of creative property; having to seek out non-AI operating systems just to avoid it; right down to the subject of this thread, the corruption of even the most basic video search.
Can LLMs figure out how to override an angry mob, or resolve a situation wherein the vast majority of the masses are against the current iteration of AI even though the makers of it need us all to be avid, ignorant consumers of AI for it to succeed? Because that’s where we’re going, and we’re already farther down that road than the makers ever foresaw, apparently having no idea just how thin the appeal is getting on the ground for the rest of us.
So yeah, I could be wrong, and you might be right. But at this point, unless something very significant changes, I’d put money on you being mostly wrong.
floofloof@lemmy.ca 20 hours ago
Even more efficient: humans do the specs and the implementation. AI has nothing to contribute to specs, and is worse at implementation than an experienced human. The process you describe, with current AIs, offers no advantages.
AI can write boilerplate code and implement simple, small-scale features when given very clear and specific requests, sometimes. It’s basically an assistant that types out stuff you already know exactly how to do, which you then review. It can also make suggestions, which are sometimes informative and often wrong.
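For a sense of scale, the kind of request it handles reliably is something like “write me a debounce helper in TypeScript”: small, fully specified, and verifiable at a glance. (This sketch is mine, not AI output.)

```typescript
// Typical AI-friendly boilerplate: tiny, well-specified, trivially reviewable.
// Delays calls to fn until `ms` of quiet time has passed since the last call.
export function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  ms: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    clearTimeout(timer); // cancel the pending call, if any
    timer = setTimeout(() => fn(...args), ms);
  };
}
```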
pelespirit@sh.itjust.works 22 hours ago
Have you used any AI to try and get it to do something? It learns generally, not specifically. So you give it instructions and then it goes, “How about this?” You tell it that it’s not quite right and to fix these things, and it goes off on a completely different tangent in other areas. It’s like working with an 8-year-old who has access to the greatest stuff around.
SpaceNoodle@lemmy.world 18 hours ago
It doesn’t even actually learn, though.
CaptDust@sh.itjust.works 23 hours ago
Trained dev with a decade of professional experience here: humans routinely fail to get me workable specs without hours of back-and-forth meetings. I’d say a solid 25% of my work day is spent understanding what the stakeholders are asking for and how to contort the requirements to fit into the system.
If these humans can’t be explicit enough with me, a living, thinking human who understands my architecture better than any LLM does, what chance does an LLM have?