Are you even reading what I say? You are supposed to have a professional approving the generated stuff.
But it’s still AI-generated; it doesn’t become less AI-generated because a human that knows shit about the subject approved it.
Nibodhika@lemmy.world 22 hours ago
This is what you said:
At no point did you mention someone approving it.
Also, you should read what I said: most large stuff generated by AI needs to be completely redone. You can generate a small function, or maybe a small piece of an image, if you have a professional validating that small chunk, but if you think you can generate an entire program or image with LLMs, you’re delusional.
Mika@piefed.ca 20 hours ago
It was mentioned here: https://vger.to/piefed.ca/comment/2422544
Dude, are you a software dev? Have you heard about, like, tickets? You’re supposed to split a bigger task into smaller tickets at the project-approval phase.
LLM agents are completely capable of taking well-documented tickets and generating a first pass of code that you then shape with a few follow-up prompts, criticising style and issues until they’re all fixed.
I’m not being theoretical; this is how it’s done today. With MCP connections into JIRA and Figma, UI tickets get about 90% done in a single prompt. Harder stuff goes through an “investigate and write a .md on how to solve it” step, then a “this is why that won’t work, do this instead” round, to like 70% ready.
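To make that concrete, here’s roughly the level of detail I mean by a well-documented ticket, sketched in Python. The ticket fields, endpoint, and file names are all made up, and the actual agent call is left out; the shape of the input is the point.

```python
# Hypothetical sketch: a well-documented ticket turned into a single agent prompt.
from dataclasses import dataclass

@dataclass
class Ticket:
    title: str
    acceptance_criteria: list[str]
    context_files: list[str]

def build_prompt(ticket: Ticket) -> str:
    """Assemble one agent prompt from a ticket's fields."""
    criteria = "\n".join(f"- {c}" for c in ticket.acceptance_criteria)
    files = ", ".join(ticket.context_files)
    return (
        f"Implement: {ticket.title}\n"
        f"Acceptance criteria:\n{criteria}\n"
        f"Relevant files: {files}\n"
        "Follow the existing code style; write tests first."
    )

ticket = Ticket(
    title="Add pagination to /users endpoint",
    acceptance_criteria=[
        "Accepts ?page and ?per_page query params",
        "Defaults: page=1, per_page=20, per_page capped at 100",
        "Response includes a total_count header",
    ],
    context_files=["api/users.py", "api/pagination.py"],
)
print(build_prompt(ticket))
```

A ticket at that level of specificity is what gets you the 90%-in-one-prompt result; a vague one-liner doesn’t.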
Nibodhika@lemmy.world 10 hours ago
Sorry, I won’t go through your post history to reply to a comment; be clearer in what you write.
I’m a software engineer, and if that’s how you code, you’re either wasting time or producing garbage code. That might be acceptable wherever you work, but I guarantee you it would not pass code review where I do. I do use Copilot, and it’s good at suggesting small snippets, maybe an if, maybe a function header, but even then I need to change what it suggests about 60% of the time.

Reviewing code is harder than writing it yourself. Even if I could trust the LLM to do exactly what I asked (which I can’t, not by a long shot), its output might still be open to bugs or special cases, so I would have to read the code, understand what it tried to do, figure out the edge cases of that solution, and check whether it handled them. In short, it would take me much longer to do things via LLMs than to write them myself, because writing code is the easy part of programming; thinking through the solution, its limitations, and its edge cases is the hard part, and LLMs can’t understand that.

The moment you describe your solution in enough detail that an LLM can plausibly generate the right code, you’ve essentially written the code yourself, just in a more complicated and ambiguous format. This is what most non-technical managers fail to understand: code is just structured English, and we’re already writing something better than prompts to an LLM.
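Here’s a toy illustration of what I mean; the prompt and the function are made up, but notice that the spec detailed enough to get correct output already is the algorithm:

```python
# Prompt detailed enough for an LLM to get it right:
#   "Write a function that removes duplicates from a list of strings,
#    keeps the first occurrence of each, preserves order, and compares
#    case-insensitively."
# The code below adds nothing conceptually to that sentence, just syntax:
def dedupe_case_insensitive(items: list[str]) -> list[str]:
    seen: set[str] = set()
    result: list[str] = []
    for item in items:
        key = item.lower()       # compare case-insensitively
        if key not in seen:      # keep only the first occurrence
            seen.add(key)
            result.append(item)  # order preserved
    return result

assert dedupe_case_insensitive(["Foo", "bar", "foo", "Bar"]) == ["Foo", "bar"]
```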
Mika@piefed.ca 8 hours ago
This is literally in this thread.
Again, your solution should already be thought out and described in the tickets and the approved tech plan. If it’s not, that’s an SDLC problem.
And it’s not true that agents can’t help with edge cases; they can. If you know which points to look at, you task the agent to analyze the specific interaction and watch which parts of the code it brings up.
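A toy example of what that looks like in practice (the function name and the edge cases here are invented for illustration): you ask the agent “which inputs break this?”, then freeze its answers as tests yourself. The agent finds candidates; the tests are what you actually trust.

```python
def parse_page_param(raw: str | None) -> int:
    """Parse a ?page= query value; default to 1, floor at 1."""
    if raw is None or not raw.strip():
        return 1
    try:
        value = int(raw)
    except ValueError:
        return 1
    return max(value, 1)

# Edge cases the agent flagged, pinned down as assertions:
assert parse_page_param(None) == 1    # param absent
assert parse_page_param("") == 1      # empty string
assert parse_page_param("  ") == 1    # whitespace only
assert parse_page_param("0") == 1     # below floor
assert parse_page_param("-3") == 1    # negative
assert parse_page_param("abc") == 1   # non-numeric
assert parse_page_param("7") == 7     # happy path
```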
I type far fewer characters into the LLM than I would writing the code myself. Those characters don’t have to be structured, and they can even have typos, so I can focus my brain on the things that actually matter.
Plus, Copilot is shit.
I rate your post as a skill issue.