No, the issue with “AI” is the assumption that it can make anything production-ready, be it art, code, or dialog.
I do believe that LLMs have lots of great applications in a game pipeline: placeholders and Copilot-style suggestions for small snippets work great. But if you think that anything an LLM produces is production-ready, and that you don’t need a professional to look at it and redo it (because that’s usually easier than fixing the mistakes), you’re simply out of touch with reality.
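To be concrete, this is the kind of snippet where completion works well for me (a made-up but representative example, not from any real project): mechanical code where the intent is obvious from the signature alone.

```python
# Hypothetical example: the kind of small, mechanical snippet where
# an assistant's suggestion is usually usable as-is.
def clamp(value: float, low: float, high: float) -> float:
    """Clamp value into the inclusive range [low, high]."""
    return max(low, min(value, high))
```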
Mika@piefed.ca 23 hours ago
Are you even reading what I say? You are supposed to have a professional approving generated stuff.
But it’s still AI-generated; it doesn’t become less AI-generated because a human who knows shit about the subject approved it.
Nibodhika@lemmy.world 21 hours ago
This is what you said:
At no point did you mention someone approving it.
Also, you should read what I said: most large stuff generated by AI needs to be completely redone. You can generate a small function, or maybe a small piece of an image, if you have a professional validating that small chunk; but if you think you can generate an entire program or image with LLMs, you’re delusional.
Mika@piefed.ca 19 hours ago
I mentioned it here: https://vger.to/piefed.ca/comment/2422544
Dude, are you a software dev? Have you heard about, like, tickets? You are supposed to split a bigger task into smaller tickets at the project approval phase.
LLM agents are completely capable of taking well-documented tickets and generating some semblance of code, which you then shape with a few follow-up prompts, criticising code style and issues until they are all fixed.
I’m not being theoretical, this is how it’s done today. With MCPs into JIRA and Figma, UI tickets get about 90% done in a single prompt. Harder stuff is done with an “investigate and write an .md on how to solve it” step, followed by “this is why that won’t work, do this instead”, to roughly 70% ready.
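To sketch what I mean (hypothetical code, not from a real ticket): the agent’s first pass might look like the top version, and one follow-up prompt criticising style and error handling gets it to the bottom one.

```python
import json

# --- First pass from the agent: works, but sloppy ---
def load_config(path):
    f = open(path)
    data = json.load(f)
    return data

# --- After a follow-up prompt criticising style and error handling ---
def load_config_v2(path: str) -> dict:
    """Load a JSON config file, raising a clear error if it's malformed."""
    try:
        with open(path, encoding="utf-8") as f:
            return json.load(f)
    except json.JSONDecodeError as err:
        raise ValueError(f"Invalid JSON in {path}: {err}") from err
```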
Nibodhika@lemmy.world 10 hours ago
Sorry, I won’t go through your post history to reply to a comment; be clearer in the stuff you write.
I’m a software engineer, and if that’s how you code, you’re either wasting time or producing garbage code. That might be acceptable wherever you work, but I guarantee you it would not pass code review where I do.

I do use Copilot, and it’s good at suggesting small snippets: maybe an if, maybe a function header. But even then, 60% of the time I need to change what it suggested.

Reviewing code is harder than writing it yourself. Even if I could trust that the LLM would do exactly what I asked (which I can’t, not by a long shot), its output might still be open to bugs or special cases, so I would have to read the code, understand what it tried to do, figure out the edge cases of that solution, and check whether it handled them. In short, it would take me much longer to do things via LLMs than to write them myself, because writing code is the easy part of programming; thinking about the solution, its limitations, and its edge cases is the hard part, and LLMs can’t understand that.

The moment you describe your solution in sufficient detail that an LLM can possibly generate the right code, you’ve essentially written the code yourself, just in a more complicated and ambiguous format. This is what most non-technical managers fail to understand: code is just structured English, and we’re already writing something better than prompts to an LLM.
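To make the edge-case point concrete, here’s a made-up but representative suggestion: it reads fine and passes a quick glance, yet the review still has to catch the cases it silently mishandles.

```python
# Hypothetical assistant suggestion: looks reasonable at a glance.
def get_extension(filename: str) -> str:
    return filename.split(".")[-1]

# The edge cases a reviewer still has to think through:
#   get_extension("archive.tar.gz")  -> "gz"      (is that what the caller wants?)
#   get_extension("README")          -> "README"  (no dot: returns the whole name)
#   get_extension(".bashrc")         -> "bashrc"  (hidden file, not an extension)
# Spotting these means reading the code, reconstructing the intent, and
# checking each case by hand, i.e. the hard part the LLM didn't do.
```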