So far, there's a serious cognitive step that LLMs just can't take to become truly productive. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their token windows. Debugging anything vague doesn't work. Fact-checking isn't something they do well.
percent@infosec.pub 2 hours ago
They don’t need the entire project to fit in their token windows. There are ways to make them work effectively in large projects. It takes some learning and effort, but I see it regularly in multiple large, complex monorepos.
I still feel somewhat new to using LLMs for code (I was kinda forced to start learning), but when I first jumped into a big codebase that had AI configs/docs from people who'd been using LLMs for a while, I was honestly shocked. The LLM worked far better than anything I'd experienced before.
It actually takes a bit of skill to set up a decent workflow/configuration for these things. If you just jump into a big repo that doesn't have configs/docs/optimizations for LLMs, and/or you haven't figured out a decent workflow, then they'll be underwhelming and you'll be significantly less productive.
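For anyone wondering what those configs/docs actually look like: here's a minimal sketch of a repo-level instructions file. This assumes a tool that reads Markdown guidance from the repo root (Claude Code reads CLAUDE.md, and the AGENTS.md convention covers several other tools); every path and name below is made up for illustration, not taken from a real project:

```markdown
# AGENTS.md — guidance for coding agents (illustrative example)

## Architecture
- Monorepo: `services/` holds the Go backends, `web/` the TypeScript frontend.
- Shared protobuf definitions live in `proto/`; never hand-edit generated code under `gen/`.

## Conventions
- Run `make lint test` before proposing any change.
- Match the error-handling style of the package you touch.

## Where to look first
- `docs/architecture.md` explains the service boundaries.
- Grep for `AGENT-NOTE:` comments near tricky invariants.
```

The specific file matters less than the idea: you hand the model the architecture and the navigation hints it can't infer on its own, which is also why the whole project never needs to fit in the token window.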
RIotingPacifist@lemmy.world 1 hour ago
This sounds a lot like every framework; 20 years ago you could have written that about Rails.
Which IMO makes sense: if the code isn't solving anything interesting, then you can generate it dynamically fairly easily, and it's easy to get demos up and running, but neither approach helps you solve interesting problems.