Submitted 22 hours ago by obbeel@lemmy.eco.br to nostupidquestions@lemmy.world
Do you have any ideas or thoughts about this?
Unionize
AI is a tech debt generator.
Any programmer who has worked with legacy code knows the situation where something was written by a former employee or a contractor with few comments and little documentation, making it difficult to modify (because of complexity or readability) or replace (because of nonexistent business documentation and/or peculiar bugs and features).
AI accelerates these situations, except the person never even existed. Which, IMO, is the main thing that needs to be called out.
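To make it concrete, here's a made-up illustration (not real code from anywhere) of what that inherited code tends to look like:

```python
# Hypothetical example: the kind of function a departed contractor, or an
# LLM, leaves behind. It runs fine; nothing says what it's for.
def proc(d, t=0.8):
    r = {}
    for k, v in d.items():
        if v[0] * t > v[1]:
            r[k] = v[0] - v[1]
    return r

# Hours of reverse-engineering later, you learn that d maps account IDs to
# (balance, limit) pairs and t is a risk factor, but none of that was ever
# written down, so nobody dares modify or replace it.
```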
Yeah, I’ve been trying to call this out at my company. Junior programmers especially don’t seem to know how to turn AI responses into maintainable code.
I find it ironic, since I’ve mostly been on the QA side of dev. I’ve spent decades pointing out the stats showing code is much more expensive to maintain than it is to write the first time. Now AI puts us in a position of writing something the first time a little faster, but in a form that’s even more expensive to maintain. Does not compute.
Not if you use it correctly. You don’t write code with AI; you get inspiration to get over sticking points. You pick out the relevant bits, make certain you understand how they work, and save hours of banging your head.
Ah yes, “just use it correctly”. All these programmers convinced that they are one of the chosen few that “get it” and can somehow magically make it not a damaging, colossal waste of time.
“Inspiration”, yeah, in the same way we can draw “inspiration” from a monkey throwing shit at a wall.
Moving away from GitHub to other git hosting sites.
Abandoning forges makes things harder for humans, while bots can still download any publicly available repo.
No. You archive your GH code with the README.md saying all new stuff is at GitLab, Codeberg, Bitbucket, etc., and a link to it.
Just don’t use it
Most programmers are embracing AI, as it’s the use case where it acts as the biggest force multiplier.
Shhhh don’t tell them. We’re trying to leave these guys in the dust.
They will adapt or die. If they haven’t adapted already, telling them isn’t gonna change their minds.
I still look for answers on Stack Overflow instead of waiting for an AI summary of the same answer.
I mean, agentic AIs are getting good at outputting working code, thousands of lines per minute; talking trash about it won’t work.
However, I agree that losing the human element of writing code means losing a very important element of programming. So I believe there should be strong resistance against this. Don’t feel pressured to answer if you think your plans shouldn’t be revealed, but it would be nice to know if someone is preparing a great resistance out there.
This is honestly a lot of the problem: code generation tools can output thousands of lines of code per minute. Great, committable, defendable code.
There is basically no circumstance in which a project’s codebase growing at a rate of thousands of lines per minute is a good thing. Code is a necessary evil of programming: you can’t always avoid having it, but you should sure as hell try, because every line of code is capable of being wrong and will need to be read and understood later. Probably repeatedly.
Taking the approach to solving a problem that involves writing a lot of code, rather than putting in the time to find the setup that lets you express your solution in a little code, or reworking the design so code isn’t needed there at all, is a mistake. It relinquishes the leverage that is the very point of software engineering.
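To put it concretely, here’s a made-up Python example of the difference:

```python
# Hypothetical illustration: the verbose version is the kind of code that's
# easy to generate by the thousand lines; the short one is the leverage.
from collections import Counter

# Generated-style: hand-rolls what the language already provides.
def count_words_verbose(text):
    counts = {}
    for word in text.split():
        word = word.lower()
        if word in counts:
            counts[word] += 1
        else:
            counts[word] = 1
    return counts

# The setup that expresses the same solution in a little code.
def count_words(text):
    return Counter(text.lower().split())
```

Both are “working code”, but only one of them is cheap to read, review, and be wrong in.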
A tool that reduces the effort needed to write large amounts of human-facing, gets-committed-to-the-source-tree code, so that it’s much easier and faster than finding the actual right way to parse your problem, is a tool that makes your project worse and that makes you a worse programmer when you hold it.
Maybe eventually someone will create a thinking machine that itself understands this, but it probably won’t be someone who charges by the token.
They are not good at consistently following best practices or architectural instructions, so you have to have some kind of hierarchical goal/context scope framework. But then the high-level goals actually need to be reasoned about, which LLMs don’t do, so efforts to make the framework analyze/plan/reflect in order to select and subdivide those top-level goals fail.
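Roughly what I mean by that framework, sketched in Python (the names and structure are my own illustration, not an existing tool):

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    context: dict = field(default_factory=dict)   # architectural rules in scope
    subgoals: list["Goal"] = field(default_factory=list)

    def decompose(self) -> list["Goal"]:
        # This is the step that fails: selecting and subdividing the
        # high-level goal requires actually reasoning about it, which an
        # LLM's analyze/plan/reflect loop doesn't reliably deliver.
        raise NotImplementedError("needs reasoning, not sampling")
```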
I have to fight with Claude to get it to just do three or four back-and-forth questions with me to establish the actual requirement, instead of dumping 1,000 lines of irrelevant code (and an MD document, and a usage guide, and a test suite) that ignores guidelines I had already given it.
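The closest workaround I’ve found (a sketch, no guarantee it sticks) is to pin the rule in the system prompt and gate code generation behind an explicit confirmation:

```python
# Prompt wording is my own; whether the model honors it varies run to run.
SYSTEM_PROMPT = """Before writing any code:
1. Ask me 3-4 clarifying questions, one at a time, to establish the requirement.
2. Produce no code, docs, guides, or tests until I reply 'requirements confirmed'.
3. Follow the project guidelines I have already given, verbatim."""
```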
It’s just a greater level of abstraction. First we talked to the computers on their own terms with punch cards.
Then assembly came along to simplify the process, letting humans write readable code that an assembler turns into machine code the computer can run.
Then we used higher-level languages like C to generate the assembly code required.
Then we created languages like Python, that were even more human-readable, doing a lot more of the heavy lifting than C.
I understand the concern, but it’s just the latest step in a process that has been playing out since programming became a thing. At every step we give up some control, for the benefit of making our jobs easier.
I disagree. Even high-level languages will consistently produce the same results. There may be low-level differences depending on the compiler and the system’s architecture, but if those are consistent, you will get the same results.
AI coding isn’t an extremely human-readable, higher-level programming language. Using an LLM to generate code adds a literal black box, plus the user’s and the LLM’s interpretation of human language (which even humans can’t do consistently), to the equation.
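A toy sketch of the contrast (the `generate` stand-in is made up, but real LLM sampling with nonzero temperature behaves the same way):

```python
import hashlib
import random

def compile_like(source: str) -> str:
    # Deterministic: same input, same output, every run.
    return hashlib.sha256(source.encode()).hexdigest()[:12]

def generate(prompt: str) -> str:
    # Nondeterministic: same input, potentially different output each call,
    # like sampling tokens from a probability distribution.
    return random.choice(["for loop", "while loop", "map()", "comprehension"])

src = "print('hello')"
assert compile_like(src) == compile_like(src)                   # always holds
print(generate("iterate a list"), generate("iterate a list"))   # may differ
```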
mistermodal@lemmy.ml 3 hours ago
Brazil must invent Lua 2.