There is no problem keeping code quality while using AI
This opinion is contradicted by basically everyone who has attempted to use models to generate useful code that must interface with existing codebases. There are always quality issues: the output must always be reviewed for functional errors, it rarely interoperates with existing code correctly, and it might just delete your production database no matter how careful you try to be.
dukemirage@lemmy.world 1 day ago
[deleted]
NaibofTabr@infosec.pub 12 hours ago
I’m sorry, what exactly do you think this conversation is about if not using AI for code generation?
dukemirage@lemmy.world 9 hours ago
I’m sorry, too
Katana314@lemmy.world 1 day ago
I feel like I get where he’s coming from, but I can see the revulsion.
I picture someone asking their AI to write a rules engine for a game mode and getting masses of duplicative, horrific code. But in my own work, my company has encouraged an assistive tool, and once it has an idea of what I'm trying to do, it offers autocomplete options that are pretty spot on.
Still, I very much agree that it's hard to tell the difference, and in untrained hands it can definitely lead to unmaintainable code slop. Everything needs to get reviewed by knowledgeable human eyes before running.
Mika@piefed.ca 1 day ago
So don't accept code that is shit. Have a decent PR process. Accountability is still on the human.
Bronzebeard@lemmy.zip 1 day ago
The people lazy enough to have AI generate their code aren't going to do that.
NaibofTabr@infosec.pub 13 hours ago
If this is necessary, then there is, in point of fact, a "problem with keeping code quality when using AI".
dukemirage@lemmy.world 9 hours ago
A decent review process is always necessary, LLMs or not.
Mika@piefed.ca 8 hours ago
This is necessary when working with 100% human code too.