Comment on OpenAI Threatening to Ban Users for Asking Strawberry About Its Reasoning
Reverendender@sh.itjust.works 1 month ago
Do they not test them before submission?
SlopppyEngineer@lemmy.world 1 month ago
They probably tested in ideal circumstances and their stuff breaks down when even coming close to an edge case.
Reverendender@sh.itjust.works 1 month ago
I would be really interested in learning a language. The AI assistance method actually meshes very well with my learning style. I would never submit anything to anyone that I was not certain was good working code, though. My brain wouldn’t let me do it. Now I just need to choose a language.
Failx@sh.itjust.works 1 month ago
I applaud your ethics. But you don’t know how close you are to falling from grace.
Just yesterday I had to remove perfectly tested, sensible and non-AI code from our production system, not because it did not do what the author intended, but because what the author intended was flawed. And this is exactly what AI also cannot teach you right now: taking a step back to realize that your code might be right, but your intentions are not.
Definitely keep at it. But be aware you will do the wrong things even with perfectly working code.
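A minimal sketch of that "right code, wrong intention" trap. Everything here is hypothetical (the scenario, names, and numbers are invented for illustration): both computations are bug-free, but only one matches what was actually wanted.

```python
# Hypothetical example of correct code serving a flawed intention:
# the author meant "overall average rating", but averaging each user's
# personal average weights a one-review user the same as a four-review user.
def mean(xs):
    return sum(xs) / len(xs)

ratings_by_user = {"alice": [5, 5, 5, 5], "bob": [1]}

# What the (imaginary) author shipped: average of per-user averages.
average_of_averages = mean([mean(r) for r in ratings_by_user.values()])  # 3.0

# What the requirement actually meant: average over all ratings.
overall_average = mean([x for r in ratings_by_user.values() for x in r])  # 4.2
```

No test catches this, because the code does exactly what its author intended; only stepping back to question the intention does.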
SlopppyEngineer@lemmy.world 1 month ago
Yeah, the code can work flawlessly in test, but after a few months of production there are a lot more records or files and the code starts to have issues.
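One common shape of that failure, sketched in Python (the functions and data are hypothetical, not from any real system): an algorithm whose cost grows quadratically is invisible on a test fixture's few dozen records and only starts hurting once production accumulates millions.

```python
# Hypothetical duplicate check that "works flawlessly in test":
# correct answers, but O(n^2) comparisons, so it quietly degrades
# as production data grows.
def has_duplicates_quadratic(records):
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            if a == b:
                return True
    return False

# Same result with a set, O(n): survives the production-sized case too.
def has_duplicates_linear(records):
    seen = set()
    for r in records:
        if r in seen:
            return True
        seen.add(r)
    return False

sample = list(range(1000)) + [42]  # 42 appears twice
assert has_duplicates_quadratic(sample) == has_duplicates_linear(sample) == True
```

Both versions pass the same tests on small inputs; only scale exposes the difference.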
RecluseRamble@lemmy.dbzer0.com 1 month ago
Probably don’t know how to get it to run.
vrighter@discuss.tchncs.de 1 month ago
I’ve met someone employed as a dev who not only didn’t know that the compiler generates an executable file, but spent a month changing the code without noticing that none of their changes were having any effect whatsoever (because they kept running an old build of mine).