I’ve been using an AI assistant for the better part of a year, and your error/hallucination rate is like 1/10th of what I’d expect.
I’m having AI write computer programs now. When I tried it a year ago I laughed and walked away - it was useless. It has improved substantially in the past 3 months.
CONSTANTLY reinforcing fucking BASIC directives
Yes, that is the “limited context window” - in my experience people have it too.
I have given my AIs basic workflows to follow for certain operations, simple 5 to 8 step processes, and they do them correctly about 19 times out of 20. But the other 5% of the time they’ll be executing the same process and just skip a step - like many people tend to as well.
but a human can learn
In the past week I have been having my AIs “teach themselves” these workflows and priorities: prioritizing correctness over speed, respecting document hierarchies when deciding which side of a conflict needs to be edited, etc. It seems to be helping somewhat. I had one research current best practices on context window management and apply them to my projects, and that seems to have helped a little too. But while I was typing this, my AI ran off and started implementing code based on old downstream specs that should have been updated to reflect top-level changes we had just made. I interrupted it and told it to go back and do it the right way, as its work instructions already tell it to. After the reminder it did it right: limited context window.
The main problem I have with computer-programming AIs is this: when you have a human work on a problem for a month, you drop by every day or two to see how it’s going, clarify, and course correct. The AI does the equivalent work in an hour, and I just don’t have the bandwidth to keep up at that speed, so it gets just as far off in the weeds as a junior programmer locked in a room for a month and fed Jolt cola and Cheetos through a slot in the door.
An interesting response I got from my AI recently regarding this phenomenon: it produced “training seminar” materials for our development team on how to proceed incrementally with the AI work and carefully review intermediate steps. I already do that with my “work side” AI project, and that one didn’t suggest it. My home-side project, where I normally approve changes without review, is the one that suggested the training seminar.
aesthelete@lemmy.world 22 hours ago
This is the point nobody seems to get, especially people who haven’t worked with the technology.
It just does not have the ability to learn in any meaningful way. A human can pick up a new technique and master simple new techniques in a couple of hours. AI just keeps falling back on its training data no matter how many times you tell it to stop - it has no other option. It would need to be re-trained with better material in order to consistently do what you want it to do, but nobody is really re-training these things…they’re using the “foundational” models and at most “fine-tuning” them…and fine-tuning only provides a quickly punctured facade. It eventually falls back to the bulk of its learning material.