Comment on Sam Altman Says If Jobs Get Wiped Out, Maybe They Weren’t Even “Real Work” to Start With
MangoCats@feddit.it 18 hours ago
Granting them AI status, we should recognize that they “gained their abilities” by training on the rando junk that people post on the internet.
I have been working with AI for computer programming, semi-seriously for 3 months and pretty intensively for the last two weeks. I have also been working with humans on computer programming for 35 years. AI’s “failings” are people’s failings: they don’t follow directions reliably, and if you don’t manage them they’ll go down rabbit holes of little to no value.
With management, working with AI is like an accelerated experience with an average person, so the need for oversight becomes even more intense. Where you might let a person work independently for a week and then see what needs correcting, you really need to stay on top of the AI’s “thought process” on more of a 15-30 minute basis.
It comes down to the “hallucination rate,” which is a very fuzzy metric, but it works pretty well: at a hallucination rate of 5% (95% successful responses), AI is just about on par with human workers - faster for complex tasks, slower for simple answers.
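To put rough numbers on that cadence (a back-of-envelope sketch; the throughput figures below are illustrative guesses, not measurements):

```python
# Back-of-envelope: how soon does the first error land, and therefore
# how often should you check in? Throughput numbers are guesses:
# a human finishing ~1 reviewable unit of work per hour, the AI ~40.
HALLUCINATION_RATE = 0.05  # 5% of work units contain an error

def hours_to_first_error(units_per_hour: float) -> float:
    """Mean time until the first bad unit, assuming independent errors."""
    return 1.0 / (units_per_hour * HALLUCINATION_RATE)

for worker, rate in [("human", 1.0), ("AI", 40.0)]:
    h = hours_to_first_error(rate)
    print(f"{worker}: first error after ~{h:g} hours (~{h * 60:.0f} min)")
# human: first error after ~20 hours   -> review every few days
# AI:    first error after ~0.5 hours  -> review every 15-30 minutes
```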
Interestingly, for the past two weeks I have been having some success applying human management systems to AI: controlled documents, a tiered hierarchy of requirements, specification, and details documents, etc.
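As a concrete (and simplified) illustration of the hierarchy rule I give the AI - the tier names here are placeholders, not my actual documents:

```python
# Tiered document hierarchy, top tier first. On a conflict between two
# tiers, the lower (more derived) document is the one that gets edited,
# so it falls back in line with the tier above it.
TIERS = ["requirements", "specification", "details"]

def side_to_edit(doc_a: str, doc_b: str) -> str:
    """Of two conflicting documents, return the one to change."""
    return max(doc_a, doc_b, key=TIERS.index)

print(side_to_edit("requirements", "details"))       # -> details
print(side_to_edit("specification", "requirements")) # -> specification
```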
Passerby6497@lemmy.world 16 hours ago
I have no idea what you’re doing, but based on my own experience, your error/hallucination rate is like 1/10th of what I’d expect.
I’ve been using an AI assistant for the better part of a year, and I’d laugh at the idea that they’re right even 60% of the time without CONSTANTLY reinforcing fucking BASIC directives or telling it to provide sources for every method it suggests. I can’t even keep the damned thing reliably in the language framework I’m working in without it falling back to the raw vendor CLI in project conversations. I’m correcting the exact same mistakes week after week because the thing is braindead and doesn’t understand that you cannot use reserved keywords for your variable names. It just makes up parameters to core functions based on the question I ask it, regardless of documentation, until I call out its bullshit; then it gets super conciliatory and actually double-checks its own work instead of authoritatively lying to me.
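For the flavor of “basic” I mean - here in Python purely as an illustration (my actual stack is different), checking whether a name is reserved is a one-liner:

```python
import keyword

# The kind of rule a first-week intern internalizes exactly once:
for name in ("class", "lambda", "result"):
    print(name, "->", "reserved" if keyword.iskeyword(name) else "fine")
# class  -> reserved
# lambda -> reserved
# result -> fine
```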
You’re not wrong that AI makes human-style mistakes, but a human can learn, or at least generally doesn’t have to be taught the same fucking lesson every week for a year (or gets fired well before then). AI is artificial, but there absolutely isn’t any intelligence behind it; it’s just a stochastic parrot that somehow arrives at plausible answers the algorithm predicts you want to hear.
aesthelete@lemmy.world 8 hours ago
This is the point nobody seems to get, especially people who haven’t worked with the technology.
It just does not have the ability to learn in any meaningful way. A human can pick up a new technique and master a simple one in a couple of hours. AI just keeps falling back on its training data no matter how many times you tell it to stop. It has no other option. It would need to be re-trained with better material in order to consistently do what you want it to do, but nobody is really re-training these things…they’re using the “foundational” models and at most “fine-tuning” them…and fine-tuning only provides a quickly punctured facade…it eventually falls back to the bulk of its learning material.
MangoCats@feddit.it 16 hours ago
I’m having AI write computer programs, and when I tried it a year ago I laughed and walked away - it was useless. It has improved substantially in the past 3 months.
Yes, that is the “limited context window” - in my experience people have it too.
I have given my AIs basic workflows to follow for certain operations, simple 5-to-8-step processes, and they do them correctly about 19 times out of 20; the other 5% of the time they’ll be executing the same process and just skip a step - like many people tend to as well.
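For what that skip rate implies per step (a rough calculation, assuming each step fails independently with the same probability):

```python
# If a 5-8 step workflow completes correctly ~19 times in 20, the
# implied per-step reliability is roughly 99%.
overall = 0.95
for steps in (5, 8):
    per_step = overall ** (1 / steps)
    print(f"{steps} steps: per-step success ~{per_step:.3f}")
# 5 steps: per-step success ~0.990
# 8 steps: per-step success ~0.994
```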
In the past week I have been having my AIs “teach themselves” these workflows and priorities: prioritizing correctness over speed, respecting document hierarchies when deciding which side of a conflict needs to be edited, etc. It seems to be helping somewhat. I also had one research current best practices on context window management and apply them to my projects, and that seems to have helped a little too. But while I was typing this, my AI ran off and started implementing code based on old downstream specs that should have been updated to reflect top-level changes we had just made. I interrupted it and told it to go back and do it the right way, as its work instructions already tell it to. After the reminder it did it right: limited context window.
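For what it’s worth, the “best practice” it landed on amounted to something like this (my simplified sketch, not its actual implementation; character counts stand in for tokens):

```python
def trim_context(pinned: list[str], history: list[str],
                 budget: int) -> list[str]:
    """Keep the pinned work instructions plus as many of the newest
    messages as fit in the budget, dropping the oldest first."""
    used = sum(len(p) for p in pinned)
    kept: list[str] = []
    for msg in reversed(history):   # walk newest -> oldest
        if used + len(msg) > budget:
            break
        kept.append(msg)
        used += len(msg)
    return pinned + kept[::-1]      # restore chronological order
```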
The main problem I have with computer programming AIs is this: when you have a human work on a problem for a month, you drop by every day or two to see how it’s going, clarify, and course-correct. The AI does the equivalent work in an hour, and I just don’t have the bandwidth to keep up at that speed, so it gets just as far off in the weeds as a junior programmer would after a month alone, locked in a room and fed Jolt cola and Cheetos through a slot in the door.
An interesting response I got from my AI recently regarding this phenomenon: it produced “training seminar” materials for our development team on how to proceed incrementally with AI work and carefully review intermediate steps. Notably, on my work-side project I already do that, and that AI never suggested it; it was my home-side project, where I normally approve changes without review, that suggested the training seminar.