Comment on I Went All-In on AI. The MIT Study Is Right.
kreskin@lemmy.world 4 weeks ago
I work at a company that is all-in on selling AI, and we are trying desperately to use this AI ourselves. We’ve concluded internally that AI can only be trusted with small use cases that are easily validated by humans, or for fast prototyping work… hack-day stuff to validate a possibility, not an actual high-quality, safe, and scalable implementation, or for writing tests of existing code to increase test coverage (yes, I know that’s a bad idea, but QA blessed the result… so uh… cool).
The use case we zeroed in on is writing well-schema'd configs in YAML or JSON. Even then, a good percentage of the time the AI will miss very significant mandatory sections, or add hallucinations that are unrelated to the task at hand. We can then use AI to test the AI's work, several times, using several AIs. To a degree it'll catch a lot of the issues, but not all. So we then code review and lint with code we wrote that AI never touched, and send all the erroring configs to a human. It does work, but it can't be used for mission-critical applications. And nothing about the AI or the process of using it is free. It's also disturbingly non-deterministic: did it fail? Run it again a few times and it'll pass. We think it still saves money when done at scale, but not as much as we promise external AI consumers. Senior leadership knows it's currently overhyped trash and pressures us to use it anyway, on the expectation that it'll improve in the future, so we give the mandatory crisp salute of alignment and we're off.
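For what it's worth, the "lint with code we wrote that AI never touched" step can be as simple as validating the generated config against a hand-maintained schema. A minimal sketch of that idea, assuming PyYAML and the jsonschema library; the file names and the human-review hand-off are placeholders, not our actual tooling:

```python
# Sketch: validate an AI-generated YAML config against a human-written
# JSON Schema, and flag failures for human review. "schema.json",
# "config.yaml", and the review hand-off are illustrative names only.
import json
import yaml  # PyYAML
from jsonschema import Draft202012Validator

with open("schema.json") as f:   # schema written and maintained by humans
    schema = json.load(f)
with open("config.yaml") as f:   # config emitted by the LLM
    config = yaml.safe_load(f)

validator = Draft202012Validator(schema)
errors = sorted(validator.iter_errors(config), key=lambda e: list(e.path))

if errors:
    # Missing mandatory sections show up as "required" violations;
    # hallucinated keys get rejected if the schema sets
    # "additionalProperties": false.
    for err in errors:
        print(f"{list(err.path)}: {err.message}")
    # send_to_human_review(config, errors)  # hypothetical hand-off
else:
    print("config passed schema validation")
```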
I will say it's great for writing reviews. It adds nonsense and doesn't get the whole review correct, but it writes very flowery stuff so managers don't have to. So we use it for first drafts and then remove a lot of the true BS. If it gets stuff wrong, oh well, human perception is flawed.
jj4211@lemmy.world 4 weeks ago
At work there are a lot of rituals where processes demand that people write long internal documents that no one will read. Management will at least open them, scroll through, and be happy to see such long documents with credible-looking diagrams, but never actually read them, maybe glancing at a sentence or two they don't understand and nodding sagely.
An LLM can generate such documents just fine.
Incidentally, an email went out to salespeople. It told them they didn't need to know how to code or even have technical skills; they could just use Gemini 3 to code up whatever a client wants and then sell it to them. I can't imagine the mind that thinks that would be a viable business strategy, even if it worked that well.
IronBird@lemmy.world 4 weeks ago
fantastic for pumping a bubble though, to idiots with more $ than sense
jj4211@lemmy.world 4 weeks ago
Yeah, this one is going to hurt. I'm pretty sure my rather long career will be toast, as my company, and most of my network of opportunities, are companies that have bought so hard into the AI hype that I don't know whether they'll be able to survive it going away.
IronBird@lemmy.world 4 weeks ago
if you don’t mind compromising your morals somewhat and have a moderate understanding of how the stock market casino works… loads of $ to be made when it pops, at least