You’re right, but it’s worse than that. I’ve been in the game for decades. One bum calc and the whole platform loses credibility. There isn’t a customer on the planet who’ll write that off as just 5%.
Comment on ‘Overhyped’ generative AI will get a ‘cold shower’ in 2024, analysts predict
Knusper@feddit.de 1 year ago
We’re getting customers that want to use LLMs to query databases and such. And I fully expect that to work well 95% of the time, but not always, while looking like it always works correctly. And you can tell customers a hundred times that it’s not 100% reliable; they’ll forget.
So, at some point, that LLM will randomly run a complete nonsense query, returning data that’s so wildly wrong that the customers notice. And precisely that is the moment when they’ll realize: holy crap, this thing isn’t always reliable?! It’s been giving us inaccurate information 5% of the time?! Why did no one inform us?!?!?!
And then we’ll tell them that we did inform them and no, it cannot be fixed. Then the project will get cancelled and everyone lived happily ever after.
Or something. Can’t wait to see it.
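A guardrail can’t fix the 5%, but it can limit the blast radius. A minimal sketch (hypothetical, using Python’s stdlib `sqlite3`; `run_llm_query` and the `orders` table are made up for illustration) of the usual mitigation: only execute model-generated SQL that passes a read-only check, so a nonsense `DROP` or `UPDATE` at least can’t do damage, even if a wrong-but-valid `SELECT` still slips through.

```python
import sqlite3

def run_llm_query(conn, sql: str):
    """Execute model-generated SQL only if it is a single read-only SELECT."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # crude multi-statement check
        raise ValueError("multiple statements rejected: " + sql)
    if not stripped.lower().startswith("select"):
        raise ValueError("non-SELECT rejected: " + sql)
    return conn.execute(stripped).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 9.99), (2, 20.0)")

print(run_llm_query(conn, "SELECT COUNT(*) FROM orders"))  # [(2,)]
try:
    run_llm_query(conn, "DROP TABLE orders")
except ValueError as e:
    print("blocked:", e)
```

Prefix matching is obviously not a real SQL parser; production systems would use a read-only connection or database role instead, but the principle is the same: never hand the model write access.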
Hackerman_uwu@lemmy.world 1 year ago
RIPandTERROR@lemmy.blahaj.zone 1 year ago
Would you trust a fresh-out-of-college intern to do it? That’s been my metric for relying on LLMs.
Womble@lemmy.world 1 year ago
Yup, this is the way to think about LLMs: infinite eager interns, willing to try anything and never trusting themselves to say “I don’t know”.
the_ocs@lemmy.world 1 year ago
It might actually help the intern if they use it:
consultancy.uk/…/chatgpt-most-benefits-below-aver…
ScreaminOctopus@sh.itjust.works 1 year ago
For a while now I’ve been speculating that people raving about these things are just bad at their jobs; I’ve never been able to get anything useful out of an LLM.
RIPandTERROR@lemmy.blahaj.zone 1 year ago
If you have a job that involves diagnosing a wide array of different problems that change day to day, it’s extremely useful. If you do the same thing over and over again, it may not be as much.