Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study
ChairmanMeow@programming.dev 2 days ago
If you’re doing it once, then that’s fine. But if you have to do it loads of times, and things keep getting more complex, you’ll find that you can no longer use the tools correctly or spot their mistakes.
AI raises your skill level a bit, but it also stunts your growth if used irresponsibly. And that growth may be necessary later on, especially if you’re still a junior in the field.
mindbleach@sh.itjust.works 1 day ago
Should urologists still train to detect diabetes by taste? We wouldn’t want the complexity of modern medicine to stunt their growth. These quacks can’t sniff piss with nearly the accuracy of Victorian doctors.
When a tool gets good enough, not using it is irresponsible. Sawing lumber by hand is a waste of time. Farmers today can’t use scythes worth a damn. Programming in assembly is frivolous.
At what point do we stop practicing without the tool? How big can the difference get before using the tool stops being optional? It’s not like these doctors lost or lacked the fundamentals. They’re just rusty at doing things the old way. If the new way is simply better, good, that’s progress.
ChairmanMeow@programming.dev 1 day ago
It’s true that if a tool is objectively better, then it makes little sense to not use it.
But LLMs aren’t that good yet. There’s a reason senior developers are complaining about vibecoding juniors: their code quality is often just bad. And when pressed, they often can’t justify why their code is written the way it is.
As long as experienced developers are able to do proper code review, quality control is maintained. But a vibecoding developer isn’t good at reviewing, and code review is an absolutely essential skill to have.
I see this at my company too. There’s a handful of junior devs who have managed to be fairly productive with LLMs. And to the LLM’s credit, the code is better than it was without it. But when I do code review on their stuff and ask them to explain something, I often get a nonsensical, AI-generated response. And that is a problem. These devs also don’t do a lot of code review, if any, and when they do, they often have very minor comments or none at all. Some just don’t do any reviews, stating they’re not confident approving code (which is honest, but also problematic, of course).
I don’t mind a junior dev, or any dev for that matter, using an LLM as an assistant. I do mind an LLM masquerading as a developer, using a junior dev as a meat puppet, if you get what I mean.
mindbleach@sh.itjust.works 1 day ago
We’re not talking about LLMs.
These doctors didn’t ask ChatGPT “does this look like cancer.” We’re talking about domain-specific medical tools.
ChairmanMeow@programming.dev 1 day ago
I was responding to a thread by RgoueBananas, who is clearly talking about LLMs, as he drew a parallel with IT.