This shows that AI isn’t an infallible machine that gets everything right. Instead, think of it as a person who thinks quickly but whose output needs to be double-checked every time. AI is certainly a useful tool in many situations, but we can’t let it do the thinking for us, at least for now.
No, it’s not “like a person who can think.” Unless you mean it’s like an ADHD person who got distracted halfway through the transcript and started working on a different project in the same file.
NABDad@lemmy.world 3 weeks ago
Many, many years ago, the hospital where I work contracted a medical transcription company to transcribe dictated radiology results.
At the time, users would access the server via DEC terminals or a terminal application on their computer.
One radiologist set up a script in the terminal application to sign off all his reports with one click. Another radiologist liked it, so the first let the second copy it.
Later, the second radiologist opened a ticket with IT because all his reports were being signed by the first radiologist. Yeah, because he didn’t update the script to change the username and password being used to sign the reports.
That’s an amusing anecdote, but the terror comes from the fact that NEITHER RADIOLOGIST WAS READING THEIR REPORTS BEFORE SIGNING THEM.
The reason they are supposed to sign the report is to confirm that they have reviewed the transcriptionist’s work and verified that the report is correct.
No matter what the tool is, doctors will assume the results are correct and sign off on them without checking.