There is an implicit assumption here that models are being ‘trained’, perhaps because LLMs are a hot topic. By models we usually mean things like decision trees, regression models, or Markov models that output risk probabilities for various eventualities based on patient characteristics. These things are not designed to mimic human decision makers; they are designed to make as objective a recommendation as possible based on probability and utility, and it is then left to doctors to use the result in whichever way best suits the context. If you have one liver and 10 patients, it seems prudent to have some sort of calculation as to who is likely to have the best outcome to decide who gets it, for example, rather than just asking one doctor who may be swayed by a bunch of irrelevant factors. A rough sketch of what such a model looks like is below.
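To make it concrete, here is a minimal sketch of the kind of model I mean: a logistic regression that turns patient characteristics into an outcome probability. The features and coefficients are made up for illustration, not taken from any real clinical model.

```python
import math

def poor_outcome_probability(age, bilirubin, inr, creatinine):
    """Toy logistic-regression risk model: probability of a poor
    one-year outcome. Coefficients are invented for illustration."""
    # Linear predictor: a weighted sum of patient characteristics.
    z = (-5.0 + 0.04 * age
         + 0.9 * math.log(bilirubin)
         + 1.1 * math.log(inr)
         + 0.8 * math.log(creatinine))
    # Logistic link maps the score onto a 0-1 probability.
    return 1 / (1 + math.exp(-z))

# Rank candidates by predicted risk; the doctor still decides how to act.
patients = {"A": poor_outcome_probability(54, 3.2, 1.8, 1.1),
            "B": poor_outcome_probability(61, 1.4, 1.2, 0.9)}
print(sorted(patients.items(), key=lambda kv: kv[1]))
```

Nothing is ‘trained’ on the fly here; the coefficients are fitted once from historical outcome data and then applied deterministically.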
grue@lemmy.world 6 days ago
[Citation needed]
If these things were being based on traditional AI techniques instead of neural network techniques, why are they getting implemented now (when, as you say, LLMs are the hot topic) instead of a decade or so ago when that other stuff was in vogue?
I think the assumption that they’re using training data is a very good one in the absence of evidence to the contrary.
reversedposterior@lemmy.world 6 days ago
Because it’s sensationalist reporting that is capitalising on existing anxieties in society.
The MELD score for liver transplants has been used for at least 20 years. There are plenty of other algorithmic decision models in medicine (and in insurance, to determine your premiums, and in anything else that requires a prediction about uncertain outcomes). Models are obviously refined continually over time, but nobody is going to use ChatGPT or whatever to decide whether you get a transplant. A sketch of the MELD formula follows the links below.
onlinelibrary.wiley.com/doi/pdf/…/hep.21563
onlinelibrary.wiley.com/doi/pdf/…/hep.28998
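For reference, the original (pre-MELD-Na) MELD score is just a log-linear formula over three lab values. The sketch below uses the commonly cited coefficients; the exact clamping rules vary by implementation, so treat this as illustrative and see the linked papers for the authoritative definition.

```python
import math

def meld_score(bilirubin, inr, creatinine, on_dialysis=False):
    """Approximate classic MELD score.
    Inputs: serum bilirubin (mg/dL), INR, serum creatinine (mg/dL)."""
    # Lab values below 1.0 are floored at 1.0 so the logs stay non-negative.
    bilirubin = max(bilirubin, 1.0)
    inr = max(inr, 1.0)
    # Creatinine is capped at 4.0; recent dialysis is treated as 4.0.
    creatinine = 4.0 if on_dialysis else min(max(creatinine, 1.0), 4.0)
    score = (3.78 * math.log(bilirubin)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(creatinine)
             + 6.43)
    return round(score)

print(meld_score(bilirubin=2.5, inr=1.6, creatinine=1.2))  # -> 17
```

Note there is no neural network and no ‘training data’ in the LLM sense anywhere in this: it is a fixed formula whose coefficients were estimated once from survival statistics.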