reversedposterior
@reversedposterior@lemmy.world
- Comment on Computer says no: Impact of automated decision-making on human life; Algorithms are deciding whether a patient receives an organ transplant or not; Algorithms use in Welfare, Penalise the poor. 6 days ago:
Because it’s sensationalist reporting that is capitalising on existing anxieties in society.
The MELD score for liver transplants has been used for at least 20 years. There are plenty of other algorithmic decision models used in medicine (and in insurance to determine what your premiums are, and anywhere else that requires a prediction about uncertain outcomes). Models are obviously refined continually over time, but nobody is going to use ChatGPT or whatever to decide whether you get a transplant.
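To make this concrete, here is a rough sketch of the classic (pre-MELD-Na) MELD formula as a function; the clamping bounds follow the commonly cited UNOS convention, and this is illustrative only, not a clinical tool.

```python
import math

def meld_score(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> int:
    """Classic MELD score sketch (pre-MELD-Na variant).

    Lab values below 1.0 are floored at 1.0, creatinine is capped at
    4.0 mg/dL, and the final score is bounded to the 6-40 range.
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(crea)
             + 6.43)
    return max(6, min(40, round(score)))
```

The point is that the whole thing is a transparent arithmetic formula over three lab values, which is about as far from an LLM as you can get.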
- Comment on Computer says no: Impact of automated decision-making on human life; Algorithms are deciding whether a patient receives an organ transplant or not; Algorithms use in Welfare, Penalise the poor. 6 days ago:
There is an implicit assumption here that models are being ‘trained’, perhaps because LLMs are a hot topic. By models we usually mean things like decision trees, regression models, or Markov models that output risk probabilities for various eventualities based on patient characteristics. These things are not designed to mimic human decision makers; they are designed to make as objective a recommendation as possible based on probability and utility, and it is then left to doctors to use the result in whichever way best suits the context. If you have one liver and 10 patients, it seems prudent to have some sort of calculation as to who is likely to have the best outcome, rather than just asking one doctor who may be swayed by a bunch of irrelevant factors.
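The one-liver-ten-patients case can be sketched in a few lines. Everything here is made up for illustration (the candidates, the scoring rule of survival probability times expected post-transplant life-years); real allocation policies are far more involved.

```python
# Hypothetical sketch: ranking transplant candidates by expected benefit.
# The utility function (survival probability x expected life-years) is
# illustrative only and does not reflect any real allocation policy.

def expected_benefit(p_survival: float, life_years: float) -> float:
    return p_survival * life_years

# (name, predicted 5-year survival probability, expected life-years gained)
candidates = [
    ("A", 0.70, 12.0),
    ("B", 0.90, 5.0),
    ("C", 0.60, 20.0),
]

# Rank candidates from highest to lowest expected benefit.
ranked = sorted(candidates,
                key=lambda c: expected_benefit(c[1], c[2]),
                reverse=True)

# The single available organ goes to the top-ranked candidate.
best = ranked[0][0]
```

Whatever the actual utility function, the value of the exercise is that the ranking criterion is explicit and the same for every patient.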
- Comment on Computer says no: Impact of automated decision-making on human life; Algorithms are deciding whether a patient receives an organ transplant or not; Algorithms use in Welfare, Penalise the poor. 6 days ago:
Sigh. Unfortunately there’s a lot of misinformation around this topic that gets people riled up for no reason. There’s plenty of research on healthcare decision making since Paul Meehl (see Gerd Gigerenzer for more recent work) showing that using statistical models as decision aids massively compensates for the biases that arise when you entrust a decision to a human practitioner alone. No algorithm is making a final call without supervision; they are just being used to look at situations more objectively. People get very anxious in healthcare when a model is involved, and yet the irony is that humans alone make terrible decisions.
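The decision aids in question can be trivially simple. Here is a sketch in the spirit of the fast-and-frugal trees Gigerenzer describes for emergency triage; the cues and their ordering are invented for illustration, not taken from any validated clinical instrument.

```python
# Illustrative fast-and-frugal decision tree (made-up cues, not a
# clinical tool): check one cue at a time and exit at the first answer.

def triage(st_change: bool, chest_pain_is_chief_complaint: bool,
           any_other_risk_cue: bool) -> str:
    if st_change:                          # first cue decides outright
        return "coronary care unit"
    if not chest_pain_is_chief_complaint:  # second cue decides outright
        return "regular ward"
    if any_other_risk_cue:                 # last cue breaks the tie
        return "coronary care unit"
    return "regular ward"
```

Three yes/no questions, fully auditable, applied identically to every patient; that transparency is exactly why such aids outperform unaided gut feeling in the literature.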