Could a human have judged it better? Maybe not. I think a better question to ask is, "Should anyone be sent back into a violent domestic situation with no additional protection, no matter the calculated risk?" And as someone who has been on the receiving end of that conversation, I would say no…no one should be told that, even though they were in a terrifying, life-threatening situation, they will not be provided protection, and no further steps will be taken to keep them from being injured again, or from being killed next time. But even without algorithms, that happens constantly…the only thing the algorithm accomplishes is that the investigator / social worker / etc doesn’t have to have any kind of personal connection with the victim, so they don’t have to feel some kind of way about giving an innocent person a death sentence, because they were just doing what the computer told them to.
Final thought: When you pair this practice with the ongoing conversation around the legality of women seeking divorce without their husband’s consent, you have a terrifying and consistently deadly situation.
Vanth@reddthat.com 4 months ago
I also wonder if the algorithm is being used to override the victim.
Like if she asked for help, if she didn’t want to go home and wanted to go to a shelter and get a restraining order. But they said, “low risk, nope, no resources for you”. Depending on her situation, home to her abuser may have been her only option then. In which case, this is a level of horror the article didn’t cover. The article really doesn’t explain how the risk level output by the algorithm is used.
madsen@lemmy.world 4 months ago
The article mentions that one woman (Stefany González Escarraman) went for a restraining order the day after the system deemed her at “low risk”, and the judge denied it, referring to the VioGen score.
It also says:
You could argue that the problem isn’t so much the algorithm itself as it is the level of reliance upon it. The algorithm isn’t unproblematic though. The fact that it just spits out a simple score (“negligible”, “low”, “medium”, “high”, “extreme”) is, IMO, an indicator that someone’s trying to conflate far too many factors into a single dimension. I have a really hard time believing that anyone knowledgeable in criminal psychology and/or domestic abuse would agree that 35 yes-or-no questions would be anywhere near sufficient to evaluate the risk of repeated abuse. (I know nothing about domestic abuse or criminal psychology, so I could be completely wrong.)
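To make that conflation concrete, here’s a minimal sketch of how a questionnaire-based scorer might collapse a pile of yes/no answers into one of five buckets. The items, weights, and cut-offs below are made up for illustration; they are not VioGen’s actual questions, weights, or thresholds, which aren’t public in that detail.

```python
# Purely illustrative: a toy questionnaire scorer, NOT VioGen's actual
# items, weights, or thresholds.

RISK_LABELS = ["negligible", "low", "medium", "high", "extreme"]

def score_risk(answers: dict[str, bool], weights: dict[str, float]) -> str:
    """Collapse many yes/no answers into a single weighted sum,
    then bucket that one number into one of five labels."""
    total = sum(weights[q] for q, yes in answers.items() if yes)

    # Hypothetical cut-offs; everything about the case gets squeezed
    # into where this single number falls.
    if total < 5:
        return RISK_LABELS[0]
    elif total < 10:
        return RISK_LABELS[1]
    elif total < 20:
        return RISK_LABELS[2]
    elif total < 30:
        return RISK_LABELS[3]
    return RISK_LABELS[4]

# Two very different situations can land in the same bucket, and a
# single miskeyed answer can drop a case a whole risk level.
weights = {"prior_threats": 8, "strangulation": 12, "weapon_access": 10}
print(score_risk({"prior_threats": True, "strangulation": False,
                  "weapon_access": False}, weights))  # -> "low"
```

The point isn’t that any of these numbers are real; it’s that once everything is funneled into one score, all the nuance that distinguishes one case from another is gone by the time a judge or officer reads the label.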
Apart from that, I also find this highly problematic:
rottingleaf@lemmy.world 4 months ago
From those quotes, it looks like Idiocracy.
UserMeNever@feddit.nl 3 months ago
The judge should be in jail for that, and if the judge thinks the “system” can do his job, then he should quit, as he is clearly useless.
braxy29@lemmy.world 3 months ago
i could say a lot in response to your comment about the benefits and shortcomings of algorithms (or put another way, screening tools or assessments), but i’m tired.
i will just point out this, for anyone reading.
www.ncbi.nlm.nih.gov/pmc/articles/PMC2573025/
i am exceedingly troubled that something which is commonly regarded as indicating very high risk when working with victims of domestic violence was ignored in the cited case (disclaimer - i haven’t read the article). if the algorithm fails to consider history of strangulation, it’s garbage. if the user of the algorithm did not include that information (and it was disclosed to them), or keyed it incorrectly, they made an egregious error or omission.
i suppose, without getting into it, i would add - 35 questions (ie established statistical risk factors) is a good amount. large categories are fine. no screening tool is totally accurate, because we can’t predict the future or have total and complete understanding of complex situations. tools are only useful to people trained to use them and with accurate data and inputs. screening tools and algorithms must find a balance between accurate capture and avoiding false positives.
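as a rough illustration of that last point, here’s a toy sketch of how moving a screening cut-off trades missed cases (false negatives) against false alarms (false positives). the scores and outcomes are made up, not drawn from any real instrument.

```python
# Illustrative only: how moving a screening cut-off trades missed cases
# against false alarms. Toy data, not from any real instrument.

cases = [  # (score, was_revictimized)
    (4, False), (6, False), (7, True), (9, False),
    (11, True), (14, False), (18, True), (25, True),
]

def confusion(threshold: int):
    """Count true/false positives and negatives at a given cut-off."""
    tp = sum(1 for s, y in cases if s >= threshold and y)
    fp = sum(1 for s, y in cases if s >= threshold and not y)
    fn = sum(1 for s, y in cases if s < threshold and y)
    tn = sum(1 for s, y in cases if s < threshold and not y)
    return tp, fp, fn, tn

for t in (5, 10, 20):
    tp, fp, fn, tn = confusion(t)
    print(f"threshold {t:>2}: catch {tp}/{tp + fn} true cases, "
          f"{fp} false alarms")
```

lowering the cut-off catches more of the people who will actually be harmed again, but flags more people who won’t be; raising it does the opposite. where you set it is a policy choice about which error you can live with, not something the tool can decide for you.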