Comment on New York considers bill that would ban chatbots from giving legal, medical advice
henfredemars@infosec.pub 10 hours ago
Mixed feelings about this. Let me play devil's advocate and say that many Americans don't have access to these resources at all. Having potentially inaccurate resources might be better than nothing, or is that worse?
wewbull@feddit.uk 10 hours ago
There are billions being sunk into AI. How much health care could that buy? Your logic only makes sense if AI is free. It's not.
thisbenzingring@lemmy.today 10 hours ago
the AI devices will just have preambles and disclaimers and word things in ways to refer the user to human resources
JoshuaFalken@lemmy.world 9 hours ago
‘Should I use one teaspoon of salt in this recipe, or two?’
Two is ideal.
‘Do dogs like chicken wings?’
Wild dogs regularly hunt small animals like hare or chicken for food.
One of these answers results in a bad cake, the other results in a hurt dog. Potentially inaccurate answers aren’t much of a problem when the stakes are low, but even a simple question about what to feed a pet could end with a negative outcome.
henfredemars@infosec.pub 9 hours ago
Hm, good point. Perhaps the overconfidence an AI projects is even worse than simply knowing that you don’t know.
Catoblepas@piefed.blahaj.zone 9 hours ago
If you’re going to be your own lawyer or perform a bit of self-surgery, there is no way the AI is helping that situation. Especially if the inherent nature of AI is to validate everything you say.
HeyThisIsntTheYMCA@lemmy.world 6 hours ago
especially if it’s wrong 20-35% of the time
Passerby6497@lemmy.world 9 hours ago
Having potentially inaccurate resources might be better than nothing, or is that worse?
You pick up a mushroom in the forest and take it home. If you have no information, do you eat it? If something tells you it’s safe do you eat it?
Cyteseer@lemmy.world 9 hours ago
No, misinformation is worse.
voidsignal@lemmy.world 10 hours ago
it’s worse. In 4D it’s even worser