i mean, you could just as easily say professors and universities would stamp those habits out of human doctors, but, as we can see… they don’t.
just because an intelligence was engineered doesn’t mean it’s incapable of divergent behaviors, nor does it mean the ones it displays are intrinsically lower quality than those a human in the same scenario might exhibit. i don’t understand this POV of yours because it’s the direct opposite of what most people complain about with machine learning tools… first they’re so non-deterministic as to be useless, but now they’re so deterministic as to be entirely incapable of diverging from their habits?
setting aside the fact that i just disagree with your overall premise (that’s okay, that’s allowed on the internet and we can continue not hating each other!), i find this “contradiction,” if you can even call it that, pretty funny to see pop up out in the wild.
thanks for sharing the anecdote about the cardiac procedure; that’s quite interesting. if it isn’t too personal to ask, would you happen to know which specific procedure was involved here?
BrianTheeBiscuiteer@lemmy.world 1 week ago
Not specifically, but I think the guidance is applicable to most incisions of the heart. I think the fact that it’s a muscular, constantly moving organ makes it different from something like an epidermal stitch.
And my post isn’t saying “all mistakes are good” but that invariability can lead to stagnation. AI doesn’t do things the same way every single time, but it also doesn’t aim to “experiment” as a way to grow, or to self-reflect on its own efficacy (which could lead to model collapse). That’s almost at the level of sentience.