Have you seen a Tesla drive itself? Never mind ethical dilemmas; they can barely navigate downtown without hitting pedestrians.
Comment on “AI shouldn’t make ‘life-or-death’ decisions, says OpenAI’s Sam Altman”
pearsaltchocolatebar@discuss.online 1 year ago
Yes on everything but drone strikes.
A computer would be better than humans in those scenarios. Especially driving cars, which humans are absolutely awful at.
LWD@lemm.ee 1 year ago
pearsaltchocolatebar@discuss.online 1 year ago
Teslas aren’t self driving cars.
LWD@lemm.ee 1 year ago
According to their own website, they are
pearsaltchocolatebar@discuss.online 1 year ago
Well, yes. Elon Musk is a liar. Teslas are by no means fully autonomous vehicles.
Deceptichum@kbin.social 1 year ago
So if it looks like it’s going to crash, should it automatically turn off and go “Lol good luck” to the driver now suddenly in charge of the life-and-death situation?
pearsaltchocolatebar@discuss.online 1 year ago
I’m not sure why you think that’s how they would work.
Deceptichum@kbin.social 1 year ago
Well, it's simple: who do you think should make the life-or-death decision?
pearsaltchocolatebar@discuss.online 1 year ago
The computer, of course.
A properly designed autonomous vehicle would be polling data from hundreds of sensors hundreds of times per second. A human’s reaction time is about 0.2 seconds, which is a hell of a long time in a crash scenario.
It has a way better chance of a ‘life’ outcome than a human who’s either unaware of the potential crash, or is in fight or flight mode and making (likely wrong) reactions based on instinct.
Again, humans are absolutely terrible at operating giant hunks of metal that go fast. If every car on the road were autonomous, crashes would be extremely rare.
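To put some rough numbers on the comparison above, here is a back-of-envelope sketch. Only the 0.2 s reaction time comes from the comment; the polling rate, sensor count, and speed are illustrative assumptions.

```python
# Back-of-envelope comparison of computer polling vs. human reaction time.
# Assumed values are marked; only HUMAN_REACTION_S comes from the comment.

HUMAN_REACTION_S = 0.2   # human reaction time cited in the comment
POLL_RATE_HZ = 100       # assumed: each sensor polled 100x per second
NUM_SENSORS = 200        # assumed: "hundreds of sensors"
SPEED_MS = 27.8          # assumed: roughly 100 km/h, in metres per second

# Sensor readings the computer can process within one human reaction window
readings = NUM_SENSORS * POLL_RATE_HZ * HUMAN_REACTION_S

# Distance the car travels before an alert human even begins to react
distance_m = SPEED_MS * HUMAN_REACTION_S

print(f"{readings:.0f} sensor readings per human reaction window")
print(f"{distance_m:.1f} m traveled before a human starts reacting")
```

Under these assumed numbers, the computer has processed thousands of readings, and the car has already covered several metres, before a human driver has even started to respond.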