Comment on AI shouldn’t make ‘life-or-death’ decisions, says OpenAI’s Sam Altman
pearsaltchocolatebar@discuss.online 11 months ago
Yes on everything but drone strikes.
A computer would be better than humans in those scenarios. Especially driving cars, which humans are absolutely awful at.
LWD@lemm.ee 11 months ago
Have you seen a Tesla drive itself? Never mind the ethical dilemmas; they can barely navigate downtown without hitting pedestrians.
pearsaltchocolatebar@discuss.online 11 months ago
Teslas aren’t self-driving cars.
LWD@lemm.ee 11 months ago
According to their own website, they are.
pearsaltchocolatebar@discuss.online 11 months ago
Well, yes. Elon Musk is a liar. Teslas are by no means fully autonomous vehicles.
Deceptichum@kbin.social 11 months ago
So if it looks like it’s going to crash, should it automatically turn off and go “Lol good luck” to the driver now suddenly in charge of the life-and-death situation?
pearsaltchocolatebar@discuss.online 11 months ago
I’m not sure why you think that’s how they would work.
Deceptichum@kbin.social 11 months ago
Well, it's simple: who do you think should make the life-or-death decision?
pearsaltchocolatebar@discuss.online 11 months ago
The computer, of course.
A properly designed autonomous vehicle would be polling data from hundreds of sensors hundreds of times per second. A human’s reaction time is around 0.2 seconds, which is a hell of a long time in a crash scenario.
It has a far better chance of a ‘life’ outcome than a human who’s either unaware of the impending crash or in fight-or-flight mode, reacting (likely wrongly) on instinct.
Again, humans are absolutely terrible at operating giant hunks of metal that go fast. If every car on the road were autonomous, crashes would be extremely rare.
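Rough back-of-the-envelope math, just to put numbers on it (assuming a 100 Hz sensor loop and about 30 m/s, roughly highway speed; these are illustrative figures I picked, not real vehicle specs):

```python
# Rough comparison: human reaction latency vs. an autonomous vehicle's sensor loop.
# Assumed, illustrative numbers only; not real vehicle specs.

speed_m_per_s = 30.0      # assumed speed, roughly 108 km/h
human_reaction_s = 0.2    # typical human reaction time mentioned above
sensor_loop_hz = 100      # assumed polling rate ("hundreds of times per second")

sensor_loop_s = 1.0 / sensor_loop_hz

print(f"Distance covered before a human even reacts: {speed_m_per_s * human_reaction_s:.1f} m")
print(f"Distance covered per sensor update:          {speed_m_per_s * sensor_loop_s:.2f} m")
print(f"Sensor updates within one human reaction time: {human_reaction_s / sensor_loop_s:.0f}")
```

At those assumed numbers the car travels about 6 metres before a human has even started to react, while the computer has had roughly 20 sensor updates in which it could already be braking.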