Just one more AI model, please, that’ll do it, just one more, just you wait, have you seen how fast things are improving? Just one more. C’mon, just one more…
Comment on Tesla Robotaxis Reportedly Crashing at a Rate That's 4x Higher Than Humans
slevinkelevra@sh.itjust.works 3 weeks ago
Yeah, that’s well known by now. However, safety through additional radar sensors costs money, and they can’t have that.
halcyoncmdr@piefed.social 3 weeks ago
I don’t think it’s necessarily about cost. They were removing sensors before costs rose and before supply became more limited with things like the tariffs.
Too many sensors also cause issues; adding more is not an easy fix. Sensor fusion is a notoriously difficult part of robotics. It can help with edge cases and verification, but it can also exacerbate issues. Sensors will report different things at some point. Which one gets priority? Is a sensor failing, or is it reporting inaccurate data? How do you determine what is inaccurate if the data is still within normal tolerances?
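To make the dilemma concrete, here’s a toy sketch of per-frame fusion. Every name and threshold is invented for illustration and has nothing to do with Tesla’s actual stack:

```python
# Toy arbitration between two range sensors that disagree. Hypothetical
# names and thresholds, purely to illustrate the "which one wins?" problem.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str
    distance_m: float  # reported distance to nearest obstacle ahead
    variance: float    # sensor's self-reported noise estimate

def fuse(readings: list[Reading], disagreement_m: float = 10.0) -> float:
    distances = [r.distance_m for r in readings]
    if max(distances) - min(distances) > disagreement_m:
        # Sensors disagree badly. Is one failing, or seeing a real hazard the
        # other misses? A conservative policy trusts the closest (scariest)
        # reading; an availability-minded one would discard the outlier.
        return min(distances)
    # When sensors roughly agree, take an inverse-variance weighted average.
    weights = [1.0 / r.variance for r in readings]
    return sum(w * d for w, d in zip(weights, distances)) / sum(weights)

# Radar bounces off an overhead sign; the camera sees open road.
print(fuse([Reading("radar", 40.0, 1.0), Reading("camera", 200.0, 4.0)]))  # 40.0 -> brakes for a sign
```

Either policy fails somewhere: trust the scarier reading and you brake for overhead signs; discard outliers and you can drive into the one real hazard only one sensor saw.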
More on topic, though… My question is why the robotaxi accident rate is different from the regular FSD rate. Ostensibly they should be nearly identical.
NotMyOldRedditName@lemmy.world 3 weeks ago
The regular FSD rate has the driver (you) monitoring the car, so there will be fewer accidents IF you properly stay attentive.
The FSD rides with a safety monitor (in the passenger seat) had a button to stop the ride.
The driverless, no-monitor cars have nothing.
So you get more accidents as you remove that supervision.
73ms@sopuli.xyz 3 weeks ago
The unsupervised cars are very unlikely to be involved in these crashes yet, because according to the Robotaxi tracker there was only a single one of those operational, and only for the final week of January.
As you suggest, there’s a difference in how much the monitor can really do about FSD misbehaving compared to a driver in the driver’s seat, though. On the other hand, they’re still forced to have the monitor behind the wheel in California, so you wouldn’t expect a difference in accident rate based on that there; it would be interesting to compare.
NotMyOldRedditName@lemmy.world 2 weeks ago
There are multiple unsupervised cars around now; it was only the one before the earnings call, then a few days after earnings they came back and weren’t followed by chase cars. There’s a handful of videos over many days out there now if you want to watch any. The latest gaffe video I’ve seen is from last week, where it drove into a construction zone that wasn’t blocked off.
I would still expect a difference between California and people like you and me using it.
My understanding is that in California, they’ve been told not to intervene unless necessary, but when someone like us is behind the steering wheel, what we consider necessary is going to be different from what they’ve been told to consider necessary.
So we would likely intervene much sooner than the safety driver in California, which would mean we were letting the car get into fewer situations we perceive to be dicey.
merc@sh.itjust.works 2 weeks ago
Which one gets priority?
The one that says there’s a danger.
halcyoncmdr@piefed.social 2 weeks ago
Alright, so the radar is detecting a large object in front of the vehicle while it’s travelling at highway speeds. The vision system can see the road is clear.
So with your rule of listening to whichever sensor says there’s a danger, it slams on the brakes to stop the car. But it’s actually an overpass, or an overhead sign that the radar is reflecting off while the road is clear. Now you have phantom braking.
Now extend that to a sensor or connection failure. The radar or a wiring harness is failing and sporadically reporting back close contacts that don’t exist. More phantom braking, and this time with no obvious cause.
merc@sh.itjust.works 2 weeks ago
Now you have phantom braking.
Phantom braking is better than Wile E. Coyote-ing into a wall.
and this time with no obvious cause.
Again, better than not braking because another sensor says there’s nothing ahead. I would hope that flaky sensors are something that would cause the vehicle to show a “needs service” light or something. But even without that, if your car is doing phantom braking, I’d hope you’d take it in.
But consider your scenario without radar and with only a camera sensor. The vision system “can see the road is clear”, and there’s no radar sensor to tell it otherwise. Turns out the vision system is buggy, or the lens is broken, or the camera got knocked out of alignment, or whatever. Now it’s claiming the road ahead is clear when in fact there’s a train currently in the crossing directly ahead. Boom, now you hit the train. I’d much prefer phantom braking and having multiple sensors each trying to detect dangers ahead.
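One plausible way to get that “needs service” behaviour, sketched under the same made-up assumptions as the fusion toy above: a real obstacle produces a brief disagreement between sensors, while a flaky sensor or harness disagrees persistently, so you can brake conservatively in the moment and still flag the hardware over time:

```python
# Hypothetical disagreement tracker: arbitrate each frame conservatively,
# but treat *persistent* disagreement as a likely hardware fault.
# Window sizes and ratios are invented for illustration.
from collections import deque

class DisagreementMonitor:
    def __init__(self, window: int = 500, fault_ratio: float = 0.2):
        self.history = deque(maxlen=window)  # 1 = sensors disagreed this frame
        self.fault_ratio = fault_ratio

    def update(self, radar_m: float, camera_m: float, threshold_m: float = 10.0) -> bool:
        self.history.append(1 if abs(radar_m - camera_m) > threshold_m else 0)
        # A real hazard is transient; a failing radar or wiring harness
        # disagrees constantly. Constant disagreement -> light the lamp.
        return sum(self.history) / len(self.history) > self.fault_ratio

monitor = DisagreementMonitor()
for _ in range(300):
    needs_service = monitor.update(radar_m=4.0, camera_m=200.0)  # radar stuck on close contacts
print(needs_service)  # True: sporadic-close radar vs. clear camera, flagged for service
```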
parzival@lemmy.org 3 weeks ago
I’m not too sure it’s about cost; it seems to be about Elon not wanting to admit he was wrong, as he made a big point of lidar being useless.
tomalley8342@lemmy.world 3 weeks ago
Nah, that one’s on Elon just being a stubborn bitch and thinking he knows better than everybody else (as usual).
ageedizzle@piefed.ca 3 weeks ago
He’s right in that if current AI models were genuinely intelligent in the way humans are, then cameras would be enough to achieve at least human-level driving skills. The problem, of course, is that AI models are not nearly at that level yet.
T156@lemmy.world 3 weeks ago
Even if they were, would it not be better to give the car better senses?
Humans don’t have LIDAR because we can’t just hook something into a human’s brain and have it work. If you can do that with a self-driving car, why cut it down to human senses?
48954246@lemmy.world 3 weeks ago
Exactly, and by this logic, why have motors or wheels?
You don’t have wheels, so you shouldn’t use cars.
ageedizzle@piefed.ca 3 weeks ago
I agree it would be better. I’m just saying that in theory, cameras are all that would be required to achieve human-level performance, so long as the AI was capable enough.
Clent@lemmy.dbzer0.com 3 weeks ago
Cameras are inferior to human vision in many ways. Especially the ones used on Teslas.
alsimoneau@lemmy.ca 3 weeks ago
Lower dynamic range for one.
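For a rough sense of scale (ballpark figures often cited, not measurements of Tesla’s specific cameras): dynamic range is the log of the max/min luminance ratio a sensor can capture at once, usually quoted in stops.

```python
# Dynamic range in photographic stops: log2 of the usable contrast ratio.
# The contrast ratios below are commonly cited ballparks, not measured values.
import math

def stops(contrast_ratio: float) -> float:
    return math.log2(contrast_ratio)

print(round(stops(1_000_000)))  # ~20 stops: adapted human eye
print(round(stops(4_000)))      # ~12 stops: typical consumer camera sensor
```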
CheeseNoodle@lemmy.world 2 weeks ago
Also, the human brain is still on par with some of the world’s best supercomputers; I doubt a Tesla has that much onboard processing power.
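A back-of-envelope check, with the caveat that brain-compute estimates span orders of magnitude, and the only hard-ish number here is the ~144 TOPS Tesla has publicly quoted for the HW3 FSD computer:

```python
# Very rough comparison: Tesla HW3 (~144 TOPS, publicly quoted) vs. common
# estimates of brain-equivalent compute (~1e15 to 1e18 ops/s, hotly debated).
hw3_ops_per_s = 144e12
brain_low, brain_high = 1e15, 1e18
print(f"{brain_low / hw3_ops_per_s:.0f}x to {brain_high / hw3_ops_per_s:.0f}x short")  # 7x to 6944x short
```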
ageedizzle@piefed.ca 2 weeks ago
Good point. Though I’ve heard some of these self-driving cars connect remotely to a person to help drive when the AI doesn’t know what to do, so I guess it’s conceivable that the car could connect to the cloud. That would be super error-prone, though. Connectivity issues could brick your car.
kameecoding@lemmy.world 3 weeks ago
I am a human, and there have been occasions where I couldn’t tell if something was an obstacle on the road or a weird shadow…
merc@sh.itjust.works 2 weeks ago
And we humans have built-in binocular vision that we’ve been training for at least a decade and a half by the time we’re allowed to drive.
Also, think about what you do in that situation where there’s a weird shadow. Slow down, sure. But also move your head up and down, side to side, trying to use that powerful binocular vision to get different angles on that strange shadow. How many front-facing cameras does Tesla have? Maybe three, and one of those is mounted on the bumper. In theory, three cameras could give it three different “viewpoints” for binocular vision. But that’s not as good as a human driver who can shift their eyes around to multiple points to examine a situation. And if one of those three cameras is obscured (say, the one on the bumper), you’re down to basic binocular vision without even the ability to take a look from a different angle.
Plus, we have evidence that Tesla isn’t even able to use its cameras to achieve binocular vision. If it worked, it shouldn’t have fallen for the Wile E. Coyote trick.
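For reference, the depth that binocular (stereo) vision recovers comes from the disparity between the two views. A toy version, with made-up camera numbers:

```python
# Stereo depth from disparity: Z = f * B / d, where f is the focal length in
# pixels, B the baseline between the two cameras, and d the pixel disparity.
# All numbers here are invented for illustration.
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

print(stereo_depth_m(1000, 0.3, 50))  # 6.0 m: nearby obstacle, large disparity
print(stereo_depth_m(1000, 0.3, 5))   # 60.0 m: far object, small disparity
# A painted wall is flat: every pixel on it carries the disparity of the wall
# itself, so a working stereo system should report "surface", not open road.
```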
ageedizzle@piefed.ca 3 weeks ago
Yes. In theory, cameras should be enough to get you up to human-level driving competence, but even that is a low bar.
73ms@sopuli.xyz 3 weeks ago
Well, I mean, it’s the one thing that Tesla’s got going for it compared to Waymo, which is way ahead of them.