First Tesla Autopilot Death


Tyndmyr
Posts: 10119
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: First Tesla Autopilot Death

Postby Tyndmyr » Tue Jul 05, 2016 8:29 pm UTC

commodorejohn wrote:I'm curious about that "130 million miles driven" - that is 130 million miles with the autopilot engaged, right?


Good thing to check, but yes, Tesla specified the number was with autopilot engaged.

KnightExemplar
Posts: 5203
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: First Tesla Autopilot Death

Postby KnightExemplar » Tue Jul 05, 2016 8:30 pm UTC

Tyndmyr wrote:
No. This system is not even as good as the crappy humans we are. Maybe one day in the future we will have autonomous drivers that are better than us. But let's not drink the Kool-Aid and pretend that the day has come yet.


It's doing pretty good. It's better than some drivers, certainly. All, or even a majority? Maybe not, but it's at least playing in the same rough ballpark. I know everyone believes themselves to be an exceptional driver, and thus views reliability approaching that of an average driver as crap compared to their mad skillz, but most of those people are wrong. Sure, sure, maybe we COULD be much better, but much of the time we are tired, or drunk, or looking at cell phones, or bored and inattentive. This accident is not so different from what people do.


No. It really isn't.

----------

Edit: I had ninja-edited the stuff above into my earlier post, but I think it makes more sense as a new post.
First Strike +1/+1 and Indestructible.

cphite
Posts: 1083
Joined: Wed Mar 30, 2011 5:27 pm UTC

Re: First Tesla Autopilot Death

Postby cphite » Tue Jul 05, 2016 8:30 pm UTC

sardia wrote:Cphite, how is that any different from when Joe Schmoe misses crucial maintenance and causes deaths anyway in a regular car?


That's the point: We don't know how different it is, because we have no data. The data we do have come from test cases that do not reflect real-world ownership, and from testers who have a vested interest in the results.

Should hybrids be banned because they use complex braking systems?


No, because hybrids still employ the same basic braking mechanism as any other car; the added complexity does not interfere with the underlying friction brake.

Tyndmyr
Posts: 10119
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: First Tesla Autopilot Death

Postby Tyndmyr » Tue Jul 05, 2016 8:35 pm UTC

Makes sense with the flow of conversation, IMO; we posted at basically the same time.

I've seen some REALLY bad human-error driving. An elderly person who couldn't turn his head, slowly backing up until he bumped into the vehicle behind him, repeatedly, while trying to leave a parallel parking space. Some idiot passing me on the right through unplowed roads after a snowstorm. People literally looking into the backseat to yell at kids, then driving into parked cars at a stoplight.

These aren't even strange. People hit obvious obstacles allll the time.

It's not that this is some miracle tech, because it certainly isn't. It's just...amazing that society functions despite being filled with humans who mostly are giving zero shits about what they're doing.

We can't tell for sure if it's *actually* better. One fatality in 130M miles versus one per 100M: a single accident is simply not enough to say with much certainty that it's actually better. But it's at least vaguely in the same league.
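
If you want to see how weak a single data point is, here's a rough back-of-the-envelope check (a Python sketch using the figures quoted above; scipy is assumed, and the interval is the standard exact Poisson one):

    from scipy.stats import chi2

    miles = 130e6   # Autopilot miles, Tesla's figure
    deaths = 1      # the single fatality

    # Exact (Garwood) 95% Poisson confidence interval on the count:
    lo = chi2.ppf(0.025, 2 * deaths) / 2        # ~0.025 expected deaths
    hi = chi2.ppf(0.975, 2 * (deaths + 1)) / 2  # ~5.57 expected deaths

    print(f"point estimate: 1 death per {miles / deaths / 1e6:.0f}M miles")
    print(f"95% CI: 1 per {miles / hi / 1e6:.0f}M to 1 per {miles / lo / 1e6:.0f}M miles")
    # -> roughly 1 per 23M miles up to 1 per ~5100M miles. The human
    #    baseline of ~1 per 100M miles sits comfortably inside that
    #    interval, so this one crash can't rank the two either way.

Same league; nowhere near a verdict.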

KnightExemplar
Posts: 5203
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: First Tesla Autopilot Death

Postby KnightExemplar » Tue Jul 05, 2016 8:41 pm UTC

Tyndmyr wrote:We can't tell for sure if it's *actually* better. One fatality in 130M miles versus one per 100M: a single accident is simply not enough to say with much certainty that it's actually better. But it's at least vaguely in the same league.


I'm not sure the accident numbers will ever be released. The fatality numbers are released and well-tracked. But standard run-of-the-mill accidents? I don't think there are any statistics on those.

As stated earlier: the only reason we have "130 million miles on Autopilot" is that Tesla runs a hell of a lot of spyware in your car to track everything. And then Tesla felt like they needed to grace us with that number because it'd be better for PR purposes.
First Strike +1/+1 and Indestructible.

commodorejohn
Posts: 849
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: First Tesla Autopilot Death

Postby commodorejohn » Tue Jul 05, 2016 8:42 pm UTC

Tyndmyr wrote:It's not that this is some miracle tech, because it certainly isn't. It's just...amazing that society functions despite being filled with humans who mostly are giving zero shits about what they're doing.

Here's the thing, though. Yes, people in the aggregate are frequently idiots and bad drivers. But: unless you're going to sponsor a program to buy every terrible driver a Tesla so that it's at the very least a lateral move, all you're doing is putting more untested and possibly not very good drivers on the road, not removing known bad drivers from it.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

Dauric
Posts: 3737
Joined: Wed Aug 05, 2009 6:58 pm UTC
Location: In midair, traversing laterally over a container of sharks. No water, just sharks, with lasers.

Re: First Tesla Autopilot Death

Postby Dauric » Tue Jul 05, 2016 8:44 pm UTC

commodorejohn wrote:...unless you're going to sponsor a program to buy every terrible driver a Tesla so that it's at the very least a lateral move...


Then you run into the problem of "How terrible a driver do I have to be to get a Tesla?"
We're in the traffic-chopper over the XKCD boards where there's been a thread-derailment. A Liquefied Godwin spill has evacuated threads in a forty-post radius of the accident, Lolcats and TVTropes have broken free of their containers. It is believed that the Point has perished.

commodorejohn
Posts: 849
Joined: Thu Dec 10, 2009 6:21 pm UTC
Location: Placerville, CA

Re: First Tesla Autopilot Death

Postby commodorejohn » Tue Jul 05, 2016 8:47 pm UTC

Yup.
"'Legacy code' often differs from its suggested alternative by actually working and scaling."
- Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

cphite
Posts: 1083
Joined: Wed Mar 30, 2011 5:27 pm UTC

Re: First Tesla Autopilot Death

Postby cphite » Tue Jul 05, 2016 8:51 pm UTC

LaserGuy wrote:
cphite wrote:
HES wrote:The most serious collisions are when an HGV veers sideways across three lanes, through the central barrier, and across another three lanes of oncoming traffic. Most commonly attributed to tired, distracted, or unwell drivers. Taking that risk away sounds like a great idea to me.


Sure; if you're actually taking that risk away. Not everybody is convinced that it's being taken away.

We already have an example here of an autonomous vehicle failing to notice a tractor trailer.


And? I posted just upthread an example of a bus driver failing to notice a tractor trailer. I think you underestimate how terrible some human drivers are.


I think you underestimate the potential consequences of a standardized system failing so dramatically as to miss the presence of a tractor trailer. A human missing a tractor trailer suggests there was something wrong with that person; drunk, distracted, just plain stupid, etc. A system missing a tractor trailer implies a fundamental flaw in that system. The reasonable assumption is that all copies of that system share the flaw.

Pedestrians and other people (and drivers) are one of the hard parts. Another hard part is that we aren't anywhere close to having perfect sensors that work all the time, or perfect computers that work all the time. For things like trains and planes that undergo constant maintenance and monitoring of systems, you can mitigate that... I'm not sure your average private citizen is going to do that.


No, we're never going to have perfect computers or perfect sensors. Will we have computers and sensors that can perform this task significantly better than a human? Almost certainly. Will we have computers and sensors that can perform this task better than my 90 year old grandfather? I would venture that we already do. It's baffling to me that people are demanding perfection from a system that 1) is less than five years old and 2) is replacing humans whose failure rate at the task is pretty high to begin with.


First off, it's not "almost certain" that these systems will outperform even average drivers in the foreseeable future. Certainly that's what the manufacturers want people to believe, but the evidence simply isn't there. Better than grandpa is a more reasonable benchmark.

What data we have suggest that, under controlled conditions, these cars perform very well. That includes very frequent maintenance and monitoring, and for the most part sticking to known roads with good markings, good conditions, etc. We have very little data - almost none - from these vehicles under uncontrolled conditions.

Second... I don't believe anyone is demanding perfection. I for one am certain that "perfect" isn't even attainable. But the underlying assumption that a lot of people seem to be making is that these things are "at least" as good as human drivers and that we ought to just accept the fact that they're coming (the "tough shit" argument) and be happy. And the fact of the matter is we don't have the data to say that they're as good as human drivers.

And frankly, the problem of making these things as good as human drivers - or even "good enough" - is not the only issue. Another issue - perhaps even bigger and more challenging - is protecting these systems from malicious actors. People have already shown the ability to hack normal vehicles to the point of interference, and we're talking about vehicles that are 1) designed to drive themselves, and 2) designed to accept external communications.

KnightExemplar
Posts: 5203
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: First Tesla Autopilot Death

Postby KnightExemplar » Tue Jul 05, 2016 8:57 pm UTC

@cphite: To be clear with my argument, I believe this "Autopilot" is Tesla's crappy marketing. This is NOT autonomous driving. As noted elsewhere in this thread, Google uses LIDAR, Radar, AND optics. Such a system has been deployed for years, and I don't recall any deaths that have occurred while the Google Cars have been automatically taking street-view pictures for us.

The problem is that Tesla equips the Model S with only a single radar (which can't see tractor trailers, or anything else at windshield height) and a camera (which, in this crash, was blinded by the sun). Tesla's system is fundamentally inferior. It's not designed to be fully autonomous, but semi-autonomous. Tesla's technology requires an attentive driver. It's supposed to augment the human driver, not replace them.
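
To make the redundancy point concrete, here's a toy sketch (Python; it illustrates the general voting technique, not Tesla's or Google's actual code): poll the sensors that still trust themselves, and fail safe when too few do.

    import math

    def obstacle_ahead(*readings):
        """Each reading is True (obstacle), False (clear), or None if the
        sensor reports itself degraded (e.g. a camera blinded by the sun)."""
        healthy = [r for r in readings if r is not None]
        if not healthy:
            return True                      # totally blind: fail safe
        need = math.ceil(len(healthy) / 2)   # majority of healthy sensors
        return sum(healthy) >= need

    # Triple-modality stack meets a windshield-height trailer with the
    # camera sun-blinded: LIDAR still sees it, radar doesn't.
    print(obstacle_ahead(True, False, None))   # True  -> brake
    # Radar-only view of the same trailer:
    print(obstacle_ahead(False))               # False -> sails on

With a single sensing mode, a sensor blind spot is a system blind spot; with independent modalities, the blinded sensor becomes an outvoted minority.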

And the human in this case was watching Harry Potter instead of driving.

Autonomous cars, when they come out for real, will likely improve transportation. But Tesla is trying to half-ass their way there: they don't have a fully autonomous system (despite the name "Autopilot"), and they have let marketing hype and Youtubers "drive" their cars with hands off the steering wheel (unlike BMW and Mercedes, which turn off their lane-assist technologies if your hands remain off the wheel for more than three seconds).

----------------------

I think augmenting the human with these technologies can help. But the marketing effort must be clear. The approach of Subaru, BMW, Mercedes to play down their systems is what we need.
First Strike +1/+1 and Indestructible.

Dauric
Posts: 3737
Joined: Wed Aug 05, 2009 6:58 pm UTC
Location: In midair, traversing laterally over a container of sharks. No water, just sharks, with lasers.

Re: First Tesla Autopilot Death

Postby Dauric » Tue Jul 05, 2016 9:03 pm UTC

KnightExemplar wrote:@cphite: To be clear with my argument, I believe this "Autopilot" is Tesla's crappy marketing. This is NOT autonomous driving. As noted elsewhere in this thread, Google uses LIDAR, Radar, AND optics. Such a system has been deployed for years, and I don't recall any deaths that have occurred while the Google Cars have been automatically taking street-view pictures for us.

The problem is that Tesla equips the Model S with only a single radar (which can't see tractor trailers, or anything else at windshield height) and a camera (which, in this crash, was blinded by the sun). Tesla's system is fundamentally inferior. It's not designed to be fully autonomous, but semi-autonomous. Tesla's technology requires an attentive driver. It's supposed to augment the human driver, not replace them.

And the human in this case was watching Harry Potter instead of driving.

Autonomous cars, when they come out for real, will likely improve transportation. But Tesla is trying to half-ass their way there: they don't have a fully autonomous system (despite the name "Autopilot"), and they have let marketing hype and Youtubers "drive" their cars with hands off the steering wheel (unlike BMW and Mercedes, which turn off their lane-assist technologies if your hands remain off the wheel for more than three seconds).


Ultimately the industry is going to have to come up with standards for both terminology and functionality - or else a government agency will have to - dictating what does and doesn't constitute a "Self Driving" vehicle versus a "Driver Assisted" vehicle. Right now the state of the art is moving too fast for industry to standardize or for governments to nail down benchmarks, so the frontier of the technology is going to be a bit "Wild West" until it begins to settle down.
We're in the traffic-chopper over the XKCD boards where there's been a thread-derailment. A Liquefied Godwin spill has evacuated threads in a forty-post radius of the accident, Lolcats and TVTropes have broken free of their containers. It is believed that the Point has perished.

LaserGuy
Posts: 4301
Joined: Thu Jan 15, 2009 5:33 pm UTC

Re: First Tesla Autopilot Death

Postby LaserGuy » Tue Jul 05, 2016 9:12 pm UTC

KnightExemplar wrote:
LaserGuy wrote:No, we're never going to have perfect computers or perfect sensors. Will we have computers and sensors that can perform this task significantly better than a human? Almost certainly.


As noted in one of my earlier posts: Tesla has one death in 130 million miles driven. In comparison, drivers in general have roughly one death every 100 million miles driven.

No. This system is not even as good as the crappy humans we are. Maybe one day in the future we will have autonomous drivers that are better than us. But let's not drink the Kool-Aid and pretend that the day has come yet.


Well, based on your own figures, the Tesla system technically already does better than average humans in terms of deaths per mile, though as others have noted, drawing statistics from a single fatality isn't a particularly meaningful exercise - by this argument, Google's self-driving cars are infinitely safer than humans, having no fatalities whatsoever, albeit on only 2 million miles driven.

That said, this particular system isn't designed to replace a human driver, and nobody has ever claimed that it was*. It's designed to assist in certain basic driving functions, but a person is supposed to be in control of the vehicle at all times. That's not to say that it is impossible to design a system that can outperform humans, or even impossible to do so in the near-future. I wouldn't be surprised if fully autonomous vehicles hit the road commercially somewhere like Japan or South Korea within the next ten years or so--the huge geography of the United States works against it in terms of early adoption, IMHO, and Japan in particular has a much stronger incentive in terms of its rapidly aging demographics to move in this direction.

cphite wrote:First off, it's not "almost certain" that these systems will outperform even average drivers in the foreseeable future. Certainly that's what the manufacturers want people to believe, but the evidence simply isn't there. Better than grandpa is a more reasonable benchmark.


Better than grandpa is an important benchmark in and of itself, though. Obviously not every car is going to be autonomous, maybe ever. But there are a huge number of people out there who are terrible drivers, and a huge number of people who legally can't drive because of age/disability/whatever. To those people in particular, driverless cars represent an enormous improvement in quality of life, one for which, at present, we don't have a particularly good alternative.

[*]Except maybe Tesla's marketing department, which is certainly a problem, but that's a human problem, not a technology problem.

KnightExemplar
Posts: 5203
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: First Tesla Autopilot Death

Postby KnightExemplar » Tue Jul 05, 2016 9:51 pm UTC

Except maybe Tesla's marketing department, which is certainly a problem, but that's a human problem, not a technology problem.


I'm not 100% sure of that. I think they imply it by using the name "Autopilot". And they turn a blind eye to the Youtube videos of people with their hands off the steering wheel. But technically speaking, to turn on Autopilot, it looks like there is an EULA that states "You understand this software is beta".

Tesla remains technically in the moral clear zone here, but just barely. I think they need to work on the correct messaging to prevent future misunderstandings.
First Strike +1/+1 and Indestructible.

ucim
Posts: 5096
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: First Tesla Autopilot Death

Postby ucim » Tue Jul 05, 2016 10:09 pm UTC

HES wrote:People will continue to hold machines to vastly higher standards than people - and this is a good thing, because it will push the industry as close as possible to that perfect, unbreakable system.
(italics mine)
No, that's not the reason.

The reason is that you can't hold a machine accountable at all. You can't influence its behavior. You can't coax, discipline, coddle, punish, or in any other way get it to obey you. It does what it is made to do, period. You can't get it to "do better next time". Machines are tools that you have to be able to count on to just do their thing, and because of this, you can behave appropriately around them. But once the things that machines do become complex enough, they almost become social entities - you have to guess what they'll do rather than know what they do.

Nobody has to guess what a hammer does. Some people have to guess what a GPS does. What operating systems do is almost entirely guesswork as far as users are concerned. That is why we hold self driving cars to a higher standard. People have to know that they will "just work", and not mistake a truck for the sky without even saying "oops".

Further, since these machines are marketed as powerful tools, and the manufacturers' profit depends on how powerful the public perceives them to be, there is a built-in incentive to inflate people's expectations (and the price, and the profits) - which invites puzzling failure modes.

Tyndmyr wrote:People hit obvious obstacles all the time.
But that's because they aren't looking, not because they can't see. In the case of the Tesla, it's because it couldn't see, and didn't know it couldn't see.

LaserGuy wrote:Google's self-driving cars are infinitely safer than humans, having no fatalities whatsoever, albeit on only 2 million miles driven.
What speeds are the google cars driving at?

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

KnightExemplar
Posts: 5203
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: First Tesla Autopilot Death

Postby KnightExemplar » Tue Jul 05, 2016 10:16 pm UTC

ucim wrote:
LaserGuy wrote:Google's self-driving cars are infinitely safer than humans, having no fatalities whatsoever, albeit on only 2 million miles driven.
What speeds are the google cars driving at?

Jose


I think LaserGuy's point is that Google Car estimates need another billion miles on them before we have a statistic. Not that we should take the current statistic seriously.

And I can agree to that. Tesla probably needs ~1 billion miles before we come to any conclusion. But ~130 million miles gives us an idea of where they are.
First Strike +1/+1 and Indestructible.

ucim
Posts: 5096
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: First Tesla Autopilot Death

Postby ucim » Tue Jul 05, 2016 10:33 pm UTC

KnightExemplar wrote:I think LaserGuy's point is that Google Car estimates need another billion miles on them before we have a statistic.
Perhaps. But in making that point, he draws a (probably false) equivalence between the two. My understanding is that the google cars are being driven at city speeds, and the Teslas are being driven at highway speeds. This will have a bearing on the survivability of a crash, and thus on the statistic being examined (deaths per mile).

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

sardia
Posts: 5502
Joined: Sat Apr 03, 2010 3:39 am UTC

Re: First Tesla Autopilot Death

Postby sardia » Tue Jul 05, 2016 10:44 pm UTC

https://en.wikipedia.org/wiki/Autonomou ... sification
FYI: standardized classification systems for autonomous cars. The US (NHTSA) scale runs from level 0 to 4, with 4 the most autonomous.
In the United States, the National Highway Traffic Safety Administration (NHTSA) has proposed a formal classification system:[10]
(Image caption: The Volvo S60 Drive Me autonomous test vehicle is considered Level 3 autonomous driving.[11])

Level 0: The driver completely controls the vehicle at all times.
Level 1: Individual vehicle controls are automated, such as electronic stability control or automatic braking.
Level 2: At least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping.
Level 3: The driver can fully cede control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a "sufficiently comfortable transition time" for the driver to do so.
Level 4: The vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars.

An alternative classification system based on six different levels (ranging from driver assistance to fully automated systems) has been published by Society of Automotive Engineers (SAE), an automotive standardisation body.[12] This classification system is based on the amount of driver intervention and attentiveness required, rather than the vehicle capabilities, although these are very closely related.

SAE automated vehicle classifications:

Level 0: Automated system has no vehicle control, but may issue warnings.
Level 1: Driver must be ready to take control at any time. Automated system may include features such as Adaptive Cruise Control (ACC), Parking Assistance with automated steering, and Lane Keeping Assistance (LKA) Type II in any combination.
Level 2: The driver is obliged to detect objects and events and respond if the automated system fails to respond properly. The automated system executes accelerating, braking, and steering. The automated system can deactivate immediately upon takeover by the driver.
Level 3: Within known, limited environments (such as freeways), the driver can safely turn their attention away from driving tasks.
Level 4: The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.
Level 5: Other than setting the destination and starting the system, no human intervention is required. The automatic system can drive to any location where it is legal to drive.
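
If it's easier to read as code, here's the SAE ladder as a quick sketch (my own names, not official identifiers):

    from enum import IntEnum

    class SAELevel(IntEnum):
        NO_AUTOMATION = 0   # warnings only
        ASSISTANCE    = 1   # ACC or lane keeping; driver ready at any time
        PARTIAL       = 2   # system steers/brakes; driver must supervise
        CONDITIONAL   = 3   # eyes-off, but only in known, limited environments
        HIGH          = 4   # no attention needed wherever it can be enabled
        FULL          = 5   # set the destination and it drives

    def driver_must_supervise(level: SAELevel) -> bool:
        # At levels 0-2 the human is still the backstop for object and
        # event detection; from level 3 up, the system is (somewhere).
        return level <= SAELevel.PARTIAL

By that ladder, the system this thread is about behaves like Level 2, whatever the name suggests.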

KnightExemplar
Posts: 5203
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: First Tesla Autopilot Death

Postby KnightExemplar » Tue Jul 05, 2016 10:47 pm UTC

ucim wrote:
KnightExemplar wrote:I think LaserGuy's point is that Google Car estimates need another billion miles on them before we have a statistic.
Perhaps. But in making that point, he draws a (probably false) equivalence between the two. My understanding is that the google cars are being driven at city speeds, and the Teslas are being driven at highway speeds. This will have a bearing on the survivability of a crash, and thus on the statistic being examined (deaths per mile).

Jose


The full quote is:

drawing statistics from a single fatality isn't a particularly meaningful exercise--by this argument, Google's self-driving cars are infinitely safer than humans, having no fatalities whatsoever, albeit on only 2 million miles driven.


I don't think he's trying to draw any equivalence.
First Strike +1/+1 and Indestructible.

LaserGuy
Posts: 4301
Joined: Thu Jan 15, 2009 5:33 pm UTC

Re: First Tesla Autopilot Death

Postby LaserGuy » Tue Jul 05, 2016 10:58 pm UTC

KnightExemplar wrote:
ucim wrote:
LaserGuy wrote:Google's self-driving cars are infinitely safer than humans, having no fatalities whatsoever, albeit on only 2 million miles driven.
What speeds are the google cars driving at?

Jose


I think LaserGuy's point is that Google Car estimates need another billion miles on them before we have a statistic. Not that we should take the current statistic seriously.

And I can agree to that. Tesla probably needs ~1 billion miles before we come to any conclusion. But ~130 Million miles gives us an idea where they are.


Yes, that's my point. It's impossible to say anything about the relative safety of Google's vehicles (beyond that they aren't stupendously unsafe), at least in terms of fatal crashes, since the Google fleet, collectively, hasn't driven enough miles for the metric to make any sense.

ucim wrote:Perhaps. But in making that point, he draws a (probably false) equivalence between the two. My understanding is that the google cars are being driven at city speeds, and the Teslas are being driven at highway speeds. This will have a bearing on the survivability of a crash, and thus on the statistic being examined (deaths per mile).

Jose


As I noted above, my point was simply about the uselessness of trying to derive meaningful statistics from small numbers. But actually, highway driving is safer* than urban driving. You're more likely to get injured in any given crash because of the higher speeds, sure, but you're so much less likely to actually get into a crash (due to straighter roads, simpler traffic patterns, divided highways, consistent speeds, no intersections, etc.) that deaths per mile on highways are about a third of those on urban streets.

[edit][*]I'm having trouble verifying this figure, so I'm less confident about it than when I had originally posted.
Last edited by LaserGuy on Tue Jul 05, 2016 11:16 pm UTC, edited 1 time in total.

elasto
Posts: 3028
Joined: Mon May 10, 2010 1:53 am UTC

Re: First Tesla Autopilot Death

Postby elasto » Tue Jul 05, 2016 11:00 pm UTC

Today, while I was out walking, I had a car reverse out of a parking space and hit me. A self-driving car would never have done that.

Over a million dead per year, people. And goodness knows how many injured.

Human drivers suck.

ucim
Posts: 5096
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: First Tesla Autopilot Death

Postby ucim » Tue Jul 05, 2016 11:43 pm UTC

elasto wrote:Today, while I was out walking, I had a car reverse out of a parking space and hit me. A self-driving car would never have done that.
A self driving car just hit a truck. I think your faith is misplaced (despite the waffling about the Tesla not "really" being "self" driving).

We don't know what a self-driving car would not do.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

sardia
Posts: 5502
Joined: Sat Apr 03, 2010 3:39 am UTC

Re: First Tesla Autopilot Death

Postby sardia » Tue Jul 05, 2016 11:57 pm UTC

ucim wrote:
elasto wrote:Today, while I was out walking, I had a car reverse out of a parking space and hit me. A self-driving car would never have done that.
A self driving car just hit a truck. I think your faith is misplaced (despite the waffling about the Tesla not "really" being "self" driving).

We don't know what a self-driving car would not do.

Jose

Actually, we don't know what a human driver would not do either; we only know that human drivers cause accidents. We know far more about what an automated car would do, because everything it does is recorded.

morriswalters
Posts: 6506
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: First Tesla Autopilot Death

Postby morriswalters » Wed Jul 06, 2016 1:16 am UTC

ucim wrote:A self driving car just hit a truck. I think your faith is misplaced (despite the waffling about the Tesla not "really" being "self" driving).
That isn't waffling. The driver of the car was at fault, full stop. Add him to the list of other human-caused accidents. There is no fully tested autonomous car; I believe you need a waiver from a given state just to put one on the road for testing. Had the car been fully autonomous, it would have been driving the speed limit. Tesla's system is closer to a super cruise control. The fix here is for Tesla to remove the driver's ability to take his hands off the steering wheel - they call that a dead man's switch for good reason. If you really think the car was autonomous, consider that if the driver had dropped dead in it, it would have continued until the system figured out that he was incapacitated - if it even can. We don't really know, and Elon Musk isn't saying.

KnightExemplar
Posts: 5203
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: First Tesla Autopilot Death

Postby KnightExemplar » Wed Jul 06, 2016 3:01 am UTC

elasto wrote:Today, while I was out walking, I had a car reverse out of a parking space and hit me. A self-driving car would never have done that.


Uhhhh.... http://www.theverge.com/2016/5/11/11656 ... mode-crash

I had this link in my first post. Tesla crashes into parked trailer while in "summon" mode. See this for more details: https://www.ksl.com/?sid=39727592

Sure, the driver accidentally put the Tesla into "summon" mode (arguably a Tesla engineering fault, IMO - the man probably hit "park" twice and then left the vehicle, which activated summon mode), but the car similarly didn't see the trailer and decided to blindly ram it.

Why Tesla cars are engineered to activate their AI mode when you hit the "Park" button twice is... probably why this feature is in beta. Herp derp. Still, when I learned from that story a few months ago that Tesla cars were blind to trailers, I knew that something like this death was going to happen, given the Tesla's blatant blind spot for objects at windshield height.
First Strike +1/+1 and Indestructible.

ucim
Posts: 5096
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: First Tesla Autopilot Death

Postby ucim » Wed Jul 06, 2016 3:12 am UTC

We have a lot more data on what human drivers do and don't do. And we are each human too; we have an understanding of humans that we don't have of (complex enough) machines. And we don't know what self driving cars won't do; we just know what any given self-driving car did do. And bottom line, it's the engineers at Tesla that know this. Pedestrians don't. Drivers don't.

morriswalters wrote: The driver of the [Tesla] was at fault, full stop.
Not disputed. But the fact is, the autopilot was driving. It shouldn't have been, but it was. And it did poorly, because it couldn't see, and it didn't know it couldn't see. I will grant you that the Tesla isn't "self driving" by our standards, but the car doesn't know this.

And lest I seem simply a Luddite: it's not the machine that's deadly. It's the combination of person and machine, where the machine is "smart enough". In aviation, a similar problem is called "instructor in command".
Spoiler:
"Pilot in command" is the term for the person ultimately responsible for the flight. This is sometimes the pilot flying, but in crew situations, the pilot in command may not be the one with his hands on the yoke. Being pilot in command is a big responsibility, and must be taken seriously. Now, consider the case of an inexperienced pilot flying with an instructor, and learning something new (or getting a refresher). The instructor is usually much more experienced, and is there (in part) to take over if the pilot undergoing instruction manages to ch*rp it up. But at the same time, the pilot undergoing instruction is also the pilot in command. So the instructor asks the pilot to do something that's outside the pilot's comfort zone. Normally the pilot wouldn't do it, but the instructor is there to teach, so the pilot complies, but mishandles it. The instructor may expect the pilot to recognize this and correct it, but physics has its say before either of them steps in to fix the mustard. Neither of them was taking responsibility for the flight, and you read about it in the papers. It's something aviators of all stripes have to be cognizant of.
The problem of self driving cars with people expected to override them is that nobody has control when it counts.

That's the same kind of issue I have with nuclear power. It's wonderful and all, but real people aren't careful and dedicated enough to pull it off with the degree of safety that's necessary, and computers aren't at the level where we can just hand a power plant to Artoo Detoo and walk away. When that day comes there will be rainbows and ponies, but in exchange, we'll all be pets. It's the same issue in different cloth.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

elasto
Posts: 3028
Joined: Mon May 10, 2010 1:53 am UTC

Re: First Tesla Autopilot Death

Postby elasto » Wed Jul 06, 2016 8:03 am UTC

ucim wrote:We have a lot more data on what human drivers do and don't do.

Yes, they cause 1.25m fatalities and 20m-50m injuries per year, according to WHO. Let those numbers sink in for a moment. We could have a nuclear power station explode every year (heck, perhaps every month) and not reach those numbers.

We don't need a world of 'rainbows and ponies'. We don't need perfection. All we need is 'better than us'.

And I stand by my assertion that once self-driving cars are out on the streets (Tesla's is not, and it's not nit-picky to point that out - close but no cigar as they say. Google's is closer), a self-driving car reversing into a pedestrian will simply never occur. It's just too simple a task. It'd be like thinking that if you only played it enough times, Deep Blue would eventually fall for a Fool's Mate.

Sure, there will always be bizarre circumstances where automation fails, in every walk of life. Thankfully the goal isn't perfect automation.

Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7210
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: First Tesla Autopilot Death

Postby Zamfir » Wed Jul 06, 2016 9:39 am UTC

The 1.25 million number is a tad misleading. If you want to tackle that number, you don't just need self-driving cars. You need $1,000 self-driving cars, maintenance-free, with crumple zones and airbags and the works, that can navigate Indian roads and Indian traffic, and (en passant) you'll have to upgrade India's emergency health care system.

That would be a good thing, no doubt. But it's not really on the horizon. The realistic promise is to improve the safety record of richer countries, and those already account for a small fraction of those 1.25 million deaths.

morriswalters
Posts: 6506
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: First Tesla Autopilot Death

Postby morriswalters » Wed Jul 06, 2016 9:57 am UTC

ucim wrote:Not disputed. But the fact is, the autopilot was driving. It shouldn't have been, but it was. And it did poorly, because it couldn't see, and it didn't know it couldn't see. I will grant you that the Tesla isn't "self driving" by our standards, but the car doesn't know this.
The weakest link with cars is the people driving them; the most dangerous stage for self-driving cars is a mix of people and self-driving cars. In the end, in this instance, Elon Musk had a high degree of hubris. You don't beta test your devices in a live environment, and I suspect someone will bleed him for it. In any case I don't want to convince you that self-driving cars can be safe, any more than you could convince me that a 600 hp car needs to be in the hands of mortals, but I will disagree with you and say that they are certainly technically possible, given time.

KnightExemplar
Posts: 5203
Joined: Sun Dec 26, 2010 1:58 pm UTC

Re: First Tesla Autopilot Death

Postby KnightExemplar » Wed Jul 06, 2016 10:42 am UTC

Zamfir wrote:The 1.25 million number is a tad misleading. If you want to tackle that number, you don't just need self-driving cars. You need $1,000 self-driving cars, maintenance-free, with crumple zones and airbags and the works, that can navigate Indian roads and Indian traffic, and (en passant) you'll have to upgrade India's emergency health care system.


The cheap Indian cars thing is relevant, because of the utter shit cars they have over there.

https://www.youtube.com/watch?v=iFei7XFOdIw

A 64 km/h (40 mph) collision in one of these is just not survivable. No air bags, no crumple zones, no safety cage. And these cars are shit because the makers don't want to pay for these basic safety features. (I don't live over there. Presumably such cars are still safer than the status quo, which is those electric bikes or something.)
First Strike +1/+1 and Indestructible.

cphite
Posts: 1083
Joined: Wed Mar 30, 2011 5:27 pm UTC

Re: First Tesla Autopilot Death

Postby cphite » Wed Jul 06, 2016 1:34 pm UTC

elasto wrote:Today, while I was out walking, I had a car reverse out of a parking space and hit me. A self-driving car would never have done that.


Based on what evidence? The subject of this thread is one of these systems failing to notice a tractor trailer... what makes you think they can't fail to notice a person?

Over a million dead per year, people. And goodness knows how many injured.


And before we introduce a whole lot of automated cars we should take the time to make sure they won't increase those numbers.

Human drivers suck.


Agreed. But they're still the best we have.

sardia
Posts: 5502
Joined: Sat Apr 03, 2010 3:39 am UTC

Re: First Tesla Autopilot Death

Postby sardia » Wed Jul 06, 2016 1:46 pm UTC

Zamfir wrote:The 1.25 million number is a tad misleading. If you want to tackle that number, you don't just need self-driving cars. You need $1,000 self-driving cars, maintenance-free, with crumple zones and airbags and the works, that can navigate Indian roads and Indian traffic, and (en passant) you'll have to upgrade India's emergency health care system.

That would be a good thing, no doubt. But it's not really on the horizon. The realistic promise is to improve the safety record of richer countries, and those already account for a small fraction of those 1.25 million deaths.

For the US, it's about 32,719 deaths a year, or roughly 10 per 100,000 people. Autonomous cars are promising 1/10 that rate - so roughly 20,000 to 30,000 lives saved, plus whatever savings from reduced non-fatal accidents on top of that. (The death count doesn't include accidents where you only lose money or get injured.)
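
Spelling the arithmetic out (same numbers as above, taking the promised rate at face value):

    us_deaths_per_year = 32719   # current US total, from above
    promised_fraction = 1 / 10   # the "1/10 the rate" claim

    saved = us_deaths_per_year * (1 - promised_fraction)
    print(f"~{saved:,.0f} lives/year if every car hit the promised rate")
    # -> ~29,447 per year; "20,000 to 30,000" leaves room for partial
    #    adoption and for the promise falling short.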

BattleMoose
Posts: 1993
Joined: Tue Nov 13, 2007 8:42 am UTC

Re: First Tesla Autopilot Death

Postby BattleMoose » Wed Jul 06, 2016 1:57 pm UTC

Just think of the insurance premiums!

ucim
Posts: 5096
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: First Tesla Autopilot Death

Postby ucim » Wed Jul 06, 2016 2:05 pm UTC

sardia wrote:Autonomous cars are promising 1/10 the rate. So roughly 20000 to 30000 lives saved...
Promises aren't data. Sure, autonomous cars will get better. They will probably get better than people. They are not there yet, and where they are is not inspiring unless you are inspired by marketing.

The claim is "save lives" but the goal is "make money". We just saw what happens when the two collide.

I suppose the same thing could have been said of automatic elevators though.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

BattleMoose
Posts: 1993
Joined: Tue Nov 13, 2007 8:42 am UTC

Re: First Tesla Autopilot Death

Postby BattleMoose » Wed Jul 06, 2016 2:23 pm UTC

When it is established that autonomous automobiles are somewhat safer, insurance companies will start offering lower premiums to people who use them. People are probably much more likely to give up control for lower premiums than for the idea of greater safety, even though they'll get both.

EDIT: Especially parents buying cars for their children - such an easy sell for an autonomous vehicle!

Tyndmyr
Posts: 10119
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: First Tesla Autopilot Death

Postby Tyndmyr » Wed Jul 06, 2016 3:01 pm UTC

ucim wrote:
HES wrote:People will continue to hold machines to vastly higher standards than people - and this is a good thing, because it will push the industry as close as possible to that perfect, unbreakable system.
(italics mine)
No, that's not the reason.

The reason is that you can't hold a machine accountable at all. You can't influence its behavior. You can't coax, discipline, coddle, punish, or in any other way get it to obey you. It does what it is made to do, period. You can't get it to "do better next time". Machines are tools that you have to be able to count on to just do their thing, and because of this, you can behave appropriately around them. But once the things that machines do become complex enough, they almost become social entities - you have to guess what they'll do rather than know what they do.


Sure you can. You patch the code.

Tyndmyr wrote:People hit obvious obstacles all the time.
But that's because they aren't looking, not because they can't see. In the case of the Tesla, it's because it couldn't see, and didn't know it couldn't see.


....what's the difference?

ucim wrote:
elasto wrote:Today, while I was out walking, I had a car reverse out of a parking space and hit me. A self-driving car would never have done that.
A self driving car just hit a truck. I think your faith is misplaced (despite the waffling about the Tesla not "really" being "self" driving).

We don't know what a self-driving car would not do.

Jose


There's not really a lot of mystery here. We test it, we know what it does. Same as people. People have been driving a lot, so we have a very good idea of how they fail.

There's nothing intrinsic to automation that makes this more difficult to track on vehicles. If anything, it's easier, because it's way easier to look at the code and hardware of a robot car than of a human.

cphite wrote:
elasto wrote:Today, while I was out walking, I had a car reverse out of a parking space and hit me. A self-driving car would never have done that.


Based on what evidence? The subject of this thread is one of these systems failing to notice a tractor trailer... what makes you think they can't fail to notice a person?


Humans tend to not be hovering in the air, where the apparent sensory weakness is. There's no particular reason to believe the weakness at detecting tractor trailers would apply to pedestrians.

Dauric
Posts: 3737
Joined: Wed Aug 05, 2009 6:58 pm UTC
Location: In midair, traversing laterally over a container of sharks. No water, just sharks, with lasers.

Re: First Tesla Autopilot Death

Postby Dauric » Wed Jul 06, 2016 3:23 pm UTC

Tyndmyr wrote:
Tyndmyr wrote:People hit obvious obstacles all the time.
But that's because they aren't looking, not because they can't see. In the case of the Tesla, it's because it couldn't see, and didn't know it couldn't see.


....what's the difference?



When a device detects a fault it throws an error; the error can then trigger a warning to the driver, reduce the speed, record the error, and/or make some other response that changes the way the device is operating.

When no error is thrown, the device continues operating normally - without responding to the fact that it isn't actually operating normally.
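
In code terms the difference looks like this (a generic sketch with a stubbed-out sensor; nothing here is anyone's actual firmware):

    import random

    class SensorFault(Exception):
        """Raised when a sensor can tell its own data is bad."""

    def read_camera() -> float:
        """Stub sensor returning a 'brightness' reading."""
        reading = random.uniform(0.0, 1.2)
        if reading > 1.0:                # detectable fault, e.g. sun glare
            raise SensorFault("camera washed out")
        return reading                   # may still be silently wrong!

    def control_loop_step() -> str:
        try:
            reading = read_camera()
            return f"steering normally on reading {reading:.2f}"
        except SensorFault as err:
            # The detected case: warn, slow down, record the error.
            return f"fault handled ({err}): alert driver, reduce speed"

    # The dangerous case is the one with no exception at all: the reading
    # comes back looking valid, so the loop keeps "steering normally"
    # while effectively blind.
    print(control_loop_step())
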
We're in the traffic-chopper over the XKCD boards where there's been a thread-derailment. A Liquefied Godwin spill has evacuated threads in a forty-post radius of the accident, Lolcats and TVTropes have broken free of their containers. It is believed that the Point has perished.

Neil_Boekend
Posts: 3215
Joined: Fri Mar 01, 2013 6:35 am UTC
Location: Yes.

Re: First Tesla Autopilot Death

Postby Neil_Boekend » Wed Jul 06, 2016 3:31 pm UTC

(spoilered for offtopicness regarding bikes)
Spoiler:
ucim wrote:
Zamfir wrote:Can you post some pictures (or street view links) of roads that are incompatible?
Pretty much any rural two lane two way road with a double yellow line, on which cars drive at natural car speeds. Many of the roads connecting towns in California are like that. Sure, you can ride on them, and cars can drive on them, but a car at 55 mph passing a bicyclist at 10 mph is going to be a problem when there's no legal place for the car to go to give the bike a safe buffer.

Side streets aren't a problem really. Despite speeding cars, they can cross the center to pass a cyclist. But inter-town roads (which often are major arteries in more rural areas) often have one lane each direction, a double yellow line between, curves and hills, cars (legally) doing highway speed, and no place to widen the road to add a proper bike lane without major construction.

Here's an example pulled more or less at random.

Jose

While I prefer such problems to be solved with a meter more asphalt or a separate bike path, it is not a hard necessity. Here in the Netherlands we have similar problems with tractors on 80 km/h (50 mph) roads. They can legally drive only 45 km/h (30 mph), and on some roads overtaking is not allowed. On those roads tractors are usually exempt.
[Image: a Dutch road sign; "uitgezonderd" means "except".]

KnightExemplar wrote:
Spoiler:
LaserGuy wrote:No, we're never going to have perfect computers or perfect sensors. Will we have computers and sensors that can perform this task significantly better than a human? Almost certainly.


As noted in one of my earlier posts: Tesla has one death in 130 million miles driven. In comparison, drivers in general have roughly one death every 100 million miles driven.

Including drunk drivers, improper seatbelts, and freak accidents like airbags turning into shrapnel and killing someone. Not all of the accidents behind that one death per 100 million miles were human error.

You don't count drunk driving as human error? How odd. Especially in this case.
You have a point with the mechanical failure. The Teslas are so new that the chance of a death due to mechanical failure in those 130M miles was largely irrelevant, while cars in general are much older on average. I can't find much in the way of statistics on it; would you happen to have a reliable source?
KnightExemplar wrote:And this "Tesla Autopilot" is an optional, beta system that is only being used for highway driving, and is supposed to be used with an attentive driver ready to take over "at any time" (because it's still buggy as shit)
Spoiler:
No. This system is not even as good as the crappy humans we are. Maybe one day in the future we will have autonomous drivers that are better than us. But let's not drink the Kool-Aid and pretend that the day has come yet.

Yes. The failure mode of the Tesla sensor array is different from the failure mode of a human driver. That is not relevant. Human drivers often try to overtake a car when they can't see shit, or misestimate speeds and distances. What is relevant is what your chances are of dying in a car with the autopilot driving versus a human driving.
Dauric wrote:
Tyndmyr wrote:
Tyndmyr wrote:People hit obvious obstacles all the time.
But that's because they aren't looking, not because they can't see. In the case of the Tesla, it's because it couldn't see, and didn't know it couldn't see.


....what's the difference?



When a device detects a fault it throws an error; the error can then trigger a warning to the driver, reduce the speed, record the error, and/or make some other response that changes the way the device is operating.

When no error is thrown, the device continues operating normally - without responding to the fact that it isn't actually operating normally.

What's the effective difference between a human not paying attention and missing something and a sensor array missing something and not knowing it missed something?
Mikeski wrote:A "What If" update is never late. Nor is it early. It is posted precisely when it should be.

patzer's signature wrote:
flicky1991 wrote:I'm being quoted too much!

he/him/his

Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7210
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: First Tesla Autopilot Death

Postby Zamfir » Wed Jul 06, 2016 3:32 pm UTC

When it is established that autonomous automobiles are somewhat safer, insurance companies will start offering lower premiums to people who use them.

Most (if not all) of the promised gains in safety are due to systems that do not require autonomy. Collision avoidance, for example, will be widespread, probably before practical autonomous cars even exist.

For insurance premiums, the relevant comparison is not to the current average car - it's to another newly bought car of the future, one with a subset of the autonomous car's sensors and capabilities already installed, covering all the low-hanging fruit when it comes to safety gains. That latter car with a human driver might well be safer than the autonomous car, even if the autonomous car is safer than current cars with human drivers.

Tyndmyr
Posts: 10119
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: First Tesla Autopilot Death

Postby Tyndmyr » Wed Jul 06, 2016 3:38 pm UTC

Dauric wrote:
Tyndmyr wrote:
Tyndmyr wrote:People hit obvious obstacles all the time.
But that's because they aren't looking, not because they can't see. In the case of the Tesla, it's because it couldn't see, and didn't know it couldn't see.


....what's the difference?



When a device detects a fault it throws an error; the error can then trigger a warning to the driver, reduce the speed, record the error, and/or make some other response that changes the way the device is operating.

When no error is thrown, the device continues operating normally - without responding to the fact that it isn't actually operating normally.


And when a person doesn't see the thing, he continues to operate the car normally.

There's no difference.

Getting hung up on "looking" vs "seeing" when comparing humans and computers is irrelevant. We don't actually *care* if the computer really understands in some metaphysical sense.

Whether the autopilot or the driver misses a semi trailer and drives into it, the outcome is the same.

Dauric
Posts: 3737
Joined: Wed Aug 05, 2009 6:58 pm UTC
Location: In midair, traversing laterally over a container of sharks. No water, just sharks, with lasers.

Re: First Tesla Autopilot Death

Postby Dauric » Wed Jul 06, 2016 3:40 pm UTC

Neil_Boekend wrote:
Dauric wrote:
Tyndmyr wrote:
Tyndmyr wrote:People hit obvious obstacles all the time.
But that's because they aren't looking, not because they can't see. In the case of the Tesla, it's because it couldn't see, and didn't know it couldn't see.


....what's the difference?



When a device detects a fault it throws an error; the error can then trigger a warning to the driver, reduce the speed, record the error, and/or make some other response that changes the way the device is operating.

When no error is thrown, the device continues operating normally - without responding to the fact that it isn't actually operating normally.

What's the effective difference between a human not paying attention and missing something and a sensor array missing something and not knowing it missed something?


First off: as noted elsewhere, if an individual human being misses something, it's that individual that misses it; others may catch the circumstance and alert the person who missed the detail. When a machine misses something, -all- of the same machines are going to miss the same circumstance.

Secondly: If a human being misses some detail, they miss it in that one instance and may learn from the experience to pay attention to that detail. If a machine misses something it will continue to miss that circumstance until the error is addressed (by upgrading software or hardware).

If the machine doesn't know it missed a detail, then it didn't throw an error code for the developers to track down, which makes the fault that much harder to narrow down and address with a functional fix.

Computers don't learn from "OH SHIT!" moments as quickly as people do.
We're in the traffic-chopper over the XKCD boards where there's been a thread-derailment. A Liquefied Godwin spill has evacuated threads in a forty-post radius of the accident, Lolcats and TVTropes have broken free of their containers. It is believed that the Point has perished.

