KrytenKoro wrote:People text, and read, and eat, and do make-up while driving cars now, so it's ludicrous to be saying "but if you have autopilot they may not always be paying attention to the road!" unless you can provide data.
It's not "they may not always be paying attention to the road"; it's "they may not ever be paying attention to the road." Are a lot of people now not as careful as they should be? Yep. Some of them are even pretty terrible about it. But as it stands, they still have an immediate, concrete incentive to stay at least partially engaged, or they don't stay driving for very long. Once they can rely on their car to do it all for them (or once they think they can), that incentive goes right out the window. And that's a huge, huge problem if we design with the assumption that the "driver" will remain available as a failsafe for whatever the computer can't handle.
For the purposes of all these examples, replace your name with any human's name and the point is the same. Who to blame when an accident occurs is not relevant here; how to reduce accidents across the board is.
Except I'm not talking about who gets blamed for accidents; I'm talking about the quality of automated navigation. The difference between J. Random Human and the manufacturer of an automated car is that J. Random Human is not charging money for their services as a navigator; the manufacturer is, and therefore has a responsibility to be reliable as one. Hopefully automated cars will at least be able to avoid turning onto streets that aren't there in the event of faulty map data, but if you're paying robocar prices for navigation service, you have every right to expect that not to happen in the first place, and no amount of "well, maps aren't perfect!" changes that.
Re:your other responses: If you're clarifying that you're not actually claiming that it's unlikely computers will be able to handle adverse conditions, and that you're just uncertain whether they have run enough tests at present to justify yourself buying the car now, then fine, I guess no one's actually disputing that, neh?
Both, actually. There isn't enough data from outside essentially optimal conditions, and I'm skeptical of the ability of computers to cope with the wide, wide range of issues that crop up in real driving circumstances. People's repetitions of "well, all we have to do to solve this complex problem is solve this complex problem, and I'm sure that'll happen inevitably because it's inevitable that it'll happen!" do little to reassure me.
PeteP wrote:But honestly nothing in a self-driving car should be designed to rely on fast reflexes for the driver. Though if there isn't a sudden system failure in the worst moment the driver probably has a moment to react and you need something to get their attention.
Again, though, the problem is that pretty much any situation serious enough to require human intervention is also a situation in which "a moment to react" is not an available luxury.
Tyndmyr wrote:If solution A is better than previously existing solution B, are you gonna freak out because the person who came up with solution A wants to make a buck off it?
No, I'm going to freak out because I'm still not past the "if" in that sentence, and yet people are proposing putting solution A in charge of 1500+ pounds of machine hurtling down the road at 60 MPH.