> But how can you react promptly if you're not ready? I just don't get this.
You cannot, that's the simple truth. You're supposed to focus on the road anyway and be able to take over once any sort of autopilot or assist system starts behaving erroneously, yet in practice many people simply assume that the mere presence of those systems means they can stop focusing on the road altogether.
It feels like the claim of a "fully self driving vehicle" is at odds with actual safety, or at least will remain so until the technology progresses far enough to be on average safer than human drivers, moral issues aside. Whether that will take 15, 50 or 500 years, I cannot say.
That said, current functionality could be good enough to let the driver take a sip from a drink, fiddle with a message on their phone, or mess around with the navigation system or the radio - things that would happen regardless because people are irresponsible, but that could feasibly be made a little bit safer.
It has nothing (well, certainly not everything) to do with people's assumptions. There's a ton of research showing that people simply stop paying attention when there's no reason to pay attention 99% of the time. It doesn't even need to involve pulling out a book or watching a movie. It can simply be zoning out.
Maybe, as you say, it's feasible today or soon to better handle brief distractions, but once you allow that, it's probably dangerous to assume that people won't stretch out those distractions.
We have empirical data showing how safe actual level 2 self driving cars are in practice, so there's no reason to work from base assumptions. Yes, level 2 self driving cars cause avoidable accidents, but the overall rate is very close to the rate for human drivers. The only way that's happening is if they are causing and preventing roughly similar numbers of accidents.
Which means people are either paying enough attention or these self driving systems are quite good. My suspicion is it’s a mix of both, where people tend to zone out in less hazardous driving conditions and start paying attention when things start looking dangerous. Unfortunately, that’s going to cause an equilibrium where people pay less attention as these systems get better.
> We have empirical data showing how safe actual level 2 self driving cars are in practice.
Do we? Where does that come from? The data Tesla provides is hopelessly non-representative because it makes the assumption that the safety of any given road is independent of whether a driver chooses to switch on the system there.
Only overall numbers actually matter here; if self driving is off, then that's just the default risk from human driving in those conditions. Talk to your insurance company, they can give you a breakdown by make, model, and trim level.