- Self-driving car technology has long been lauded for its potential to prevent crashes caused by human error.
- So the logic goes: If you remove people from the equation, far fewer crashes will happen.
- But that doesn't take into account the unexpected, a new study from the Insurance Institute for Highway Safety says, nor does it account for the way that self-driving systems are built.
- About two-thirds of all crashes in the study would still have occurred even if every car on the road were a "fully self-driving" vehicle.
- "For self-driving vehicles to live up to their promise of eliminating most crashes," the study says, "they will have to be designed to focus on safety rather than rider preference when those two are at odds."
In the not-so-distant future, every car on the road will drive itself while you do as you please. Car crashes will be rare, if they happen at all.
That's the promise, anyway, of so-called "fully self-driving" cars. But a new study from the Insurance Institute for Highway Safety calls into question that utopian vision for the future of driving.
Even if all cars were fully autonomous, able to whisk you from place to place without any driver input, a full two-thirds of car crashes would still occur, according to the study.
"According to a national survey of police-reported crashes, driver error is the final failure in the chain of events leading to more than 9 out of 10 crashes," the study says. "But the Institute's analysis suggests that only about a third of those crashes were the result of mistakes that automated vehicles would be expected to avoid simply because they have more accurate perception than human drivers and aren't vulnerable to incapacitation."
Put more simply: Self-driving cars can compensate for some human errors, but they can't yet handle more complex, prediction-based scenarios, according to the study.
Self-driving cars excel at preventing the crashes that happen when drivers are distracted, visibility is low, or a potential hazard is recognized too late, the study says. The same goes for crashes caused by impairment, whether from drugs, alcohol, or a medical emergency. But those types of crashes account for just one-third of the total.
"The remaining two-thirds might still occur unless autonomous vehicles are also specifically programmed to avoid other types of predicting, decision-making and performance errors," the study says.
So, what types of errors? These are a bit more complex to classify, and range from prediction errors (like monitoring a cyclist and deciding not to speed past them) to performance errors ("inadequate or incorrect evasive maneuvers, overcompensation and other mistakes in controlling the vehicle").
One example provided comes from Tempe, Arizona, where a 49-year-old woman named Elaine Herzberg was struck and killed by a self-driving test vehicle in 2018.
Herzberg was hit while walking her bicycle across the street. The vehicle was unable to predict that she would cross in front of it, "and it failed to execute the correct evasive maneuver to avoid striking her when she did so."
In a blog post, industry group Partners for Automated Vehicle Education called the study into question. PAVE — which includes Ford, General Motors, Waymo, Lyft, Daimler, Volkswagen and other automotive industry leaders — said that, contrary to the study's assertions, self-driving systems are already being designed with safety at the forefront.
"In the 'discussion' section, the study's authors write 'Only about a third of serious crashes could be preventable by AVs if they are not designed to respond safely to what they perceive,' which is a bit like saying that a marble won't roll very far if it's not round."