Logic of self-driving car policy escapes RAND Corporation

16 April 2016 by Steve Blum

Control sample.

The RAND Corporation published a study about self-driving cars that comes to a mathematically obvious conclusion while completely missing the public policy point. The study starts with the fact that one person dies in a U.S. traffic accident for every 100 million miles driven. Then it dives into a really complex statistical analysis…

Given that fatalities and injuries are rare events, we will show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.

Duh. If you need to have an autonomous car drive 100 million miles several times over to prove that not as many people will die (or be injured, or have their fenders bent), then it’ll take a long time to do it. It’ll take even longer if there aren’t any people in the cars.
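The arithmetic behind that conclusion isn’t mysterious. Here’s a rough back-of-the-envelope sketch in Python (my numbers and a textbook rare-event calculation, not RAND’s exact model) of why the mileage requirement balloons:

```python
import math

# Back-of-the-envelope version of the rare-event arithmetic (not RAND's
# exact model): if fatal crashes follow a Poisson process at the human
# benchmark rate of 1 per 100 million miles, how many failure-free test
# miles are needed to claim, at 95% confidence, that an autonomous fleet
# is at least as safe?

human_fatality_rate = 1 / 100_000_000   # fatalities per mile driven
confidence = 0.95                        # desired statistical confidence

# With zero fatalities observed over m miles, the chance of seeing that
# outcome if the true rate were the human rate is exp(-rate * m).
# Requiring that chance to fall below (1 - confidence) gives:
miles_needed = -math.log(1 - confidence) / human_fatality_rate

print(f"Miles needed with zero fatalities: {miles_needed:,.0f}")
# Roughly 300 million miles -- and far more if you want to show the cars
# are merely somewhat better than humans rather than no worse, which is
# where the "hundreds of billions of miles" figures come from.
```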

If you applied that kind of bloody-minded logic to human drivers, you’d require a couple billion miles of driver’s ed training before giving anyone a license. Every human driver is an experiment of one. Every 16-year-old who applies for a driver’s license presents his or her own unique, beta-grade software for testing.

Autonomous cars are different. If a bug is found, it can be fixed permanently and distributed to every car in the fleet. Can’t do that with teenagers. If you pour a dozen shots of ice-cold Jaegermeister into a self-driving car, its performance won’t degrade. Doesn’t work that way with teenagers.

If you want to test whether an autonomous car is roadworthy, try this experiment: pull up next to it, rev your engine, insult its girlfriend (or lack thereof) and then spin your tires as you accelerate away. Repeat with a 16-year-old boy. If the self-driving car responds in a way that’s less likely to result in an accident than the legally sanctioned, public-policy-compliant benchmark, give it a license.