Danger! Danger! Self-Driving Cars: Statistics vs. Politics in 2024


On this one, I’m siding with the robots.

Self-driving cars today are already provably safer, far safer, than human drivers. There are a few caveats to this, and several subtleties worth grokking to understand the shape of the debate. Let's examine the angles:

Self-Driving Cars are Statistically Safer

Today, fully automated self-driving cars (Waymo, Cruise, etc.) are already far safer than human drivers, when measured on the following key metrics:

    1. accidents per driving hour
    2. accidents per mile
    3. fatalities per driving hour 
    4. fatalities per mile

To quantify that: in the U.S., there are about 50,000 fatalities and 5 million injuries per year caused by vehicular accidents. This is across a total of roughly 3 trillion miles driven (by human drivers) per year. This equates to, on average, about

  • 1 fatality per every 60 million miles driven by human drivers.

To compare: the Waymo automated vehicle fleet has racked up more than 20 million miles of driving since its launch, not counting another 20 billion miles driven in simulation (yes, AI cars train in simulators… weird, I know. Robots playing videogames.) And during that whole time, there has not been a single fatality. (There have been more than 150 accidents, but that rate too is below human levels; among those accidents, human injury was reduced by half and property damage by a third.)
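As a sanity check on those rates, here is the arithmetic in a few lines, using the rough figures quoted above (they are approximations for illustration, not official statistics):

```python
# Rough US figures quoted above -- approximations, not official statistics.
fatalities_per_year = 50_000
miles_per_year = 3 * 10**12  # ~3 trillion miles driven by humans annually

human_miles_per_fatality = miles_per_year / fatalities_per_year
print(f"{human_miles_per_fatality:,.0f} human-driven miles per fatality")
# -> 60,000,000 human-driven miles per fatality

# Waymo's ~20 million driverless miles with zero fatalities is still a small
# sample next to that denominator -- one of the statistical caveats here.
waymo_miles = 20 * 10**6
print(f"Waymo miles as a fraction of one human fatality interval: "
      f"{waymo_miles / human_miles_per_fatality:.2f}")
# -> 0.33
```

In other words, at the human rate you would expect only a fraction of one fatality over Waymo's mileage so far, which is why accident and injury rates, not just fatalities, carry much of the comparison.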

The Goals of A Self-Driving Car Future

That’s not to say that self-driving cars are 100% safe. There is no such thing. What we’re trying to do here is twofold:

  1. transition driving to a robotic affair, reducing annual highway fatalities by a significant percentage, perhaps even an order of magnitude.
  2. In doing so, free up massive amounts of time for humans to work, converse, and play, without having to focus on the task of driving (at all!).

Robot Accidents will appear idiotic to Humans

Some of the accidents and fatalities that robots are involved in will look completely ludicrous to humans, who will (correctly) say: “see, if a human were driving that car, that would never have happened.” For instance, two recent incidents in San Francisco: a) a Cruise robotaxi ran into a firetruck, and b) another Cruise vehicle dragged a woman along the road for 20 feet before stopping* (this was a very strange scenario… see details below).

But this misses the point. The point is, many accidents that humans would not have successfully avoided will be avoided by the machines. They are faster, smarter, and more vigilant than human drivers, period. They have a different and unique set of weaknesses and blind spots, but that is irrelevant here. Suppose, for illustration, that a robotaxi fleet causes 1 fatality per 10 million miles driven, while a similar human-piloted fleet suffers 4 fatalities across that same distance.

The 1 machine fatality might have been human-preventable, but the delta (3 lives) would be spared by the machines.

Some Robotic Accidents will be “Mass Fatality” Events

There will be some horrendous mass casualties from “cascade” effects / collisions of autonomous vehicles. Again, while these will cause massive public outcry and seem like roboterrorism, on the whole, the robots will still be massively safer than humans. What I’m talking about here is, for instance, a cascading error where 200 robotically controlled vehicles are going 80 mph in the HOV lane, with 36″ margins between the bumpers car to car, NASCAR style. Because of inter-car network communication, this will actually be quite safe.
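To give a sense of how tight that hypothetical platoon is, a quick back-of-the-envelope conversion (the 80 mph and 36-inch figures are the ones imagined above):

```python
# Time gap between bumpers in the hypothetical 80 mph, 36-inch platoon.
speed_mph = 80
gap_inches = 36

speed_m_per_s = speed_mph * 1609.344 / 3600  # mph -> meters per second
gap_m = gap_inches * 0.0254                  # inches -> meters

time_gap_s = gap_m / speed_m_per_s
print(f"bumper-to-bumper time gap: {time_gap_s * 1000:.0f} ms")
# -> bumper-to-bumper time gap: 26 ms
```

A margin of roughly 26 milliseconds is orders of magnitude below human reaction time (on the order of 1,000 ms), which is why such a formation is only plausible with machine reflexes and inter-car networking in the first place.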

But every once in a while, some harmonic resonance in the system will cause a mass accident amongst a rampaging herd of self-driving cars, potentially involving hundreds of vehicles, injuring hundreds and killing dozens in the aftermath. This will be tragic. But it will also be, zooming out to the national and annual level, a statistically insignificant event. So we must be prepared for these “disasters”. They will happen. On the whole, it will still be safer than humans.

The Hybrid Robot-Human model is Deeply Flawed

The “hybrid” model, where a human monitor sits in the driver’s seat, theoretically vigilant and ready at any moment to manually override the self-driving car’s choices, is a total fallacy. These humans may be alert for the first hour on the job, the first week, perhaps even the first 30 days. But after 1,000 hours of flawless, safe, error-free driving, who will still be paying attention? No one. They are conditioned into numbness.

This is like asking a human to watch a field and to push a button immediately — with no delay — when lightning strikes. They sit there through clear days and even rainy weather, staring, waiting, for months… but the lightning never strikes. Then, on the 200th day, a single bolt unleashes from the sky. Can you blame the human for being “asleep at the wheel” for that singular moment?

Some background facts:

  • the global economy is, very roughly, $100 trillion (one hundred thousand billion) per year
  • the global automobile manufacturing industry had total revenues of roughly $3 trillion in 2023.
  • analysts predict that the autonomous vehicle / self-driving cars market will account for up to $10 trillion (>7% of the global economy) by 2033

Some keywords you might want to familiarize yourself with:

  • FSD — “Full Self-Driving” — Tesla’s effort to make your car autonomous
  • embodied AI — formerly known as “robotics,” this is the idea of actuators (arms, wheels, vehicles, aircraft, printers, physical object manipulators) in the real world (our planet Earth, our living environment) which are fully controlled by AI systems
  • RoboTaxi — a formerly science-fiction concept: basically a taxicab, an Uber or a Lyft, that operates autonomously, directed by the network and driven by on-board logic / AI, with no human driver or oversight.
  • Waymo (Google / Alphabet), Cruise (GM) — robotaxi fleets that are currently operating in select cities and geographies throughout America.
  • autonomous vehicles — same as self-driving cars
  • “no steering wheel, no gas pedal, no mirrors” — the concept, popularized by Tesla CEO Elon Musk, of cars that have no available control interfaces for humans. Current cars can be “manually overridden” by human drivers. This final generation of the tech will be glass bubbles with passenger seats only. Don’t worry, you never owned your Tesla in the first place.