The first death associated with a fully autonomous vehicle occurred Saturday night in Tempe, Arizona, as reported by the Associated Press. While it is not yet clear what happened, or whether a human-controlled vehicle would have led to the same outcome, the incident raises a number of questions about how we will, now and over time, learn to deal with the death of a human due to computer error.
For me, it feels different from when a human kills a human through drunk driving or inattention behind the wheel. It feels wrong. And yet the theory holds that if all vehicles were self-driving, the roughly 40,000 vehicular deaths per year in the United States would be reduced to almost none. To tabulate lives this way is to reduce humans to a body count, even if the portrayal of saved lives is accurate.
It will take a while for our society to accept a computer's mistake rather than a human's. How will autonomous vehicle companies deal with lawsuits? Will the car company as a whole, the supplier of the cameras and sensors, or the software designer be held responsible? When a person is killed by an intoxicated or distracted driver, we blame that individual, and perhaps society for allowing substance abuse or texting behind the wheel to continue. But when a computer, a system designed to be faster and safer than a human, is at fault, will we still seek someone to blame, or simply justify the error by running the numbers?