Autonomous Vehicles

More coming soon. Meanwhile:

Consideration of one aspect of autonomous vehicles has led me to mull over the different standards that are often set in different contexts, sometimes for good reasons, sometimes for not so good reasons, sometimes for reasons of human stupidity and selfishness. There is rightly concern about whether autonomous vehicles will have adequate capacities to perceive the road conditions around them, especially whether they can deal with unexpected events and correctly interpret what to do when faced with unfamiliar objects. But, mutatis mutandis, I’ve often wondered about what seems to me to be the pretty inadequate test for the eyesight of human drivers. In the UK, you simply have to be able to read a licence plate with letters of a certain size at a certain distance, the test usually being administered while you are safely standing on the pavement (since if you fail, you can’t take the driving test and hence get into the car); all this presupposing that a doctor has not already told you that, for reasons of vision, you can’t drive.

If an autonomous car’s vision were tested in this way, I jolly well hope there would be an outcry.

Because when you are driving, you need to be able to process fast-moving images, often in poor conditions of visibility, always while you are also doing something else and making fast decisions. You sometimes need to be able to decide what an unfamiliar shape means. Reading the alphabet is child’s play compared to interpreting unusual and unexpected images: our brains have learned ways of extrapolating from broken bits of text, so we’re very good at guessing letters and numbers. Moreover, ask anyone with sub-standard eyesight, say, moderate myopia or astigmatism, and many will tell you that if they stare at something for a short while, it’s easier to see what it is. But when driving, staring at something for a short while is not good enough.

Moreover, many accidents are caused by the deterioration in a driver’s eyesight from the time it was last tested.

Are we applying different standards for machines and for humans here?

Here’s one reason to think we may be. Especially in those parts of the world which are heavily dependent upon private vehicles, it’s a common attitude that humans have a right, a need, to drive. It’s unlikely that anyone thinks in these terms about autonomous vehicles. Individually, we also don’t tend to get into vehicles thinking we are a danger to other road users. Collectively, although we don’t want to be run over by another driver, we want to drive ourselves, and if the standards for eyesight, and for driving skills, were set too high, too many of us would be ruled out. So we’re likely to be softer on humans than on machines in this regard.

But this also gives a reason to think that the autonomous cars of the future will be safer than human-driven cars: if they’re not safer, they won’t be accepted.

However, it’s not as simple as this. There is a general problem with measures which increase public safety. Realistically, these can never be perfect. The people who are kept safe are statistics. The people who are killed or injured are visible. Who reading this knows that they personally would have got polio, were it not for polio vaccination programmes? We only know we might have got it – and in many countries, thankfully, it’s been so effectively dealt with that it’s a distant memory for most of us. Who reading this knows that they would have been run over and killed, were it not for advances in vehicle safety? As Stuart Russell, Daniel Dewey and Max Tegmark put it in their paper “Research Priorities for Robust and Beneficial Artificial Intelligence”: “If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits.”


We would like to thank the Future of Life Institute for kindly sponsoring our work.