
The next step in autonomous cars: Teaching them ethics

Peter Els
01/19/2016

According to Bob Joop Goos, chairman of the International Organisation for Road Accident Prevention, more than 90 percent of road accidents are caused by human error. If that’s correct, logic tells us that removing the human element should make our roads safer. Why, then, has the cutting-edge Google self-driving fleet notched up 11 accidents over the 1.2 million miles covered, more than twice the rate of cars driven by humans, according to a new study?

Autonomous cars have to learn to break the rules

Google has repeatedly claimed that the accidents its self-driving cars have been involved in were the result of the other drivers’ human error.

Confirming this hypothesis, researchers Brandon Schoettle and Michael Sivak at the University of Michigan’s Transportation Research Institute found that most of these accidents took place when the Google car was hit from behind while traveling at 5 miles per hour or slower, and that none involved serious accidents, such as head-on collisions.

Chris Urmson, the head of Google's self-driving initiative, claims Google's cars are inherently safer than those driven by humans because they obey the law all the time, without exception. But realistically, the self-driving cars’ inability to bend or break traffic laws, as human drivers regularly do, makes their driving habits surprising to others on the road, leading to crashes.


While a rule-abiding robot may be a noble aspiration, real-world situations, such as trying to merge onto a chaotic, crowded highway with traffic flying along well above the legal speed limit, make strict compliance almost impossible, if not downright dangerous.

Last year, Raj Rajkumar, co-director of the General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab in Pittsburgh, offered test drives to members of Congress in his lab’s self-driving Cadillac SRX SUV. The Caddy performed perfectly, until it had to merge onto I-395 South and cut across three lanes of traffic in 140 meters to head toward the Pentagon.

The car’s cameras and laser sensors successfully detected traffic in a 360-degree view but didn’t know how to process the information to trust that drivers would make room in the ceaseless flow, so the human minder had to take control to complete the maneuver.

The ethics of breaking the law

Over and above the technical problem, this is an ethical issue, and one that Chris Gerdes, a Stanford engineering professor, is passionate about. Gerdes contends that ethical choices must inevitably be programmed into the robotic minds of self-driving vehicles if they are to navigate the world’s roads autonomously. He is asking the hard questions about ethics and how this will work, while pointing out that autonomous cars have to do more than just obey the law.

To demonstrate a real-world situation, Gerdes has set up a simulation consisting of a jumble of sawhorses and traffic cones that simulate a road crew working over a manhole. This is a scenario road users face on a daily basis, but one that poses an ethical problem for self-driving cars: obey the law against crossing a double-yellow line, or break the law and spare the crew. In Gerdes’ demonstration the car splits the difference, veering at the last moment.

In this demonstration it’s clear that the car should cross the yellow lines to avoid the road crew; less clear is how to go about programming a machine to break the law or to make still more complex ethical calls.
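One way to picture what such programming might look like, purely as a sketch and not any manufacturer’s actual approach, is a planner that scores candidate maneuvers by weighing a penalty for breaking a traffic rule against the estimated risk of a collision, then picks the cheapest option. The trajectory names, weights and risk figures below are assumptions chosen only to illustrate the idea that a heavily weighted collision cost can outweigh a double-yellow-line violation.

```python
# Hypothetical illustration only: a cost-based planner in which traffic rules
# are soft penalties rather than hard constraints, so a rule can be broken
# when the alternative is hitting something.

from dataclasses import dataclass


@dataclass
class Trajectory:
    name: str
    collision_risk: float        # assumed probability of striking an obstacle (0..1)
    crosses_double_yellow: bool  # does this maneuver violate the lane marking?


# Assumed weights: a collision is penalized far more heavily than a lane violation.
COLLISION_WEIGHT = 1000.0
DOUBLE_YELLOW_PENALTY = 5.0


def trajectory_cost(t: Trajectory) -> float:
    """Total cost = weighted collision risk plus any rule-violation penalty."""
    cost = COLLISION_WEIGHT * t.collision_risk
    if t.crosses_double_yellow:
        cost += DOUBLE_YELLOW_PENALTY
    return cost


def choose(trajectories: list[Trajectory]) -> Trajectory:
    """Select the lowest-cost maneuver."""
    return min(trajectories, key=trajectory_cost)


if __name__ == "__main__":
    options = [
        Trajectory("stay in lane toward the road crew", collision_risk=0.9,
                   crosses_double_yellow=False),
        Trajectory("cross the double yellow to pass the crew", collision_risk=0.05,
                   crosses_double_yellow=True),
    ]
    best = choose(options)
    print(f"Selected: {best.name} (cost {trajectory_cost(best):.1f})")
```

In this toy setup the car would cross the line, because the assumed collision weight dwarfs the rule penalty; the hard part, as the scenarios below show, is deciding what those weights should be when the trade-off is between people rather than between a cone and a painted line.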

For example, when an accident is unavoidable, should a driverless car be programmed to aim for the smallest object to protect its occupant? What if that object turns out to be a baby stroller? If a car must choose between hitting a group of pedestrians and risking the life of its occupant, what is the moral choice? Does it owe its occupant more than it owes others?

When human drivers face impossible dilemmas, choices are made in the heat of the moment and could be forgiven. But if a machine can be programmed to make the choice, what should that choice be?

Furthermore, how close are manufacturers to being able to write algorithms to cover the plethora of everyday situations that could have fatal consequences if the wrong reaction is selected?

Google has already programmed its cars to behave in more familiar ways, such as inching forward at a four-way stop to signal they’re going next. But autonomous models still surprise human drivers with their quick reflexes, coming to an abrupt halt, for example, when they sense a pedestrian near the edge of a sidewalk who might step into traffic; hence the number of rear-end collisions the Google fleet has been involved in.

Frustratingly law-abiding autonomous vehicles

Two recent incidents involving the Google fleet demonstrate the "good driver" conundrum still faced by autonomous vehicles:

In November a self-driving Google Lexus SUV was involved in a fender-bender in Mountain View, California, while attempting to turn right on a red light. It came to a full stop, activated its turn signal and began creeping slowly into the intersection to get a better look. According to a report the company posted online, another car stopped behind it, also began rolling forward, and rear-ended the SUV at 4 mph. There were no injuries and only minor damage to both vehicles.

Ten days later, a Mountain View motorcycle officer noticed traffic building up behind a Google car going 24 miles an hour in a busy 35 mph zone. After catching up to the moving gridlock, he became the first officer to stop a self-driving car. He didn’t issue a ticket (who would he give it to?), but he warned the two engineers on board about creating a hazard.

The ethics and safety issues will no doubt be resolved when all vehicles on the road are autonomous and connected, but in the interim are we going to have to live with semi-autonomous driving, just in case the robot can’t decide? And how will lawmakers attribute liability when a driverless car is involved in a collision with one driven by a human?

These are tough issues that need to be resolved before manufacturers flood the roads with self-driving cars.
