Ethical Questions in Code Writing
When considering self-driving cars and trucks, one question most people raise concerns what are often called “ethical algorithms”: the choices a car would be programmed to make in an ethical dilemma. If 5 pedestrians suddenly appeared in front of the vehicle and there was no time to brake, would the vehicle run into the pedestrians or veer into a neighboring concrete wall, putting its own driver/rider at risk? If an accident were imminent, would the vehicle’s computer system choose to run over 6 adults or 3 children? A recent survey found that while most respondents believe a car should sacrifice its passengers for the greater good, most would nonetheless buy a car that protected its own riders at any cost if given the option.
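To make the question concrete, the dilemma above can be reduced to a few lines of code. The sketch below, in Python, is purely illustrative: the `CrashOption` structure, the clean counts of people at risk, and the two competing policies are all invented for this example; a real system would reason over far noisier, probabilistic inputs.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One possible maneuver once a collision is unavoidable (hypothetical model)."""
    pedestrians_at_risk: int  # people outside the vehicle endangered by this maneuver
    passengers_at_risk: int   # occupants of the vehicle endangered by this maneuver

def utilitarian_choice(options):
    """Pick the maneuver that endangers the fewest people overall."""
    return min(options, key=lambda o: o.pedestrians_at_risk + o.passengers_at_risk)

def passenger_first_choice(options):
    """Pick the maneuver that best protects the vehicle's own occupants,
    breaking ties by minimizing harm to pedestrians."""
    return min(options, key=lambda o: (o.passengers_at_risk, o.pedestrians_at_risk))

# The dilemma from the text: stay in the lane and hit 5 pedestrians,
# or swerve into the concrete wall and endanger the rider.
stay_in_lane = CrashOption(pedestrians_at_risk=5, passengers_at_risk=0)
swerve_into_wall = CrashOption(pedestrians_at_risk=0, passengers_at_risk=1)

print(utilitarian_choice([stay_in_lane, swerve_into_wall]))      # swerves: 1 at risk beats 5
print(passenger_first_choice([stay_in_lane, swerve_into_wall]))  # stays in lane: occupants first
```

The survey result described above amounts to people endorsing `utilitarian_choice` in the abstract while shopping for `passenger_first_choice`.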
There is a strong argument that these grand ethical dilemmas are largely irrelevant to the development of code for autonomous cars. For one thing, such situations almost never arise in the real world. How often have you, as a driver, had to choose between running into 6 adults or 3 children? Further, the advanced sensor systems on driverless cars (cameras, radar, and lidar) detect hazards far earlier and with more precision than a human driver can. If there are pedestrians ahead, a self-driving car will recognize and respond to them so far in advance that, in theory, it will have more than enough time to brake and avoid an incident.
At least one car company has said publicly that it will program its cars to always protect the vehicle’s passengers. In 2016, Mercedes-Benz stated that in situations where either pedestrians or the driver would be put at risk, its Level 4 and Level 5 vehicles would be programmed to save the driver. The company also noted, though, that its goal is to improve the technology to the point that such scenarios almost never arise.
For an article going into more detail about why “ethical algorithms” are unimportant for self-driving cars, see http://techcrunch.com/2015/11/23/the-myth-of-autonomous-vehicles-new-craze-ethical-algorithms/?ncid=rss.
Ethical questions, however, can also arise on a smaller scale. Will the driverless car swerve for deer? For dogs? For squirrels? Will the autonomous car attempt to avoid all animals, just some animals, or no animals? Animals typically come into view very quickly, so the car’s sensors would likely offer little advantage here; they would not spot an animal significantly earlier than a human would. Would the car swerve if no other cars were around but hit the animal if another car were nearby? And who would make the decision about how such code should be written?
Or consider road debris. If an autonomous car is traveling behind a trailer and a piece of furniture falls onto the road, does the car swerve, thereby putting other vehicles at risk, or hit the furniture, putting its own driver/rider at risk?
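Both the animal question and the debris question reduce to the same shape of decision, and someone has to choose and encode the thresholds. The Python sketch below is hypothetical throughout: the `Obstacle` class, the set of swerve-worthy obstacle types, and the reaction-time constant are assumptions made up for illustration, not anyone’s published policy.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """An object in the road, as a perception system might classify it (hypothetical)."""
    kind: str                 # e.g., "deer", "dog", "squirrel", "furniture"
    seconds_to_impact: float  # time until the vehicle reaches it at current speed

# Policy decisions someone must make: which obstacles justify an evasive
# maneuver, and how much time is needed to perform one safely.
SWERVE_WORTHY = {"deer", "furniture"}  # assumed policy choice
MIN_REACTION_TIME = 1.5                # seconds; assumed safety threshold

def should_swerve(obstacle: Obstacle, adjacent_lane_occupied: bool) -> bool:
    """Swerve only for designated obstacles, only with time to maneuver,
    and never into an occupied neighboring lane."""
    if adjacent_lane_occupied:
        return False  # hitting the obstacle beats hitting another vehicle
    if obstacle.seconds_to_impact < MIN_REACTION_TIME:
        return False  # no time to maneuver safely; brake in-lane instead
    return obstacle.kind in SWERVE_WORTHY

print(should_swerve(Obstacle("deer", 2.0), adjacent_lane_occupied=False))      # True
print(should_swerve(Obstacle("deer", 2.0), adjacent_lane_occupied=True))       # False
print(should_swerve(Obstacle("squirrel", 2.0), adjacent_lane_occupied=False))  # False
```

Every constant in that function, from the set of swerve-worthy obstacles to the reaction threshold, is an ethical judgment embedded in an engineering decision, which is exactly the who-decides question raised above.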
Ethical dilemmas can involve non-safety issues as well. For instance, if one car is attempting to pull out of a parking lot onto a street with heavy traffic, will other autonomous cars stop to allow that vehicle in? Will the cars make such a decision automatically, or will driverless cars weigh their own driver/rider’s schedule and timeliness before forgoing the right of way?
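Even this courtesy question ends up as a rule somewhere in the codebase. Here is a minimal sketch of how a car might weigh another vehicle’s wait against its own rider’s timeliness; the function name, the 30-second wait threshold, and the notion of “schedule slack” are all invented for illustration.

```python
def should_yield(other_wait_seconds: float,
                 rider_slack_seconds: float,
                 braking_is_safe: bool) -> bool:
    """Hypothetical courtesy rule: stop to let a waiting car merge only if
    braking is safe, the other car has been stuck a while, and our own
    rider's schedule can absorb the delay."""
    WAIT_THRESHOLD = 30.0  # assumed: how long a wait counts as "stuck"
    if not braking_is_safe:
        return False       # never trade courtesy for safety
    return other_wait_seconds > WAIT_THRESHOLD and rider_slack_seconds > 0.0

# A car stuck for a minute, and a rider ten minutes ahead of schedule:
print(should_yield(60.0, 600.0, braking_is_safe=True))  # True
# The same stuck car, but our rider is already running late:
print(should_yield(60.0, 0.0, braking_is_safe=True))    # False
```

Whether manufacturers would ever let a rider’s calendar influence right-of-way behavior is precisely the open question this paragraph poses.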
The hypothetical scenarios here are endless, and it is impossible to plan for them all. As autonomous cars become more prevalent, though, code writers and car manufacturers will be forced to make tough decisions about how a driverless car should react in a wide variety of situations. The impact on travel will be significant, particularly during the transitional period when the road is shared by a mix of Level 1, Level 2, Level 3, and Level 4 vehicles.
Car manufacturers will also have to consider how these programmed ethical choices will be communicated to consumers. At least some in the field argue that consumers should be told how their car is programmed to respond to an imminent crash, but it is unclear how such a disclosure could be made. Would the information be contained in an owner’s manual? Would there be a user agreement that a consumer had to accept before operating an autonomous car? Would there be any way to ensure the consumer actually read the agreement, rather than scrolling through and clicking “agree”?