Autonomous cars are on the brink of being everywhere. From Tesla to Google to Uber, the big corporations are all after a piece of the action that autonomy promises to deliver. But one thing none of them seems to have fully worked out is the moral side of autonomous cars: how they should act in a life-or-death situation.
Recently, Google's director of engineering, Ray Kurzweil, was asked how a car that is about to hit another vehicle carrying three people should decide on the best course of action. Should it swerve to avoid the collision, but hit three children on the sidewalk in the process? In short, the answer was that they don't know.
The age-old commandment "Thou Shalt Not Kill" does not hold in every real-life situation. If someone is about to wipe out an entire city, it is generally considered justifiable to take them down. A reasonable starting point for moral rules for autonomous robots would be Asimov's Three Laws of Robotics. These are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
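None of the companies named above publish their decision logic, so purely as an illustrative sketch, the three laws can be read as an ordered set of tie-breakers: harm to humans dominates everything, obedience to humans comes next, and self-preservation comes last. The `Maneuver` fields and the numbers in the example below are invented for this sketch, not taken from any real system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Maneuver:
    """Hypothetical candidate action for the vehicle; every field is invented."""
    name: str
    humans_harmed: int        # First Law: harm to humans dominates everything else
    disobeys_passenger: bool  # Second Law: obey humans, unless that conflicts with the First Law
    vehicle_damaged: bool     # Third Law: self-preservation comes last

def choose(candidates: List[Maneuver]) -> Maneuver:
    """Pick the maneuver that best respects the laws in strict priority order.

    Tuples compare lexicographically, so fewer humans harmed always outweighs
    obedience, which in turn always outweighs damage to the vehicle itself.
    """
    return min(candidates, key=lambda m: (m.humans_harmed,
                                          m.disobeys_passenger,
                                          m.vehicle_damaged))

# The dilemma from the article, with made-up outcomes for both options.
options = [
    Maneuver("stay in lane and brake", humans_harmed=3,
             disobeys_passenger=False, vehicle_damaged=True),
    Maneuver("swerve onto the sidewalk", humans_harmed=3,
             disobeys_passenger=False, vehicle_damaged=False),
]
print(choose(options).name)
```

With both options harming three people, the decision falls through to the lowest-priority rule and the tie is broken by what is best for the car itself. That kind of arbitrary tie-break is exactly why the "we don't know" answer above should not come as a surprise.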
The same question about morality was then put to Andrew Chatham, a principal engineer at Google. He doesn't believe this kind of dilemma will come up very often, if at all. His job is to write software that keeps the car from getting into such a situation in the first place; if the car ever does face that dilemma, then in some sense he has already failed. And in almost every dangerous situation, the car's answer will be the same: apply the brakes. Researchers have also recently created the Moral Machine to see what humans would do in various driverless-car dilemmas. Whether its results will be used to improve the moral reasoning of autonomous cars remains to be seen.
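Chatham's brake-first answer is the one concrete rule mentioned above, and it reads more like a simple fallback policy than a moral calculation. The sketch below is only an assumption about what such a fallback could look like; the `time_to_collision_s` input and the two-second threshold are invented for illustration, not taken from Google's software.

```python
def emergency_response(time_to_collision_s: float,
                       braking_threshold_s: float = 2.0) -> str:
    """Hypothetical fallback: if a collision looks imminent, just brake hard.

    No swerving logic appears here at all, which mirrors the claim that in
    almost every dangerous situation the car's answer is simply to brake.
    """
    if time_to_collision_s < braking_threshold_s:
        return "apply maximum braking"
    return "continue normal driving"

print(emergency_response(1.2))  # -> apply maximum braking
print(emergency_response(8.0))  # -> continue normal driving
```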