Actually, I like dogs, but if I were a self-driving car, a dog walking in the road would probably be as good as dead. At least that is what my result suggests in an online test by MIT, which deals with a much-discussed problem of autonomous driving.
In 13 different scenarios, the test participant has to decide how an autonomous car should act. There are always two options: run over a dog or crash into a wall, for example, but also run over a group of young people or a group of elderly people. At the latest in these extreme situations, the test lives up to its name: Moral Machine.
The self-driving car is one of those developments that experts believe will actually arrive. The car industry is already working on various models, Google has been testing its self-driving vehicle for several years, and Tesla ships a driving assistant that it refrains from calling "self-driving" only for legal reasons.
Acceptance, however, is another matter. In technologically progressive countries such as the USA or Japan, many people volunteer as test drivers because they want to be part of the new development. Car-loving Germany, on the other hand, prefers to drive itself and views the development with skepticism. It is a largely unjustified skepticism which, at the end of the day, amounts to little more than defending the right to sit in traffic jams for as long as possible. There is broad agreement that autonomously regulated traffic would flow much more smoothly, and equally that autonomous driving would significantly reduce the frequency of accidents. Yet even autonomous driving cannot reduce the number of accidents to zero. And this is precisely where the problem begins that MIT addresses with the Moral Machine.
The autonomous car drives so well because it can assess the traffic situation much faster than any human intuition. It can therefore also predict that, in a given situation, a right turn will save the driver's life but kill a pedestrian, while a left turn will spare the pedestrian but kill the driver. The car itself does not care: it simply executes the commands that were entered for this case. In theory, with the help of machine learning, one could actually leave the decision to the car, but nobody wants to cede that much human decision-making power to an algorithm. This creates another problem: humans must determine in advance exactly what is to happen.
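To make the point concrete: the car does not deliberate, it looks up. A purely illustrative sketch (the scenario, maneuver names, and the pre-entered choice are all hypothetical, not any manufacturer's actual logic) might look like this:

```python
# Illustrative sketch: the car executes a command a human entered in
# advance for the predicted situation; nothing is "decided" at runtime.

# Outcomes the car predicts for each available maneuver in one scenario.
PREDICTED_OUTCOMES = {
    "swerve_right": "pedestrian dies, driver survives",
    "swerve_left": "driver dies, pedestrian survives",
}

# The rule table a human must have filled in beforehand.
# Someone, somewhere, committed to this answer in advance.
PREPROGRAMMED_CHOICE = {
    frozenset(PREDICTED_OUTCOMES): "swerve_right",
}

def act(outcomes: dict) -> str:
    """Return the maneuver that was pre-programmed for this set of options."""
    return PREPROGRAMMED_CHOICE[frozenset(outcomes)]

print(act(PREDICTED_OUTCOMES))  # prints: swerve_right
```

The uncomfortable part is not the lookup itself but the table: every entry in it is exactly the kind of decision the article says must be made beforehand.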
There are supposedly simple scenarios that would nevertheless quickly lead us into more than one dilemma. Suppose the car has to choose between the life of its 40-year-old driver and that of a young mother crossing the street with her pram. Instinctively, most of us would say that mother and child must be saved. But the same instinct would also prevent us from setting even one foot in a car that has decided to let us die if the worst comes to the worst.
Our instinct actually seems to tend towards a utilitarian decision. Especially when a child is involved, it is primally human to do anything to protect it. That is what man already did when he stood on two legs for the first time to look out over the African steppe. Of course, instinct also protects us from the excesses of utilitarianism, as when a modern representative of this philosophy like Peter Singer places the life of an animal above that of a newborn infant because the infant still lags behind the animal in its development. Our common sense knows that by protecting the child, we also protect the child's potential from the very beginning.
What is so tempting about utilitarianism for programmers is its proverbial predictability. In fact, it would be relatively easy to create a table in which, for example, the life of a child ranks higher than that of an adult, or the death of two people counts as worse than the death of one. But at that point we are back with Peter Singer: we evaluate the life of an individual human being, we assign it a value, and this value can be lower than that of another human being. You don't have to have read your Immanuel Kant to see a problem in this. With Kant's philosophy, however, the dilemma becomes even harder to solve. And if we are honest, our society does of course value people's lives. Except that, as in the abortion debate for example, this valuation of a human life was a process, not a decision consciously taken in one stroke.
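The table the paragraph describes really would be trivial to program, which is precisely what makes it unsettling. A minimal sketch, with values invented purely for illustration (no one has proposed these numbers), shows how little code the whole moral apparatus would take:

```python
# Hypothetical utilitarian value table. All numbers are invented for
# illustration; assigning them at all is exactly the problem the text names.
LIFE_VALUE = {
    "child": 3,
    "adult": 2,
    "elderly": 1,
    "dog": 0.5,
}

def cost(victims):
    """Total 'cost' of an outcome: the summed values of the lives lost."""
    return sum(LIFE_VALUE[v] for v in victims)

def choose(option_a, option_b):
    """Pick whichever outcome has the lower total cost."""
    return option_a if cost(option_a) <= cost(option_b) else option_b

# One death outweighs two; an adult's death outweighs a child's.
print(choose(["adult"], ["adult", "adult"]))  # -> ['adult']
print(choose(["child"], ["adult"]))           # -> ['adult']
```

Ten lines, perfectly predictable, and every constant in `LIFE_VALUE` is a moral judgment someone would have to defend.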
So how can we solve this dilemma? There are approaches, but they all sound half-baked. One could pass the responsibility on to the driver, who would have to make this decision himself before being allowed to use an autonomous car. But taken seriously, he would probably have to make it anew almost every time, wouldn't he? One could hand control of the car back to the driver at the moment of danger, but what if he has just dozed off? One could let chance decide... one could, one could, one could. A solution is not yet in sight, heads are smoking, and many a developer tries to sidestep the problem by making autonomous driving as safe as possible. One thing, however, we know for sure: even autonomous driving cannot completely rule out accidents.
Take the test yourself: http://moralmachine.mit.edu/hl/