A computer, however, does make a decision, even when given very little time to do so.
The choices a computer makes are a direct reflection of how it was programmed to behave by the humans that created it. This brings us to the self-driving car. Autonomous vehicles need a moral baseline from which they can make difficult either-or choices in case of an accident.
The question is, whose morals should they be using?
The Ethics Commission at the German Ministry of Transport and Digital Infrastructure believes that the primary aim of self-driving cars should be to improve the safety of everyone. It is literally Rule 1 in a report the commission published this July.
"The primary purpose of partly and fully automated transport systems is to improve safety for all road users," according to the report's first rule. "Another purpose is to increase mobility opportunities and to make further benefits possible. Technological development obeys the principle of personal autonomy, which means that individuals enjoy freedom of action for which they themselves are responsible."
There are a total of 15 rules.
Other rules outlined in the report home in on more specific matters.
"The public sector is responsible for guaranteeing the safety of the automated and connected systems introduced and licensed in the public street environment," reads Rule 3. This means the German government has tasked itself with the heavy responsibility of ensuring that private business, in its eagerness to introduce new products, does not release underdeveloped autonomous vehicles on the road.
From an ethical standpoint, Rule 7 does the most to address an issue many have wondered about.
"In hazardous situations that prove to be unavoidable, despite all technological precautions being taken, the protection of human life enjoys top priority in a balancing of legally protected interests," it reads. "Thus, within the constraints of what is technologically feasible, the systems must be programmed to accept damage to animals or property in a conflict if this means that personal injury can be prevented."
In short, that means if a self-driving car is faced with a choice between hitting a dog or hitting a human, it will hit the dog.
As driverless car technology -- and public acceptance of it -- advances at a rate faster than most thought possible, governments and private organizations around the world are scrambling to address unanswered questions about how self-driving cars will function when they're on public roads in large numbers.
German researchers have determined that once we know how we want these cars to act from an ethical perspective, it will be fairly easy to program them to do so.
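To see why researchers consider this straightforward once the ethics are settled, a priority ordering like the one in Rule 7 can be expressed in just a few lines. The sketch below is purely illustrative -- the class names, categories, and function are assumptions for this example, not anything taken from the report or from any real vehicle software:

```python
from enum import IntEnum

class Harm(IntEnum):
    # Hypothetical priority ordering inspired by Rule 7:
    # lower value = more acceptable to damage when a collision is unavoidable
    PROPERTY = 0
    ANIMAL = 1
    HUMAN = 2

def choose_path(options):
    """Given the harm categories of all unavoidable-collision options,
    pick the option that harms the lowest-priority category."""
    return min(options)

# Faced with hitting a dog or a pedestrian, the rule selects the dog.
print(choose_path([Harm.ANIMAL, Harm.HUMAN]).name)  # ANIMAL
```

The hard part, in other words, is not the code but agreeing on the ordering that the code encodes -- and on how to handle cases the ordering does not cleanly resolve, such as choices between two humans.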
At MIT, the aptly titled Moral Machine lets users choose how they would act in difficult traffic scenarios if given the time to do so logically. Several large companies have also teamed together to form the Partnership on AI, which concerns itself with these same questions and more.
With so many groups working to address these issues, it's likely that we will be able to achieve consensus before autonomous vehicles take the road en masse. And as that world takes shape in the coming years, societies will be able to approach autonomous vehicles with a broadly shared ethical framework.