Coronavirus and the Three Laws of Robotics
Is inaction better than action if either outcome will cause harm?

Way back in the 1940s, science fiction writer Isaac Asimov started developing his Three Laws of Robotics through a series of short stories and novels. These Laws are hard-wired into the robots’ brains, and the stories explored situations in which the laws would apply in different ways, and sometimes come into conflict with each other.
The First Law, the most important in the hierarchy, is:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The bit about inaction is relevant. Because of the First Law, robots are compelled to prevent harm; they must take positive action, not stand idly by. A robot would (gently) smack that donut right out of your hands, then delete your Facebook account.
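If you like to think of rules as code, here’s a minimal sketch, in Python, of how that hierarchy might look. It’s entirely my own illustration rather than anything from Asimov: the Outcome fields, the option names and the harm counts are all invented for the example. The important detail is that “do nothing” is evaluated like any other option, so inaction that allows harm is penalised just like action that causes it.

```python
# A toy illustration (mine, not Asimov's) of the Three Laws as a strict
# priority ordering: the First Law always outranks the Second, which
# always outranks the Third.
from dataclasses import dataclass


@dataclass
class Outcome:
    humans_harmed: int     # harm caused, or allowed through inaction
    disobeys_order: bool   # Second Law: does this option disobey a human?
    robot_destroyed: bool  # Third Law: does the robot come to harm?


def choose(options: dict[str, Outcome]) -> str:
    """Pick the option that best satisfies the Laws, in priority order."""
    return min(
        options,
        key=lambda name: (
            options[name].humans_harmed,    # First Law comes first...
            options[name].disobeys_order,   # ...then the Second Law...
            options[name].robot_destroyed,  # ...then the Third Law.
        ),
    )


# The donut example: standing idly by allows (a little) harm, so the
# robot intervenes even though you told it not to.
print(choose({
    "stand idly by": Outcome(humans_harmed=1, disobeys_order=False, robot_destroyed=False),
    "smack the donut": Outcome(humans_harmed=0, disobeys_order=True, robot_destroyed=False),
}))  # -> smack the donut
```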
The Trolley Problem

“There is a runaway train rolling down the tracks. There are 5 railway workers standing on the tracks in its path.
You are standing next to a switch that will divert the train to a second set of tracks. But you notice a child playing on those tracks.
If you do nothing the 5 railway workers will be hit and killed. If you pull the switch the 5 workers will be saved but the child will be killed instead.
What is the right thing to do?”
This is my summarised version of the ethical dilemma known as the Trolley Problem. (In the traditional version they use the word “trolley” instead of “train”. I updated it.) The key here is that if you do nothing, 5 people will die. If you decide to act, you can save the 5 people, but one person will die because of your actions.
When we discussed this dilemma in a group of MBA students, I could divide the responses into three groups:
- People who were clear and decisive on which choice to make;
- People who were indecisive and unwilling to commit to a choice;
- People who refused to accept the scenario and sought to create a 3rd option with a different outcome (i.e. nobody dies).
I find the third group fascinating. They reject the premise, derailing the discussion. (Pun!) The point is not to find a creative solution; it’s supposed to be about ethics, morality and responsibility.
At the same time, in the real world, this third group can be important to involve in discussions and decision-making. They’re disruptive. They’re persistent. They can make you take a step back and look at things from a different angle. “What if we don’t accept that these are the only options?” Diversity of perspective is good, if you have the time. The more ideas you explore, the more likely you are to have the best options on the table.
Which is not to say that members of the 3rd group should make the decisions. That, as we will see, can be a disaster.
The 2015 movie Eye in the Sky is a great exploration of a Trolley Problem situation. In the movie, the characters must decide whether or not to execute a drone strike on a terrorist cell. If they don’t act, a large number of innocent people will be killed. If they do act, they will kill a child in the attack. The movie is incredibly tense, with each of the three types of responses in conflict with each other. It’s a riveting movie, but not at all relaxing.
The key moral question posed by the Trolley Problem is: does death by your action weigh more on your conscience than death by your inaction?
Real-life Robot Laws coming soon…

I first learned about the Trolley Problem in the context of driverless cars.
“A driverless car is about to collide with a pedestrian. The car can swerve around the pedestrian and into a wall, which would kill the passenger. Or it can do nothing to prevent the collision, which would kill the pedestrian.
What is the right thing to do?”
In this scenario the car will make the decision. So which should it choose? How should it be programmed to decide? Because, as with Asimov’s robots, the laws that govern the car’s behaviour are hard-wired into its brain by humans.
What if it’s one pedestrian and two passengers? What if, to use the numbers from the Trolley Problem, it’s five pedestrians and one passenger?
The First Law of Robotics would probably weigh 5 vs 1 and swerve, killing the lone passenger, because inaction would cause more harm to humans than taking action. Is that how we should program our self-driving cars?
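To make that weighing concrete, here’s a minimal sketch of the dilemma as a simple cost comparison. It’s my own illustration, not how any real car is programmed: choose_action and its action_weight parameter are invented for the example. That action_weight number is the whole moral question packed into one parameter: how much heavier does a death caused by action weigh than a death allowed by inaction?

```python
# A toy cost comparison (my own sketch, not a real driving policy).
# action_weight = 1.0 treats harm by action and harm by inaction as equal,
# which is roughly the First Law reading of the dilemma.

def choose_action(deaths_if_inaction: int,
                  deaths_if_action: int,
                  action_weight: float = 1.0) -> str:
    """Return 'act' (swerve / pull the switch) or 'do nothing'."""
    cost_of_inaction = deaths_if_inaction
    cost_of_action = deaths_if_action * action_weight
    # The strict "<" means a tie, such as 1 pedestrian vs 1 passenger,
    # breaks toward doing nothing: the keep-your-hands-clean instinct.
    return "act" if cost_of_action < cost_of_inaction else "do nothing"


# Five pedestrians vs one passenger, weighing action and inaction equally:
print(choose_action(deaths_if_inaction=5, deaths_if_action=1))        # act
# Same numbers, but if harm you cause counts six times as much,
# the car stands by:
print(choose_action(deaths_if_inaction=5, deaths_if_action=1,
                    action_weight=6.0))                                # do nothing
```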
From a legal liability perspective, the courts or legislature will need to decide who should be held responsible for such decisions. Maybe it’s the company (or person) who owns the car; or who operates the car; or who built the car; or who programmed the algorithm; or who created the technology. Where does the chain of causation stop? For a court to decide liability, it must first decide which decision the car should have made. Should always make.
Which life should it spare in a 1 vs 1 situation? Does action weigh more heavily than inaction? What is the right thing to do?
In Australian tort law (the type of law for civil wrongs, such as negligence and trespass) there is a precedent of sorts. If someone is injured or at risk of being injured, there is no obligation to offer assistance. You can’t be held liable for doing nothing to help someone unless you have an existing duty of care. This points to the law taking the side of deciding not to act; inaction is the right thing to do if you’re trying to avoid legal liability.
Once upon a time you could be held liable if you tried to help and caused some further harm. These days, in most jurisdictions, a “Good Samaritan” law generally protects you. Which brings us back to square one, really.
From Robots to COVID: runaway trains and infectious disease
The COVID-19 pandemic has often been politicised as follows: governors who impose restrictions in response to COVID are ruining the economy, or governors who prioritise the economy are allowing people to die. It’s COVID or the Economy.

Let me re-frame that:
“An infectious disease is about to collide with a population. The government can pull a switch to avoid the outbreak, which would kill the economy. Or it can do nothing to prevent the outbreak, which would kill the population.
What is the right thing to do?”
This is important. If either outcome is bad, and action weighs more than inaction, then some leaders would prefer to keep their hands clean. “Sure, people died, but that was the virus; you can’t blame me for that.” I’ve had members of my own family tell me that our government ruined the economy by forcing restaurants to only serve takeaway.
What are the types of response we have seen from leaders around the world?

- Leaders who were clear and decisive on which choice to make. Some made a clear choice to avoid the outbreak as much as possible: China locked down Wuhan, New Zealand imposed heavy restrictions on its populace, Ghana stepped up to the plate to eliminate infections. Although I don’t know of any leaders who have outright stated “we are choosing the economy”, I’m sure we can point to actions and declarations that amount to the same thing.
- Leaders who were indecisive and unwilling to commit to a choice. We saw some leaders wait a long time before responding, or respond only partially.
- Leaders who refused to accept the scenario and sought to create a 3rd option with a different outcome. I guess a “suppression” strategy fits here. And the “it’ll be gone before you know it, so open up” camp is a version of the “I’ll shout really loudly so the workers get out of the way” solution to the Trolley Problem. “I refuse to accept that either bad outcome is inevitable. Let’s improvise!”
I’m not sure how to categorise Sweden’s response.
Unlike the Trolley Problem scenario, with COVID right now we don’t have perfect information. We don’t know for sure that not acting will result in devastating health consequences, just as we don’t know for sure that doing nothing will save the economy.
But I find it interesting to see how people deal with the question of what to do. Of which outcome is morally better. Of how much weight to put on whether action makes them more responsible than inaction.
If they walk away from the switch without changing the outcome will they take responsibility? Will we hold them accountable?