Coronavirus and the Three Laws of Robotics
Is inaction better than action if either outcome will cause harm?

Way back in the 1940s, science fiction writer Isaac Asimov began developing his Three Laws of Robotics through a series of short stories and novels. These Laws are hard-wired into the robots’ brains, and the stories explore situations in which they apply in different ways, sometimes coming into conflict with one another.
The First Law, the most important in the hierarchy, is:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The bit about inaction is relevant. Because of the First Law, robots are compelled to prevent harm; they must take positive action, not stand idly by. A robot would (gently) smack that donut right out of your hands, then delete your Facebook account.
The Trolley Problem

“There is a runaway train rolling down the tracks. There are 5 railway workers standing on the tracks in its path.
You are standing next to a switch that will divert the train to a second set of tracks. But you notice a child playing on those tracks.
If you do nothing the 5 railway workers will be hit and killed. If you pull the switch the 5 workers will be saved but the child will be killed instead.
What is the right thing to do?”
This is my summarised version of the ethical dilemma known as the Trolley Problem. (The traditional version uses the word “trolley” instead of “train”. I updated it.) The key here is that if you do nothing, 5 people will die. If you decide to act, you save the 5 workers, but one person dies because of your actions.
When we discussed this dilemma in a group of MBA students, I could divide the responses into three groups:
- People who were clear and decisive on which choice to make;
- People who were indecisive and unwilling to…