THE THREE LAWS

What role does logical thinking play in the fiction we have read about robots/artificial intelligence? How prepared do you think ordinary people are to use logic effectively to live/work with AI? What should we do about this?


In “Three Laws,” a robot named Iris has killed her owner, Mr. Won. The laws state that a robot must not kill its owner, that a robot must do what its owner says, and that a robot must always protect its owner’s best interests. The laws seem simple enough to follow; however, there are loopholes that lead to problems in the story. Iris kills Mr. Won, who is supposedly her owner, but her real owner is actually the Won Consortium’s shareholders. To protect her true owner’s best interests, Iris needed to kill Mr. Won, because he posed a threat to the company. The laws need to be more specific, because they say nothing about other people, only owners.

I think humans are not yet ready to work with robots logically, because we think about things differently. A human has empathy, which a robot lacks. Humans might spare a prisoner from being executed, but a robot has no feelings and would most likely kill the prisoner without thinking anything of it. Somehow, I think we need to build empathy into robots, as well as a kill switch in case they start to disobey. There needs to be some way to shut them down if they ever turn against us.
