This week we began our reading of Isaac Asimov’s I, Robot, a famous and influential collection of short stories organized around the theme of human interaction with robots in the near future. Asimov wrote these stories in the 1940s and ’50s, but he was remarkably prescient about some of the issues and concerns those in the future (us) would have about advances in this kind of technology. The stories progress from robots that interact with humans but cannot talk to robots that make active predictions, invent religions, and learn to lie, posing a host of problems for their human inventors. I don’t see Asimov as warning his readers so much as informing them that the advance of robots, whether ultimately for good or ill, is inevitable.
Asimov has robots developed with three laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders of a human being, unless that order conflicts with the first law.
- A robot must protect its own existence, so long as doing so does not conflict with the other two laws.
The laws are hierarchically structured, so that Law 1 takes precedence over Law 2, and Law 2 over Law 3.
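For readers who think in code, the hierarchy can be pictured as a priority-ordered decision rule. The sketch below is purely my own illustration; every name in it (Action, harms_human, permitted, and so on) is hypothetical, since Asimov of course never specifies an implementation.

```python
# A toy sketch of the Three Laws as a priority-ordered check.
# All names are invented for illustration, not from Asimov.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would this action injure a human?
    prevents_human_harm: bool  # would inaction let a human come to harm?
    ordered_by_human: bool     # did a human order this action?
    endangers_robot: bool      # does this action risk the robot itself?

def permitted(action: Action) -> bool:
    # Law 1 outranks everything: never harm a human,
    # and never stand idle while a human is harmed.
    if action.harms_human:
        return False
    if action.prevents_human_harm:
        return True  # Law 1 compels this, whatever Laws 2 and 3 say
    # Law 2: obey human orders, unless they conflict with Law 1
    # (already ruled out above).
    if action.ordered_by_human:
        return True
    # Law 3: self-preservation, lowest priority.
    return not action.endangers_robot

# Law 2 outranks Law 3: an order is obeyed even at risk to the robot.
print(permitted(Action(False, False, ordered_by_human=True, endangers_robot=True)))  # True
```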
The three laws look solid, and their simplicity is their strength. But the stories deal with the implications of these three laws as the technology develops. One of the strengths of Asimov’s work is that what looks simple on its face becomes complex as we interact with the new technology.
Some time ago a friend who worked as a computer programmer said something to the effect of, “Computers will do exactly what you tell them to do. When we have a problem with a computer, we likely either a) do not understand what we told the computer to do, or b) told the computer something different from what we thought we told it.” This holds true in the stories we have read so far.
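A trivial, hypothetical illustration of point (b), in Python (the example is mine, not my friend’s): the computer faithfully does what it was told, which turns out not to be what we thought we said.

```python
# We think we told the computer "average the scores."
# We actually told it "divide, then throw away the remainder."

scores = [90, 85, 88]

wrong_average = sum(scores) // len(scores)  # integer division: 87
right_average = sum(scores) / len(scores)   # true division: 87.666...

print(wrong_average, right_average)
```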
The first story involves a robot that a family buys to serve as a companion for a young child. Obviously, they want the robot to be safe and to look out for the safety of the child. The robot would have to be accommodating to get along with the child. But this in turn means that the robot would enjoy what the child enjoys, as only a robot can, which means . . . all the time. What child wouldn’t want a companion that essentially plays with you and accommodates you whenever you want?
The child, naturally, would bond with the robot and forget about other children. The mother in the story sounds exactly like parents today. The concern is the same; only the particulars have changed. Every mother who worries about her child’s attachment to phones, computers, or video games (why can’t they play with children instead of machines?) sounds just like the mother in the opening story, “Robbie.”
But . . . if the child is happy, if the robot protects her from disaster (which he does), and if robots are the way of the future and simply part of how kids grow up these days, then their presence becomes inevitable. In this way, the mother in the story comes across slightly as the “bad guy,” and such is the subversive nature of Asimov’s first story. Asimov wants us, I think, to be precise about the nature of our objection to robots.
- Is it that we dislike change? But change in any society is inevitable.
- Is it that we dislike the speed of change? The change may be uncomfortably fast, but if others are doing it, won’t we have to adapt to keep up? Civilizations that fall behind often get absorbed by other civilizations.
- Is it that we dislike this particular form of technology? OK, but how would a robot differ qualitatively from other technology that we already use? A dishwasher, for example, is a robot that does not move or talk, though it does communicate with us. Our phones cannot move but can talk back to us on some level.
My impression is that with these stories, Asimov wants to force us to come to a clear understanding of what our views of life, technology, and “progress” actually are. We can’t dislike something just because it is new, or just because it is shocking or unnerving. Cars, for example, were an extremely disruptive technology when first introduced, but are now just part of society. But I am also guessing that Asimov would not simply agree that any new thing must therefore be adopted. The hard question remains: where to draw the line, and why?
In the first story, Robbie (the robot) becomes more humanlike the longer he interacts with the child. For example, he learns to have favorite stories. But for humans to interact with robots, they have to learn to think according to the three laws, which means thinking like robots think. In time, some kind of overlap between robot and human “psychology” and behavior becomes inevitable, another unintended consequence of technology.
In all the stories, Asimov sets up the narrative so that the robots cannot really be blamed. They follow instructions. The problem is that we cannot anticipate all the ways in which they might follow those instructions, and how that will change society and humanity all at once.
In one story this means that robots learn to lie in ways similar to humans. As robot technology advances, robots interact more socially with humans. When we interact with those we know, we do not always tell each other the unvarnished truth. We might tell a friend that an outfit looks good even if we don’t think so, to take just one example. After all, we don’t want to “harm” our friend by telling them what we really think. As the stories progress and our interaction with robots grows more complex, the scope of Law 1 (no harm to humans) widens to include this social sense of harm. This, in turn, means that robots start to tell people what they think they want to hear, which leads to great confusion.