I discovered last summer, when I read Isaac Asimov’s Foundation, that he makes great summer reading. I mean this as high praise. Creating bad summer fiction might be easy, but keeping it light and entertaining while still thought-provoking enough that the reader never feels like a total sellout requires a graceful touch.
His I, Robot achieves the same delicate balance. The book has a remarkable coherence given that Asimov assembled it from several short stories written over a period of about fifteen years. Like the best science fiction, it seems to grow only more relevant as time marches on. As a special bonus, he anticipates the rise of Asia and the decline of Europe. But nothing he wrote could top those sideburns.
Asimov muddies the waters well and poses complex questions, but he comes down on the side that robots benefit mankind. I remain unconvinced, though more from gut-level reaction than from any absolute argument. As technology progresses in the stories, robots become superior to humans in many ways: faster, stronger, more durable, and more efficient. To reinforce their sense of control over robots, humans program them to address humans as “master,” while many of the male characters call robots “boy,” which Asimov surely knows conjures up connotations of slavery. Perhaps we should not think so much about robots rising up and taking over. Perhaps we should think about what damage we would do to our own souls if we created servants to do our every bidding.
But if we treat robots with deference and respect, would that make them our equals, and essentially human? Not necessarily; we can treat trees with respect, after all. Treating trees or even dogs with respect does not unsettle us, because such interactions pose no challenge to our sense of humanity. The likely proliferation of walking, talking robots within the next few decades even raises the question of what tone we should take in addressing them. As Brian Christian noted, part of the confusion regarding our humanity may lie not just in the advance of technology, but in the fact that we have grown worse at being human than we once were.
Asimov also makes us realize that the very terms we use make a difference in our perceptions. Is a robot essentially a computer? If so, then are computers robots? Very few of us, I think, would be comfortable with this. I use a computer, not a robot, thank you very much.
As time marches on within I, Robot, the machines grow more advanced and more integrated into society. Eventually they come to direct the world’s economy and much of governance itself. If the First Law of Robotics holds that a robot may not harm a human being or, through inaction, allow a human being to come to harm, then why fear anything robots do? For Asimov, with robots in charge the world unites, war stops, and people become more productive.
At the very end, Asimov tips his hand as to why he believes robots will be beneficial for us. According to him, much of the misery mankind has suffered has resulted from impersonal factors like geographic resource distribution and macro-economics rather than from the personal choices of individuals. He asserts that mankind has always been at the mercy of forces beyond its control, forces beyond our comprehension that drive us at times to try to destroy each other. Robots and computers, with their vastly more efficient brains, can manage those forces for us; the factors that brought conflict in the past come under effective robotic management.
This is the root of my fear about the possible coming of increased computer and robotic domination. Abdicating responsibility to robots means denying part of our humanity. Putting a robot in charge of our economy would amount to moral and intellectual laziness, a denial of part of the image of God within us. I find views of history that render us passive dangerous. Should we reduce ourselves to “Can’t somebody else do it?”
Of course, I’m probably overreacting and blind to the ways in which I rely on computers/robots all the time. But still, Asimov tips the scales towards something problematic.
Another issue: why was Asimov so high on science in the direct aftermath of the atomic bomb, while today we seem so much warier? The Terminator and Matrix film series, Blade Runner, and the new Battlestar Galactica all proclaim doom for our future because of our continuing dependence on technology. Even the recent Will Smith version of I, Robot strongly modifies Asimov’s original message in a more negative direction (while also, as you might expect, strongly changing elements of the story).
But an edgy show like The Outer Limits went even further than Asimov 50 years ago in proclaiming the “robots = good” message. The prosecutor and sheriff in this episode represent pure, ignorant anti-science sentiment . . .
We use computers far more than people did 50 years ago. Why do we proclaim our fears out of one side of our mouths while rejoicing in the latest gadget with the other? How can we make sense of this? Why did an era that lived in the shadow of nuclear annihilation far more than we do believe so much more in robots? Many have claimed that Hiroshima marked the high-water mark of the scientific worldview in the West. Is this true, or do we still live in an era dominated by an Enlightenment-oriented scientific outlook?
I, for one, do not have the answers, but I would be curious to hear any feedback.
Blessings,
Dave