8th Grade Literature: Inevitable v. “Evitable”

This week we finished our look at Asimov’s I, Robot with its final story, “The Evitable Conflict.” Our first story involved a companion robot that could move but not speak. The last story moves decades into the future, to a world where AI/robots essentially run the world through extensive and precise economic planning and coordination. War is a thing of the past, as is unemployment. Unquestionably humanity fares better in this new world, but the gains come with trade-offs.

The final story has a somewhat banal premise. The seamless economy, the “perfect” robots, have performed wonders, but a few of their calculations have been slightly off. These errors have not caused any serious problems. Workers displaced from one industry, for example, quickly find work in another. But could these small errors presage the collapse of machine-driven learning and governance? If so, would the peoples of earth (nations do not really exist anymore) descend into chaos?

The speed of AI advances has brought the subject of technology and human autonomy to the forefront of our minds, but the question is an old one. Many myths deal with this question, as does the Bible.

For example, both Hesiod and Ovid in their mythologies write about technology that comes soon after a golden age, a nod to the idea of Edenic paradise. Hesiod writes,

First of all the deathless gods who dwell on Olympus made a golden race of mortal men who lived in the time of Cronos when he was reigning in heaven. And they lived like gods without sorrow of heart, remote and free from toil and grief: miserable age rested not on them; but with legs and arms never failing they made merry with feasting beyond the reach of all evils. When they died, it was as though they were overcome with sleep, and they had all good things; for the fruitful earth unforced bare them fruit abundantly and without stint. They dwelt in ease and peace upon their lands with many good things, rich in flocks and loved by the blessed gods.

He goes on to write,

But when earth had covered this generation also — they are called blessed spirits of the underworld by men, and, though they are of second order, yet honour attends them also — Zeus the Father made a third generation of mortal men, a brazen race, sprung from ash-trees; and it was in no way equal to the silver age, but was terrible and strong. They loved the lamentable works of Ares and deeds of violence; they ate no bread, but were hard of heart like adamant, fearful men. Great was their strength and unconquerable the arms which grew from their shoulders on their strong limbs. Their armour was of bronze, and their houses of bronze, and of bronze were their implements: there was no black iron. These were destroyed by their own hands and passed to the dank house of chill Hades, and left no name: terrible though they were, black Death seized them, and they left the bright light of the sun.

We see in Genesis this same pattern linking the development of technology with violence. In Genesis 4 the line of Cain first developed the implements of civilization, including cities, tools, and the arts. After killing his brother, Cain was condemned to be a wanderer. Adam and Eve had been covered with “garments of skin” after the Fall, for they could no longer be naked (we should read this in literal, but also metaphorical terms). We could no longer have a direct relationship with creation or God Himself. Cain’s punishment was meant to return him to “nakedness,” to help reconnect with God and the enormity of his terrible deed. Cain rejected that and immediately began to make “coverings” for himself in the form of a city and other implements of civilization.

We see that cities/”civilization” have a bad rap in the first section of Genesis. This obviously starts with Cain, but continues with the Tower of Babel, Sodom and Gomorrah, and Egypt. There are hints of another possible path, however, with Melchizedek in Genesis 14. The same tools that are used to isolate Cain’s line from God can also be used to build the tabernacle and other sacred vessels, which brought the Israelites potentially closer to God. We get a hint of the redemption of cities when King David takes Jerusalem and makes it his capital. Finally, in Revelation 21 we see that redemption means something more than a mere return to the Garden. Instead we have a garden enclosed within a city, which indicates that though technology involves “coverings” that come from the Fall, that too becomes part of the redemptive story.

Our relationship with technology should reflect this tension and this hope. Technology enhances human power and potential, which is not always a bad thing. But at the same time, those same advances make us reliant on the tool itself. For example, the Israelites were not forbidden from using chariots (a significant ancient military technology), but they had to limit their use of them and other means of obtaining power (Dt. 17:16, Is. 31:1, etc.). For centuries technology developed, but at a measured pace. Over the last 150 years, and perhaps especially in the last 60-75 years, the speed of development and its immediate integration into society have made it difficult to know where the line between help and hindrance might be. Many of us might choose to limit our interactions with phones, for example, but nearly all of us have to interact with computers, cars, and a host of other technologies all the time to function at a baseline in the modern world.

In “The Evitable Conflict” Asimov shows himself as essentially an optimist about technology. We can coexist and thrive even with highly advanced and “intelligent” machines guiding us. Many fear that AI will stifle human activity and imagination. Asimov envisions a world where AI instead unlocks and spurs human innovation, and in this Asimov anticipated our modern debates by some 75 years.

In Asimov’s world we design all artificial intelligence to obey the three laws of robotics, and the first law means that robots are not allowed to harm humans. They have to obey humans, and preserve their own existence as well. In the story the characters grapple with the following options as to what happened:

  • The machines have reached wrong conclusions. If true, this means that the premise that we should follow the advice of the machines may be faulty as well.
  • The machines have been fed false information deliberately by an unknown source. If true, this would mean that the machines were being deliberately led to false conclusions, which also would mean that the advice of the machines should always be viewed with suspicion.

By definition the machines cannot be wrong. But even if they were, they have developed too far too quickly for humanity to have any idea how to find the error and repair them. In an ironic twist, the advancement of machines means that only advanced machines can fix the machines.*

The protagonist of the stories, robopsychologist Dr. Susan Calvin, suggests that the robots may be giving slightly inaccurate information on purpose, in a sense, to protect themselves and to preserve human flourishing. At the end of the series of stories in I, Robot, the robots have learned to understand human emotions as well as the fluctuations of human behavior. It is perhaps possible that the AI can make things “perfect” in certain ways, but it also “knows” that humanity won’t really accept that. So the machines adjust, much like in the movie The Matrix.

The Head Coordinator suspects that some other regional heads are actually part of the Society for Humanity, which attempts to push back against our reliance on the machines. He then suggests that such people should be arrested and the organization banned.

Dr. Calvin advises against this. For one, such an action might make them martyrs and inspire more resistance. But her main objection is that the machines have taken such factors into account already. The small “errors” of the machines are in fact there on purpose, to allow humanity enough sense of autonomy that it does not rebel against the machines.

Asimov seems to be in favor of this state of affairs, but admittedly, I find it hard to be sure what he thinks.

Last week I showed the movie Primer to the class, for a variety of reasons. I wanted to expose them to sci-fi in another format, and I wanted to show them that great stories (the movie won many awards, and is a favorite of mine, so I am biased) do not need fancy effects or locales to achieve their purpose. But my main purpose in doing so was so that we could discuss the movie alongside of “The Evitable Conflict.”

The movie’s two main characters have two different perspectives on our relationship to time and causality. Without too many spoilers, the story has two friends, Aaron and Abe, inadvertently invent a means to go back in time for a day or two. They both realize the possibilities inherent in this for good or bad. They decide to use the machine a few times to get rich. But soon a rift develops between them.

  • Aaron wants to unlock the metaphysical possibilities of the device. He doesn’t mind getting rich, but it’s not what really motivates him. Above all, he hates sameness and routine. His job, his middle class life, family, etc.—he chafes against it all. Could things have been different? Now, the machine allows one to try and see.
  • Abe believes that the order of things must be preserved at all cost. He is willing, through a very careful process, to alter their lives (and no one else’s). But messing with order itself introduces the possibility of endless permutations that could destroy reality as we know it.

Most major technology that gets mainstreamed into society seems to have a paradoxical effect on us.

  • On the one hand, the new technology seems to offer nearly limitless possibility.
  • On the other hand, it seems to create more uniformity of our human experience, not more diversity.

For example, Netflix offers thousands of different things to see. But most of the time, most of us,

  • Browse around for 10 minutes, then
  • End up watching one of the top shows advertised by Netflix

One might think that we would constantly be running into people telling us about the great show they unearthed on a streaming platform that we had not heard of before. But for most people, this rarely happens.

Another example . . . when the internet first became mainstream, most believed that consumers would now have almost unlimited options about where to shop. We would no longer be confined to whatever stores happened to be nearby. And yet, most of us most of the time go to Amazon and buy what we need there. Amazon is so efficient and convenient, we feel no need to “shop around” like we might have 30-40 years ago.

Whether consciously or otherwise, Asimov tapped into this paradox with the last story in I, Robot. On one hand, following the advice of the machines leads, by most measures, to greater human flourishing. But this state of affairs seems so homogeneous, so routine, that it seems to be something “less” than human. Is Asimov then suggesting that machines are overall “better” than humans? Perhaps not, but I do think he believes that each has something to teach the other. That in itself is quite the controversial claim.

DM

*My dad has told me often that “back in the day” more or less everyone could repair their car with a small amount of knowledge and a few tools. Now, fixing cars requires specialized knowledge and tools. In many cases, fixing cars first means hooking your car up to a computer for it to diagnose the problem.

8th Grade Literature: Robots on the Brain

This week we began our reading of Isaac Asimov’s I, Robot, a famous and influential collection of short stories oriented around the theme of human interaction with robots in the near future. Asimov wrote these stories in the 1940s and ’50s, but he was remarkably prescient about some of the issues and concerns those in the future (us) would have about the advances in this kind of technology. The stories begin with robots that interact with humans but cannot talk, and progress to robots that make active predictions, invent religions, and learn to lie, posing a host of problems for their human inventors. I don’t see Asimov as warning his readers so much as informing them that the advance of robots, whether ultimately for good or ill, is inevitable.

Asimov has robots developed with three laws:

  • A robot may not injure a human being, or through inaction allow a human being to come to harm
  • A robot must obey the orders of a human being, unless that order conflicts with the first law
  • A robot must protect its own existence so long as doing so does not conflict with the other two laws.

The laws are hierarchically structured, so that Law 1 takes precedence over Law 2, and so forth.
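For readers who like to see ideas as code: the priority structure of the three laws resembles a chain of checks applied in strict order, where a higher law always overrides a lower one. Here is a toy sketch of that idea in Python (my own illustration, with invented field names; nothing like this appears in Asimov):

```python
# Toy model of the three laws as prioritized checks. Each proposed action
# is described by a dict of (invented) boolean flags; the laws are tested
# in order, so Law 1 always outranks Law 2, which outranks Law 3.

def permitted(action):
    # Law 1 outranks everything: any harm to a human forbids the action.
    if action.get("harms_human"):
        return "forbidden by Law 1"
    # Law 2: the robot must obey human orders (obeying an order that would
    # harm a human is already ruled out by the Law 1 check above).
    if action.get("disobeys_human_order"):
        return "forbidden by Law 2"
    # Law 3: self-preservation, which yields to both higher laws.
    if action.get("destroys_self") and not action.get("required_by_higher_law"):
        return "forbidden by Law 3"
    return "permitted"

print(permitted({"harms_human": True}))   # Law 1 wins, whatever else is true
print(permitted({"destroys_self": True, "required_by_higher_law": True}))
```

The interesting part, which the stories exploit, is everything this sketch hides: who decides what counts as “harm,” and what happens when the flags themselves are ambiguous.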

The three laws look solid. Their simplicity is their strength. But the stories deal with the implications of these three laws as the technology develops. One of the strengths of Asimov’s work is that what looks simple on its face becomes complex as we interact with the new technology.

Some time ago a friend who worked as a computer programmer said something to the effect of, “Computers will do exactly what you tell them to do. When we have a problem with a computer, we likely either a) do not understand what we told the computer to do, or b) told the computer something different than what we thought we told it.” This holds true in the stories we have read so far.
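My friend’s point can be shown in a few lines. In this small example of my own devising, we tell Python to split a bill “evenly” using integer division, and it obeys the instruction we gave rather than the one we meant:

```python
# We *meant* "split the bill evenly"; we *said* "divide and drop the
# remainder" -- and the computer does exactly what it was told.
bill, people = 100, 3
share = bill // people          # integer division discards the remainder
total_collected = share * people
print(total_collected)          # 99 -- a dollar vanishes, as instructed
```

The machine made no error; the gap between intention and instruction did all the damage, which is precisely the engine of Asimov’s plots.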

The first story involves a robot that a family buys to serve as a companion for a young child. Obviously, they want the robot to be safe, and to look out for the safety of the child. The robot would have to be accommodating to get along with the child. But this in turn means that the robot would enjoy what the child enjoys as only a robot could, which means . . . all the time. What child wouldn’t want a companion that essentially plays with you and accommodates you whenever you want?

The child, naturally, would bond to the robot and forget about other children. The mother in the story sounds exactly like parents today. The concern is the same; only the particulars have changed. Every mother who worries about her child’s attachment to phones, computers, or video games (why can’t they play with children instead of machines?) sounds just like the mother in the opening story, “Robbie.”

But . . . if the child is happy, and if the robot protects her from disaster (which he does), and if robots are the way of the future and are simply part of how kids grow up these days, then the presence of robots becomes inevitable eventually. In this way, the mother in the story comes across slightly as the “bad guy,” and such is the subversive nature of Asimov’s first story. Asimov wants us, I think, to be precise about the nature of our objection to robots.

  • Is it that we dislike change? But change in any society is inevitable.
  • Is it that we dislike the speed of change? The change may be uncomfortably fast, but if others are doing it, won’t we have to adapt to keep up? Civilizations that fall behind often get absorbed by other civilizations.
  • Is it that we dislike this particular form of technology? Ok, but how would a robot differ qualitatively from other technology that we already use? For example, a dishwasher is a robot that does not move or talk, though it does communicate with us. Our phones cannot move but can talk back to us on some level.

My impression is that with these stories, Asimov wants to force us to come to a clear understanding of what our views of life, technology, and “progress” actually are. We can’t dislike something just because it is new, or just because it is shocking or unnerving. Cars, for example, were an extremely disruptive technology when first introduced, but are now just part of society. But I am also guessing that Asimov would not simply agree that any new thing must therefore be adopted. The hard question remains: where to draw the line, and why?

In the first story, “Robbie” (the robot) becomes more humanlike the longer he interacts with the child. For example, he learns to have favorite stories. But for humans to interact with robots, they have to learn to think according to the 3 Laws, which means thinking like robots think. In time, some kind of overlap between robot and human “psychology” and behavior becomes inevitable, another unintended consequence of technology.

In all the stories, Asimov sets up the narrative so that the robot cannot really be blamed. They follow instructions. The problem is that we cannot anticipate all the ways in which they might follow those instructions, and how that will change society and humanity all at once.

In one story this means that robots learn to lie in ways similar to humans. As robot technology advances, robots interact more socially with humans. When we interact with those we know, we do not always tell each other the unvarnished truth. We might tell a friend that an outfit looks good even if we don’t think so, as just one example. After all, we don’t want to “harm” our friend by telling them what we really think. As the stories progress and our interaction with robots gets more complex, the robots apply Law 1 (no harm to humans) ever more broadly, to emotional as well as physical harm. This, in turn, means that robots start to tell people what they think they want to hear, which leads to great confusion.