This week we finished our look at Asimov’s I, Robot with its final story, “The Evitable Conflict.” Our first story involved a companion robot that could move but not speak. The last story moves decades into the future to a world where AI/robots essentially run the world through extensive and precise economic planning and coordination. War is a thing of the past, as is unemployment. Unquestionably humanity fares better in this new world, but the gains come with trade-offs.
The final story has a somewhat banal premise. The seamless economy, the “perfect” robots, have performed wonders, but a few of their calculations have been slightly off. These errors have not caused any serious problems. Workers displaced from one industry, for example, quickly find work in another. But could these small errors presage the collapse of machine-driven planning and governance? If so, would the peoples of earth (nations do not really exist anymore) descend into chaos?
The speed of AI advances has brought the subject of technology and human autonomy to the forefront of our minds, but the question is an old one. Many myths deal with this question, as does the Bible.
For example, both Hesiod and Ovid in their mythologies write about technology that comes soon after a golden age, a nod to the idea of Edenic paradise. Hesiod writes,
First of all the deathless gods who dwell on Olympus made a golden race of mortal men who lived in the time of Cronos when he was reigning in heaven. And they lived like gods without sorrow of heart, remote and free from toil and grief: miserable age rested not on them; but with legs and arms never failing they made merry with feasting beyond the reach of all evils. When they died, it was as though they were overcome with sleep, and they had all good things; for the fruitful earth unforced bare them fruit abundantly and without stint. They dwelt in ease and peace upon their lands with many good things, rich in flocks and loved by the blessed gods.
He goes on to write,
But when earth had covered this generation also — they are called blessed spirits of the underworld by men, and, though they are of second order, yet honour attends them also — Zeus the Father made a third generation of mortal men, a brazen race, sprung from ash-trees; and it was in no way equal to the silver age, but was terrible and strong. They loved the lamentable works of Ares and deeds of violence; they ate no bread, but were hard of heart like adamant, fearful men. Great was their strength and unconquerable the arms which grew from their shoulders on their strong limbs. Their armour was of bronze, and their houses of bronze, and of bronze were their implements: there was no black iron. These were destroyed by their own hands and passed to the dank house of chill Hades, and left no name: terrible though they were, black Death seized them, and they left the bright light of the sun.
We see in Genesis this same pattern linking the development of technology with violence. In Genesis 4 the line of Cain first developed the implements of civilization, including cities, tools, and the arts. After killing his brother, Cain was condemned to be a wanderer. Adam and Eve had been covered with “garments of skin” after the Fall, for they could no longer be naked (we should read this in literal, but also metaphorical terms). We could no longer have a direct relationship with creation or God Himself. Cain’s punishment was meant to return him to “nakedness,” to help reconnect with God and the enormity of his terrible deed. Cain rejected that and immediately began to make “coverings” for himself in the form of a city and other implements of civilization.
We see that cities/“civilization” have a bad rap in the first section of Genesis. This obviously starts with Cain, but continues with the Tower of Babel, Sodom and Gomorrah, and Egypt. There are hints of another possible path, however, with Melchizedek in Genesis 14. The same tools that are used to isolate Cain’s line from God can also be used to build the tabernacle and other sacred vessels, which brought the Israelites potentially closer to God. We get a hint of the redemption of cities when King David takes Jerusalem and makes it his capital. Finally, in Revelation 21 we see that redemption means something more than a mere return to the Garden. Instead we have a garden enclosed within a city, which indicates that though technology involves “coverings” that come from the Fall, that too becomes part of the redemptive story.
Our relationship with technology should reflect this tension and this hope. Technology enhances human power and potential, which is not always a bad thing. But at the same time, those same advances make us reliant on the tool itself. For example, the Israelites were not forbidden from using chariots (a significant ancient military technology), but they had to limit their use of them and other means of obtaining power (Dt. 17:16, Is. 31:1, etc.). For centuries technology developed, but at a measured pace. Over the last 150 years, and perhaps especially in the last 60-75 years, the speed of development and its immediate integration into society has made it difficult to know where the line between help and hindrance might be. Many of us might choose to limit our interactions with phones, for example, but nearly all of us have to interact with computers, cars, and a host of other technologies all the time to function at a baseline in the modern world.
In “The Evitable Conflict” Asimov shows himself as essentially an optimist about technology. We can coexist and thrive even with highly advanced and “intelligent” machines guiding us. Many fear that AI will stifle human activity and imagination. Asimov envisions a world where AI instead unlocks and spurs human innovation, and in this Asimov anticipated our modern debates by some 75 years.
In Asimov’s world all artificial intelligence is designed to obey the three laws of robotics: the first law forbids robots from harming humans, and the second and third require them to obey humans and to preserve their own existence. In the story the characters grapple with the following options as to what happened:
- The machines have made wrong conclusions. If true, this means that the premise that we should follow the advice of the machines may be faulty as well.
- The machines have been deliberately fed false information by an unknown source. If true, this would mean that the machines were being led to false conclusions, which also would mean that the advice of the machines should always be viewed with suspicion.
By definition the machines cannot be wrong. But even if they were, they have developed too far too quickly for humanity to have any idea how to find the error and repair them. In an ironic twist, the advancement of machines means that only advanced machines can fix the machines.*
The protagonist of the stories, robopsychologist Dr. Susan Calvin, suggests that the robots may be giving slightly inaccurate information on purpose, in a sense, to protect themselves and to preserve human flourishing. At the end of the series of stories in I, Robot, the robots have learned to understand human emotions as well as the fluctuations of human behavior. It is perhaps possible that the AI can make things “perfect” in certain ways, but it also “knows” that humanity won’t really accept that. So the machines adjust, much like in the movie The Matrix.
The Head Coordinator suspects that some other regional heads are actually part of the Society for Humanity, which attempts to push back against our reliance on the machines. He then suggests that such people should be arrested and the organization banned.
Dr. Calvin advises against this. For one, such an action might make them martyrs and inspire more resistance. But her main objection is that the machines have taken such factors into account already. The small “errors” of the machines are in fact there on purpose to allow for humanity to have enough sense of autonomy that they do not rebel against the machines.
Asimov seems to be in favor of this state of affairs, but admittedly, I find it hard to be sure what he thinks.
Last week I showed the movie Primer to the class, for a variety of reasons. I wanted to expose them to sci-fi in another format, and I wanted to show them that great stories (the movie won many awards, and is a favorite of mine, so I am biased) do not need fancy effects or locales to achieve their purpose. But my main purpose was so that we could discuss the movie alongside “The Evitable Conflict.”
The movie’s two main characters have two different perspectives on our relationship to time and causality. Without too many spoilers, the story has two friends, Aaron and Abe, inadvertently invent a means to go back in time for a day or two. They both realize the possibilities inherent in this for good or bad. They decide to use the machine a few times to get rich. But soon a rift develops between them.
- Aaron wants to unlock the metaphysical possibilities of the device. He doesn’t mind getting rich, but it’s not what really motivates him. Above all, he hates sameness and routine. His job, his middle class life, family, etc.—he chafes against it all. Could things have been different? Now, the machine allows one to try and see.
- Abe believes that the order of things must be preserved at all cost. He is willing, through a very careful process, to alter their lives (and no one else’s). But messing with order itself introduces the possibility of endless permutations that could destroy reality as we know it.
Most major technology that gets mainstreamed into society seems to have a paradoxical effect on us.
- On the one hand, the new technology seems to offer nearly limitless possibility.
- On the other hand, it seems to create more uniformity of our human experience, not more diversity.
For example, Netflix offers thousands of different things to see. But most of the time, most of us,
- Browse around for 10 minutes, then
- End up watching one of the top shows advertised by Netflix
One might think that we would constantly be running into people telling us about the great show they unearthed on a streaming platform that we had not heard of before. But for most people, this rarely happens.
Another example . . . when the internet first became mainstream, most believed that consumers would now have almost unlimited options about where to shop. We would no longer be confined to whatever stores were located nearby. And yet, most of us most of the time go to Amazon and buy what we need there. Amazon is so efficient and convenient, we feel no need to “shop around” like we might have 30-40 years ago.
Whether consciously or otherwise, Asimov tapped into this paradox with the last story in I, Robot. On one hand, following the advice of the machines leads to, by most measures, greater human flourishing. But this state of affairs seems so homogeneous, so routine, that it seems to be something “less” than human. Is Asimov then suggesting that machines are overall “better” than humans? Perhaps not, but I do think he believes that each has something to teach the other. That in itself is quite the controversial claim.
DM
*My dad has told me often that “back in the day” more or less everyone could repair their car with a small amount of knowledge and a few tools. Now, fixing cars requires specialized knowledge and tools. In many cases, fixing cars first means hooking your car up to a computer for it to diagnose the problem.