In Of Robots, Empires and Pencils, Sally Morem wrote, “The Foundation series and the Robots stories, along with Arthur C. Clarke’s Childhood’s End, will probably be remembered as the last great and most eloquent arguments put forth for the idea of collectivism in the literature of science fiction.” Morem compares the invisible hand of the market in Leonard Read’s essay I, Pencil to the invisible hand of Hari Seldon in Asimov’s Foundation Trilogy.
I’ve never read Childhood’s End, but I’ve long appreciated Clarke’s ability to put humanity in perspective by playing the science fiction what if? game (“Perhaps the crispest definition is that science fiction is a literature of ‘what if?’” – Christopher Evans). If Childhood’s End is an argument for collectivism, then how does that argument go?
In Childhood’s End, aliens arrive on Earth and transform human children into a form that can merge into “a cosmic mind amalgamated from ancient galactic civilizations, freed from the limitations of ordinary matter”. Morem has suggested that this is “the ultimate dream of socialism. The adults are wholly demoralized. They no longer have children. Most kill themselves. The rest die of old age as the children dance.” Asimov wrote a famous story (The Last Question) in which all humans ended up as part of a group mind. I thought socialism was a theoretical economic system in which an attempt would be made to share resources among all individuals. Little did I know that sharing ultimately leads to mass suicide and transcendence to a group mind.
Morem began her essay with the rather Homo sapiens-centric boast: “Human society is the most astonishing and perplexing of all the universe’s life-forming, self-organizing processes…”. A major part of Clarke’s “what if?” explorations concerned his imagining of “self-organizing processes” that were far beyond “human society”. Is there such a thing as “human society”? Asimov was interested in exploring the many forms that a human society might take in response to scientific advances and technological change. Such thinking “outside of the box” is often unsettling for conservatives who want to defend the “good old ways” that they are comfortable with.
The Foundation Trilogy was a “what if?” exploration of the idea that vast human populations might behave according to mathematically precise laws of “psychohistory”. Many years after Asimov created the Foundation Trilogy, he extended his Foundation stories to include the idea that telepathic robots were guiding humanity towards the formation of a vast group mind, Galaxia. Morem feels that Asimov’s fictional universe where humans spread through the galaxy and were “controlled by robotic minds” is fundamentally flawed, so flawed as to “stretch believability to the breaking point and beyond”.
Asimov imagined telepathic robots who “invented” the Zeroth Law of Robotics, which then compelled them to protect humanity. In order to protect humanity, the robots worked to engineer humanity into the group mind of Galaxia. In Foundation and Earth, it was suggested that by forming Galaxia, humanity could be protected from aliens.
Telepathic robots with positronic brains who used time travel to make sure that humans could colonize the galaxy, telepathic humans forming a group mind, faster-than-light spaceships, unseen aliens threatening humanity, the dead hand of Hari Seldon pushing humanity to its fate by psychohistorical necessity…so, just what is there in all this to stretch believability? Morem seems to suggest that the problem with Asimov’s fantasy futures is that he did not realistically appreciate the full complexity of a market economy. Morem is unhappy with “The Lawgiver imagined by Asimov” who has the power to shape human destiny. As we all know, a market economy is so complex that it “cannot be forced, commanded or ruled from the center. It can only Be.”
In making this argument, Morem shifted the center of discussion from the Foundation and telepathic robots to a short story called The Evitable Conflict, a story in which Asimov imagines supercomputers that could manage the economy of Earth better than people can. What is the argument? We should not use computers to help us manage the world economy? No. “The future state of entire dynamic systems are also impossible to know in advance.” I suspect that Asimov would have granted this…in a science book. However, where was Asimov going in his fiction?
Arthur C. Clarke famously proclaimed that “Any sufficiently advanced technology is indistinguishable from magic.” What magical technologies did Asimov throw into the mix of his Foundation Saga? What if Asimov was imagining a future where time travel was possible? When Asimov extended his Foundation Trilogy, he suggested (in Foundation’s Edge) the idea that positronic robots had used time travel to select a Reality in which humans, rather than extraterrestrials, took control of the galaxy.
In The Start of Eternity, a fan fiction sequel to Asimov’s time travel novel, I follow along with Asimov and imagine that there are positronic robots who can look into the future of the Foundation and see what will happen. Those robots (including Daneel) helped Hari Seldon establish the Foundations. I like to think that psychohistory was a “cover story” that was used to hide the existence of time traveling robots. If so, was Asimov putting forward arguments for collectivism in his Foundation Saga?
The economics of the Foundation always puzzled me. Here was a future human civilization with microfusion and the ability to hurl spaceships across the galaxy. Asimov wrote about determining the genomes of people and transferring a robotic mind into the brain of a human. With all these wonders, Asimov never tried to depict a post-scarcity economy. Instead, the Galactic Empire remained fully capitalistic and just a heartbeat away from going bust and falling back on fossil fuels.
In The Start of Eternity, the alien Huaoshy run a billion-year-old intergalactic civilization that makes use of additional technologies such as nanotechnology. However, that civilization also seems to defy Clarke’s idea of a future in which technologically advanced beings transcend their physical limitations. The idea of a pending technological singularity is now popular in science fiction: a point in the future where life as we know it will end because of accumulating technological advances. Is there a way to prevent a technological singularity and keep humans frozen in a culture that will always have an economy like the one we have now?
Morem wrote, “Human societies cannot be grasped as wholes”. I believe that Asimov tried to suggest that positronic robots could grasp Galaxia as a whole, and that is why they worked for 20,000 years to form Galaxia. Unfortunately, Asimov was taken from us before he was able to continue his Foundation Saga towards a completed Galaxia.
What about the billion-year-old Huaoshy civilization? Did they learn to use genetic engineering and nanotechnology (and anything else they accumulated in a billion years) to move past market economics as we know it? Does the dead hand of Adam Smith “defeat the best intentions of would-be planners every time”? Is it inevitable that a billion years from now we will not have progressed past the type of economic system known to Morem?
Science fiction would seem to be a strange enterprise if we were allowed to play “what if?” with hyperjump spaceships, time travel, nanobots and telepathy, but because economics is “emergent” we could not say “what if?” with respect to future economics. Do we need Capitalistic Anthropics, the theory that our universe was designed to make 20th century capitalism inevitable…for all time?