Yes, Virginia, there is a Singularity. Plot elements such as molecular nanotechnology, artificial superintelligence, and indefinite life extension are fun topics to explore in science fiction stories. When do rational people cross the boundary between (1) treating the idea of a technological singularity as an interesting issue that we should pay attention to and (2) predicting “the” date when the technological singularity will arrive? Part of the answer to this puzzle is: follow the money. Singularitarianism is pushed in much the same way that the USA was pushed into war in Iraq by false claims about weapons of mass destruction. Why is it so hard for people (and I mean all of us) to sort fact from fantasy?
At the Accelerating Future blog, Michael Anissimov says, “I’m just writing this post to let my readers know that I no longer think that technological change is reliably accelerating.” That kind of talk reminds me of the period after the invasion of Iraq when the rats started jumping off the Bush administration’s ship and saying, “we were just kidding about the weapons of mass destruction.”
People are susceptible to wishful thinking and propaganda. This is why skepticism is so important. What should a skeptic say about artificial superintelligence and indefinite life-extension? First, these are biology issues. A skeptic should ask: is it non-biologists who are the most enthusiastic about progress towards these possible future technologies? How many biologists who enthusiastically talk about life extension are just trying to get their hands on research funding? Second, what do we actually know about the biological basis of human intelligence and human aging? Is there anything like a logical set of steps that could form a path from neuroscience to “superintelligence”? Does the fact that people live longer now than in the past in any way suggest that “indefinite life-extension” will become possible or desirable? What about “molecular nanotechnology”? How can anyone predict what will become possible for such a new field?
Having said all of the above, I’m still interested in the question of why some human cultures support scientific research and experience technological progress while others do not. Why is the importance of experimentation and observation (and skepticism) recognized and welcomed in some cultures while being ignored or even fought against by others?
In the case of artificial intelligence research there might be many interesting social reasons for slow progress. In a world where many research dollars for artificial intelligence have come from government agencies that want technology for military purposes, do most scientists simply avoid the field? Artificial intelligence research sits at the intersection of biology and computer science. Do the people who could do the needed interdisciplinary research get funding, or does funding just go to microspecialists who will never make any real progress?
In 1833 William Miller predicted that the end was near. For adherents of Millerism, it was a “Great Disappointment” when the world did not end. Most people alive today lived through the millennialism of the year 2000. Now we have to suffer until the end of 2012. What will be the next year after 2012 to be milked by millenarianism? Alan Turing was among the many who predicted machines with human-like intelligence; he expected such devices by the start of this century. Now the adherents of Singularitarianism have made predictions for the arrival date of the “singularity”. Here is my prediction: we will see these predictions slip ten years further into the future for every five years that pass.
My hope is that science fiction writers will continue to write stories that explore the implications of technological change without being greatly influenced by the folks who imagine that the end of the world as we know it is near. Sure, there is $$$$$ to be made in stories about the end of the world, but after a while all that noise becomes boring.