The Pluralism of the Singularity

Ray Kurzweil has a vision for the future. And it is quite the vision indeed. The Kurzweilian future is conceptualized by such enticing nouns as omniscience, immortality, and utopia. It will mark a new era in human evolution, an era when humans finally transcend the constraints of a biological existence that is imperfect and doomed to fade away in a denouement more anticlimactic than even the most trite and contrived fiction. This new era, this monumental epoch, is referred to as The Singularity.

So what is The Singularity? It is a future period in which technological change will be so rapid, and its impact so profound, that every aspect of human life will be irreversibly transformed. No longer will there be a clear demarcation between human and machine. Instead, life, all life, will become a hybrid of biological and non-biological intelligence.

This profound change will inevitably arise as a logical implication of The Law of Accelerating Returns, which states that the nature of technological progress is exponential. How significant is exponential progress compared to linear progress? Well, if you count linearly for 30 steps, you arrive at, hold your breath, 30. If, however, you count exponentially, doubling at each step, after 30 steps you arrive at roughly 1.07 billion. So, to answer the question: very significant. Technological progress is exponential because we use the latest technology to build the next generation. Each generation grows exponentially in capability, and the speed of that progress accelerates over time. These exponential increases hold for evolutionary processes in general, even biological evolution. The first paradigm of biological evolution was the advent of DNA, which took about one billion years. Once evolution adopted DNA, the next stage, the Cambrian Explosion, went 100 times faster and took only 10 million years. The next step, the evolution of Homo sapiens, took only 100,000 years. This step marked the rise of the first technological species, and the process shifted its emphasis from biological evolution to technological evolution. But the Law of Accelerating Returns still holds sway.
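
For the numerically inclined, here is a minimal sketch of that arithmetic. It assumes the simplest possible exponential, a doubling at every step, which is an illustrative assumption rather than Kurzweil’s exact model.

```python
# Linear versus exponential counting over 30 steps.
# The doubling per step is an illustrative assumption, not Kurzweil's precise curve.

linear_total = sum(1 for _ in range(30))   # add 1 thirty times
exponential_total = 2 ** 30                # double thirty times

print(linear_total)       # 30
print(exponential_total)  # 1073741824 -- roughly 1.07 billion
```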

Think about it. The telephone was patented in 1876. The first cellphone was invented in 1973. And the first iPhone debuted in 2007.

And the progress continues. Every two years, we can put twice as many components on a chip, and because they are closer together, they run faster, so computers become twice as capable for the same price. In the next 25 years, we will make another billion-fold increase in price-performance and shrink the size of these technologies by 100,000-fold.

In the next 25 years, these technologies will be the size of a blood cell.
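
To see how these figures hang together, here is a rough sketch of the compounding arithmetic. The two-year doubling and the billion-fold target are the ones quoted above; everything else in the sketch is simply a consequence of those two numbers.

```python
import math

# Compounding arithmetic behind the figures above (illustrative only).
YEARS = 25
DOUBLING_PERIOD_YEARS = 2  # the "every two years" cadence quoted above

# Growth factor if capability simply doubles every two years for 25 years.
fixed_rate_growth = 2 ** (YEARS / DOUBLING_PERIOD_YEARS)
print(f"Doubling every {DOUBLING_PERIOD_YEARS} years for {YEARS} years: ~{fixed_rate_growth:,.0f}x")

# The doubling period that a billion-fold gain over 25 years would actually require.
implied_period = YEARS / math.log2(1e9)
print(f"A billion-fold gain in {YEARS} years implies doubling every ~{implied_period:.2f} years")
```

The gap between a few-thousand-fold and a billion-fold is where the Law of Accelerating Returns does its heaviest lifting: in Kurzweil’s telling, the doubling interval itself keeps shrinking, so today’s two-year cadence is only the starting point.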

The exponential nature of technological progress implied by the Law of Accelerating Returns allows us to plot a trajectory of technological revolutions, affording preliminary, albeit speculative, answers to the question of when these revolutions will take place. The Singularity will occur due to three overlapping revolutions in genetics, nanotechnology, and robotics.

Revolutionary advances in the field of genetics, or biotechnology, will allow us to master the information processes of our biology, thereby enabling us to reprogram the body away from disease and aging. Nanotechnology, predicted to manifest in the next 25 years, consists essentially of blood-cell-sized devices that we can inject into the bloodstream and implant in the brain. The nanomachines in the bloodstream will be able to download programs, literally anti-virus software, allowing continual assaults on disease and decomposition. In the brain, nanomachines connected to synapses will increase the brain’s natural computational abilities and instantly download information directly to our conscious minds and sensory systems, effectively merging biological intelligence with non-biological intelligence. The revolution in robotics heralds the advent of artificial intelligence, estimated to occur by 2029. By this time, artificially intelligent machines will match or exceed human intelligence. And with this super-human intelligence, we will be able to solve the problems presently threatening our sustained and comfortable existence. With the combined effects of these three overlapping revolutions, we can look forward to a time when we can back up our brains, have brains that are largely non-biological, stop the aging process, and live indefinitely by overcoming our biological limitations through technological integration.

Or so Ray Kurzweil believes. The consensus, however, is far from universal. The Kurzweilian vision of finally creating a utopian existence is an enticing, intriguing, and comforting proposition. It is a pill that we want to swallow easily. However, it is a pill that may prove to be less benign and more caustic, less like Prozac and more like the rage virus from 28 Days Later. A cursory exposition of some of the salient objections and a few broad brush strokes on some other, overlooked, dystopian implications of The Singularity paint a different scenario entirely.

Firstly, there is the problem posed by the prospect of artificial intelligence. I have already covered some of these conceptual and philosophical issues in my article The Robotic Bull in the China Shop and again in the article Godel’s Theorem: Beyond Computation. In the interest of space, I will not reiterate the arguments here, but I do encourage inquiring readers to click on the links for a comprehensive analysis.

Secondly, many detractors have raised concern over Kurzweil’s pervasive, and somewhat myopic, optimism, calling his predictions naive or blindly idealistic. Kurzweil insists that with greater technology, greater problems can be solved. I, along with his many vociferous critics, do not think it is quite that simple. There is no logical reason to conclude that with more technology there will be fewer problems. The opposite is just as likely. I do not believe that the factor holding back progress is a lack of sophisticated technology. We have sophisticated technology. Our problems are too diverse, the solutions too variegated, the interests too conflicting, and the stakes too high to assert that profound change will occur through any one single solution. That is bold reductionism. And it’s bad form.

Thirdly, we have to keep in mind that the Singularity hinges on technological development, on technological devices, built by corporations, conglomerates, private contractors, and governments. These are not technologies that superficially enhance the quality of life through easier access to leisure and entertainment. They are technologies which enable the user to transcend human biology, to eradicate disease, to reverse or terminate the aging process. With these technologies, information and knowledge become limitless. Power boundless. And power is a commodity rarely wrenched from the insatiable hands that grasp, wield, and possess it; or, to put it another way, from the hands that it grasps, wields, and possesses. When speaking of an omnipotence that never fades, never wanes, never dies, only increases exponentially, it is not imprudent to replace the word rarely with the word never. Such power will not come cheap. The classes of haves and have-nots will not disappear; their referents will simply change. The signified of haves will no longer refer to those with an abundance of material possessions, a superfluity of monetary wealth. Instead, it will shift, and refer to the enhanced, the evolved, those transmigrated to a higher plane of reality and existence. And the signified of have-nots? Everyone else.

Understandably, the question has been raised as to whether or not we should even create such technologies. A single grain of sugar, when it is nano-teched at one bit per atom, has more computing power than our human brain by a factor of about one billion. If humanity decides to build this kind of intelligence, we are building gods. It is possible that these beings will be benevolent deities, looking upon us with a mixture of loving pity and unconditional compassion. Or, they could look upon us as we look upon an importunate insect. And what right or argument, either logical or experiential, can we possibly invoke for either contingency? We have none. Benevolence and disgust are equally likely possibilities in such uncharted waters. Even so, I think the question of whether or not humanity should create such beings is entirely irrelevant. Any student of history or philosophy is well aware of the deep, engulfing chasm between what is and what ought to be. Humanity has a poor track record of acting in its own best interest and innumerable precedents of acting simply because it is capable of doing so. It is simply not a matter of what ought to happen; we can be assured that it will happen.
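
The grain-of-sugar figure invites a back-of-envelope check. The sketch below assumes a typical grain of table sugar of roughly 0.6 milligrams of sucrose, which is my assumption rather than Kurzweil’s, together with the one-bit-per-atom encoding stated above; the brain capacity it prints is simply whatever makes the factor-of-a-billion ratio hold, since actual estimates of the brain’s capacity vary enormously.

```python
# Back-of-envelope check of the grain-of-sugar claim.
# Assumption (mine, not from the article): one grain of table sugar ~0.6 mg of sucrose.
# Chemistry: sucrose is C12H22O11, so 45 atoms per molecule at ~342.3 g/mol.

GRAIN_MASS_G = 0.6e-3        # assumed grain mass in grams
SUCROSE_MOLAR_MASS = 342.3   # g/mol
ATOMS_PER_MOLECULE = 45      # 12 C + 22 H + 11 O
AVOGADRO = 6.022e23

atoms = GRAIN_MASS_G / SUCROSE_MOLAR_MASS * AVOGADRO * ATOMS_PER_MOLECULE
print(f"Atoms (and bits, at one bit per atom) in one grain: ~{atoms:.1e}")  # ~5e19

# Brain capacity implied by the article's factor-of-a-billion ratio; real estimates vary widely.
implied_brain_bits = atoms / 1e9
print(f"Implied brain capacity: ~{implied_brain_bits:.1e} bits")  # ~5e10
```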

In summation, the Kurzweilian conceptualization of the post-Singularity world is a beatific utopia. But there is no good reason to believe that it is the necessary logical consequence of such a profound technological epoch. There are some who doubt the Singularity’s occurrence. I do not. I am, however, aggressively skeptical of the implications of the Singularity. Perhaps the only truism concerning the implications of the Singularity is that we cannot possibly know the implications until it happens. And at that point, there is no going back.
