CHARLES: Yes, exactly. So if the immune-system software is modified by a hacker to simply turn on its self-replication ability without end-
RAY: -yes, well, we'll have to be careful about that, won't we?
MOLLY 2004: I'll say.
RAY: We have the same problem with our biological immune system. Our immune system is comparably powerful, and if it turns on us that's an autoimmune disease, which can be insidious. But there's still no alternative to having an immune system.
MOLLY 2004: So a software virus could turn the nanobot immune system into a stealth destroyer?
RAY: That's possible. It's fair to conclude that software security is going to be the decisive issue for many levels of the human-machine civilization. With everything becoming information, maintaining the software integrity of our defensive technologies will be critical to our survival. Even on an economic level, maintaining the business model that creates information will be critical to our well-being.
MOLLY 2004: This makes me feel rather helpless. I mean, with all these good and bad nanobots battling it out, I'll just be a hapless bystander.
RAY: That's hardly a new phenomenon. How much influence do you have in 2004 on the disposition of the tens of thousands of nuclear weapons in the world?
MOLLY 2004: At least I have a voice and a vote in elections that affect foreign-policy issues.
RAY: There's no reason for that to change. Providing for a reliable nanotechnology immune system will be one of the great political issues of the 2020s and 2030s.
MOLLY 2004: Then what about strong AI?
RAY: The good news is that it will protect us from malevolent nanotechnology because it will be smart enough to assist us in keeping our defensive technologies ahead of the destructive ones.
NED LUDD: Assuming it's on our side.
RAY: Indeed.
CHAPTER NINE.
Response to Critics
The human mind likes a strange idea as little as the body likes a strange protein and resists it with a similar energy. -W. I. BEVERIDGE

If a ... scientist says that something is possible he is almost certainly right, but if he says that it is impossible he is very probably wrong. -ARTHUR C. CLARKE
A Panoply of Criticisms
In The Age of Spiritual Machines, I began to examine some of the accelerating trends that I have sought to explore in greater depth in this book. ASM inspired a broad variety of reactions, including extensive discussions of the profound, imminent changes it considered (for example, the promise-versus-peril debate prompted by Bill Joy's Wired story, "Why the Future Doesn't Need Us," as I reviewed in the previous chapter). The response also included attempts to argue on many levels why such transformative changes would not, could not, or should not happen. Here is a summary of the critiques I will be responding to in this chapter:
The "criticism from Malthus": It's a mistake to extrapolate exponential trends indefinitely, since they inevitably run out of resources to maintain the exponential growth. Moreover, we won't have enough energy to power the extraordinarily dense computational platforms forecast, and even if we did they would be as hot as the sun. Exponential trends do reach an asymptote, but the matter and energy resources needed for computation and communication are so small per compute and per bit that these trends can continue to the point where nonbiological intelligence is trillions of trillions of times more powerful than biological intelligence. Reversible computing can reduce energy requirements, as well as heat dissipation, by many orders of magnitude. Even restricting computation to "cold" computers will achieve nonbiological computing platforms that vastly outperform biological intelligence.

The "criticism from software": We're making exponential gains in hardware, but software is stuck in the mud. Although the doubling time for progress in software is longer than that for computational hardware, software is also accelerating in effectiveness, efficiency, and complexity. Many software applications, ranging from search engines to games, routinely use AI techniques that were only research projects a decade ago. Substantial gains have also been made in the overall complexity of software, in software productivity, and in the efficiency of software in solving key algorithmic problems. Moreover, we have an effective game plan to achieve the capabilities of human intelligence in a machine: reverse engineering the brain to capture its principles of operation and then implementing those principles in brain-capable computing platforms. Every aspect of brain reverse engineering is accelerating: the spatial and temporal resolution of brain scanning, knowledge about every level of the brain's operation, and efforts to realistically model and simulate neurons and brain regions.

The "criticism from analog processing": Digital computation is too rigid because digital bits are either on or off. Biological intelligence is mostly analog, so subtle gradations can be considered. It's true that the human brain uses digital-controlled analog methods, but we can also use such methods in our machines. Moreover, digital computation can simulate analog transactions to any desired level of accuracy, whereas the converse statement is not true.

The "criticism from the complexity of neural processing": The information processes in the interneuronal connections (axons, dendrites, synapses) are far more complex than the simplistic models used in neural nets. True, but brain-region simulations don't use such simplified models. We have achieved realistic mathematical models and computer simulations of neurons and interneuronal connections that do capture the nonlinearities and intricacies of their biological counterparts. Moreover, we have found that the complexity of processing brain regions is often simpler than the neurons they comprise. We already have effective models and simulations for several dozen regions of the human brain. The genome contains only about thirty to one hundred million bytes of design information when redundancy is considered, so the design information for the brain is of a manageable level.

The "criticism from microtubules and quantum computing": The microtubules in neurons are capable of quantum computing, and such quantum computing is a prerequisite for consciousness. To "upload" a personality, one would have to capture its precise quantum state. No evidence exists to support either of these statements. Even if true, there is nothing that bars quantum computing from being carried out in nonbiological systems. We routinely use quantum effects in semiconductors (tunneling in transistors, for example), and machine-based quantum computing is also progressing. As for capturing a precise quantum state, I'm in a very different quantum state than I was before writing this sentence. So am I already a different person? Perhaps I am, but if one captured my state a minute ago, an upload based on that information would still successfully pass a "Ray Kurzweil" Turing test.

The "criticism from the Church-Turing thesis": We can show that there are broad classes of problems that cannot be solved by any Turing machine. It can also be shown that Turing machines can emulate any possible computer (that is, there exists a Turing machine that can solve any problem that any computer can solve), so this demonstrates a clear limitation on the problems that a computer can solve. Yet humans are capable of solving these problems, so machines will never emulate human intelligence. Humans are no more capable of universally solving such "unsolvable" problems than machines. Humans can make educated guesses to solutions in certain instances, but machines can do the same thing and can often do so more quickly.

The "criticism from failure rates": Computer systems are showing alarming rates of catastrophic failure as their complexity increases. Thomas Ray writes that we are "pushing the limits of what we can effectively design and build through conventional approaches." We have developed increasingly complex systems to manage a broad variety of mission-critical tasks, and failure rates in these systems are very low. However, imperfection is an inherent feature of any complex process, and that certainly includes human intelligence.

The "criticism from 'lock-in'": The pervasive and complex support systems (and the huge investments in these systems) required by such fields as energy and transportation are blocking innovation, so this will prevent the kind of rapid change envisioned for the technologies underlying the Singularity. It is specifically information processes that are growing exponentially in capability and price-performance. We have already seen rapid paradigm shifts in every aspect of information technology, unimpeded by any lock-in phenomenon (despite large infrastructure investments in such areas as the Internet and telecommunications). Even the energy and transportation sectors will witness revolutionary changes from new nanotechnology-based innovations.

The "criticism from ontology": John Searle describes several versions of his Chinese Room analogy. In one formulation a man follows a written program to answer questions in Chinese. The man appears to be answering questions competently in Chinese, but since he is "just mechanically following a written program," he has no real understanding of Chinese and no real awareness of what he is doing. The "man" in the room doesn't understand anything, because, after all, "he is just a computer," according to Searle. So clearly, computers cannot understand what they are doing, since they are just following rules. Searle's Chinese Room arguments are fundamentally tautological, as they just assume his conclusion that computers cannot possibly have any real understanding. Part of the philosophical sleight of hand in Searle's simple analogies is a matter of scale. He purports to describe a simple system and then asks the reader to consider how such a system could possibly have any real understanding. But the characterization itself is misleading. To be consistent with Searle's own assumptions, the Chinese Room system that Searle describes would have to be as complex as a human brain and would, therefore, have as much understanding as a human brain. The man in the analogy would be acting as the central-processing unit, only a small part of the system. While the man may not see it, the understanding is distributed across the entire pattern of the program itself and the billions of notes he would have to make to follow the program. Consider that I understand English, but none of my neurons do. My understanding is represented in vast patterns of neurotransmitter strengths, synaptic clefts, and interneuronal connections.

The "criticism from the rich-poor divide": It's likely that through these technologies the rich may obtain certain opportunities that the rest of humankind does not have access to. This, of course, would be nothing new, but I would point out that because of the ongoing exponential growth of price-performance, all of these technologies quickly become so inexpensive as to become almost free.

The "criticism from the likelihood of government regulation": Governmental regulation will slow down and stop the acceleration of technology. Although the obstructive potential of regulation is an important concern, it has had as yet little measurable effect on the trends discussed in this book. Absent a worldwide totalitarian state, the economic and other forces underlying technical progress will only grow with ongoing advances. Even controversial issues such as stem-cell research end up being like stones in a stream, the flow of progress rushing around them.

The "criticism from theism": According to William A. Dembski, "contemporary materialists such as Ray Kurzweil ... see the motions and modifications of matter as sufficient to account for human mentality." But materialism is predictable, whereas reality is not. "Predictability [is] materialism's main virtue ... and hollowness [is] its main fault." Complex systems of matter and energy are not predictable, since they are based on a vast number of unpredictable quantum events. Even if we accept a "hidden variables" interpretation of quantum mechanics (which says that quantum events only appear to be unpredictable but are based on undetectable hidden variables), the behavior of a complex system would still be unpredictable in practice. All of the trends show that we are clearly headed for nonbiological systems that are as complex as their biological counterparts. Such future systems will be no more "hollow" than humans and in many cases will be based on the reverse engineering of human intelligence. We don't need to go beyond the capabilities of patterns of matter and energy to account for the capabilities of human intelligence.

The "criticism from holism": To quote Michael Denton, organisms are "self-organizing, ... self-referential, ... self-replicating, ... reciprocal, ... self-formative, and ... holistic." Such organic forms can be created only through biological processes, and such forms are "immutable, ... impenetrable, and ... fundamental realities of existence."1 It's true that biological design represents a profound set of principles. However, machines can use-and already are using-these same principles, and there is nothing that restricts nonbiological systems from harnessing the emergent properties of the patterns found in the biological world.
I've engaged in countless debates and dialogues responding to these challenges in a diverse variety of forums. One of my goals for this book is to provide a comprehensive response to the most important criticisms I have encountered. Most of my rejoinders to these critiques on feasibility and inevitability have been discussed throughout this book, but in this chapter I want to offer a detailed reply to several of the more interesting ones.
The Criticism from Incredulity
Perhaps the most candid criticism of the future I have envisioned here is simple disbelief that such profound changes could possibly occur. Chemist Richard Smalley, for example, dismisses the idea of nanobots being capable of performing missions in the human bloodstream as just "silly." But scientists' ethics call for caution in assessing the prospects for current work, and such reasonable prudence unfortunately often leads scientists to shy away from considering the power of generations of science and technology far beyond today's frontier. With the rate of paradigm shift occurring ever more quickly, this ingrained pessimism does not serve society's needs in assessing scientific capabilities in the decades ahead. Consider how incredible today's technology would seem to people even a century ago.
A related criticism is based on the notion that it is difficult to predict the future, and any number of bad predictions from other futurists in earlier eras can be cited to support this. Predicting which company or product will succeed is indeed very difficult, if not impossible. The same difficulty occurs in predicting which technical design or standard will prevail. (For example, how will the wireless-communication protocols WiMAX, CDMA, and 3G fare over the next several years?) However, as this book has extensively argued, we find remarkably precise and predictable exponential trends when assessing the overall effectiveness (as measured by price-performance, bandwidth, and other measures of capability) of information technologies. For example, the smooth exponential growth of the price-performance of computing dates back over a century. Given that the minimum amount of matter and energy required to compute or transmit a bit of information is known to be vanishingly small, we can confidently predict the continuation of these information-technology trends at least through this next century. Moreover, we can reliably predict the capabilities of these technologies at future points in time.
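As an illustration of the kind of aggregate forecast being described (my sketch, not the book's), the arithmetic of a steady exponential trend fits in a few lines; the doubling time used below is a hypothetical parameter for illustration, not a figure taken from this passage.

```python
def projected_gain(years: float, doubling_time_years: float) -> float:
    """Capability multiplier after `years`, assuming a steady exponential
    trend (e.g., in price-performance) with the given doubling time."""
    return 2 ** (years / doubling_time_years)

# Illustrative only: with a hypothetical one-year doubling time, a
# twenty-year horizon implies roughly a millionfold gain (2^20).
print(f"{projected_gain(20, 1.0):,.0f}x")
```

The point of the sketch is that such a forecast depends only on the aggregate doubling time of the trend, not on which particular product, company, or standard ends up delivering it.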
Consider that predicting the path of a single molecule in a gas is essentially impossible, but certain properties of the entire gas (composed of a great many chaotically interacting molecules) can reliably be predicted through the laws of thermodynamics. Analogously, it is not possible to reliably predict the results of a specific project or company, but the overall capabilities of information technology (composed of many chaotic activities) can nonetheless be dependably anticipated through the law of accelerating returns.
Many of the furious attempts to argue why machines-nonbiological systems-cannot ever possibly compare to humans appear to be fueled by this basic reaction of incredulity. The history of human thought is marked by many attempts to refuse to accept ideas that seem to threaten the accepted view that our species is special. Copernicus's insight that the Earth was not at the center of the universe was resisted, as was Darwin's that we were only slightly evolved from other primates. The notion that machines could match and even exceed human intelligence appears to challenge human status once again.
In my view there is something essentially special, after all, about human beings. We were the first species on Earth to combine a cognitive function and an effective opposable appendage (the thumb), so we were able to create technology that would extend our own horizons. No other species on Earth has accomplished this. (To be precise, we're the only surviving species in this ecological niche-others, such as the Neanderthals, did not survive.) And as I discussed in chapter 6, we have yet to discover any other such civilization in the universe.
The Criticism from Malthus
Exponential Trends Don't Last Forever. The classical metaphorical example of exponential trends hitting a wall is known as "rabbits in Australia." A species happening upon a hospitable new habitat will expand its numbers exponentially until its growth hits the limits of the ability of that environment to support it. Approaching this limit to exponential growth may even cause an overall reduction in numbers-for example, humans noticing a spreading pest may seek to eradicate it. Another common example is a microbe that may grow exponentially in an animal body until a limit is reached: the ability of that body to support it, the response of its immune system, or the death of the host.
Even the human population is now approaching a limit. Families in the more developed nations have mastered means of birth control and have set relatively high standards for the resources they wish to provide their children. As a result population expansion in the developed world has largely stopped. Meanwhile people in some (but not all) underdeveloped countries have continued to seek large families as a means of social security, hoping that at least one child will survive long enough to support them in old age. However, with the law of accelerating returns providing more widespread economic gains, the overall growth in human population is slowing.
So isn't there a comparable limit to the exponential trends that we are witnessing for information technologies?
The answer is yes, but not before the profound transformations described throughout this book take place. As I discussed in chapter 3, the amount of matter and energy required to compute or transmit one bit is vanis.h.i.+ngly small. By using reversible logic gates, the input of energy is required only to transmit results and to correct errors. Otherwise, the heat released from each computation is immediately recycled to fuel the next computation.
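As a small side illustration of what reversibility means at the logic level (my example, not the book's), the Toffoli gate is a standard reversible gate: it can implement ordinary logic, yet it maps inputs to outputs one-to-one, so no bit is erased and, in principle, no Landauer heat needs to be dissipated by the logic itself.

```python
from itertools import product

def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    """Controlled-controlled-NOT: flips c only when a and b are both 1."""
    return a, b, c ^ (a & b)

# The gate is its own inverse, so every computation can be run backward:
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits

# No two inputs map to the same output (a bijection on the 8 states),
# which is the formal sense in which no information is erased.
assert len({toffoli(*bits) for bits in product((0, 1), repeat=3)}) == 8
print("Toffoli gate: reversible on all 8 input states")
```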
As I discussed in chapter 5, nanotechnology-based designs for virtually all applications-computation, communication, manufacturing, and transportation-will require substantially less energy than they do today. Nanotechnology will also facilitate capturing renewable energy sources such as sunlight. We could meet all of our projected energy needs of thirty trillion watts in 2030 with solar power if we captured only 0.03 percent (three ten-thousandths) of the sun's energy as it hit the Earth. This will be feasible with extremely inexpensive, lightweight, and efficient nanoengineered solar panels together with nano-fuel cells to store and distribute the captured energy.
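A rough back-of-the-envelope check of that solar figure (my arithmetic; the solar constant and Earth radius below are standard textbook values, not numbers given in the passage, and atmospheric and conversion losses are ignored):

```python
import math

solar_constant = 1.361e3   # W per square meter at the top of the atmosphere (approx.)
earth_radius = 6.371e6     # meters (mean radius)

intercepted = solar_constant * math.pi * earth_radius ** 2   # sunlight hitting Earth, ~1.7e17 W
captured = intercepted * 3e-4                                # 0.03 percent of it

print(f"Sunlight intercepted by Earth: {intercepted:.1e} W")
print(f"0.03 percent captured:         {captured:.1e} W")    # ~5e13 W, versus the 3e13 W projected need
```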
A Virtually Unlimited Limit. As I discussed in chapter 3, an optimally organized 2.2-pound computer using reversible logic gates has about 10^25 atoms and can store about 10^27 bits. Just considering electromagnetic interactions between the particles, there are at least 10^15 state changes per bit per second that can be harnessed for computation, resulting in about 10^42 calculations per second in the ultimate "cold" 2.2-pound computer. This is about 10^16 times more powerful than all biological brains today. If we allow our ultimate computer to get hot, we can increase this further by as much as 10^8-fold. And we obviously won't restrict our computational resources to one kilogram of matter but will ultimately deploy a significant fraction of the matter and energy on the Earth and in the solar system and then spread out from there.
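The arithmetic behind those exponents is a one-line product; here is a minimal check (mine, not the book's), where the comparison to biology assumes roughly 10^16 calculations per second per human brain and on the order of 10^10 humans, consistent with the 10^16 ratio stated above.

```python
bits = 1e27                            # storage of the "cold" 2.2-pound reversible computer
state_changes_per_bit_per_sec = 1e15

cold_cps = bits * state_changes_per_bit_per_sec   # ~1e42 calculations per second
all_biological_cps = 1e16 * 1e10                  # assumed: ~1e16 cps per brain, ~1e10 brains

print(f"Ultimate cold computer: {cold_cps:.0e} cps")
print(f"Advantage over all biological brains: {cold_cps / all_biological_cps:.0e}x")   # ~1e16
```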
Specific paradigms do reach limits. We expect that Moore's Law (concerning the shrinking of the size of transistors on a flat integrated circuit) will hit a limit over the next two decades. The date for the demise of Moore's Law keeps getting pushed back. The first estimates predicted 2002, but now Intel says it won't take place until 2022. But as I discussed in chapter 2, every time a specific computing paradigm was seen to approach its limit, research interest and pressure increased to create the next paradigm. This has already happened four times in the century-long history of exponential growth in computation (from electromechanical calculators to relay-based computers to vacuum tubes to discrete transistors to integrated circuits). We have already achieved many important milestones toward the next (sixth) paradigm of computing: three-dimensional self-organizing circuits at the molecular level. So the impending end of a given paradigm does not represent a true limit.