I cannot say that Allen and similar critics would necessarily have been convinced by the arguments I made in that book, but at least he and others could have responded to what I actually wrote. Allen argues that "the Law of Accelerating Returns (LOAR)...is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a lower level. A classic example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, it models each particle as following a random walk, so by definition we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are quite predictable to a high degree of precision, according to the laws of thermodynamics. So it is with the law of accelerating returns: each technology project and contributor is unpredictable, yet the overall trajectory, as quantified by basic measures of price/performance and capacity, nonetheless follows a remarkably predictable path.
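The emergent predictability that the LOT analogy relies on is easy to demonstrate in a few lines of code. In the sketch below (the parameters are arbitrary, chosen only for illustration), each simulated particle follows an unpredictable random walk, yet the ensemble statistics land on values fixed by statistical law:

```python
import random

def simulate(n_particles=2_000, n_steps=500, seed=42):
    """Each particle follows an unbiased random walk on a line."""
    rng = random.Random(seed)
    positions = [0.0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            positions[i] += rng.choice((-1.0, 1.0))
    return positions

positions = simulate()

# No individual endpoint is predictable, yet the ensemble statistics are
# pinned down by statistical law:
mean = sum(positions) / len(positions)                    # close to 0
rms = (sum(p * p for p in positions) / len(positions)) ** 0.5
# rms lands near sqrt(n_steps), about 22.4 here, for essentially any seed
```

Change the seed and every individual trajectory changes, but the mean and root-mean-square displacement barely move.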
If computer technology were being pursued by only a handful of researchers, it would indeed be unpredictable. But it's the product of a sufficiently dynamic system of competitive projects that a basic measure of its price/performance, such as calculations per second per constant dollar, follows a very smooth exponential path, dating back to the 1890 American census, as I noted in the previous chapter. While the theoretical basis for the LOAR is presented extensively in The Singularity Is Near, the strongest case for it is made by the extensive empirical evidence that I and others present.
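A smooth exponential is a straight line on a logarithmic plot, which is how such trends are usually verified. The sketch below fits a log-linear trend to invented round numbers (they are for illustration only, not the actual historical data behind the charts) and recovers the implied doubling time:

```python
import math

# Illustrative figures only: calculations per second per constant dollar,
# invented round numbers doubling roughly every 1.5 years.
years = [1980, 1985, 1990, 1995, 2000, 2005, 2010]
cps_per_dollar = [1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8]

# An exponential trend is a straight line in log space; recover its slope
# by ordinary least squares on log10 of the values.
n = len(years)
ys = [math.log10(v) for v in cps_per_dollar]
x_mean = sum(years) / n
y_mean = sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(years, ys))
         / sum((x - x_mean) ** 2 for x in years))
doubling_time_years = math.log10(2) / slope
print(f"doubling time: {doubling_time_years:.2f} years")  # about 1.5 years
```

The same fit applied to real price/performance series is what produces the smooth exponential curves described above.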
Allen writes that "these 'laws' work until they don't." Here he is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining, for example, the trend of creating ever smaller vacuum tubes, the paradigm for improving computation in the 1950s, it's true that it continued until it didn't. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm. The technology of transistors kept the underlying trend of the exponential growth of price/performance of computation going, and that led to the fifth paradigm (Moore's law) and the continual compression of features on integrated circuits. There have been regular predictions that Moore's law will come to an end. The semiconductor industry's "International Technology Roadmap for Semiconductors" projects seven-nanometer features by the early 2020s.2 At that point key features will be the width of thirty-five carbon atoms, and it will be difficult to continue shrinking them any farther. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, computing in three dimensions, to continue exponential improvement in price/performance.
Intel projects that three-dimensional chips will be mainstream by the teen years; three-dimensional transistors and 3-D memory chips have already been introduced. This sixth paradigm will keep the LOAR going with regard to computer price/performance to a time later in this century when a thousand dollars' worth of computation will be trillions of times more powerful than the human brain.3 (It appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain.)4 Allen then goes on to give the standard argument that software is not progressing in the same exponential manner as hardware.
I addressed this issue at length in The Singularity Is Near, citing different methods of measuring complexity and capability in software that do demonstrate a similar exponential growth.5 One recent study ("Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology," by the President's Council of Advisors on Science and Technology) states the following: Even more remarkable, and even less widely understood, is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed. The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade.... Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later, in 2003, this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.
The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.
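The decomposition quoted above is multiplicative: hardware and algorithmic gains compound rather than add. A quick check of the arithmetic, using the rounded figures as quoted:

```python
# Checking Grötschel's figures as quoted above: hardware and algorithmic
# gains multiply rather than add.
hardware_factor = 1_000      # processor speed, 1988 -> 2003
algorithm_factor = 43_000    # linear programming algorithms, 1988 -> 2003
total = hardware_factor * algorithm_factor
print(total)  # 43,000,000: the overall improvement factor

# Equivalently: 82 years of 1988-era computation compressed to ~1 minute.
minutes_1988 = 82 * 365.25 * 24 * 60   # the 1988 solve time, in minutes
print(minutes_1988 / total)            # close to 1 minute
```

The 43-million-fold improvement is simply the product of the two factors, which is why neither hardware nor algorithms alone accounts for the trend.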
Note that the linear programming that Grötschel cites above as having benefited from an improvement in performance of 43 million to 1 is the mathematical technique used to optimally assign resources in a hierarchical memory system such as the HHMM that I discussed earlier. I cite many other similar examples in The Singularity Is Near.6 Regarding AI, Allen is quick to dismiss IBM's Watson, an opinion shared by many other critics. Many of these detractors don't know anything about Watson other than the fact that it is software running on a computer (albeit a parallel one with 720 processor cores). Allen writes that systems such as Watson "remain brittle, their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific areas."
First of all, we could make a similar observation about humans. I would also point out that Watson's "specific areas" include all of Wikipedia plus many other knowledge bases, which hardly constitute a narrow focus. Watson deals with a vast range of human knowledge and is capable of dealing with subtle forms of language, including puns, similes, and metaphors in virtually all fields of human endeavor. It's not perfect, but neither are humans, and it was good enough to be victorious on Jeopardy! over the best human players.
Allen argues that Watson was assembled by the scientists themselves, building each link of narrow knowledge in specific areas. This is simply not true. Although a few areas of Watson's data were programmed directly, Watson acquired the significant majority of its knowledge on its own by reading natural-language documents such as Wikipedia. That represents its key strength, as does its ability to understand the convoluted language in Jeopardy! queries (answers in search of a question).
As I mentioned earlier, much of the criticism of Watson is that it works through statistical probabilities rather than "true" understanding. Many readers interpret this to mean that Watson is merely gathering statistics on word sequences. The term "statistical information" in the case of Watson actually refers to distributed coefficients and symbolic connections in self-organizing methods such as hierarchical hidden Markov models. One could just as easily dismiss the distributed neurotransmitter concentrations and redundant connection patterns in the human cortex as "statistical information." Indeed we resolve ambiguities in much the same way that Watson does: by considering the likelihood of different interpretations of a phrase.
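The distinction matters: "statistical" here means probabilistic inference over structured models, not word-frequency counting. A toy hidden Markov model makes the idea of weighing competing interpretations concrete (the states, words, and probabilities below are all invented for illustration; Watson's actual models are far larger and hierarchical):

```python
# A toy HMM: two hidden "interpretations" generate the same observed words
# with different likelihoods; the forward algorithm weighs the evidence.
# All states, words, and probabilities here are invented for illustration.

states = ["literal", "pun"]
start = {"literal": 0.7, "pun": 0.3}
trans = {"literal": {"literal": 0.8, "pun": 0.2},
         "pun": {"literal": 0.4, "pun": 0.6}}
emit = {"literal": {"bank": 0.6, "river": 0.1, "loan": 0.3},
        "pun": {"bank": 0.2, "river": 0.7, "loan": 0.1}}

def forward(words):
    """Return P(interpretation | words) via the HMM forward algorithm."""
    alpha = {s: start[s] * emit[s][words[0]] for s in states}
    for w in words[1:]:
        alpha = {s: emit[s][w] * sum(alpha[r] * trans[r][s] for r in states)
                 for s in states}
    total = sum(alpha.values())
    return {s: a / total for s, a in alpha.items()}

print(forward(["bank", "loan"]))   # the "literal" reading dominates
print(forward(["bank", "river"]))  # the evidence shifts toward "pun"
```

The model does not "count words"; it revises the likelihood of each interpretation as evidence accumulates, which is the sense in which such systems (and, I argue, we ourselves) resolve ambiguity.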
Allen continues, "Every structure [in the brain] has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors."
This contention that every structure and neural circuit in the brain is unique and there by design is simply impossible, for it would mean that the blueprint of the brain would require hundreds of trillions of bytes of information. The brain's structural plan (like that of the rest of the body) is contained in the genome, and the brain itself cannot contain more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) does not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information contained in the brain, but the same can be said of AI systems like Watson. I show in The Singularity Is Near that, after lossless compression (due to massive redundancy in the genome), the amount of design information in the genome is about 50 million bytes, roughly half of which (that is, about 25 million bytes) pertains to the brain.7 That's not simple, but it is a level of complexity we can deal with, and it represents less complexity than many software systems in the modern world. Moreover, much of the brain's 25 million bytes of genetic design information pertains to the biological requirements of neurons, not to their information-processing algorithms.
How do we arrive at on the order of 100 to 1,000 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through massive redundancy. Dharmendra Modha, manager of Cognitive Computing for IBM Research, writes that "neuroanatomists have not found a hopelessly tangled, arbitrarily connected network, completely idiosyncratic to the brain of each individual, but instead a great deal of repeating structure within an individual brain and a great deal of homology across species.... The astonishing natural reconfigurability gives hope that the core algorithms of neurocomputation are independent of the specific sensory or motor modalities and that much of the observed variation in cortical structure across areas represents a refinement of a canonical circuit; it is indeed this canonical circuit we wish to reverse engineer."8

Allen argues in favor of an inherent "complexity brake that would necessarily limit progress in understanding the human brain and replicating its capabilities," based on his notion that each of the approximately 100 to 1,000 trillion connections in the human brain is there by explicit design. His "complexity brake" confuses the forest with the trees. If you want to understand, model, simulate, and re-create a pancreas, you don't need to re-create or simulate every organelle in every pancreatic islet cell. You would want instead to understand one islet cell, then abstract its basic functionality as it pertains to insulin control, and then extend that to a large group of such cells. This algorithm is well understood with regard to islet cells. There are now artificial pancreases that utilize this functional model being tested. Although there is certainly far more intricacy and variation in the brain than in the massively repeated islet cells of the pancreas, there is nonetheless massive repetition of functions, as I have described repeatedly in this book.
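The relationship between a compact design and a massively redundant structure can be illustrated with ordinary data compression (an analogy only; the "wiring plan" below is invented): a structure built by repeating a canonical unit compresses to a tiny fraction of its expanded size, just as trillions of connections can be specified by tens of millions of bytes of design information.

```python
import zlib

# Analogy only: a "wiring plan" built by repeating one canonical unit.
# The pattern and its fields are invented for illustration.
canonical = b"pattern-recognizer:inputs=8;threshold=0.6;"
expanded = canonical * 100_000             # ~4.2 MB of highly redundant "design"
compressed = zlib.compress(expanded, level=9)

print(len(expanded), len(compressed))
print(len(expanded) / len(compressed))     # a very large compression ratio
```

The expanded structure is enormous, but its information content, in the sense relevant to a blueprint, is close to the size of the canonical unit plus a repetition count.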
Critiques along the lines of Allen's also articulate what I call the "scientist's pessimism." Researchers working on the next generation of a technology, or on the modeling of a scientific area, are invariably struggling with that immediate set of challenges, so if someone describes what the technology will look like in ten generations, their eyes glaze over. One of the pioneers of integrated circuits was recalling for me recently the struggles to go from 10-micron (10,000-nanometer) feature sizes to 5-micron (5,000-nanometer) features over thirty years ago. The scientists were cautiously confident of reaching this goal, but when people predicted that someday we would actually have circuitry with feature sizes under 1 micron (1,000 nanometers), most of them, focused on their own goal, thought that was too wild to contemplate. Objections were made regarding the fragility of circuitry at that level of precision, thermal effects, and so on. Today Intel is starting to use chips with 22-nanometer gate lengths.
We witnessed the same sort of pessimism with respect to the Human Genome Project. Halfway through the fifteen-year effort, only 1 percent of the genome had been collected, and critics were proposing basic limits on how quickly it could be sequenced without destroying the delicate genetic structures. But thanks to the exponential growth in both capacity and price/performance, the project was finished seven years later. The project to reverse-engineer the human brain is making similar progress. It is only recently, for example, that we have reached a threshold with noninvasive scanning techniques so that we can see individual interneuronal connections forming and firing in real time. Much of the evidence I have presented in this book was dependent on such developments and has only recently been available.
Allen describes my proposal about reverse-engineering the human brain as simply scanning the brain to understand its fine structure and then simulating an entire brain "bottom up" without comprehending its information-processing methods. This is not my proposition. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. From my own work in speech recognition, I know that our work was greatly accelerated when we gained insights as to how the brain prepares and transforms auditory information.
The way that the ma.s.sively redundant structures in the brain differentiate is through learning and experience. The current state of the art in AI does in fact enable systems to also learn from their own experience. The Google self-driving cars learn from their own driving experience as well as from data from Google cars driven by human drivers; Watson learned most of its knowledge by reading on its own. It is interesting to note that the methods deployed today in AI have evolved to be mathematically very similar to the mechanisms in the neocortex.
Another objection often raised to the feasibility of "strong AI" (artificial intelligence at human levels and beyond) is that the human brain makes extensive use of analog computing, whereas digital methods inherently cannot replicate the gradations of value that analog representations can embody. It is true that one bit is either on or off, but multiple-bit words easily represent multiple gradations and can do so to any desired degree of accuracy. This is, of course, done all the time in digital computers. As it is, the accuracy of analog information in the brain (synaptic strength, for example) is only about one level in 256, a precision that can be represented by eight bits.
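This sketch makes the point concrete (the sample value is arbitrary): quantizing an analog value to n bits yields 2**n levels, and the worst-case error, half of one step, shrinks exponentially as bits are added.

```python
# Quantizing an analog value in [0, 1) to n bits: 2**n levels, and the
# worst-case error (half a step) shrinks exponentially with each added bit.

def quantize(x, n_bits):
    """Map x to the nearest of 2**n_bits evenly spaced levels in [0, 1]."""
    levels = 2 ** n_bits
    return round(x * (levels - 1)) / (levels - 1)

x = 0.537  # an arbitrary "analog" value
for n in (4, 8, 16):
    q = quantize(x, n)
    print(f"{n:2d} bits: {q:.6f} (error {abs(q - x):.2e})")
```

Eight bits already gives 256 levels, comparable to the effective precision of a biological synaptic strength; nothing prevents a digital system from using sixteen, thirty-two, or sixty-four.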
In chapter 9 I cited Roger Penrose and Stuart Hameroff's objection, which concerned microtubules and quantum computing. Recall that they claim that the microtubule structures in neurons are doing quantum computing, and since it is not possible to achieve that in computers, the human brain is fundamentally different and presumably better. As I argued earlier, there is no evidence that neuronal microtubules are carrying out quantum computation. Humans in fact do a very poor job of solving the kinds of problems that a quantum computer would excel at (such as factoring large numbers). And if any of this proved to be true, there would be nothing barring quantum computing from also being used in our computers.
John Searle is famous for introducing a thought experiment he calls "the Chinese room," an argument I discuss in detail in The Singularity Is Near.9 In short, it involves a man who takes in written questions in Chinese and then answers them. In order to do this, he uses an elaborate rulebook. Searle claims that the man has no true understanding of Chinese and is not "conscious" of the language (as he does not understand the questions or the answers) despite his apparent ability to answer questions in Chinese. Searle compares this to a computer and concludes that a computer that could answer questions in Chinese (essentially passing a Chinese Turing test) would, like the man in the Chinese room, have no real understanding of the language and no consciousness of what it was doing.
There are a few philosophical sleights of hand in Searle's argument. For one thing, the man in this thought experiment is comparable only to the central processing unit (CPU) of a computer. One could say that a CPU has no true understanding of what it is doing, but the CPU is only part of the structure. In Searle's Chinese room, it is the man with his rulebook that constitutes the whole system. That system does have an understanding of Chinese; otherwise it would not be capable of convincingly answering questions in Chinese, which would violate Searle's assumption for this thought experiment.
The attractiveness of Searle's argument stems from the fact that it is difficult today to infer true understanding and consciousness in a computer program. The problem with his argument, however, is that you can apply his own line of reasoning to the human brain itself. Each neocortical pattern recognizer (indeed, each neuron and each neuronal component) is following an algorithm. (After all, these are molecular mechanisms that follow natural law.) If we conclude that following an algorithm is inconsistent with true understanding and consciousness, then we would have to also conclude that the human brain does not exhibit these qualities either. You can take John Searle's Chinese room argument and simply substitute "manipulating interneuronal connections and synaptic strengths" for his words "manipulating symbols" and you will have a convincing argument to the effect that human brains cannot truly understand anything.
Another line of argument comes from the nature of nature, which has become a new sacred ground for many observers. For example, New Zealand biologist Michael Denton (born in 1943) sees a profound difference between the design principles of machines and those of biology. Denton writes that natural entities are "self-organizing,...self-referential,...self-replicating,...reciprocal,...self-formative, and...holistic."10 He claims that such biological forms can only be created through biological processes and that these forms are thereby "immutable,...impenetrable, and...fundamental" realities of existence, and are therefore basically a different philosophical category from machines.
The reality, as we have seen, is that machines can be designed using these same principles. Learning the specific design paradigms of nature's most intelligent entity (the human brain) is precisely the purpose of the brain reverse-engineering project. It is also not true that biological systems are completely "holistic," as Denton puts it, nor, conversely, do machines need to be completely modular. We have clearly identified hierarchies of units of functionality in natural systems, especially the brain, and AI systems are using comparable methods.
It appears to me that many critics will not be satisfied until computers routinely pass the Turing test, but even that threshold will not be clear-cut. Undoubtedly, there will be controversy as to whether claimed Turing tests that have been administered are valid. Indeed, I will probably be among those critics disparaging early claims along these lines. By the time the arguments about the validity of a computer passing the Turing test do settle down, computers will have long since surpassed unenhanced human intelligence.
My emphasis here is on the word "unenhanced," because enhancement is precisely the reason that we are creating these "mind children," as Hans Moravec calls them.11 Combining human-level pattern recognition with the inherent speed and accuracy of computers will result in very powerful abilities. But this is not an alien invasion of intelligent machines from Mars; we are creating these tools to make ourselves smarter. I believe that most observers will agree with me that this is what is unique about the human species: We build these tools to extend our own reach.
EPILOGUE
The picture's pretty bleak, gentlemen...The world's climates are changing, the mammals are taking over, and we all have a brain about the size of a walnut. -Dinosaurs talking, in The Far Side by Gary Larson
Intelligence may be defined as the ability to solve problems with limited resources, in which a key such resource is time. Thus the ability to more quickly solve a problem like finding food or avoiding a predator reflects greater power of intellect. Intelligence evolved because it was useful for survival, a fact that may seem obvious, but one with which not everyone agrees. As practiced by our species, it has enabled us not only to dominate the planet but to steadily improve the quality of our lives. This latter point, too, is not apparent to everyone, given that there is a widespread perception today that life is only getting worse. For example, a Gallup poll released on May 4, 2011, revealed that only "44 percent of Americans believed that today's youth will have a better life than their parents."1 If we look at the broad trends, not only has human life expectancy quadrupled over the last millennium (and more than doubled in the last two centuries),2 but per capita GDP (in constant current dollars) has gone from hundreds of dollars in 1800 to thousands of dollars today, with even more pronounced trends in the developed world.3 Only a handful of democracies existed a century ago, whereas they are the norm today.
For a historical perspective on how far we have advanced, I suggest people read Thomas Hobbes's Leviathan (1651), in which he describes the "life of man" as "solitary, poor, nasty, brutish, and short." For a modern perspective, the recent book Abundance (2012), by X-Prize Foundation founder (and cofounder with me of Singularity University) Peter Diamandis and science writer Steven Kotler, documents the extraordinary ways in which life today has steadily improved in every dimension. Steven Pinker's recent The Better Angels of Our Nature: Why Violence Has Declined (2011) painstakingly documents the steady rise of peaceful relations between people and peoples.
American lawyer, entrepreneur, and author Martine Rothblatt (born in 1954) documents the steady improvement in civil rights, noting, for example, how in a couple of decades same-sex marriage went from being legally recognized nowhere in the world to being legally accepted in a rapidly growing number of jurisdictions.4 A primary reason that people believe that life is getting worse is that our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures). During the nineteenth century there was almost no access to news in a timely fashion for anyone.
The advancement we have made as a species due to our intelligence is reflected in the evolution of our knowledge, which includes our technology and our culture. Our various technologies are increasingly becoming information technologies, which inherently continue to progress in an exponential manner. It is through such technologies that we are able to address the grand challenges of humanity, such as maintaining a healthy environment, providing the resources for a growing population (including energy, food, and water), overcoming disease, vastly extending human longevity, and eliminating poverty. It is only by extending ourselves with intelligent technology that we can deal with the scale of complexity needed to address these challenges.
These technologies are not the vanguard of an intelligent invasion that will compete with and ultimately displace us. Ever since we picked up a stick to reach a higher branch, we have used our tools to extend our reach, both physically and mentally. That we can take a device out of our pocket today and access much of human knowledge with a few keystrokes extends us beyond anything imaginable by most observers only a few decades ago. The "cell phone" (the term is placed in quotes because it is vastly more than a phone) in my pocket is a million times less expensive yet thousands of times more powerful than the computer all the students and professors at MIT shared when I was an undergraduate there. That's a several billion-fold increase in price/performance over the last forty years, an escalation we will see again in the next twenty-five years, when what used to fit in a building, and now fits in your pocket, will fit inside a blood cell.
In this way we will merge with the intelligent technology we are creating. Intelligent nanobots in our bloodstream will keep our biological bodies healthy at the cellular and molecular levels. They will go into our brains noninvasively through the capillaries and interact with our biological neurons, directly extending our intelligence. This is not as futuristic as it may sound. There are already blood-cell-sized devices that can cure type I diabetes in animals or detect and destroy cancer cells in the bloodstream. Based on the law of accelerating returns, these technologies will be a billion times more powerful within three decades than they are today.
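The "billion times more powerful within three decades" claim is the law of accelerating returns applied at one capability doubling per year; that doubling rate is an assumption of this sketch, not a figure stated in the text:

```python
# Hedged sketch: one capability doubling per year (assumed rate)
# sustained over three decades.
decades = 3
doublings_per_year = 1
total_doublings = decades * 10 * doublings_per_year   # 30 doublings
multiplier = 2 ** total_doublings

# 2**30 = 1,073,741,824 -- about a billion-fold, matching the claim.
print(f"{multiplier:,} (~{multiplier:.1e})")
```

Thirty annual doublings yield 2^30, just over a billion, which is where the round "billion times" figure comes from.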
I already consider the devices I use and the cloud of computing resources to which they are virtually connected as extensions of myself, and feel less than complete if I am cut off from these brain extenders. That is why the one-day strike by Google, Wikipedia, and thousands of other Web sites against the SOPA (Stop Online Piracy Act) on January 18, 2012, was so remarkable: I felt as if part of my brain were going on strike (although I and others did find ways to access these online resources). It was also an impressive demonstration of the political power of these sites as the bill-which looked as if it was headed for ratification-was instantly killed. But more important, it showed how thoroughly we have already outsourced parts of our thinking to the cloud of computing. It is already part of who we are. Once we routinely have intelligent nonbiological intelligence in our brains, this augmentation-and the cloud it is connected to-will continue to grow in capability exponentially.
The intelligence we will create from the reverse-engineering of the brain will have access to its own source code and will be able to rapidly improve itself in an accelerating iterative design cycle. Although there is considerable plasticity in the biological human brain, as we have seen, it does have a relatively fixed architecture, which cannot be significantly modified, as well as a limited capacity. We are unable to increase its 300 million pattern recognizers to, say, 400 million unless we do so nonbiologically. Once we can achieve that, there will be no reason to stop at a particular level of capability. We can go on to make it a billion pattern recognizers, or a trillion.
From quantitative improvement comes qualitative advance. The most important evolutionary advance in Homo sapiens was quantitative: the development of a larger forehead to accommodate more neocortex. Greater neocortical capacity enabled this new species to create and contemplate thoughts at higher conceptual levels, resulting in the establishment of all the varied fields of art and science. As we add more neocortex in a nonbiological form, we can expect ever higher qualitative levels of abstraction.
British mathematician Irving J. Good, a colleague of Alan Turing's, wrote in 1965 that ”the first ultraintelligent machine is the last invention that man need ever make.” He defined such a machine as one that could surpass the ”intellectual activities of any man however clever” and concluded that ”since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion.'”
The last invention that biological evolution needed to make-the neocortex-is inevitably leading to the last invention that humanity needs to make-truly intelligent machines-and the design of one is inspiring the other. Biological evolution is continuing, but technological evolution is moving a million times faster. According to the law of accelerating returns, by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics as applied to computation.5 We call matter and energy organized in this way ”computronium,” which is vastly more powerful pound for pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. Then, to keep the law of accelerating returns going, we will need to spread out to the rest of the galaxy and universe.
If the speed of light indeed remains an inexorable limit, then colonizing the universe will take a long time, given that the nearest star system to Earth is four light-years away. If there are even subtle means to circumvent this limit, our intelligence and technology will be sufficiently powerful to exploit them. This is one reason why the recent suggestion that the neutrinos that traversed the 730 kilometers from the CERN accelerator on the Swiss-French border to the Gran Sasso Laboratory in central Italy appeared to be moving faster than the speed of light was such potentially significant news. This particular observation appears to be a false alarm, but there are other possibilities to get around this limit. We do not even need to exceed the speed of light if we can find shortcuts to other apparently faraway places through spatial dimensions beyond the three with which we are familiar. Whether we are able to surpass or otherwise get around the speed of light as a limit will be the key strategic issue for the human-machine civilization at the beginning of the twenty-second century.