
The highest bandwidth (speed) of the Internet backbone.7

Being an engineer, about three decades ago I started to gather data on measures of technology in different areas. When I began this effort, I did not expect that it would present a clear picture, but I did hope that it would provide some guidance and enable me to make educated guesses. My goal was, and still is, to time my own technology efforts so that they will be appropriate for the world that exists when I complete a project, a world I realized would be very different from the one that existed when I started.

Consider how much and how quickly the world has changed only recently. Just a few years ago, people did not use social networks (Facebook, for example, was founded in 2004 and had 901 million monthly active users at the end of March 2012),8 wikis, blogs, or tweets. In the 1990s most people did not use search engines or cell phones. Imagine the world without them. That seems like ancient history but was not so long ago. The world will change even more dramatically in the near future.

In the course of my investigation, I made a startling discovery: If a technology is an information technology, the basic measures of price/performance and capacity (per unit of time or cost, or other resource) follow amazingly precise exponential trajectories.

These trajectories outrun the specific paradigms they are based on (such as Moore's law). But when one paradigm runs out of steam (for example, when engineers were no longer able to reduce the size and cost of vacuum tubes in the 1950s), it generates research pressure to create the next paradigm, and so another S-curve of progress begins.

The exponential portion of that next S-curve for the new paradigm then continues the ongoing exponential of the information technology measure. Thus vacuum tube-based computing in the 1950s gave way to transistors in the 1960s, and then to integrated circuits and Moore's law in the late 1960s, and beyond. Moore's law, in turn, will give way to three-dimensional computing, the early examples of which are already in place. The reason why information technologies are able to consistently transcend the limitations of any particular paradigm is that the resources required to compute or remember or transmit a bit of information are vanishingly small.
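The way a cascade of saturating S-curves can trace out one ongoing exponential is easy to demonstrate numerically. The sketch below is my own illustration, not an analysis from the book: it models each paradigm as a logistic curve, gives each successor a ceiling a hundredfold higher (all timings and ceilings are made up), and checks that the sum of the curves grows exponentially.

```python
import numpy as np

# Each paradigm is a logistic S-curve that saturates at a ceiling;
# each successor paradigm arrives later with a ~100x higher ceiling.
# Midpoints, ceilings, and steepness are arbitrary illustrative values.
def s_curve(t, midpoint, ceiling, steepness=1.5):
    return ceiling / (1.0 + np.exp(-steepness * (t - midpoint)))

t = np.linspace(0, 50, 501)
midpoints = [5, 15, 25, 35, 45]              # one paradigm per "decade"
total = sum(s_curve(t, m, 100.0 ** (i + 1))  # ceilings: 1e2, 1e4, ... 1e10
            for i, m in enumerate(midpoints))

# Fit a line to log10(total): a good linear fit means the envelope of
# the cascaded S-curves approximates a single sustained exponential.
slope, _ = np.polyfit(t, np.log10(total), 1)
print(f"growth ≈ {10 ** slope:.2f}x per time unit; "
      f"doubling time ≈ {np.log10(2) / slope:.2f} units")
```

Each individual curve flattens out, yet the composite climbs as roughly a straight line on a log scale, which is exactly the pattern described above.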

We might wonder, are there fundamental limits to our ability to compute and transmit information, regardless of paradigm? The answer is yes, based on our current understanding of the physics of computation. Those limits, however, are not very limiting. Ultimately we can expand our intelligence trillions-fold based on molecular computing. By my calculations, we will reach these limits late in this century.

It is important to point out that not every exponential phenomenon is an example of the law of accelerating returns. Some observers misconstrue the LOAR by citing exponential trends that are not information-based: For example, they point out, men's shavers have gone from one blade to two to four, and then ask, where are the eight-blade shavers? Shavers are not (yet) an information technology.

In The Singularity Is Near, I provide a theoretical examination, including (in the appendix to that book) a mathematical treatment of why the LOAR is so remarkably predictable. Essentially, we always use the latest technology to create the next. Technologies build on themselves in an exponential manner, and this phenomenon is readily measurable if it involves an information technology. In 1990 we used the computers and other tools of that era to create the computers of 1991; in 2012 we are using current information tools to create the machines of 2013 and 2014. More broadly speaking, this acceleration and exponential growth applies to any process in which patterns of information evolve. So we see acceleration in the pace of biological evolution, and similar (but much faster) acceleration in technological evolution, which is itself an outgrowth of biological evolution.
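The heart of that argument can be restated compactly (this is a paraphrase in my own notation, not the appendix verbatim). If the rate at which a technology's capability $W$ grows is proportional to $W$ itself, because the current technology is the tool used to build the next, then

$$\frac{dW}{dt} = c\,W \quad\Longrightarrow\quad W(t) = W_0\,e^{c t},$$

an exponential. If, in addition, the resources $N$ devoted to the technology grow exponentially, $N(t) = N_0 e^{k t}$, as has happened with computation, then

$$\frac{dW}{dt} = c\,N(t)\,W \quad\Longrightarrow\quad W(t) = W_0\,\exp\!\left(\frac{c\,N_0}{k}\left(e^{k t}-1\right)\right),$$

which is doubly exponential growth of the kind visible in the price/performance data below.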

I now have a public track record of more than a quarter of a century of predictions based on the law of accelerating returns, starting with those presented in The Age of Intelligent Machines, which I wrote in the mid-1980s. Examples of accurate predictions from that book include: the emergence in the mid- to late 1990s of a vast worldwide web of communications tying together people around the world to one another and to all human knowledge; a great wave of democratization emerging from this decentralized communication network, sweeping away the Soviet Union; the defeat of the world chess champion by 1998; and many others.

I described the law of accelerating returns, as it is applied to computation, extensively in The Age of Spiritual Machines, where I provided a century of data showing the doubly exponential progression of the price/performance of computation through 1998. It is updated through 2009 below.

I recently wrote a 146-page review of the predictions I made in The Age of Intelligent Machines, The Age of Spiritual Machines, and The Singularity Is Near. (You can read the essay by going to the link in this endnote.)9 The Age of Spiritual Machines included hundreds of predictions for specific decades (2009, 2019, 2029, and 2099). For example, I made 147 predictions for 2009 in The Age of Spiritual Machines, which I wrote in the 1990s. Of these, 115 (78 percent) are entirely correct as of the end of 2009; the predictions that were concerned with basic measurements of the capacity and price/performance of information technologies were particularly accurate. Another 12 (8 percent) are “essentially correct.” A total of 127 predictions (86 percent) are correct or essentially correct. (Since the predictions were made specific to a given decade, a prediction for 2009 was considered “essentially correct” if it came true in 2010 or 2011.) Another 17 (12 percent) are partially correct, and 3 (2 percent) are wrong.
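As a quick consistency check on these figures (a trivial calculation, using only the counts quoted in the paragraph above):

```python
# Tally of the 147 predictions for 2009, as reported above.
total = 147
counts = {"entirely correct": 115, "essentially correct": 12,
          "partially correct": 17, "wrong": 3}
assert sum(counts.values()) == total  # the four categories cover all 147

for label, n in counts.items():
    print(f"{label}: {n} ({n / total:.0%})")
print(f"correct or essentially correct: {115 + 12} ({(115 + 12) / total:.0%})")
```

The printed percentages (78%, 8%, 12%, 2%, and 86% combined) match the figures quoted above.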

Calculations per second per (constant) thousand dollars of different computing devices.10

Floating-point operations per second of different supercomputers.11

Transistors per chip for different Intel processors.12

Bits per dollar for dynamic random access memory chips.13

Bits per dollar for random access memory chips.14

The average price per transistor in dollars.15

The total number of bits of random access memory shipped each year.16

Bits per dollar (in constant 2000 dollars) for magnetic data storage.17

Even the predictions that were “wrong” were not all wrong. For example, I judged my prediction that we would have self-driving cars to be wrong, even though Google has demonstrated self-driving cars, and even though in October 2010 four driverless electric vans successfully concluded a 13,000-kilometer test drive from Italy to China.18 Experts in the field currently predict that these technologies will be routinely available to consumers by the end of this decade.

Exponentially expanding computational and communication technologies all contribute to the project to understand and re-create the methods of the human brain. This effort is not a single organized project but rather the result of a great many diverse projects, including detailed modeling of constituents of the brain ranging from individual neurons to the entire neocortex, the mapping of the “connectome” (the neural connections in the brain), simulations of brain regions, and many others. All of these have been scaling up exponentially. Much of the evidence presented in this book has only become available recently; for example, the 2012 Wedeen study discussed in chapter 4 showed the very orderly and “simple” (to quote the researchers) gridlike pattern of the connections in the neocortex. The researchers in that study acknowledge that their insight (and images) only became feasible as the result of new high-resolution imaging technology.

Brain scanning technologies are improving in resolution, spatial and temporal, at an exponential rate. Different types of brain scanning methods being pursued range from completely noninvasive methods that can be used with humans to more invasive or destructive methods on animals.

MRI (magnetic resonance imaging), a noninvasive imaging technique with relatively high temporal resolution, has steadily improved at an exponential pace, to the point that spatial resolutions are now close to 100 microns (millionths of a meter).

A Venn diagram of brain imaging methods.19

Tools for imaging the brain.20

MRI spatial resolution in microns.21

Spatial resolution of destructive imaging techniques.22

Spatial resolution of nondestructive imaging techniques in animals.23

Destructive imaging, which is performed to collect the connectome (the map of all interneuronal connections) in animal brains, has also improved at an exponential pace. Current maximum resolution is around four nanometers, which is sufficient to see individual connections.

Artificial intelligence technologies such as natural-language-understanding systems are not necessarily designed to emulate theorized principles of brain function, but rather for maximum effectiveness. Given this, it is notable that the techniques that have won out are consistent with the principles I have outlined in this book: self-organizing, hierarchical recognizers of invariant self-associative patterns with redundancy and up-and-down predictions. These systems are also scaling up exponentially, as Watson has demonstrated.
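To make the “up-and-down predictions” idea concrete, here is a deliberately toy sketch of my own (not Watson's method and not an algorithm from this book): a higher-level recognizer that has matched part of a pattern sends an expectation downward, lowering the evidence threshold for the element it predicts will come next.

```python
def recognize(sequence, target="APPLE",
              base_threshold=0.8, predicted_threshold=0.4):
    """sequence: (letter, confidence) pairs from lower-level recognizers.

    Toy model: once part of the target has been matched, the higher
    level "predicts" the next letter and accepts weaker bottom-up
    evidence for it (a lower confidence threshold).
    """
    matched = 0
    for letter, confidence in sequence:
        expected = target[matched]
        threshold = predicted_threshold if matched > 0 else base_threshold
        if letter == expected and confidence >= threshold:
            matched += 1
            if matched == len(target):
                return True
    return False

# The smudged second "P" (confidence 0.5) is accepted only because the
# partial match "AP" generated a top-down prediction for it.
print(recognize([("A", 0.9), ("P", 0.85), ("P", 0.5), ("L", 0.9), ("E", 0.9)]))  # True
print(recognize([("P", 0.5), ("P", 0.5)]))  # False: weak evidence, no context
```

Redundancy and self-organization are absent from this sketch; its only point is the downward flow of expectation that lets ambiguous low-level evidence be resolved by high-level context.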

A primary purpose of understanding the brain is to expand our toolkit of techniques to create intelligent systems. Although many AI researchers may not fully appreciate this, they have already been deeply influenced by our knowledge of the principles of the operation of the brain. Understanding the brain also helps us to reverse brain dysfunctions of various kinds. There is, of course, another key goal of the project to reverse-engineer the brain: understanding who we are.

CHAPTER 11

OBJECTIONS

If a machine can prove indistinguishable from a human, we should award it the respect we would to a human; we should accept that it has a mind.

Stevan Harnad

The most significant source of objection to my thesis on the law of accelerating returns and its application to the amplification of human intelligence stems from the linear nature of human intuition. As I described earlier, each of the several hundred million pattern recognizers in the neocortex processes information sequentially. One of the implications of this organization is that we have linear expectations about the future, so critics apply their linear intuition to information phenomena that are fundamentally exponential.

I call objections along these lines “criticism from incredulity,” in that exponential projections seem incredible given our linear predilection, and they take a variety of forms. Microsoft cofounder Paul Allen (born in 1953) and his colleague Mark Greaves recently articulated several of them in an essay titled “The Singularity Isn't Near” published in Technology Review magazine.1 While my response here is to Allen's particular critiques, they represent a typical range of objections to the arguments I've made, especially with regard to the brain. Although Allen references The Singularity Is Near in the title of his essay, his only citation in the piece is to an essay I wrote in 2001 (“The Law of Accelerating Returns”). Moreover, his article does not acknowledge or respond to arguments I actually make in the book. Unfortunately, I find this often to be the case with critics of my work.

When The Age of Spiritual Machines was published in 1999, augmented later by the 2001 essay, it generated several lines of criticism, such as: Moore's law will come to an end; hardware capability may be expanding exponentially but software is stuck in the mud; the brain is too complicated; there are capabilities in the brain that inherently cannot be replicated in software; and several others. One of the reasons I wrote The Singularity Is Near was to respond to those critiques.