
Entertainment and Sports. In an amusing and intriguing application of GAs, Oxford scientist Torsten Reil created animated creatures with simulated joints and muscles and a neural net for a brain. He then assigned them a task: to walk. He used a GA to evolve this capability, which involved seven hundred parameters. "If you look at that system with your human eyes, there's no way you can do it on your own, because the system is just too complex," Reil points out. "That's where evolution comes in."210 While some of the evolved creatures walked in a smooth and convincing way, the research demonstrated a well-known attribute of GAs: you get what you ask for. Some creatures discovered novel ways of passing for walking. According to Reil, "We got some creatures that didn't walk at all, but had these very strange ways of moving forward: crawling or doing somersaults."
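Reil's actual controller and fitness function are not described here, so the following is only a minimal sketch of the kind of genetic algorithm involved: a population of parameter vectors is scored by a fitness function (for walking creatures, that function would run a physics simulation and measure distance traveled), the fittest half survives, and children are produced by crossover and mutation. The toy fitness function below, which simply rewards parameters near 1.0, is an illustrative stand-in.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def evolve(fitness, n_params, pop_size=30, generations=60,
           mutation_rate=0.1, mutation_scale=0.5):
    """Evolve a real-valued parameter vector to maximize `fitness`."""
    pop = [[random.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(n_params)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_params):            # point mutation
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0, mutation_scale)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in for "distance walked": prefer parameters near 1.0.
best = evolve(lambda p: -sum((x - 1.0) ** 2 for x in p), n_params=10)
```

A real version would use hundreds of parameters and an expensive simulated-physics fitness evaluation, which is why, as Reil notes, evolution rather than hand-tuning is the only practical approach.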

Software is being developed that can automatically extract excerpts from a video of a sports game that show the more important plays.211 A team at Trinity College in Dublin is working on table-based games like pool, in which software tracks the location of each ball and is programmed to identify when a significant shot has been made. A team at the University of Florence is working on soccer. This software tracks the location of each player and can determine the type of play being made (such as free kicking or attempting a goal), when a goal is achieved, when a penalty is earned, and other key events.

The Digital Biology Interest Group at University College in London is designing Formula One race cars by breeding them using GAs.212 The AI winter is long since over. We are well into the spring of narrow AI. Most of the examples above were research projects just ten to fifteen years ago. If all the AI systems in the world suddenly stopped functioning, our economic infrastructure would grind to a halt. Your bank would cease doing business. Most transportation would be crippled. Most communications would fail. This was not the case a decade ago. Of course, our AI systems are not smart enough-yet-to organize such a conspiracy.

Strong AI

If you understand something in only one way, then you don't really understand it at all. This is because, if something goes wrong, you get stuck with a thought that just sits in your mind with nowhere to go. The secret of what anything means to us depends on how we've connected it to all the other things we know. This is why, when someone learns "by rote," we say that they don't really understand. However, if you have several different representations then, when one approach fails you can try another. Of course, making too many indiscriminate connections will turn a mind to mush. But well-connected representations let you turn ideas around in your mind, to envision things from many perspectives until you find one that works for you. And that's what we mean by thinking!

-MARVIN MINSKY213

Advancing computer performance is like water slowly flooding the landscape. A half century ago it began to drown the lowlands, driving out human calculators and record clerks, but leaving most of us dry. Now the flood has reached the foothills, and our outposts there are contemplating retreat. We feel safe on our peaks, but, at the present rate, those too will be submerged within another half century. I propose that we build Arks as that day nears, and adopt a seafaring life! For now, though, we must rely on our representatives in the lowlands to tell us what water is really like.

Our representatives on the foothills of chess and theorem-proving report signs of intelligence. Why didn't we get similar reports decades before, from the lowlands, as computers surpassed humans in arithmetic and rote memorization? Actually, we did, at the time. Computers that calculated like thousands of mathematicians were hailed as "giant brains," and inspired the first generation of AI research. After all, the machines were doing something beyond any animal, that needed human intelligence, concentration and years of training. But it is hard to recapture that magic now.

One reason is that computers' demonstrated stupidity in other areas biases our judgment. Another relates to our own ineptitude. We do arithmetic or keep records so painstakingly and externally that the small mechanical steps in a long calculation are obvious, while the big picture often escapes us. Like Deep Blue's builders, we see the process too much from the inside to appreciate the subtlety that it may have on the outside. But there is a nonobviousness in snowstorms or tornadoes that emerge from the repetitive arithmetic of weather simulations, or in rippling tyrannosaur skin from movie animation calculations. We rarely call it intelligence, but "artificial reality" may be an even more profound concept than artificial intelligence.

The mental steps underlying good human chess playing and theorem proving are complex and hidden, putting a mechanical interpretation out of reach. Those who can follow the play naturally describe it instead in mentalistic language, using terms like strategy, understanding and creativity. When a machine manages to be simultaneously meaningful and surprising in the same rich way, it too compels a mentalistic interpretation. Of course, somewhere behind the scenes, there are programmers who, in principle, have a mechanical interpretation. But even for them, that interpretation loses its grip as the working program fills its memory with details too voluminous for them to grasp.

As the rising flood reaches more populated heights, machines will begin to do well in areas a greater number can appreciate. The visceral sense of a thinking presence in machinery will become increasingly widespread. When the highest peaks are covered, there will be machines that can interact as intelligently as any human on any subject. The presence of minds in machines will then become self-evident.

-HANS MORAVEC214

Because of the exponential nature of progress in information-based technologies, performance often shifts quickly from pathetic to daunting. In many diverse realms, as the examples in the previous section make clear, the performance of narrow AI is already impressive. The range of intelligent tasks in which machines can now compete with human intelligence is continually expanding. In a cartoon I designed for The Age of Spiritual Machines, a defensive "human race" is seen writing out signs that state what only people (and not machines) can do.215 Littered on the floor are the signs the human race has already discarded because machines can now perform these functions: diagnose an electrocardiogram, compose in the style of Bach, recognize faces, guide a missile, play Ping-Pong, play master chess, pick stocks, improvise jazz, prove important theorems, and understand continuous speech. Back in 1999 these tasks were no longer solely the province of human intelligence; machines could do them all.

On the wall behind the man symbolizing the human race are signs he has written out describing the tasks that were still the sole province of humans: have common sense, review a movie, hold press conferences, translate speech, clean a house, and drive cars. If we were to redesign this cartoon in a few years, some of these signs would also be likely to end up on the floor. When CYC reaches one hundred million items of commonsense knowledge, perhaps human superiority in the realm of commonsense reasoning won't be so clear.

The era of household robots, although still fairly primitive today, has already started. Ten years from now, it's likely we will consider "clean a house" as within the capabilities of machines. As for driving cars, robots with no human intervention have already driven nearly across the United States on ordinary roads with other normal traffic. We are not yet ready to turn over all steering wheels to machines, but there are serious proposals to create electronic highways on which cars (with people in them) will drive by themselves.

The three tasks that have to do with human-level understanding of natural language-reviewing a movie, holding a press conference, and translating speech-are the most difficult. Once we can take down these signs, we'll have Turing-level machines, and the era of strong AI will have started.

This era will creep up on us. As long as there are any discrepancies between human and machine performance-areas in which humans outperform machines-strong AI skeptics will seize on these differences. But our experience in each area of skill and knowledge is likely to follow that of Kasparov. Our perceptions of performance will shift quickly from pathetic to daunting as the knee of the exponential curve is reached for each human capability.

How will strong AI be achieved? Most of the material in this book is intended to lay out the fundamental requirements for both hardware and software and explain why we can be confident that these requirements will be met in nonbiological systems. The continuation of the exponential growth of the price-performance of computation to achieve hardware capable of emulating human intelligence was still controversial in 1999. There has been so much progress in developing the technology for three-dimensional computing over the past five years that relatively few knowledgeable observers now doubt that this will happen. Even just taking the semiconductor industry's published ITRS road map, which runs to 2018, we can project human-level hardware at reasonable cost by that year.216 I've stated the case in chapter 4 of why we can have confidence that we will have detailed models and simulations of all regions of the human brain by the late 2020s. Until recently, our tools for peering into the brain did not have the spatial and temporal resolution, bandwidth, or price-performance to produce adequate data to create sufficiently detailed models. This is now changing. The emerging generation of scanning and sensing tools can analyze and detect neurons and neural components with exquisite accuracy, while operating in real time.

Future tools will provide far greater resolution and capacity. By the 2020s, we will be able to send scanning and sensing nanobots into the capillaries of the brain to scan it from inside. We've shown the ability to translate the data from diverse sources of brain scanning and sensing into models and computer simulations that hold up well to experimental comparison with the performance of the biological versions of these regions. We already have compelling models and simulations for several important brain regions. As I argued in chapter 4, it's a conservative projection to expect detailed and realistic models of all brain regions by the late 2020s.

One simple statement of the strong AI scenario is that we will learn the principles of operation of human intelligence from reverse engineering all the brain's regions, and we will apply these principles to the brain-capable computing platforms that will exist in the 2020s. We already have an effective toolkit for narrow AI. Through the ongoing refinement of these methods, the development of new algorithms, and the trend toward combining multiple methods into intricate architectures, narrow AI will continue to become less narrow. That is, AI applications will have broader domains, and their performance will become more flexible. AI systems will develop multiple ways of approaching each problem, just as humans do. Most important, the new insights and paradigms resulting from the acceleration of brain reverse engineering will greatly enrich this set of tools on an ongoing basis. This process is well under way.

It's often said that the brain works differently from a computer, so we cannot translate our insights about brain function into workable nonbiological systems. This view completely ignores the field of self-organizing systems, for which we have a set of increasingly sophisticated mathematical tools. As I discussed in the previous chapter, the brain differs in a number of important ways from conventional, contemporary computers. If you open up your Palm Pilot and cut a wire, there's a good chance you will break the machine. Yet we routinely lose many neurons and interneuronal connections with no ill effect, because the brain is self-organizing and relies on distributed patterns in which many specific details are not important.
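The robustness of distributed representations can be shown with a toy sketch (purely illustrative, not a brain model): a value encoded redundantly across many noisy units survives the loss of a large fraction of those units, whereas a value carried on a single wire is destroyed by a single cut.

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Distributed encoding: a number stored as the mean of many noisy units.
def encode_distributed(value, n_units=1000, noise=0.1):
    return [value + random.gauss(0, noise) for _ in range(n_units)]

def decode(units):
    return sum(units) / len(units)

signal = encode_distributed(42.0)

# "Lose" 20% of the units at random, as the brain routinely loses neurons.
survivors = random.sample(signal, int(len(signal) * 0.8))

print(abs(decode(survivors) - 42.0))  # error remains tiny
```

By contrast, a Palm Pilot-style encoding (`signal = [42.0]`, one wire) gives no answer at all once that single element is removed; the distributed version degrades gracefully because no individual unit matters.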

When we get to the mid- to late 2020s, we will have access to a generation of extremely detailed brain-region models. Ultimately the toolkit will be greatly enriched with these new models and simulations and will encompass a full knowledge of how the brain works. As we apply the toolkit to intelligent tasks, we will draw upon the entire range of tools, some derived directly from brain reverse engineering, some merely inspired by what we know about the brain, and some not based on the brain at all but on decades of AI research.

Part of the brain's strategy is to learn information, rather than having knowledge hard-coded from the start. ("Instinct" is the term we use to refer to such innate knowledge.) Learning will be an important aspect of AI as well. In my experience in developing pattern-recognition systems for character recognition, speech recognition, and financial analysis, providing for the AI's education is the most challenging and important part of the engineering. With the accumulated knowledge of human civilization increasingly accessible online, future AIs will have the opportunity to conduct their education by accessing this vast body of information.

The education of AIs will be much faster than that of unenhanced humans. The twenty-year time span required to provide a basic education to biological humans could be compressed into a matter of weeks or less. Also, because nonbiological intelligence can share its patterns of learning and knowledge, only one AI has to master each particular skill. As I pointed out, we trained one set of research computers to understand speech, but then the hundreds of thousands of people who acquired our speech-recognition software had to load only the already trained patterns into their computers.
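The point about sharing learned patterns can be sketched in a few lines (the `Recognizer` class and its nearest-mean scheme below are hypothetical stand-ins for illustration, not our actual speech software): one instance does the slow training, and its learned parameters are then serialized and loaded into a second instance, which classifies identically without ever seeing the training data.

```python
import json

# A trivial "recognizer": nearest-mean classifier over 1-D features.
class Recognizer:
    def __init__(self, means=None):
        self.means = means or {}          # label -> learned mean feature

    def train(self, samples):             # samples: label -> list of floats
        for label, xs in samples.items():
            self.means[label] = sum(xs) / len(xs)

    def classify(self, x):
        return min(self.means, key=lambda lbl: abs(self.means[lbl] - x))

# One machine does the learning...
teacher = Recognizer()
teacher.train({"dog": [1.0, 1.2, 0.9], "cat": [3.0, 3.1, 2.8]})

# ...and the learned patterns are copied, not relearned.
patterns = json.dumps(teacher.means)
student = Recognizer(json.loads(patterns))

print(student.classify(1.1))  # classifies like the teacher: "dog"
```

A biological human, by contrast, has no equivalent of `json.dumps` for a learned skill; each brain must repeat the training from scratch.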

One of the many skills that nonbiological intelligence will achieve with the completion of the human brain reverse-engineering project is sufficient mastery of language and shared human knowledge to pass the Turing test. The Turing test is important not so much for its practical significance but rather because it will demarcate a crucial threshold. As I have pointed out, there is no simple means to pass a Turing test, other than to convincingly emulate the flexibility, subtlety, and suppleness of human intelligence. Having captured that capability in our technology, it will then be subject to engineering's ability to concentrate, focus, and amplify it.

Variations of the Turing test have been proposed. The annual Loebner Prize contest awards a bronze prize to the chatterbot (conversational bot) best able to convince human judges that it's human.217 The criterion for winning the silver prize is based on Turing's original test, and it obviously has yet to be awarded. The gold prize is based on visual and auditory communication. In other words, the AI must have a convincing face and voice, as transmitted over a terminal, and thus it must appear to the human judge as if he or she is interacting with a real person over a videophone. On the face of it, the gold prize sounds more difficult. I've argued that it may actually be easier, because judges may pay less attention to the text portion of the language being communicated and could be distracted by a convincing facial and voice animation. In fact, we already have real-time facial animation, and while it is not quite up to these modified Turing standards, it's reasonably close. We also have very natural-sounding voice synthesis, which is often confused with recordings of human speech, although more work is needed on prosody (intonation). We're likely to achieve satisfactory facial animation and voice production sooner than the Turing-level language and knowledge capabilities.

Turing was carefully imprecise in setting the rules for his test, and significant literature has been devoted to the subtleties of establishing the exact procedures for determining how to assess when the Turing test has been passed.218 In 2002 I negotiated the rules for a Turing-test wager with Mitch Kapor on the Long Now Web site.219 The question underlying our twenty-thousand-dollar bet, the proceeds of which go to the charity of the winner's choice, was, "Will the Turing test be passed by a machine by 2029?" I said yes, and Kapor said no. It took us months of dialogue to arrive at the intricate rules to implement our wager. Simply defining "machine" and "human," for example, was not a straightforward matter. Is the human judge allowed to have any nonbiological thinking processes in his or her brain? Conversely, can the machine have any biological aspects?

Because the definition of the Turing test will vary from person to person, Turing test-capable machines will not arrive on a single day, and there will be a period during which we will hear claims that machines have passed the threshold. Invariably, these early claims will be debunked by knowledgeable observers, probably including myself. By the time there is a broad consensus that the Turing test has been passed, the actual threshold will have long since been achieved.

Edward Feigenbaum proposes a variation of the Turing test, which assesses not a machine's ability to pass for human in casual, everyday dialogue but its ability to pass for a scientific expert in a specific field.220 The Feigenbaum test (FT) may be more significant than the Turing test because FT-capable machines, being technically proficient, will be capable of improving their own designs. Feigenbaum describes his test in this way:

Two players play the FT game. One player is chosen from among the elite practitioners in each of three pre-selected fields of natural science, engineering, or medicine. (The number could be larger, but for this challenge not greater than ten.) Let's say we choose the fields from among those covered in the U.S. National Academy.... For example, we could choose astrophysics, computer science, and molecular biology. In each round of the game, the behavior of the two players (elite scientist and computer) is judged by another Academy member in that particular domain of discourse, e.g., an astrophysicist judging astrophysics behavior.

Of course the identity of the players is hidden from the judge as it is in the Turing test. The judge poses problems, asks questions, asks for explanations, theories, and so on-as one might do with a colleague. Can the human judge choose, at better than chance level, which is his National Academy colleague and which is the computer?

Of course Feigenbaum overlooks the possibility that the computer might already be a National Academy colleague, but he is obviously assuming that machines will not yet have invaded institutions that today comprise exclusively biological humans. While it may appear that the FT is more difficult than the Turing test, the entire history of AI reveals that machines started with the skills of professionals and only gradually moved toward the language skills of a child. Early AI systems demonstrated their prowess initially in professional fields such as proving mathematical theorems and diagnosing medical conditions. These early systems would not be able to pass the FT, however, because they do not have the language skills and the flexible ability to model knowledge from different perspectives that are needed to engage in the professional dialogue inherent in the FT.

This language ability is essentially the same ability needed in the Turing test. Reasoning in many technical fields is not necessarily more difficult than the commonsense reasoning engaged in by most human adults. I would expect that machines will pass the FT, at least in some disciplines, around the same time as they pass the Turing test. Passing the FT in all disciplines is likely to take longer, however. This is why I see the 2030s as a period of consolidation, as machine intelligence rapidly expands its skills and incorporates the vast knowledge bases of our biological human and machine civilization. By the 2040s we will have the opportunity to apply the accumulated knowledge and skills of our civilization to computational platforms that are billions of times more capable than unassisted biological human intelligence.

The advent of strong AI is the most important transformation this century will see. Indeed, it's comparable in importance to the advent of biology itself. It will mean that a creation of biology has finally mastered its own intelligence and discovered means to overcome its limitations. Once the principles of operation of human intelligence are understood, expanding its abilities will be conducted by human scientists and engineers whose own biological intelligence will have been greatly amplified through an intimate merger with nonbiological intelligence. Over time, the nonbiological portion will predominate.

We've discussed aspects of the impact of this transformation throughout this book, which I focus on in the next chapter. Intelligence is the ability to solve problems with limited resources, including limitations of time. The Singularity will be characterized by the rapid cycle of human intelligence-increasingly nonbiological-capable of comprehending and leveraging its own powers.

FRIEND OF FUTURIST BACTERIUM, 2 BILLION B.C.: So tell me again about these ideas you have about the future.

FUTURIST BACTERIUM, 2 BILLION B.C.: Well, I see bacteria getting together into societies, with the whole band of cells basically acting like one big complicated organism with greatly enhanced capabilities.

FRIEND OF FUTURIST BACTERIUM: What gives you that idea?

FUTURIST BACTERIUM: Well already, some of our fellow Daptobacters have gone inside other larger bacteria to form a little duo. It's inevitable that our fellow cells will band together so that each cell can specialize its function. As it is now, we each have to do everything by ourselves: find food, digest it, excrete by-products.

FRIEND OF FUTURIST BACTERIUM: And then what?

FUTURIST BACTERIUM: All these cells will develop ways of communicating with one another that go beyond just the swapping of chemical gradients that you and I can do.

FRIEND OF FUTURIST BACTERIUM: Okay, now tell me again the part about that future superassembly of ten trillion cells.

FUTURIST BACTERIUM: Yes, well, according to my models, in about two billion years a big society of ten trillion cells will make up a single organism and include tens of billions of special cells that can communicate with one another in very complicated patterns.

FRIEND OF FUTURIST BACTERIUM: What sort of patterns?

FUTURIST BACTERIUM: Well, "music," for one thing. These huge bands of cells will create musical patterns and communicate them to all the other bands of cells.

FRIEND OF FUTURIST BACTERIUM: Music?

FUTURIST BACTERIUM: Yes, patterns of sound.