
But on the whole, animals do not have a well-developed sense of the distant past or future. Apparently, there is no tomorrow in the animal kingdom. We have no evidence that they can think days into the future. (Animals will store food in preparation for the winter, but this is largely genetic: they have been programmed by their genes to react to plunging temperatures by seeking out food.) Humans, however, have a very well-developed sense of the future and continually make plans. We constantly run simulations of reality in our heads. In fact, we can contemplate plans far beyond our own lifetimes. We judge other humans by their ability to predict evolving situations and formulate concrete strategies. An important part of leadership is to anticipate future situations, weigh possible outcomes, and set concrete goals accordingly.

In other words, this form of consciousness involves predicting the future, that is, creating multiple models that approximate future events. This requires a very sophisticated understanding of common sense and the rules of nature. It means that you ask yourself "what if" repeatedly. Whether planning to rob a bank or run for president, this kind of planning means being able to run multiple simulations of possible realities in your head.

All indications are that only humans have mastered this art in nature.

We also see this when psychological profiles of test subjects are analyzed. Psychologists often compare the psychological profiles of adults to their profiles when they were children. Then one asks the question: What is the one quality that predicted their success in marriage, careers, wealth, etc.? When one compensates for socioeconomic factors, one finds that one characteristic sometimes stands out from all the others: the ability to delay gratification. According to the long-term studies of Walter Mischel of Columbia University, and many others, children who were able to refrain from immediate gratification (e.g., eating a marshmallow given to them) and held out for greater long-term rewards (getting two marshmallows instead of one) consistently scored higher on almost every measure of future success, in SATs, life, love, and career.

But being able to defer gratification also reflects a higher level of awareness and consciousness. These children were able to simulate the future and realize that future rewards were greater. So being able to see the future consequences of our actions requires a higher level of awareness.

AI researchers, therefore, should aim to create a robot with all three characteristics. The first is hard to achieve, since robots can sense their environment but cannot make sense of it. Self-awareness is easier to achieve. But planning for the future requires common sense, an intuitive understanding of what is possible, and concrete strategies for reaching specific goals.

So we see that common sense is a prerequisite for the highest level of consciousness. In order for a robot to simulate reality and predict the future, it must first master millions of commonsense rules about the world around it. But common sense is not enough. Common sense is just the "rules of the game," rather than the rules of strategy and planning.

On this scale, we can then rank all the various robots that have been created.

We see that Deep Blue, the chess-playing machine, would rank very low. It can beat the world champion in chess, but it cannot do anything else. It is able to run a simulation of reality, but only for playing chess. It is incapable of running simulations of any other reality. This is true for many of the world's largest computers. They excel at simulating the reality of one object, for example, modeling a nuclear detonation, the wind patterns around a jet airplane, or the weather. These computers can run simulations of reality much better than a human. But they are also pitifully one-dimensional, and hence useless at surviving in the real world.

Today, AI researchers are clueless about how to duplicate all these processes in a robot. Most throw up their hands and say that somehow huge networks of computers will show "emergent phenomena" in the same way that order sometimes spontaneously coalesces from chaos. When asked precisely how these emergent phenomena will create consciousness, most roll their eyes to the heavens.

Although we do not know how to create a robot with consciousness, we can imagine what a robot would look like that is more advanced than us, given this framework for measuring consciousness.

They would excel in the third characteristic: they would be able to run complex simulations of the future far ahead of us, from more perspectives, with more details and depth. Their simulations would be more accurate than ours, because they would have a better grasp of common sense and the rules of nature and hence would be better able to ferret out patterns. They would be able to anticipate problems that we might ignore or not be aware of. Moreover, they would be able to set their own goals. If their goals include helping the human race, then everything is fine. But if one day they formulate goals in which humans are in the way, this could have nasty consequences.

But this raises the next question: What happens to humans in this scenario?

WHEN ROBOTS EXCEED HUMANS.

In one scenario, we puny humans are simply pushed aside as a relic of evolution. It is a law of evolution that fitter species arise to displace unfit species, and perhaps humans will be lost in the shuffle, eventually winding up in zoos where our robotic creations come to stare at us. Perhaps that is our destiny: to give birth to superrobots that treat us as an embarrassingly primitive footnote in their evolution. Perhaps that is our role in history, to give birth to our evolutionary successors. In this view, our role is to get out of their way.

Douglas Hofstadter confided to me that this might be the natural order of things, but we should treat these superintelligent robots as we do our children, because that is what they are, in some sense. If we can care for our children, he said to me, then why can't we also care about intelligent robots, which are also our children?

Hans Moravec contemplates how we may feel being left in the dust by our robots: "...life may seem pointless if we are fated to spend it staring stupidly at our ultraintelligent progeny as they try to describe their ever more spectacular discoveries in baby talk that we can understand."

When we finally hit the fateful day when robots are smarter than us, not only will we no longer be the most intelligent beings on earth, but our creations may make copies of themselves that are even smarter than they are. This army of self-replicating robots will then create endless future generations of robots, each one smarter than the previous one. Since robots can theoretically produce ever-smarter generations of robots in a very short period of time, eventually this process will explode exponentially, until they begin to devour the resources of the planet in their insatiable quest to become ever more intelligent.

In one scenario, this ravenous appetite for ever-increasing intelligence will eventually ravage the resources of the entire planet, so the entire earth becomes a computer. Some envision these superintelligent robots then shooting out into space to continue their quest for more intelligence, until they reach other planets, stars, and galaxies in order to convert them into computers. But since the planets, stars, and galaxies are so incredibly far away, perhaps the computer may alter the laws of physics so its ravenous appetite can race faster than the speed of light to consume whole star systems and galaxies. Some even believe it might consume the entire universe, so that the universe becomes intelligent.

This is the "singularity." The word originally came from the world of relativistic physics, my personal specialty, where a singularity represents a point of infinite gravity, from which nothing can escape, such as a black hole. Because light itself cannot escape, it is a horizon beyond which we cannot see.

The idea of an AI singularity was first mentioned in 1958, in a conversation between two mathematicians, Stanislaw Ulam (who made the key breakthrough in the design of the hydrogen bomb) and John von Neumann. Ulam wrote, "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the human race beyond which human affairs, as we know them, could not continue." Versions of the idea have been kicking around for decades. But it was then amplified and popularized by science fiction writer and mathematician Vernor Vinge in his novels and essays.

But this leaves the crucial question unanswered: When will the singularity take place? Within our lifetimes? Perhaps in the next century? Or never? We recall that the participants at the 2009 Asilomar conference put the date at anywhere between 20 and 1,000 years into the future.

One man who has become the spokesperson for the singularity is inventor and best-selling author Ray Kurzweil, who has a penchant for making predictions based on the exponential growth of technology. Kurzweil once told me that when he gazes at the distant stars at night, perhaps one should be able to see some cosmic evidence of the singularity happening in some distant galaxy. With the ability to devour or rearrange whole star systems, there should be some footprint left behind by this rapidly expanding singularity. (His detractors say that he is whipping up a near-religious fervor around the singularity. However, his supporters say that he has an uncanny ability to correctly see into the future, judging by his track record.) Kurzweil cut his teeth on the computer revolution by starting up companies in diverse fields involving pattern recognition, such as speech recognition technology, optical character recognition, and electronic keyboard instruments. In 1999, he wrote a best seller, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which predicted when robots will surpass us in intelligence. In 2005, he wrote The Singularity Is Near and elaborated on those predictions. The fateful day when computers surpass human intelligence will come in stages.

By 2019, he predicts, a $1,000 personal computer will have as much raw power as a human brain. Soon after that, computers will leave us in the dust. By 2029, a $1,000 personal computer will be 1,000 times more powerful than a human brain. By 2045, a $1,000 computer will be a billion times more intelligent than every human combined. Even small computers will surpass the ability of the entire human race.
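The arithmetic implied by these dates can be sketched in a few lines. This is my own back-of-the-envelope illustration, not Kurzweil's model: a 1,000-fold increase per decade corresponds to raw power doubling roughly once a year, since 2^10 = 1024.

```python
# Back-of-the-envelope sketch of the growth rate implied by the predictions.
# Assumption (mine, not Kurzweil's): raw power doubles steadily once a year,
# which gives roughly a 1,000-fold increase per decade (2**10 = 1024).
def power_multiplier(years, doublings_per_year=1.0):
    """Raw-power multiplier after `years` of steady exponential doubling."""
    return 2 ** (years * doublings_per_year)

print(round(power_multiplier(10)))    # one decade: 1024, i.e. ~1,000-fold
print(f"{power_multiplier(26):.1e}")  # 2019 to 2045: roughly 6.7e7-fold
```

The point of the sketch is simply that steady doubling, continued for a few decades, compounds into the astronomical multipliers the predictions describe.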

After 2045, computers become so advanced that they make copies of themselves that are ever increasing in intelligence, creating a runaway singularity. To satisfy their never-ending, ravenous appetite for computer power, they will begin to devour the earth, asteroids, planets, and stars, and even affect the cosmological history of the universe itself.

I had the chance to visit Kurzweil in his office outside Boston. Walking through the corridor, you see the awards and honors he has received, as well as some of the musical instruments he has designed, which are used by top musicians, such as Stevie Wonder. He explained to me that there was a turning point in his life. It came when he was unexpectedly diagnosed with type II diabetes when he was thirty-five. Suddenly, he was faced with the grim reality that he would not live long enough to see his predictions come true. His body, after years of neglect, had aged beyond his years. Rattled by this diagnosis, he now attacked the problem of personal health with the same enthusiasm and energy he used for the computer revolution. (Today, he consumes more than 100 pills a day and has written books on the revolution in longevity. He expects that the revolution in microscopic robots will be able to clean out and repair the human body so that it can live forever. His philosophy is that he would like to live long enough to see the medical breakthroughs that can prolong our life spans indefinitely. In other words, he wants to live long enough to live forever.) Recently, he embarked on an ambitious plan to launch the Singularity University, based in the NASA Ames laboratory in the Bay Area, which trains a cadre of scientists to prepare for the coming singularity.

There are many variations and combinations of these various themes.

Kurzweil himself believes, "It's not going to be an invasion of intelligent machines coming over the horizon. We're going to merge with this technology.... We're going to put these intelligent devices in our bodies and brains to make us live longer and healthier."

Any idea as controversial as the singularity is bound to unleash a backlash. Mitch Kapor, founder of Lotus Development Corporation, says that the singularity is "intelligent design for the IQ 140 people.... This proposition that we're heading to this point at which everything is going to be just unimaginably different; it's fundamentally, in my view, driven by a religious impulse. And all the frantic arm-waving can't obscure that fact for me."

Douglas Hofstadter has said, ”It's as if you took a lot of good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad. It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid.”

No one knows how this will play out. But I think the most likely scenario is the following.

MOST LIKELY SCENARIO: FRIENDLY AI.

First, scientists will probably take simple measures to ensure that robots are not dangerous. At the very least, scientists can put a chip in robot brains to automatically shut them off if they have murderous thoughts. In this approach, all intelligent robots will be equipped with a fail-safe mechanism that can be switched on by a human at any time, especially when a robot exhibits errant behavior. At the slightest hint that a robot is malfunctioning, any voice command will immediately shut it down.
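The fail-safe idea described above can be sketched in a few lines of code. This is a toy illustration of my own (the class and method names are invented, and no real robot control system works this simply): every action first consults a human-controlled shutdown flag.

```python
# Toy fail-safe: every action first checks a human-controlled kill switch.
# Illustrative sketch only; names and structure are my own invention.
class FailSafeRobot:
    def __init__(self):
        self.shut_down = False  # can be flipped by a human at any time

    def human_shutdown_command(self):
        """Any human shutdown command immediately halts the robot."""
        self.shut_down = True

    def act(self, task):
        if self.shut_down:
            return "halted"  # refuse all further tasks once switched off
        return f"performing {task}"

bot = FailSafeRobot()
print(bot.act("tidy the lab"))   # performing tidy the lab
bot.human_shutdown_command()
print(bot.act("tidy the lab"))   # halted
```

The design choice matters: the switch is checked before every action and is controlled from outside the robot's own decision loop, which is the essence of the fail-safe proposal.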

Or specialized hunter robots may also be created whose duty is to neutralize deviant robots. These robot hunters will be specifically designed to have superior speed, strength, and coordination in order to capture errant robots. They will be designed to understand the weak points of any robotic system and how robots behave under certain conditions. Humans can also be trained in this skill. In the movie Blade Runner, a specially trained cadre of agents, including one played by Harrison Ford, is skilled in the techniques necessary to neutralize any rogue robot.

Since it will take many decades of hard work for robots to slowly go up the evolutionary scale, there will not be a sudden moment when humanity is caught off guard and we are all shepherded into zoos like cattle. Consciousness, as I see it, is a process that can be ranked on a scale, rather than being a sudden evolutionary event, and it will take many decades for robots to ascend this scale of consciousness. After all, it took Mother Nature millions of years to develop human consciousness. So humans will not be caught off guard one day when the Internet unexpectedly "wakes up" or robots suddenly begin to plan for themselves.

This is the option preferred by science fiction writer Isaac Asimov, who envisioned each robot hardwired in the factory with three laws to prevent it from getting out of control. He devised his famous three laws of robotics to prevent robots from hurting themselves or humans. (Basically, the three laws state that robots cannot harm humans, they must obey humans, and they must protect themselves, in that order.) Even with Asimov's three laws, there are problems when contradictions arise among them. For example, if one creates a benevolent robot, what happens if humanity makes self-destructive choices that endanger the human race? Then a friendly robot may feel that it has to seize control of the government to prevent humanity from harming itself. This was the problem faced by Will Smith in the movie version of I, Robot, when the central computer decides that "some humans must be sacrificed and some freedoms must be surrendered" in order to save humanity. To prevent a robot from enslaving us in order to save us, some have advocated that we must add a zeroth law of robotics: robots cannot harm or enslave the human race.

But many scientists are leaning toward something called "friendly AI," where we design our robots to be benign from the very beginning. Since we are the creators of these robots, we will design them, from the very start, to perform only useful and benevolent tasks.

The term "friendly AI" was coined by Eliezer Yudkowsky, a founder of the Singularity Institute for Artificial Intelligence. Friendly AI is a bit different from Asimov's laws, which are forced upon robots, perhaps against their will. (Asimov's laws, imposed from the outside, could actually invite the robots to devise clever ways to circumvent them.) In friendly AI, by contrast, robots are free to murder and commit mayhem. There are no rules that enforce an artificial morality. Rather, these robots are designed from the very beginning to desire to help humans rather than destroy them. They choose to be benevolent.
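To make the contrast concrete, Asimov-style laws imposed from the outside amount to a hard-coded priority check. This toy sketch is my own illustration (the flag names are invented, and real systems would need far richer world models); it shows only the strict ordering, in which a lower-numbered law always overrides a higher-numbered one:

```python
# Toy encoding of Asimov's three laws as an externally imposed priority check.
# Illustrative only; flag names are invented for this sketch.
def permitted(action):
    """Return True if `action` (a dict of boolean flags) violates no law.

    First Law:  a robot may not harm a human (always wins).
    Second Law: a robot must obey orders, unless obeying would harm a human.
    Third Law:  a robot must protect itself, unless that conflicts with 1 or 2.
    """
    if action.get("harms_human"):
        return False  # First Law: absolute prohibition
    if action.get("disobeys_order"):
        # Second Law: disobedience is permitted only to uphold the First Law
        return action.get("order_would_harm_human", False)
    if action.get("self_destructive"):
        # Third Law: self-harm is permitted only in service of Laws 1 or 2
        return action.get("required_by_order", False)
    return True

# A robot may refuse an order that would harm a human:
print(permitted({"disobeys_order": True, "order_would_harm_human": True}))  # True
# But it may never harm a human, whatever the justification:
print(permitted({"harms_human": True, "order_would_harm_human": True}))     # False
```

A friendly-AI robot, by contrast, would have no such external checklist at all; its benevolence would come from its own goals rather than from a filter applied to its actions.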

This has given rise to a new field called "social robotics," which aims to give robots the qualities that will help them integrate into human society. Scientists at Hanson Robotics, for example, have stated that one mission for their research is to design robots that "will evolve into socially intelligent beings, capable of love and earning a place in the extended human family."

But one problem with all these approaches is that the military is by far the largest funder of AI systems, and these military robots are specifically designed to hunt, track, and kill humans. One can easily imagine future robotic soldiers whose missions are to identify enemy humans and eliminate them with unerring efficiency. One would then have to take extraordinary precautions to guarantee that the robots don't turn against their masters as well. Predator drone aircraft, for example, are run by remote control, so there are humans constantly directing their movements, but one day these drones may be autonomous, able to select and take out their own targets at will. A malfunction in such an autonomous plane could lead to disastrous consequences.

In the future, however, more and more funding for robots will come from the civilian commercial sector, especially from Japan, where robots are designed to help rather than destroy. If this trend continues, then perhaps friendly AI could become a reality. In this scenario, it is the consumer sector and market forces that will eventually dominate robotics, so that there will be a vast commercial interest in investing in friendly AI.

MERGING WITH ROBOTS.

In addition to friendly AI, there is also another option: merging with our creations. Instead of simply waiting for robots to surpa.s.s us in intelligence and power, we should try to enhance ourselves, becoming superhuman in the process. Most likely, I believe, the future will proceed with a combination of these two goals, i.e., building friendly AI and also enhancing ourselves.

This is an option being explored by Rodney Brooks, former director of the famed MIT Artificial Intelligence Laboratory. He has been a maverick, overturning cherished but ossified ideas and injecting innovation into the field. When he entered the field, the top-down approach was dominant in most universities. But the field was stagnating. Brooks raised a few eyebrows when he called for creating an army of insectlike robots that learned via the bottom-up approach by bumping into obstacles. He did not want to create another dumb, lumbering robot that took hours to walk across the room. Instead, he built nimble "insectoids" or "bugbots" that had almost no programming at all but would quickly learn to walk and navigate around obstacles by trial and error. He envisioned the day that his robots would explore the solar system, bumping into things along the way. It was an outlandish idea, proposed in his essay "Fast, Cheap, and Out of Control," but his approach eventually led to an array of new avenues. One by-product of his idea is the Mars Rovers now scurrying over the surface of the Red Planet. Not surprisingly, he was also the chairman of iRobot, the company that markets buglike vacuum cleaners to households across the country.

One problem, he feels, is that workers in artificial intelligence follow fads, adopting the paradigm of the moment, rather than thinking in fresh ways. For example, he recalls, "When I was a kid, I had a book that described the brain as a telephone-switching network. Earlier books described it as a hydrodynamic system or a steam engine. Then in the 1960s, it became a digital computer. In the 1980s, it became a massively parallel digital computer. Probably there's a kid's book out there somewhere that says the brain is just like the World Wide Web...."