RAY: The problem here has a lot to do with the word "machine." Your conception of a machine is of something that is much less valued - less complex, less creative, less intelligent, less knowledgeable, less subtle and supple - than a human. That's reasonable for today's machines, because all the machines we've ever met - like cars - are like this. The whole point of my thesis, of the coming Singularity revolution, is that this notion of a machine - of nonbiological intelligence - will fundamentally change.
BILL: Well, that's exactly my problem. Part of our humanness is our limitations. We don't claim to be the fastest entity possible, to have memories with the biggest capacity possible, and so on. But there is an indefinable, spiritual quality to being human that a machine inherently doesn't possess.
RAY: Again, where do you draw the line? Humans are already replacing parts of their bodies and brains with nonbiological replacements that work better at performing their "human" functions.
BILL: Better only in the sense of replacing diseased or disabled organs and systems. But you're replacing essentially all of our humanness to enhance human ability, and that's inherently inhuman.
RAY: Then perhaps our basic disagreement is over the nature of being human. To me, the essence of being human is not our limitations - although we do have many - it's our ability to reach beyond our limitations. We didn't stay on the ground. We didn't even stay on the planet. And we are already not settling for the limitations of our biology.
BILL: We have to use these technological powers with great discretion. Past a certain point, we're losing some ineffable quality that gives life meaning.
RAY: I think we're in agreement that we need to recognize what's important in our humanity. But there is no reason to celebrate our limitations.
. . . on the Human Brain
Is all that we see or seem but a dream within a dream? -EDGAR ALLAN POE

The computer programmer is a creator of universes for which he alone is the lawgiver. No playwright, no stage director, no emperor, however powerful, has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops. -JOSEPH WEIZENBAUM

One windy day two monks were arguing about a flapping banner. The first said, "I say the banner is moving, not the wind." The second said, "I say the wind is moving, not the banner." A third monk passed by and said, "The wind is not moving. The banner is not moving. Your minds are moving." -ZEN PARABLE

Suppose someone were to say, "Imagine this butterfly exactly as it is, but ugly instead of beautiful." -LUDWIG WITTGENSTEIN
The 2010 Scenario. Computers arriving at the beginning of the next decade will become essentially invisible: woven into our clothing, embedded in our furniture and environment. They will tap into the worldwide mesh (what the World Wide Web will become once all of its linked devices become communicating Web servers, thereby forming vast supercomputers and memory banks) of high-speed communications and computational resources. We'll have very high-bandwidth, wireless communication to the Internet at all times. Displays will be built into our eyeglasses and contact lenses, and images will be projected directly onto our retinas. The Department of Defense is already using technology along these lines to create virtual-reality environments in which to train soldiers.[27] An impressive immersive virtual-reality system already demonstrated by the army's Institute for Creative Technologies includes virtual humans that respond appropriately to the user's actions.
Similar tiny devices will project auditory environments. Cell phones are already being introduced in clothing that projects sound to the ears.[28] And there's an MP3 player that vibrates your skull to play music that only you can hear.[29] The army has also pioneered transmitting sound through the skull from a soldier's helmet.
There are also systems that can project sound from a distance that only a specific person can hear, a technology that was dramatized by the personalized talking street ads in the movie Minority Report. The Hypersonic Sound and Audio Spotlight systems achieve this by modulating the sound on ultrasonic beams, which can be precisely aimed. Sound is generated by the beams interacting with air, which restores sound in the audible range. By focusing multiple sets of beams on a wall or other surface, a new kind of personalized surround sound without speakers is also possible.[30] These resources will provide high-resolution, full-immersion visual-auditory virtual reality at any time. We will also have augmented reality, with displays overlaying the real world to provide real-time guidance and explanations. For example, your retinal display might remind you, "That's Dr. John Smith, director of the ABC Institute - you last saw him six months ago at the XYZ conference," or, "That's the Time-Life Building - your meeting is on the tenth floor."
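The mechanism behind these ultrasonic-beam systems can be sketched numerically: audio is amplitude-modulated onto an ultrasonic carrier, and the nonlinear response of air demodulates it back into the audible range. The sketch below models that nonlinearity crudely as a squaring operation; all frequencies and parameters are illustrative, not taken from the actual products.

```python
import numpy as np

# Illustrative model of the parametric-array principle: a 1 kHz audible
# tone is amplitude-modulated onto a 40 kHz ultrasonic carrier, and the
# nonlinearity of air (crudely modeled by squaring) recovers the audio.
fs = 200_000                                   # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
audio = np.sin(2 * np.pi * 1_000 * t)          # 1 kHz audible tone
carrier = np.sin(2 * np.pi * 40_000 * t)       # 40 kHz ultrasonic carrier
beam = (1 + 0.5 * audio) * carrier             # AM-modulated ultrasonic beam

demod = beam ** 2                              # crude model of air's nonlinearity

# The demodulated signal's strongest audible component is the original tone.
spectrum = np.abs(np.fft.rfft(demod))
freqs = np.fft.rfftfreq(len(demod), 1 / fs)
audible = (freqs > 20) & (freqs < 20_000)
peak_hz = freqs[audible][np.argmax(spectrum[audible])]
print(round(peak_hz))  # -> 1000
```

Because the carrier and its sidebands sit far above hearing range, only the demodulated audio is perceived, and it inherits the tight directionality of the ultrasonic beam.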
We'll have real-time translation of foreign languages, essentially subtitles on the world, and access to many forms of online information integrated into our daily activities. Virtual personalities that overlay the real world will help us with information retrieval and our chores and transactions. These virtual assistants won't always wait for questions and directives but will step forward if they see us struggling to find a piece of information. (As we wonder about "that actress ... who played the princess, or was it the queen ... in that movie with the robot," our virtual assistant may whisper in our ear or display in our visual field of view: "Natalie Portman as Queen Amidala in Star Wars, episodes 1, 2, and 3.")
The 2030 Scenario. Nanobot technology will provide fully immersive, totally convincing virtual reality. Nanobots will take up positions in close physical proximity to every interneuronal connection coming from our senses. We already have the technology for electronic devices to communicate with neurons in both directions, yet requiring no direct physical contact with the neurons. For example, scientists at the Max Planck Institute have developed "neuron transistors" that can detect the firing of a nearby neuron, or alternatively can cause a nearby neuron to fire or suppress it from firing.[31] This amounts to two-way communication between neurons and the electronic-based neuron transistors. As mentioned above, quantum dots have also shown the ability to provide noninvasive communication between neurons and electronics.[32] If we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing.
If we want to enter virtual reality, they suppress all of the inputs coming from our actual senses and replace them with the signals that would be appropriate for the virtual environment.[33] Your brain experiences these signals as if they came from your physical body. After all, the brain does not experience the body directly. As I discussed in chapter 4, inputs from the body - comprising a few hundred megabits per second, representing information about touch, temperature, acid levels, the movement of food, and other physical events - stream through the Lamina 1 neurons, then through the posterior ventromedial nucleus, ending up in the two insula regions of cortex. If these signals are coded correctly - and we will know how to do that from the brain reverse-engineering effort - your brain will experience the synthetic signals just as it would real ones. You could decide to cause your muscles and limbs to move as you normally would, but the nanobots would intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move, appropriately adjusting your vestibular system and providing the appropriate movement and reorientation in the virtual environment.
The Web will provide a panoply of virtual environments to explore. Some will be re-creations of real places; others will be fanciful environments that have no counterpart in the physical world. Some, indeed, would be impossible, perhaps because they violate the laws of physics. We will be able to visit these virtual places and have any kind of interaction with other real, as well as simulated, people (of course, ultimately there won't be a clear distinction between the two), ranging from business negotiations to sensual encounters. ”Virtual-reality environment designer” will be a new job description and a new art form.
Become Someone Else. In virtual reality we won't be restricted to a single personality, since we will be able to change our appearance and effectively become other people. Without altering our physical body (in real reality) we will be able to readily transform our projected body in these three-dimensional virtual environments. We can select different bodies at the same time for different people. So your parents may see you as one person, while your girlfriend will experience you as another. However, the other person may choose to override your selections, preferring to see you differently than the body you have chosen for yourself. You could pick different body projections for different people: Ben Franklin for a wise uncle, a clown for an annoying coworker. Romantic couples can choose whom they wish to be, even to become each other. These are all easily changeable decisions.
I had the opportunity to experience what it is like to project myself as another persona in a virtual-reality demonstration at the 2001 TED (technology, entertainment, design) conference in Monterey. By means of magnetic sensors in my clothing a computer was able to track all of my movements. With ultrahigh-speed animation the computer created a life-size, near photorealistic image of a young woman - Ramona - who followed my movements in real time. Using signal-processing technology, my voice was transformed into a woman's voice and also controlled the movements of Ramona's lips. So it appeared to the TED audience as if Ramona herself were giving the presentation.[34] To make the concept understandable, the audience could see me and see Ramona at the same time, both moving simultaneously in exactly the same way. A band came onstage, and I - Ramona - performed Jefferson Airplane's "White Rabbit," as well as an original song. My daughter, then fourteen, also equipped with magnetic sensors, joined me, and her dance movements were transformed into those of a male backup dancer - who happened to be a virtual Richard Saul Wurman, the impresario of the TED conference. The hit of the presentation was seeing Wurman - not known for his hip-hop moves - convincingly doing my daughter's dance steps. Present in the audience was the creative leadership of Warner Bros., who then went off and created the movie Simone, in which the character played by Al Pacino transforms himself into Simone in essentially the same way.
The experience was a profound and moving one for me. When I looked in the "cybermirror" (a display showing me what the audience was seeing), I saw myself as Ramona rather than the person I usually see in the mirror. I experienced the emotional force - and not just the intellectual idea - of transforming myself into someone else.
People's identities are frequently closely tied to their bodies ("I'm a person with a big nose," "I'm skinny," "I'm a big guy," and so on). I found the opportunity to become a different person liberating. All of us have a variety of personalities that we are capable of conveying but generally suppress, since we have no readily available means of expressing them. Today we have very limited technologies available - such as fashion, makeup, and hairstyle - to change who we are for different relationships and occasions, but our palette of personalities will greatly expand in future full-immersion virtual-reality environments.
In addition to encompassing all of the senses, these shared environments can include emotional overlays. Nanobots will be capable of generating the neurological correlates of emotions, sexual pleasure, and other derivatives of our sensory experience and mental reactions. Experiments during open brain surgery have demonstrated that stimulating certain specific points in the brain can trigger emotional experiences (for example, the girl who found everything funny when stimulated in a particular spot of her brain, as I reported in The Age of Spiritual Machines).[35] Some emotions and secondary reactions involve a pattern of activity in the brain rather than the stimulation of a specific neuron, but with massively distributed nanobots, stimulating these patterns will also be feasible.
Experience Beamers. "Experience beamers" will send the entire flow of their sensory experiences, as well as the neurological correlates of their emotional reactions, out onto the Web, just as people today beam their bedroom images from their Web cams. A popular pastime will be to plug into someone else's sensory-emotional beam and experience what it's like to be that person, a la the premise of the movie Being John Malkovich. There will also be a vast selection of archived experiences to choose from, with virtual-experience design another new art form.
Expand Your Mind. The most important application of circa-2030 nanobots will be literally to expand our minds through the merger of biological and nonbiological intelligence. The first stage will be to augment our hundred trillion very slow interneuronal connections with high-speed virtual connections via nanorobot communication.[36] This will provide us with the opportunity to greatly boost our pattern-recognition abilities, memories, and overall thinking capacity, as well as to directly interface with powerful forms of nonbiological intelligence. The technology will also provide wireless communication from one brain to another.
It is important to point out that well before the end of the first half of the twenty-first century, thinking via nonbiological substrates will predominate. As I reviewed in chapter 3, biological human thinking is limited to 10^16 calculations per second (cps) per human brain (based on neuromorphic modeling of brain regions) and about 10^26 cps for all human brains. These figures will not appreciably change, even with bioengineering adjustments to our genome. The processing capacity of nonbiological intelligence, in contrast, is growing at an exponential rate (with the rate itself increasing) and will vastly exceed biological intelligence by the mid-2040s.
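The arithmetic behind this crossover claim is straightforward to sketch. The sketch below uses the chapter's figures (10^16 cps per brain, and about 10^26 cps for all brains, implying roughly 10^10 brains); the 2010 baseline of 10^15 cps and the one-year doubling time are illustrative assumptions of mine, not figures from the text, chosen only to show how a fixed biological total is overtaken by exponential growth.

```python
# Fixed biological capacity from the chapter: ~10^16 cps per brain
# times ~10^10 brains, i.e. ~10^26 cps for all human brains.
BIO_CPS_ALL_BRAINS = 1e16 * 1e10

def nonbio_cps(year, base_year=2010, base_cps=1e15, doubling_years=1.0):
    """Extrapolated nonbiological capacity (cps) under assumed doubling."""
    return base_cps * 2 ** ((year - base_year) / doubling_years)

def crossover_year():
    """First year nonbiological capacity exceeds all biological thinking."""
    year = 2010
    while nonbio_cps(year) < BIO_CPS_ALL_BRAINS:
        year += 1
    return year

print(crossover_year())  # -> 2047 under these illustrative assumptions
```

With these assumptions the crossover needs about 37 doublings (2^37 is roughly 1.4 x 10^11, covering the eleven-orders-of-magnitude gap), landing in the mid-2040s; faster doubling, as the text's accelerating-growth argument implies, would pull the date earlier.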
By that time we will have moved beyond just the paradigm of nan.o.bots in a biological brain. Nonbiological intelligence will be billions of times more powerful, so it will predominate. We will have version 3.0 human bodies, which we will be able to modify and reinstantiate into new forms at will. We will be able to quickly change our bodies in full-immersion visual-auditory virtual environments in the second decade of this century; in full-immersion virtual-reality environments incorporating all of the senses during the 2020s; and in real reality in the 2040s.
Nonbiological intelligence should still be considered human, since it is fully derived from human-machine civilization and will be based, at least in part, on reverse engineering human intelligence. I address this important philosophical issue in the next chapter. The merger of these two worlds of intelligence is not merely a merger of biological and nonbiological thinking mediums, but more important, one of method and organization of thinking, one that will be able to expand our minds in virtually any imaginable way.
Our brains today are relatively fixed in design. Although we do add patterns of interneuronal connections and neurotransmitter concentrations as a normal part of the learning process, the current overall capacity of the human brain is highly constrained. As the nonbiological portion of our thinking begins to predominate by the end of the 2030s, we will be able to move beyond the basic architecture of the brain's neural regions. Brain implants based on massively distributed intelligent nanobots will greatly expand our memories and otherwise vastly improve all of our sensory, pattern-recognition, and cognitive abilities. Since the nanobots will be communicating with one another, they will be able to create any set of new neural connections, break existing connections (by suppressing neural firing), create new hybrid biological-nonbiological networks, and add completely nonbiological networks, as well as interface intimately with new nonbiological forms of intelligence.
The use of nanobots as brain extenders will be a significant improvement over surgically installed neural implants, which are beginning to be used today. Nanobots will be introduced without surgery, through the bloodstream, and if necessary can all be directed to leave, so the process is easily reversible. They are programmable, in that they can provide virtual reality one minute and a variety of brain extensions the next. They can change their configuration and can alter their software. Perhaps most important, they are massively distributed and therefore can take up billions of positions throughout the brain, whereas a surgically introduced neural implant can be placed only in one or at most a few locations.
MOLLY 2004: Full-immersion virtual reality doesn't seem very inviting. I mean, all those nanobots running around in my head, like little bugs.
RAY: Oh, you won't feel them, any more than you feel the neurons in your head or the bacteria in your GI tract.
MOLLY 2004: Actually, that I can feel. But I can have full immersion with my friends right now, just by, you know, getting together physically.
SIGMUND FREUD: Hmmm, that's what they used to say about the telephone when I was young. People would say, "Who needs to talk to someone hundreds of miles away when you can just get together?"
RAY: Exactly, the telephone is auditory virtual reality. So full-immersion VR is, basically, a full-body telephone. You can get together with anyone anytime but do more than just talk.
GEORGE 2048: It's certainly been a boon for sex workers; they never have to leave their homes. It became so impossible to draw any meaningful lines that the authorities had no choice but to legalize virtual prostitution in 2033.
MOLLY 2004: Very interesting but actually not very appealing.