Part 18 (1/2)

Another refinement of the Dyson concept is that the heat radiated by one shell could be captured and used by a parallel shell that is placed at a position farther from the sun. Computer scientist Robert Bradbury points out that there could be any number of such layers and proposes a computer aptly called a "Matrioshka brain," organized as a series of nested shells around the sun or another star. One such conceptual design analyzed by Sandberg is called Uranos, which is designed to use 1 percent of the nonhydrogen, nonhelium mass in the solar system (not including the sun), or about 10^24 kilograms, a bit smaller than Zeus.[77] Uranos provides about 10^39 computational nodes, an estimated 10^51 cps of computation, and about 10^52 bits of storage.
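To put Sandberg's figures in perspective, a short back-of-the-envelope script can derive the implied per-node resources. The exponents are the ones quoted above; the per-node numbers are simple divisions, not independent estimates.

```python
# Back-of-the-envelope check of the Uranos figures quoted in the text.
# Exponents (10^39 nodes, 10^51 cps, 10^52 bits) are from Sandberg's
# estimates as cited; only the divisions are done here.

nodes = 10**39          # computational nodes
total_cps = 10**51      # calculations per second, whole system
total_bits = 10**52     # bits of storage, whole system

cps_per_node = total_cps // nodes    # exact integer division: 10^12
bits_per_node = total_bits // nodes  # 10^13

# Report as powers of ten (number of digits minus one).
print(f"cps per node:  10^{len(str(cps_per_node)) - 1}")
print(f"bits per node: 10^{len(str(bits_per_node)) - 1}")
```

So each node would run at roughly 10^12 cps with 10^13 bits of local storage, comparable in raw speed to a present-day high-end chip but multiplied by 10^39 copies.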

Computation is already a widely distributed, rather than centralized, resource, and my expectation is that the trend will continue toward greater decentralization. However, as our civilization approaches the densities of computation envisioned above, the distribution of the vast number of processors is likely to have characteristics of these conceptual designs. For example, the idea of Matrioshka shells would take maximal advantage of solar power and heat dissipation. Note that the computational powers of these solar system-scale computers will be achieved, according to my projections in chapter 2, around the end of this century.

Bigger or Smaller. Given that the computational capacity of our solar system is in the range of 10^70 to 10^80 cps, we will reach these limits early in the twenty-second century, according to my projections. The history of computation tells us that the power of computation expands both inward and outward. Over the last several decades we have been able to place twice as many computational elements (transistors) on each integrated circuit chip about every two years, which represents inward growth (toward greater densities of computation per kilogram of matter). But we are also expanding outward, in that the number of chips is expanding (currently) at a rate of about 8.3 percent per year.[78] It is reasonable to expect both types of growth to continue, and for the outward growth rate to increase significantly once we approach the limits of inward growth (with three-dimensional circuits).
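The two growth modes compound. A minimal sketch of the combined multiplier, using the two rates quoted above (density doubling every two years, chip count growing about 8.3 percent per year); the horizons chosen are arbitrary, for illustration only:

```python
# Compound the two growth modes described in the text:
# inward  - transistor density doubling every two years
# outward - chip count growing ~8.3 percent per year
# Rates are from the text; the horizons are illustrative.

def capacity_multiplier(years: float) -> float:
    inward = 2 ** (years / 2)     # density doublings
    outward = 1.083 ** years      # chip-count growth
    return inward * outward

for years in (10, 20, 30):
    print(f"after {years} years: ~{capacity_multiplier(years):.1e}x total capacity")
```

After a decade the combined multiplier is already about 70x, with the outward term contributing a bit more than a doubling on top of the five density doublings.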

Moreover, once we bump up against the limits of matter and energy in our solar system to support the expansion of computation, we will have no choice but to expand outward as the primary form of growth. We discussed earlier the speculation that finer scales of computation might be feasible, on the scale of subatomic particles. Such pico- or femtotechnology would permit continued growth of computation by continued shrinking of feature sizes. Even if this is feasible, however, there are likely to be major technical challenges in mastering subnanoscale computation, so the pressure to expand outward will remain.

Expanding Beyond the Solar System. Once we do expand our intelligence beyond the solar system, at what rate will this take place? The expansion will not start out at the maximum speed; it will quickly achieve a speed within a vanishingly small change from the maximum speed (the speed of light or greater). Some critics have objected to this notion, insisting that it would be very difficult to send people (or advanced organisms from any other ETI civilization) and equipment at near the speed of light without crushing them. Of course, we could avoid this problem by accelerating slowly, but another problem would be collisions with interstellar material. But again, this objection entirely misses the point of the nature of intelligence at this stage of development.

Early ideas about the spread of ETI through the galaxy and universe were based on the migration and colonization patterns from our human history and basically involved sending settlements of humans (or, in the case of other ETI civilizations, intelligent organisms) to other star systems. This would allow them to multiply through normal biological reproduction and then continue to spread in like manner from there.

But as we have seen, by late in this century nonbiological intelligence on the Earth will be many trillions of times more powerful than biological intelligence, so sending biological humans on such a mission would not make sense. The same would be true for any other ETI civilization. This is not simply a matter of biological humans sending robotic probes. Human civilization by that time will be nonbiological for all practical purposes.

These nonbiological sentries would not need to be very large and in fact would primarily comprise information. It is true, however, that just sending information would not be sufficient, for some material-based device that can have a physical impact on other star and planetary systems must be present. However, it would be sufficient for the probes to be self-replicating nanobots (note that a nanobot has nanoscale features but that the overall size of a nanobot is measured in microns).[79] We could send swarms of many trillions of them, with some of these "seeds" taking root in another planetary system and then replicating by finding the appropriate materials, such as carbon and other needed elements, and building copies of themselves.

Once established, the nanobot colony could obtain the additional information it needs to optimize its intelligence from pure information transmissions that involve only energy, not matter, and that are sent at the speed of light. Unlike large organisms such as humans, these nanobots, being extremely small, could travel at close to the speed of light. Another scenario would be to dispense with the information transmissions and embed the information needed in the nanobots' own memory. That's an engineering decision we can leave to these future superengineers.

The software files could be spread out among billions of devices. Once one or a few of them get a "foothold" by self-replicating at a destination, the now much larger system could gather up the nanobots traveling in the vicinity so that from that time on, the bulk of the nanobots sent in that direction do not simply fly by. In this way, the now established colony can gather up the information, as well as the distributed computational resources, it needs to optimize its intelligence.

The Speed of Light Revisited. In this way the maximum speed of expansion of a solar system-size intelligence (that is, a type II civilization) into the rest of the universe would be very close to the speed of light. We currently understand the maximum speed to transmit information and material objects to be the speed of light, but there are at least suggestions that this may not be an absolute limit.

We have to regard the possibility of circumventing the speed of light as speculative, and my projections of the profound changes that our civilization will undergo in this century make no such assumption. However, the potential to engineer around this limit has important implications for the speed with which we will be able to colonize the rest of the universe with our intelligence.

Recent experiments have measured the flight time of photons at nearly twice the speed of light, a result of quantum uncertainty on their position.[80] However, this result is really not useful for this analysis, because it does not actually allow information to be communicated faster than the speed of light, and we are fundamentally interested in communication speed.

Another intriguing suggestion of an action at a distance that appears to occur at speeds far greater than the speed of light is quantum disentanglement. Two particles created together may be ”quantum entangled,” meaning that while a given property (such as the phase of its spin) is not determined in either particle, the resolution of this ambiguity of the two particles will occur at the same moment. In other words, if the undetermined property is measured in one of the particles, it will also be determined as the exact same value at the same instant in the other particle, even if the two have traveled far apart. There is an appearance of some sort of communication link between the particles.

This quantum disentanglement has been measured at many times the speed of light, meaning that resolution of the state of one particle appears to resolve the state of the other particle in an amount of time that is a small fraction of the time it would take if the information were transmitted from one particle to the other at the speed of light (in theory, the time lapse is zero). For example, Dr. Nicolas Gisin of the University of Geneva sent quantum-entangled photons in opposite directions through optical fibers across Geneva. When the photons were seven miles apart, they each encountered a glass plate. Each photon had to "decide" whether to pass through or bounce off the plate (which previous experiments with non-quantum-entangled photons have shown to be a random choice). Yet because the two photons were quantum entangled, they made the same decision at the same moment. Many repetitions provided the identical result.[81]

The experiments have not absolutely ruled out the explanation of a hidden variable, that is, an unmeasurable state of each particle that is in phase (set to the same point in a cycle), so that when one particle is measured (for example, has to decide its path through or off a glass plate), the other has the same value of this internal variable. So the "choice" is generated by an identical setting of this hidden variable, rather than being the result of actual communication between the two particles. However, most quantum physicists reject this interpretation.

Yet even if we accept the interpretation of these experiments as indicating a quantum link between the two particles, the apparent communication is transmitting only randomness (profound quantum randomness) at speeds far greater than the speed of light, not predetermined information, such as the bits in a file. This communication of quantum random decisions to different points in space could have value, however, in applications such as providing encryption codes. Two different locations could receive the same random sequence, which could then be used by one location to encrypt a message and by the other to decipher it. It would not be possible for anyone else to eavesdrop on the encryption code without destroying the quantum entanglement and thereby being detected. There are already commercial encryption products incorporating this principle. This is a fortuitous application of quantum mechanics because of the possibility that another application of quantum mechanics, quantum computing, may put an end to the standard method of encryption based on factoring large numbers (which quantum computing, with a large number of entangled qubits, would be good at).

Yet another faster-than-the-speed-of-light phenomenon is the speed with which galaxies can recede from each other as a result of the expansion of the universe. If the distance between two galaxies is greater than what is called the Hubble distance, then these galaxies are receding from one another at faster than the speed of light.[82] This does not violate Einstein's special theory of relativity, because this velocity is caused by space itself expanding rather than the galaxies moving through space. However, it also doesn't help us transmit information at speeds faster than the speed of light.
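The crossover point follows directly from Hubble's law, v = H0 x d: recession speed reaches the speed of light at d = c / H0. A quick sketch, assuming a round present-day value of H0 of about 70 km/s per megaparsec (the value is my assumption for illustration, not from the text):

```python
# Hubble's law: recession velocity v = H0 * d. Galaxies farther away
# than the Hubble distance d_H = c / H0 recede faster than light.
# H0 ~ 70 km/s/Mpc is an assumed round value for illustration.

c_km_s = 299_792.458          # speed of light, km/s
H0 = 70.0                     # Hubble constant, km/s per megaparsec
LY_PER_MPC = 3.2616e6         # light-years in one megaparsec

d_hubble_mpc = c_km_s / H0                     # ~4,280 megaparsecs
d_hubble_gly = d_hubble_mpc * LY_PER_MPC / 1e9 # ~14 billion light-years

print(f"Hubble distance: ~{d_hubble_mpc:.0f} Mpc "
      f"(~{d_hubble_gly:.1f} billion light-years)")
```

With these numbers the Hubble distance comes out to roughly fourteen billion light-years, so only the most distant galaxies recede superluminally.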

Wormholes. There are two exploratory conjectures that suggest ways to circumvent the apparent limitation of the speed of light. The first is to use wormholes, folds of the universe in dimensions beyond the three visible ones. This does not really involve traveling at speeds faster than the speed of light but merely means that the topology of the universe is not the simple three-dimensional space that naive physics implies. However, if wormholes or folds in the universe are ubiquitous, perhaps these shortcuts would allow us to get everywhere quickly. Or perhaps we can even engineer them.

In 1935 Einstein and physicist Nathan Rosen formulated "Einstein-Rosen" bridges as a way of describing electrons and other particles in terms of tiny space-time tunnels.[83] In 1955 physicist John Wheeler described these tunnels as "wormholes," introducing the term for the first time.[84] His analysis of wormholes showed them to be fully consistent with the theory of general relativity, which describes space as essentially curved in another dimension.

In 1988 California Institute of Technology physicists Michael Morris, Kip Thorne, and Uri Yurtsever explained in some detail how such wormholes could be engineered.[85] Responding to a question from Carl Sagan, they described the energy requirements to keep wormholes of varying sizes open. They also pointed out that, based on quantum fluctuation, so-called empty space is continually generating tiny wormholes the size of subatomic particles. By adding energy and following other requirements of both quantum physics and general relativity (two fields that have been notoriously difficult to unify), these wormholes could be expanded to allow objects larger than subatomic particles to travel through them. Sending humans through them would not be impossible but extremely difficult. However, as I pointed out above, we really only need to send nanobots plus information, which could pass through wormholes measured in microns rather than meters.

Thorne and his Ph.D. students Morris and Yurtsever also described a method consistent with general relativity and quantum mechanics that could establish wormholes between the Earth and faraway locations. Their proposed technique involves expanding a spontaneously generated, subatomic-size wormhole to a larger size by adding energy, then stabilizing it using superconducting spheres in the two connected ”wormhole mouths.” After the wormhole is expanded and stabilized, one of its mouths (entrances) is transported to another location, while keeping its connection to the other entrance, which remains on Earth.

Thorne offered the example of moving the remote entrance via a small rocket ship to the star Vega, which is twenty-five light-years away. By traveling at very close to the speed of light, the journey, as measured by clocks on the ship, would be relatively brief. For example, if the ship traveled at 99.995 percent of the speed of light, the clocks on the ship would move ahead by only three months. Although the time for the voyage, as measured on Earth, would be around twenty-five years, the stretched wormhole would maintain the direct link between the locations as well as the points in time of the two locations. Thus, even as experienced on Earth, it would take only three months to establish the link between Earth and Vega, because the two ends of the wormhole would maintain their time relationship. Suitable engineering improvements could allow such links to be established anywhere in the universe. By traveling arbitrarily close to the speed of light, the time required to establish a link, for both communications and transportation, to other locations in the universe, even those millions of billions of light-years away, could be relatively brief.
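Thorne's three-month figure follows directly from special-relativistic time dilation: the proper time experienced aboard the ship is the Earth-frame travel time divided by the Lorentz factor, gamma = 1 / sqrt(1 - (v/c)^2). A short sketch reproducing the numbers given above:

```python
import math

# Time dilation for the Earth-to-Vega example in the text:
# ship (proper) time = Earth-frame time / gamma,
# where gamma = 1 / sqrt(1 - (v/c)^2).

beta = 0.99995                  # v/c, as given in the text
distance_ly = 25.0              # Earth-Vega distance, light-years

earth_years = distance_ly / beta          # ~25 years in Earth's frame
gamma = 1.0 / math.sqrt(1.0 - beta**2)    # Lorentz factor, ~100
ship_months = earth_years / gamma * 12.0  # ~3 months aboard the ship

print(f"gamma ~ {gamma:.0f}, ship time ~ {ship_months:.1f} months")
```

At 99.995 percent of light speed the Lorentz factor is about 100, so the twenty-five-year Earth-frame voyage shrinks to roughly a quarter of a year, i.e., the three months of ship time Thorne cites.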

Matt Visser of Washington University in St. Louis has suggested refinements to the Morris-Thorne-Yurtsever concept that provide a more stable environment, which might even allow humans to travel through wormholes.[86] In my view, however, this is unnecessary. By the time engineering projects of this scale might be feasible, human intelligence will long since have been dominated by its nonbiological component. Sending molecular-scale self-replicating devices along with software will be sufficient and much easier. Anders Sandberg estimates that a one-nanometer wormhole could transmit a formidable 10^69 bits per second.[87] Physicist David Hochberg and Vanderbilt University's Thomas Kephart point out that shortly after the Big Bang, gravity was strong enough to have provided the energy required to spontaneously create massive numbers of self-stabilizing wormholes.[88] A significant portion of these wormholes is likely to still be around and may be pervasive, providing a vast network of corridors that reach far and wide throughout the universe. It might be easier to discover and use these natural wormholes than to create new ones.

Changing the Speed of Light. The second conjecture is to change the speed of light itself. In chapter 3, I mentioned the finding that appears to indicate that the speed of light has differed by 4.5 parts out of 10^8 over the past two billion years.

In 2001 astronomer John Webb discovered that the so-called fine-structure constant varied when he examined light from sixty-eight quasars (very bright young galaxies).[89] The speed of light is one of four constants that the fine-structure constant comprises, so the result is another suggestion that varying conditions in the universe may cause the speed of light to change. Cambridge University physicist John Barrow and his colleagues are in the process of running a two-year tabletop experiment that will test the ability to engineer a small change in the speed of light.[90] Suggestions that the speed of light can vary are consistent with recent theories that it was significantly higher during the inflationary period of the universe (an early phase in its history, when it underwent very rapid expansion). These experiments showing possible variation in the speed of light clearly need corroboration and are showing only small changes. But if confirmed, the findings would be profound, because it is the role of engineering to take a subtle effect and greatly amplify it. Again, the mental experiment we should perform now is not whether contemporary human scientists, such as we are, can perform these engineering feats but whether or not a human civilization that has expanded its intelligence by trillions of trillions will be able to do so.

For now we can say that ultrahigh levels of intelligence will expand outward at the speed of light, while recognizing that our contemporary understanding of physics suggests that this may not be the actual limit of the speed of expansion or, even if the speed of light proves to be immutable, that this limit may not restrict reaching other locations quickly through wormholes.

The Fermi Paradox Revisited. Recall that biological evolution is measured in millions and billions of years. So if there are other civilizations out there, they would be spread out in terms of development by huge spans of time. The SETI assumption implies that there should be billions of ETIs (among all the galaxies), so there should be billions that lie far ahead of us in their technological progress. Yet it takes only a few centuries at most from the advent of computation for such civilizations to expand outward at at least light speed. Given this, how can it be that we have not noticed them? The conclusion I reach is that it is likely (although not certain) that there are no such other civilizations. In other words, we are in the lead. That's right, our humble civilization with its pickup trucks, fast food, and persistent conflicts (and computation!) is in the lead in terms of the creation of complexity and order in the universe.

Now how can that be? Isn't this extremely unlikely, given the sheer number of likely inhabited planets? Indeed it is very unlikely. But equally unlikely is the existence of our universe, with its set of laws of physics and related physical constants, so exquisitely, precisely what is needed for the evolution of life to be possible. But by the anthropic principle, if the universe didn't allow the evolution of life we wouldn't be here to notice it. Yet here we are. So by a similar anthropic principle, we're here in the lead in the universe. Again, if we weren't here, we would not be noticing it.

Let's consider some arguments against this perspective.

Perhaps there are extremely advanced technological civilizations out there, but we are outside their light sphere of intelligence. That is, they haven't gotten here yet. Okay, in this case, SETI will still fail to find ETIs because we won't be able to see (or hear) them, at least not unless and until we find a way to break out of our light sphere (or the ETI does so) by manipulating the speed of light or finding shortcuts, as I discussed above.

Perhaps they are among us, but have decided to remain invisible to us. If they have made that decision, they are likely to succeed in avoiding being noticed. Again, it is hard to believe that every single ETI has made the same decision.

John Smart has suggested in what he calls the "transcension" scenario that once civilizations saturate their local region of space with their intelligence, they create a new universe (one that will allow continued exponential growth of complexity and intelligence) and essentially leave this universe.[91] Smart suggests that this option may be so attractive that it is the consistent and inevitable outcome of an ETI's having reached an advanced stage of its development, and it thereby explains the Fermi Paradox.

Incidentally, I have always considered the science-fiction notion of large spaceships piloted by huge, squishy creatures similar to us to be very unlikely. Seth Shostak comments that "the reasonable probability is that any extraterrestrial intelligence we will detect will be machine intelligence, not biological intelligence like us." In my view this is not simply a matter of biological beings sending out machines (as we do today) but rather that any civilization sophisticated enough to make the trip here would have long since passed the point of merging with its technology and would not need to send physically bulky organisms and equipment.

If they exist, why would they come here? One mission would be for observation, to gather knowledge (just as we observe other species on Earth today). Another would be to seek matter and energy to provide additional substrate for its expanding intelligence. The intelligence and equipment needed for such exploration and expansion (by an ETI, or by us when we get to that stage of development) would be extremely small, basically nanobots and information transmissions.

It appears that our solar system has not yet been turned into someone else's computer. And if this other civilization is only observing us for knowledge's sake and has decided to remain silent, SETI will fail to find it, because if an advanced civilization does not want us to notice it, it would succeed in that desire. Keep in mind that such a civilization would be vastly more intelligent than we are today. Perhaps it will reveal itself to us when we achieve the next level of our evolution, specifically merging our biological brains with our technology, which is to say, after the Singularity. However, given that the SETI assumption implies that there are billions of such highly developed civilizations, it seems unlikely that all of them have made the same decision to stay out of our way.