Part 30 (2/2)

65. I discussed Norbert Wiener and Ed Fredkin's view of information as the fundamental building block for physics and other levels of reality in my 1990 book, The Age of Intelligent Machines.

The complexity of casting all of physics in terms of computational transformations proved to be an immensely challenging project, but Fredkin has continued his efforts. Wolfram has devoted a considerable portion of his work over the past decade to this notion, apparently with only limited communication with some of the others in the physics community who are also pursuing the idea. Wolfram's stated goal “is not to present a specific ultimate model for physics,” but in his “Note for Physicists” (which essentially equates to a grand challenge), Wolfram describes the “features that [he] believe[s] such a model will have” (A New Kind of Science, pp. 1043–65, /nksonline/page-1043c-text).

In The Age of Intelligent Machines, I discuss “the question of whether the ultimate nature of reality is analog or digital” and point out that “as we delve deeper and deeper into both natural and artificial processes, we find the nature of the process often alternates between analog and digital representations of information.” As an illustration, I discussed sound. In our brains, music is represented as the digital firing of neurons in the cochlea, representing different frequency bands. In the air and in the wires leading to loudspeakers, it is an analog phenomenon. The representation of sound on a compact disc is digital, which is interpreted by digital circuits. But the digital circuits consist of thresholded transistors, which are analog amplifiers. As amplifiers, the transistors manipulate individual electrons, which can be counted and are, therefore, digital, but at a deeper level electrons are subject to analog quantum-field equations.
At a yet deeper level, Fredkin and now Wolfram are theorizing a digital (computational) basis to these continuous equations.

It should be further noted that if someone actually does succeed in establishing such a digital theory of physics, we would then be tempted to examine what sorts of deeper mechanisms are actually implementing the computations and links of the cellular automata. Perhaps underlying the cellular automata that run the universe are yet more basic analog phenomena, which, like transistors, are subject to thresholds that enable them to perform digital transactions. Thus, establishing a digital basis for physics will not settle the philosophical debate as to whether reality is ultimately digital or analog. Nonetheless, establishing a viable computational model of physics would be a major accomplishment.

So how likely is this? We can easily establish an existence proof that a digital model of physics is feasible, in that continuous equations can always be expressed to any desired level of accuracy in the form of discrete transformations on discrete changes in value. That is, after all, the basis for the fundamental theorem of calculus. However, expressing continuous formulas in this way is an inherent complication and would violate Einstein's dictum to express things “as simply as possible, but no simpler.” So the real question is whether we can express the basic relationships that we are aware of in more elegant terms, using cellular-automata algorithms. One test of a new theory of physics is whether it is capable of making verifiable predictions. In at least one important way, that might be a difficult challenge for a cellular-automata-based theory, because lack of predictability is one of the fundamental features of cellular automata.

Wolfram starts by describing the universe as a large network of nodes.
The nodes do not exist in “space,” but rather space, as we perceive it, is an illusion created by the smooth transition of phenomena through the network of nodes. One can easily imagine building such a network to represent “naive” (Newtonian) physics by simply building a three-dimensional network to any desired degree of granularity. Phenomena such as “particles” and “waves” that appear to move through space would be represented by “cellular gliders,” which are patterns that are advanced through the network for each cycle of computation. Fans of the game Life (which is based on cellular automata) will recognize the common phenomenon of gliders and the diversity of patterns that can move smoothly through a cellular-automaton network. The speed of light, then, is the result of the clock speed of the celestial computer, since gliders can advance only one cell per computational cycle.

Einstein's general relativity, which describes gravity as perturbations in space itself, as if our three-dimensional world were curved in some unseen fourth dimension, is also straightforward to represent in this scheme. We can imagine a four-dimensional network and can represent apparent curvatures in space in the same way that one represents normal curvatures in three-dimensional space. Alternatively, the network can become denser in certain regions to represent the equivalent of such curvature.

A cellular-automata conception proves useful in explaining the apparent increase in entropy (disorder) that is implied by the second law of thermodynamics.
We have to assume that the cellular-automata rule underlying the universe is a class 4 rule (see main text); otherwise the universe would be a dull place indeed. Wolfram's primary observation that a class 4 cellular automaton quickly produces apparent randomness (despite its determinate process) is consistent with the tendency toward randomness that we see in Brownian motion and that is implied by the second law.

Special relativity is more difficult. There is an easy mapping from the Newtonian model to the cellular network. But the Newtonian model breaks down in special relativity. In the Newtonian world, if a train is going eighty miles per hour and you drive along it on a parallel road at sixty miles per hour, the train will appear to pull away from you at twenty miles per hour. But in the world of special relativity, if you leave Earth at three quarters of the speed of light, light will still appear to you to move away from you at the full speed of light. In accordance with this apparently paradoxical perspective, both the size and subjective passage of time for two observers will vary depending on their relative speed. Thus, our fixed mapping of space and nodes becomes considerably more complex. Essentially, each observer needs his or her own network. However, in considering special relativity, we can essentially apply the same conversion to our “Newtonian” network as we do to Newtonian space. However, it is not clear that we are achieving greater simplicity in representing special relativity in this way.

A cellular-node representation of reality may have its greatest benefit in understanding some aspects of the phenomenon of quantum mechanics. It could provide an explanation for the apparent randomness that we find in quantum phenomena. Consider, for example, the sudden and apparently random creation of particle-antiparticle pairs. The randomness could be the same sort of randomness that we see in class 4 cellular automata.
Although predetermined, the behavior of class 4 automata cannot be anticipated (other than by running the cellular automata) and is effectively random.

This is not a new view. It's equivalent to the “hidden variables” formulation of quantum mechanics, which states that there are some variables that we cannot otherwise access that control what appears to be random behavior that we can observe. The hidden-variables conception of quantum mechanics is not inconsistent with the formulas for quantum mechanics. It is possible but is not popular with quantum physicists because it requires a large number of assumptions to work out in a very particular way. However, I do not view this as a good argument against it. The existence of our universe is itself very unlikely and requires many assumptions to all work out in a very precise way. Yet here we are.

A bigger question is, How could a hidden-variables theory be tested? If based on cellular-automata-like processes, the hidden variables would be inherently unpredictable, even if deterministic. We would have to find some other way to “unhide” the hidden variables.

Wolfram's network conception of the universe provides a potential perspective on the phenomenon of quantum entanglement and the collapse of the wave function. The collapse of the wave function, which renders apparently ambiguous properties of a particle (for example, its location) retroactively determined, can be viewed from the cellular-network perspective as the interaction of the observed phenomenon with the observer itself. As observers, we are not outside the network but exist inside it.
We know from cellular mechanics that two entities cannot interact without both being changed, which suggests a basis for wave-function collapse.

Wolfram writes, “If the universe is a network, then it can in a sense easily contain threads that continue to connect particles even when the particles get far apart in terms of ordinary space.” This could provide an explanation for recent dramatic experiments showing nonlocality of action in which two “quantum entangled” particles appear to continue to act in concert with each other even though separated by large distances. Einstein called this “spooky action at a distance” and rejected it, although recent experiments appear to confirm it.

Some phenomena fit more neatly into this cellular-automata-network conception than others. Some of the suggestions appear elegant, but as Wolfram's “Note for Physicists” makes clear, the task of translating all of physics into a consistent cellular-automata-based system is daunting indeed.

Extending his discussion to philosophy, Wolfram “explains” the apparent phenomenon of free will as decisions that are determined but unpredictable. Since there is no way to predict the outcome of a cellular process without actually running the process, and since no simulator could possibly run faster than the universe itself, there is therefore no way to reliably predict human decisions. So even though our decisions are determined, there is no way to preidentify what they will be. However, this is not a fully satisfactory examination of the concept. This observation concerning the lack of predictability can be made for the outcome of most physical processes, such as where a piece of dust will fall on the ground. This view thereby equates human free will with the random descent of a piece of dust.
Indeed, that appears to be Wolfram's view when he states that the process in the human brain is “computationally equivalent” to those taking place in processes such as fluid turbulence.

Some of the phenomena in nature (for example, clouds, coastlines) are characterized by repetitive simple processes such as cellular automata and fractals, but intelligent patterns (such as the human brain) require an evolutionary process (or alternatively, the reverse engineering of the results of such a process). Intelligence is the inspired product of evolution and is also, in my view, the most powerful “force” in the world, ultimately transcending the powers of mindless natural forces.

In summary, Wolfram's sweeping and ambitious treatise paints a compelling but ultimately overstated and incomplete picture. Wolfram joins a growing community of voices that maintain that patterns of information, rather than matter and energy, represent the more fundamental building blocks of reality. Wolfram has added to our knowledge of how patterns of information create the world we experience, and I look forward to a period of collaboration between Wolfram and his colleagues so that we can build a more robust vision of the ubiquitous role of algorithms in the world.

The lack of predictability of class 4 cellular automata underlies at least some of the apparent complexity of biological systems and does represent one of the important biological paradigms that we can seek to emulate in our technology. It does not explain all of biology. It remains at least possible, however, that such methods can explain all of physics. If Wolfram, or anyone else for that matter, succeeds in formulating physics in terms of cellular-automata operations and their patterns, Wolfram's book will have earned its title. In any event, I believe the book to be an important work of ontology.
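The existence proof invoked earlier in this note, that continuous equations can always be expressed to any desired accuracy as discrete transformations on discrete changes in value, can be illustrated with a short numerical sketch. The decay equation and step counts here are illustrative choices of mine, not anything from the note:

```python
import math

# Discretizing a continuous equation: exponential decay dy/dt = -y,
# advanced by repeated discrete transformations (Euler steps).
# Shrinking the step size drives the discrete answer arbitrarily close
# to the continuous one; accuracy is limited only by the granularity.
def euler_decay(y0, t_end, steps):
    dt = t_end / steps
    y = y0
    for _ in range(steps):
        y += dt * (-y)          # one discrete change in value
    return y

exact = math.exp(-1.0)          # continuous solution y(1) with y0 = 1
coarse = euler_decay(1.0, 1.0, 100)
fine = euler_decay(1.0, 1.0, 1000)
# the 1000-step answer lies roughly ten times closer to the exact value
```

This is also exactly the “inherent complication” the note describes: a thousand discrete steps stand in for one closed-form expression.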

66. Rule 110 states that a cell becomes white if its previous color was, and its two neighbors are, all black or all white, or if its previous color was white and the two neighbors are black and white, respectively; otherwise, the cell becomes black.

67. Wolfram, New Kind of Science, p. 4, /nksonline/page-4-text.

68. Note that certain interpretations of quantum mechanics imply that the world is not based on deterministic rules and that there is an inherent quantum randomness to every interaction at the (small) quantum scale of physical reality.

69. As discussed in note 57 above, the uncompressed genome has about six billion bits of information (order of magnitude = 10^10 bits), and the compressed genome is about 30 to 100 million bytes. Some of this design information applies, of course, to other organs. Even assuming all of 100 million bytes applies to the brain, we get a conservatively high figure of 10^9 bits for the design of the brain in the genome. In chapter 3, I discuss an estimate for “human memory on the level of individual interneuronal connections,” including “the connection patterns and neurotransmitter concentrations,” of 10^18 (billion billion) bits in a mature brain. This is about a billion (10^9) times more information than that in the genome which describes the brain's design. This increase comes about from the self-organization of the brain as it interacts with the person's environment.

70. See the sections “Disdisorder” and “The Law of Increasing Entropy Versus the Growth of Order” in my book The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Viking, 1999), pp. 30–33.

71. A universal computer can accept as input the definition of any other computer and then simulate that other computer. This does not address the speed of simulation, which might be relatively slow.

72. C. Geoffrey Woods, “Crossing the Midline,” Science 304.5676 (June 4, 2004): 1455–56; Stephen Matthews, “Early Programming of the Hypothalamo-Pituitary-Adrenal Axis,” Trends in Endocrinology and Metabolism 13.9 (November 1, 2002): 373–80; Justin Crowley and Lawrence Katz, “Early Development of Ocular Dominance Columns,” Science 290.5495 (November 17, 2000): 1321–24; Anna Penn et al., “Competition in Retinogeniculate Patterning Driven by Spontaneous Activity,” Science 279.5359 (March 27, 1998): 2108–12.

73. The seven commands of a Turing machine are: (1) Read Tape, (2) Move Tape Left, (3) Move Tape Right, (4) Write 0 on the Tape, (5) Write 1 on the Tape, (6) Jump to Another Command, and (7) Halt.
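A minimal interpreter can make this command list concrete. The program encoding (a list of (command, argument) pairs) and the jump convention (jump only if the last symbol read was 1) are assumptions of this sketch; the note specifies only the command set itself:

```python
# Toy interpreter for the seven-command machine listed in this note.
from collections import defaultdict

def run(program, tape_bits, max_steps=10_000):
    tape = defaultdict(int, enumerate(tape_bits))  # blank cells read as 0
    head = pc = last_read = steps = 0
    while pc < len(program) and steps < max_steps:
        steps += 1
        cmd, arg = program[pc]
        if cmd == "READ":
            last_read = tape[head]           # (1) Read Tape
        elif cmd == "LEFT":
            head -= 1                        # (2) Move Tape Left
        elif cmd == "RIGHT":
            head += 1                        # (3) Move Tape Right
        elif cmd == "WRITE0":
            tape[head] = 0                   # (4) Write 0 on the Tape
        elif cmd == "WRITE1":
            tape[head] = 1                   # (5) Write 1 on the Tape
        elif cmd == "JUMP" and last_read == 1:
            pc = arg                         # (6) Jump to Another Command
            continue
        elif cmd == "HALT":
            break                            # (7) Halt
        pc += 1
    return [tape[i] for i in range(len(tape_bits))]

# Sample program: invert the bit under the head.
# Jump to the WRITE0 branch if the bit was 1; otherwise write 1.
INVERT = [("READ", None), ("JUMP", 4), ("WRITE1", None), ("HALT", None),
          ("WRITE0", None), ("HALT", None)]
```

Running `INVERT` on a one-cell tape flips that cell, showing that even this tiny command set suffices for conditional behavior.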

74. In what is perhaps the most impressive analysis in his book, Wolfram shows how a Turing machine with only two states and five possible colors can be a universal Turing machine. For forty years, we've thought that a universal Turing machine had to be more complex than this. Also impressive is Wolfram's demonstration that rule 110 is capable of universal computation, given the right software. Of course, universal computation by itself cannot perform useful tasks without appropriate software.

75. The “nor” gate transforms two inputs into one output. The output of “nor” is true if and only if neither A nor B is true.
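In code the definition is a single line. The follow-on gates sketch the well-known fact that NOR is functionally complete (NOT, OR, and AND can each be composed from NOR alone), which is what makes it a candidate “basis”:

```python
def nor(a, b):
    """True if and only if neither A nor B is true."""
    return not (a or b)

# NOR is functionally complete: the other basic gates follow from it.
def not_(a):
    return nor(a, a)                      # A NOR A = NOT A

def or_(a, b):
    return nor(nor(a, b), nor(a, b))      # NOT (A NOR B) = A OR B

def and_(a, b):
    return nor(nor(a, a), nor(b, b))      # (NOT A) NOR (NOT B) = A AND B
```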

76. See the section “A nor B: The Basis of Intelligence?” in The Age of Intelligent Machines (Cambridge, Mass.: MIT Press, 1990), pp. 152–57.

77. …munications,” presentation at the International Conference on Satellite and Cable Television in Chinese and Asian Regions, Communication Arts Research Institute, Fu Jen Catholic University, June 4–6, 1996. United Nations Economic and Social Commission for Asia and the Pacific, “Regional Road Map Towards an Information Society in Asia and the Pacific,” ST/ESCAP/2283,

78. See “The 3 by 5 Initiative,” Fact Sheet 274, December 2003,

79. Technology investments accounted for 76 percent of 1998 venture-capital investments ($10.1 billion) (PricewaterhouseCoopers news release, “Venture Capital Investments Rise 24 Percent and Set Record at $14.7 Billion, PricewaterhouseCoopers Finds,” February 16, 1999). In 1999, technology-based companies cornered 90 percent of venture-capital investments ($32 billion) (PricewaterhouseCoopers news release, “Venture Funding Explosion Continues: Annual and Quarterly Investment Records Smashed, According to PricewaterhouseCoopers Money Tree National Survey,” February 14, 2000). Venture-capital levels certainly dropped during the high-tech recession; but in just the second quarter of 2003, software companies alone attracted close to $1 billion (PricewaterhouseCoopers news release, “Venture Capital Investments Stabilize in Q2 2003,” July 29, 2003). In 1974 in all U.S. manufacturing industries forty-two firms received a total of $26.4 million in venture-capital disbursements (in 1974 dollars, or $81 million in 1992 dollars).
Samuel Kortum and Josh Lerner, “Assessing the Contribution of Venture Capital to Innovation,” RAND Journal of Economics 31.4 (Winter 2000): 674–92, econ.bu.edu/kortum/rje_Winter'00_Kortum.pdf. As Paul Gompers and Josh Lerner say, “Inflows to venture capital funds have expanded from virtually zero in the mid-1970s....” Gompers and Lerner, The Venture Capital Cycle (Cambridge, Mass.: MIT Press, 1999). See also Paul Gompers, “Venture Capital,” in B. Espen Eckbo, ed., Handbook of Corporate Finance: Empirical Corporate Finance, in the Handbooks in Finance series (Holland: Elsevier, forthcoming), chapter 11, 2005, mba.tuck.dartmouth.edu/pages/faculty/espen.eckbo/PDFs/Handbookpdf/CH11-VentureCapital.pdf.

80. An account of how “new economy” technologies are making important transformations to “old economy” industries: Jonathan Rauch, “The New Old Economy: Oil, Computers, and the Reinvention of the Earth,” Atlantic Monthly, January 3, 2001.

81. U.S. Department of Commerce, Bureau of Economic Analysis (/english/2004-11-17-voa41.cfm.

84. Mark Bils and Peter Klenow, “The Acceleration in Variety Growth,” American Economic Review 91.2 (May 2001): 274–80, /Acceleration.pdf.

85. See notes 84, 86, and 87.

86. U.S. Department of Labor, Bureau of Labor Statistics, news report, June 3, 2004. You can generate productivity reports at

eMarketer, “E-Business in 2003: How the Internet Is Transforming Companies, Industries, and the Economy-a Review in Numbers,” February 2003; “US B2C E-Commerce to Top $90 Billion in 2003,” April 30, 2003, /Article.aspx?1002207; and “Worldwide B2B E-Commerce to Surpass $1 Trillion By Year's End,” March 19, 2003, /Article.aspx?1002125.

91. The patents used in this chart are, as described by the U.S. Patent and Trademark Office, “patents for inventions,” also known as “utility” patents. The U.S. Patent and Trademark Office, Table of Annual U.S. Patent Activity,

…pared to current expectations) by an annual compounded rate of as little as 2 percent, and considering an annual discount rate (for discounting future values today) of 6 percent, then considering the increased present value resulting from only twenty years of compounded and discounted future (additional) growth, present values should triple. As the subsequent dialogue points out, this analysis does not take into consideration the likely increase in the discount rate that would result from such a perception of increased future growth.

Chapter Three: Achieving the Computational Capacity of the Human Brain

1. Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics 38.8 (April 19, 1965): 114–17,

2. Moore's initial projection in this 1965 paper was that the number of components would double every year. In 1975 this was revised to every two years. However, this more than doubles price-performance every two years because smaller components run faster (because the electronics have less distance to travel). So overall price-performance (for the cost of each transistor cycle) has been coming down by half about every thirteen months.
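The arithmetic behind the roughly thirteen-month figure can be sketched as follows. The 1.8x per-generation speed gain below is an illustrative assumption chosen to be consistent with the note, not a number it states:

```python
import math

# Back-of-the-envelope: if each process generation (every 24 months)
# doubles the transistor count AND speeds each transistor up by some
# factor, price-performance (transistor cycles per dollar) grows by the
# product of the two factors, so its doubling time is shorter than the
# 24-month count-doubling alone.
def price_performance_doubling_months(generation_months=24.0,
                                      count_factor=2.0,
                                      speed_factor=1.8):
    combined = count_factor * speed_factor   # growth per generation
    return generation_months * math.log(2) / math.log(combined)

months = price_performance_doubling_months()
```

With a doubling of count and a 1.8x speed gain per generation, the combined 3.6x growth halves the cost of each transistor cycle about every thirteen months.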

3. Paolo Gargini quoted in Ann Steffora Mutschler, “Moore's Law Here to Stay,” ElectronicsWeekly.com, July 14, 2004, puterworld.com/hardwaretopics/hardware/story/0,10801,96917,00.html.

4. Michael Kanellos, “‘High-rise' Chips Sneak on Market,” CNET News.com, July 13, 2004, zdnet.com.com/2100-1103-5267738.html.

5. Benjamin Fulford, “Chipmakers Are Running Out of Room: The Answer Might Lie in 3-D,” Forbes.com, July 22, 2002, /forbes/2002/0722/173_print.html.

6. NTT news release, “Three-Dimensional Nanofabrication Using Electron Beam Lithography,” February 2, 2004,

7. Laszlo Forro and Christian Schonenberger, “Carbon Nanotubes, Materials for the Future,”
