Part 34 (2/2)

133. Alternatively, nanotechnology can be designed to be extremely energy efficient in the first place, so that energy recapture would be unnecessary (and infeasible, because there would be relatively little heat dissipation to recapture). In a private communication (January 2005), Robert A. Freitas Jr. writes: "Drexler (Nanosystems: 396) claims that energy dissipation may in theory be as low as Ediss ~ 0.1 MJ/kg 'if one assumes the development of a set of mechanochemical processes capable of transforming feedstock molecules into complex product structures using only reliable, nearly reversible steps.' 0.1 MJ/kg of diamond corresponds roughly to the minimum thermal noise at room temperature (e.g., kT ~ 4 zJ/atom at 298 K)."
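The two figures in Freitas's remark can be checked against each other with a short calculation (an illustrative sketch added here, not from the source; the constants are standard values):

```python
# Rough consistency check of "0.1 MJ/kg of diamond" vs. "kT ~ 4 zJ/atom at 298 K".
k_B = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro constant, 1/mol
T = 298.0             # room temperature, K

kT = k_B * T          # thermal energy scale per atom
print(f"kT at 298 K ~ {kT * 1e21:.1f} zJ/atom")

# Diamond is carbon (molar mass ~12.011 g/mol), so one kilogram contains:
atoms_per_kg = (1000.0 / 12.011) * N_A        # ~5e25 atoms
energy_per_atom = 0.1e6 / atoms_per_kg        # 0.1 MJ spread over those atoms
print(f"0.1 MJ/kg of diamond ~ {energy_per_atom * 1e21:.1f} zJ/atom")
# ~2 zJ/atom, the same order of magnitude as kT ~ 4 zJ/atom, as the note says.
```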

134. Alexis De Vos, Endoreversible Thermodynamics of Solar Energy Conversion (London: Oxford University Press, 1992), p. 103.

135. R. D. Schaller and V. I. Klimov, "High Efficiency Carrier Multiplication in PbSe Nanocrystals: Implications for Solar Energy Conversion," Physical Review Letters 92.18 (May 7, 2004): 186601.

136. National Academies Press, Commission on Physical Sciences, Mathematics, and Applications, Harnessing Light: Optical Science and Engineering for the 21st Century (Washington, D.C.: National Academy Press, 1998), p. 166, books.nap.edu/books/0309059917/html/166.html.

137. Matt Marshall, "World Events Spark Interest in Solar Cell Energy Start-ups," Mercury News, August 15, 2004, /news_articles_082004/b-silicon_valley.php and /cache/merc081504.htm.

138. John Gartner, "NASA Spaces on Energy Solution," Wired News, June 22, 2004, /news/technology/0,1282,63913,00.html. See also Arthur Smith, "The Case for Solar Power from Space." Gabor A. Somorjai and Keith McCrea, "Roadmap for Catalysis Science in the 21st Century: A Personal View of Building the Future on Past and Present Accomplishments," Applied Catalysis A: General 222.1–2 (2001): 3–18, Lawrence Berkeley National Laboratory report LBNL-48555. Robert A. Freitas Jr., "Death Is an Outrage!" presented at the Fifth Alcor Conference on Extreme Life Extension, Newport Beach, California, November 16, 2002, /Nano/DeathIsAnOutrage.htm.

149. For example, the fifth annual BIOMEMS conference, June 2003, San Jose, /events/11201717.htm.

150. First two volumes of a planned four-volume series: Robert A. Freitas Jr., Nanomedicine, vol. I, Basic Capabilities (Georgetown, Tex.: Landes Bioscience, 1999); Nanomedicine, vol. IIA, Biocompatibility (Georgetown, Tex.: Landes Bioscience, 2003).

151. Robert A. Freitas Jr., "Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell," Artificial Cells, Blood Substitutes, and Immobilization Biotechnology 26 (1998): 411–30; Robert A. Freitas Jr., "Microbivores: Artificial Mechanical Phagocytes Using Digest and Discharge Protocol," Zyvex preprint, March 2001, /Nano/Microbivores.htm; Robert A. Freitas Jr., "Microbivores: Artificial Mechanical Phagocytes," Foresight Update no. 44, March 31, 2001, pp. 11–13, /NMI/9.4.2.5.htm.

154. George Whitesides, "Nanoinspiration: The Once and Future Nanomachine," Scientific American 285.3 (September 16, 2001): 78–83.

155. "According to Einstein's approximation for Brownian motion, after 1 second has elapsed at room temperature a fluidic water molecule has, on average, diffused a distance of ~50 microns (~400,000 molecular diameters), whereas a 1-micron nanorobot immersed in that same fluid has displaced by only ~0.7 micron (only ~0.7 device diameter) during the same time period. Thus Brownian motion is at most a minor source of navigational error for motile medical nanorobots." See K. Eric Drexler et al., "Many Future Nanomachines: A Rebuttal to Whitesides' Assertion That Mechanical Molecular Assemblers Are Not Workable and Not a Concern," A Debate About Assemblers, Institute for Molecular Manufacturing, 2001. 1.1 (2003): 3–11, abstract available at /search/expand?pub=infobike://adis/add/2003/00000001/00000001/art00001.
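The two diffusion distances quoted above can be roughly reproduced from Einstein's relation x ~ sqrt(2Dt), with the nanorobot's diffusion coefficient estimated via Stokes-Einstein (an order-of-magnitude sketch added here, under assumed values for water's viscosity and self-diffusion coefficient; the exact figures depend on the diffusion model used):

```python
import math

k_B, T = 1.380649e-23, 298.0  # Boltzmann constant (J/K), room temperature (K)
eta = 1.0e-3                  # viscosity of water, Pa*s (assumed)
t = 1.0                       # elapsed time, seconds

# Water molecule: measured self-diffusion coefficient ~2.3e-9 m^2/s near 25 C.
D_water = 2.3e-9
x_water = math.sqrt(2 * D_water * t)   # tens of microns, same order as the ~50 quoted

# 1-micron sphere: Stokes-Einstein gives D = kT / (6 * pi * eta * r).
r = 0.5e-6                             # radius of a 1-micron device
D_robot = k_B * T / (6 * math.pi * eta * r)
x_robot = math.sqrt(2 * D_robot * t)   # under a micron, same order as the ~0.7 quoted

print(f"water molecule: ~{x_water * 1e6:.0f} um; 1-um nanorobot: ~{x_robot * 1e6:.1f} um")
```

The five-orders-of-magnitude gap in D is what makes Brownian wander negligible for the device while the water molecule ranges tens of thousands of diameters.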

157. As quoted by Douglas Hofstadter in Gödel, Escher, Bach: An Eternal Golden Braid (New York: Basic Books, 1979).

158. The author runs a company, FATKAT (Financial Accelerating Transactions by Kurzweil Adaptive Technologies), which applies computerized pattern recognition to financial data to make stock-market investment decisions.

159. See discussion in chapter 2 on price-performance improvements in computer memory and electronics in general.

160. Runaway AI refers to a scenario where, as Max More describes, "superintelligent machines, initially harnessed for human benefit, soon leave us behind." Max More, "Embrace, Don't Relinquish, the Future," computerworld.com/softwaretopics/software/story/0,10801,99691,00.html. This tarnished image led to "AI Winter," defined as "a term coined by Richard Gabriel for the (circa 1990–94?) crash of the wave of enthusiasm for the AI language Lisp and AI itself, following a boom in the 1980s." Duane Rettig wrote: "... companies rode the great AI wave in the early 80's, when large corporations poured billions of dollars into the AI hype that promised thinking machines in 10 years. When the promises turned out to be harder than originally thought, the AI wave crashed, and Lisp crashed with it because of its association with AI. We refer to it as the AI Winter." Duane Rettig quoted in "AI Winter," c2.com/cgi/wiki?AiWinter.

163. The General Problem Solver (GPS) computer program, written in 1957, was able to solve problems through rules that allowed the GPS to divide a problem's goals into subgoals, and then check if obtaining a particular subgoal would bring the GPS closer to solving the overall goal. In the early 1960s Thomas Evans wrote ANALOGY, a "program [that] solves geometric-analogy problems of the form A:B::C:? taken from IQ tests and college entrance exams." Boicho Kokinov and Robert M. French, "Computational Models of Analogy-Making," in L. Nadel, ed., Encyclopedia of Cognitive Science, vol. 1 (London: Nature Publishing Group, 2003), pp. 113–18. See also A. Newell, J. C. Shaw, and H. A. Simon, "Report on a General Problem-Solving Program," Proceedings of the International Conference on Information Processing (Paris: UNESCO House, 1959), pp. 256–64; Thomas Evans, "A Heuristic Program to Solve Geometric-Analogy Problems," in M. Minsky, ed., Semantic Information Processing (Cambridge, Mass.: MIT Press, 1968).

164. Sir Arthur Conan Doyle, "The Red-Headed League," 1890, available at /short-stories/UBooks/RedHead.shtml.

165. V. Yu et al., "Antimicrobial Selection by a Computer: A Blinded Evaluation by Infectious Diseases Experts," JAMA 242.12 (1979): 1279–82.

166. Gary H. Anthes, "Computerizing Common Sense," Computerworld, April 8, 2002, computerworld.com/news/2002/story/0,11280,69881,00.html.

167. Kristen Philipkoski, "Now Here's a Really Big Idea," Wired News, November 25, 2002, /news/technology/0,1282,56374,00.html, reporting on Darryl Macer, "The Next Challenge Is to Map the Human Mind," Nature 420 (November 14, 2002): 121; see also a description of the project. Kurzweil Applied Intelligence (KAI), founded by the author in 1982, was sold in 1997 for $100 million and is now part of ScanSoft (formerly called Kurzweil Computer Products, the author's first company, which was sold to Xerox in 1980), now a public company. KAI introduced the first commercially marketed large-vocabulary speech-recognition system in 1987 (Kurzweil Voice Report, with a ten-thousand-word vocabulary).

172. Here is the basic schema for a neural net algorithm. Many variations are possible, and the designer of the system needs to provide certain critical parameters and methods, detailed below.

Creating a neural-net solution to a problem involves the following steps:

- Define the input.
- Define the topology of the neural net (i.e., the layers of neurons and the connections between the neurons).
- Train the neural net on examples of the problem.
- Run the trained neural net to solve new examples of the problem.
- Take your neural-net company public.

These steps (except for the last one) are detailed below:

The Problem Input

The problem input to the neural net consists of a series of numbers. This input can be:

- In a visual pattern-recognition system, a two-dimensional array of numbers representing the pixels of an image; or
- In an auditory (e.g., speech) recognition system, a two-dimensional array of numbers representing a sound, in which the first dimension represents parameters of the sound (e.g., frequency components) and the second dimension represents different points in time; or
- In an arbitrary pattern-recognition system, an n-dimensional array of numbers representing the input pattern.

Defining the Topology

To set up the neural net, the architecture of each neuron consists of:

- Multiple inputs, in which each input is "connected" either to the output of another neuron or to one of the input numbers.
- Generally, a single output, which is connected either to the input of another neuron (which is usually in a higher layer) or to the final output.

Set Up the First Layer of Neurons

- Create N1 neurons in the first layer. For each of these neurons, "connect" each of the multiple inputs of the neuron to "points" (i.e., numbers) in the problem input. These connections can be determined randomly or using an evolutionary algorithm (see below).
- Assign an initial "synaptic strength" to each connection created. These weights can start out all the same, can be assigned randomly, or can be determined in another way (see below).

Set Up the Additional Layers of Neurons

Set up a total of M layers of neurons. For each layer, set up the neurons in that layer. For layer i:

- Create Ni neurons in layer i. For each of these neurons, "connect" each of the multiple inputs of the neuron to the outputs of the neurons in layer i-1 (see variations below).
- Assign an initial "synaptic strength" to each connection created. These weights can start out all the same, can be assigned randomly, or can be determined in another way (see below).

The outputs of the neurons in layer M are the outputs of the neural net (see variations below).

The Recognition Trials

How Each Neuron Works

Once the neuron is set up, it does the following for each recognition trial:

- Each weighted input to the neuron is computed by multiplying the output of the other neuron (or initial input) that the input to this neuron is connected to by the synaptic strength of that connection.
- All of these weighted inputs to the neuron are summed.
- If this sum is greater than the firing threshold of this neuron, then this neuron is considered to fire and its output is 1. Otherwise, its output is 0 (see variations below).

Do the Following for Each Recognition Trial

For each layer, from layer 1 to layer M, and for each neuron in the layer:

- Sum its weighted inputs (each weighted input = the output of the other neuron [or initial input] that the input to this neuron is connected to, multiplied by the synaptic strength of that connection).
- If this sum of weighted inputs is greater than the firing threshold for this neuron, set the output of this neuron = 1; otherwise set it to 0.

To Train the Neural Net

- Run repeated recognition trials on sample problems.
- After each trial, adjust the synaptic strengths of all the interneuronal connections to improve the performance of the neural net on this trial (see the discussion below on how to do this).
- Continue this training until the accuracy rate of the neural net is no longer improving (i.e., reaches an asymptote).

Key Design Decisions

In the simple schema above, the designer of this neural-net algorithm needs to determine at the outset:

- What the input numbers represent.
- The number of layers of neurons.
- The number of neurons in each layer. (Each layer does not necessarily need to have the same number of neurons.)
- The number of inputs to each neuron in each layer. The number of inputs (i.e., interneuronal connections) can also vary from neuron to neuron and from layer to layer.
- The actual "wiring" (i.e., the connections). For each neuron in each layer, this consists of a list of other neurons, the outputs of which constitute the inputs to this neuron. This represents a key design area. There are a number of possible ways to do this:
  (i) Wire the neural net randomly; or
  (ii) Use an evolutionary algorithm (see below) to determine an optimal wiring; or
  (iii) Use the system designer's best judgment in determining the wiring.
- The initial synaptic strengths (i.e., weights) of each connection. There are a number of possible ways to do this:
  (i) Set the synaptic strengths to the same value; or
  (ii) Set the synaptic strengths to different random values; or
  (iii) Use an evolutionary algorithm to determine an optimal set of initial values; or
  (iv) Use the system designer's best judgment in determining the initial values.
- The firing threshold of each neuron.
- The output. The output can be:
  (i) the outputs of layer M of neurons; or
  (ii) the output of a single output neuron, the inputs of which are the outputs of the neurons in layer M; or
  (iii) a function of (e.g., a sum of) the outputs of the neurons in layer M; or
  (iv) another function of neuron outputs in multiple layers.
- How the synaptic strengths of all the connections are adjusted during the training of this neural net. This is a key design decision and is the subject of a great deal of research and discussion. There are a number of possible ways to do this:
  (i) For each recognition trial, increment or decrement each synaptic strength by a (generally small) fixed amount so that the neural net's output more closely matches the correct answer. One way to do this is to try both incrementing and decrementing and see which has the more desirable effect. This can be time-consuming, so other methods exist for making local decisions on whether to increment or decrement each synaptic strength.
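The schema above can be sketched in a few lines of code (a minimal illustration added here, not the author's implementation: a single threshold neuron, weights all initialized to the same value, trained on logical AND using the try-both-directions adjustment of option (i); the task and all names are choices made for this sketch):

```python
# One threshold neuron from the schema: weighted sum of inputs, fire if
# the sum exceeds the firing threshold.
def fire(inputs, weights, threshold=0.5):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Sample problems for the recognition trials: learn logical AND.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

def total_error(weights):
    # Number of sample problems the net currently gets wrong.
    return sum(abs(fire(x, weights) - target) for x, target in samples)

weights = [0.0, 0.0]  # initial synaptic strengths all the same (option i)
delta = 0.05          # the (generally small) fixed increment/decrement

for trial in range(100):
    if total_error(weights) == 0:
        break  # accuracy is no longer improving; stop training
    for j in range(len(weights)):
        # Try both incrementing and decrementing this synaptic strength,
        # and keep whichever change has the more desirable effect.
        up = list(weights); up[j] += delta
        down = list(weights); down[j] -= delta
        weights = min((up, down, weights), key=total_error)

print(weights, total_error(weights))  # a weight set that computes AND
```

Even this toy run shows why the text calls the adjustment rule time-consuming: every candidate change requires re-running all the recognition trials, which is what backpropagation-style local rules avoid.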
