Part 7 (1/2)
An obvious tactic is to make the nanobot small enough to glide through the BBB, but this is the least practical approach, at least with nanotechnology as we envision it today. To do this, the nanobot would have to be twenty nanometers or less in diameter, which is about the size of one hundred carbon atoms. Limiting a nanobot to these dimensions would severely limit its functionality.

An intermediate strategy would be to keep the nanobot in the bloodstream but to have it project a robotic arm through the BBB and into the extracellular fluid that lines the neural cells. This would allow the nanobot to remain large enough to have sufficient computational and navigational resources. Since almost all neurons lie within two or three cell-widths of a capillary, the arm would need to reach only up to about fifty microns. Analyses conducted by Rob Freitas and others show that it is quite feasible to restrict the width of such a manipulator to under twenty nanometers.

Another approach is to keep the nanobots in the capillaries and use noninvasive scanning. For example, the scanning system being designed by Finkel and his associates can scan at very high resolution (sufficient to see individual interconnections) to a depth of 150 microns, which is several times greater than we need. Obviously this type of optical-imaging system would have to be significantly miniaturized (compared to contemporary designs), but it uses charge-coupled device sensors, which are amenable to such size reduction.

Another type of noninvasive scanning would involve one set of nanobots emitting focused signals similar to those of a two-photon scanner and another set of nanobots receiving the transmission.
The topology of the intervening tissue could be determined by analyzing the impact on the received signal.

Another type of strategy, suggested by Robert Freitas, would be for the nanobot literally to barge its way past the BBB by breaking a hole in it, exit the blood vessel, and then repair the damage. Since the nanobot can be constructed using carbon in a diamondoid configuration, it would be far stronger than biological tissues. Freitas writes, "To pass between cells in cell-rich tissue, it is necessary for an advancing nanorobot to disrupt some minimum number of cell-to-cell adhesive contacts that lie ahead in its path. After that, and with the objective of minimizing biointrusiveness, the nanorobot must reseal those adhesive contacts in its wake, crudely analogous to a burrowing mole."46

Yet another approach is suggested by contemporary cancer studies. Cancer researchers are keenly interested in selectively disrupting the BBB to transport cancer-destroying substances to tumors. Recent studies of the BBB show that it opens up in response to a variety of factors, which include certain proteins, as mentioned above; localized hypertension; high concentrations of certain substances; microwaves and other forms of radiation; infection; and inflammation. There are also specialized processes that ferry out needed substances such as glucose. It has also been found that the sugar mannitol causes a temporary shrinking of the tightly packed endothelial cells to provide a temporary breach of the BBB. By exploiting these mechanisms, several research groups are developing compounds that open the BBB.47 Although this research is aimed at cancer therapies, similar approaches can be used to open the gateways for nanobots that will scan the brain as well as enhance our mental functioning.
We could bypass the bloodstream and the BBB altogether by injecting the nanobots into areas of the brain that have direct access to neural tissue. As I mention below, new neurons migrate from the ventricles to other parts of the brain. Nanobots could follow the same migration path.

Rob Freitas has described several techniques for nanobots to monitor sensory signals.48 These will be important both for reverse engineering the inputs to the brain and for creating full-immersion virtual reality from within the nervous system.
To scan and monitor auditory signals, Freitas proposes "mobile nanodevices ... [that] swim into the spiral artery of the ear and down through its bifurcations to reach the cochlear canal, then position themselves as neural monitors in the vicinity of the spiral nerve fibers and the nerves entering the epithelium of the organ of Corti [cochlear or auditory nerves] within the spiral ganglion. These monitors can detect, record, or rebroadcast to other nanodevices in the communications network all auditory neural traffic perceived by the human ear." For the body's "sensations of gravity, rotation, and acceleration," he envisions "nanomonitors positioned at the afferent nerve endings emanating from hair cells located in the ... semicircular canals." For "kinesthetic sensory management ... motor neurons can be monitored to keep track of limb motions and positions, or specific muscle activities, and even to exert control." "Olfactory and gustatory sensory neural traffic may be eavesdropped [on] by nanosensory instruments." "Pain signals may be recorded or modified as required, as can mechanical and temperature nerve impulses from ... receptors located in the skin."

Freitas points out that the retina is rich with small blood vessels, "permitting ready access to both photoreceptor (rod, cone, bipolar and ganglion) and integrator ... neurons." The signals from the optic nerve represent more than one hundred million levels per second, but this level of signal processing is already manageable. As MIT's Tomaso Poggio and others have indicated, we do not yet understand the coding of the optic nerve's signals. Once we have the ability to monitor the signals for each discrete fiber in the optic nerve, our ability to interpret these signals will be greatly facilitated. This is currently an area of intense research.
As I discuss below, the raw signals from the body go through multiple levels of processing before being aggregated in a compact dynamic representation in two small organs called the right and left insula, located deep in the cerebral cortex. For full-immersion virtual reality, it may be more effective to tap into the already-interpreted signals in the insula rather than the unprocessed signals throughout the body.
Scanning the brain for the purpose of reverse engineering its principles of operation is an easier objective than scanning it for the purpose of "uploading" a particular personality, which I discuss further below (see the "Uploading the Human Brain" section, p. 198). In order to reverse engineer the brain, we only need to scan the connections in a region sufficiently to understand their basic pattern. We do not need to capture every single connection.
Once we understand the neural wiring patterns within a region, we can combine that knowledge with a detailed understanding of how each type of neuron in that region operates. Although a particular region of the brain may have billions of neurons, it will contain only a limited number of neuron types. We have already made significant progress in deriving the mechanisms underlying specific varieties of neurons and synaptic connections by studying these cells in vitro (in a test dish), as well as in vivo using such methods as two-photon scanning.
The scenarios above involve capabilities that exist at least in an early stage today. We already have technology capable of producing very high-resolution scans for viewing the precise shape of every connection in a particular brain area, if the scanner is physically proximate to the neural features. With regard to nanobots, there are already four major conferences dedicated to developing blood-cell-size devices for diagnostic and therapeutic purposes.49 As discussed in chapter 2, we can project the exponentially declining cost of computation and the rapidly declining size and increasing effectiveness of both electronic and mechanical technologies. Based on these projections, we can conservatively anticipate the requisite nanobot technology to implement these types of scenarios during the 2020s. Once nanobot-based scanning becomes a reality, we will finally be in the same position that circuit designers are in today: we will be able to place highly sensitive and very high-resolution sensors (in the form of nanobots) at millions or even billions of locations in the brain and thus witness in breathtaking detail living brains in action.
Building Models of the Brain
If we were magically shrunk and put into someone's brain while she was thinking, we would see all the pumps, pistons, gears and levers working away, and we would be able to describe their workings completely, in mechanical terms, thereby completely describing the thought processes of the brain. But that description would nowhere contain any mention of thought! It would contain nothing but descriptions of pumps, pistons, levers!

-G. W. LEIBNIZ (1646-1716)

How do ... fields express their principles? Physicists use terms like photons, electrons, quarks, quantum wave function, relativity, and energy conservation. Astronomers use terms like planets, stars, galaxies, Hubble shift, and black holes. Thermodynamicists use terms like entropy, first law, second law, and Carnot cycle. Biologists use terms like phylogeny, ontogeny, DNA, and enzymes. Each of these terms is actually the title of a story! The principles of a field are actually a set of interwoven stories about the structure and behavior of field elements.

-PETER J. DENNING, PAST PRESIDENT OF THE ASSOCIATION FOR COMPUTING MACHINERY, IN "GREAT PRINCIPLES OF COMPUTING"
It is important that we build models of the brain at the right level. This is, of course, true for all of our scientific models. Although chemistry is theoretically based on physics and could be derived entirely from physics, this would be unwieldy and infeasible in practice. So chemistry uses its own rules and models. We should likewise, in theory, be able to deduce the laws of thermodynamics from physics, but this is a far-from-straightforward process. Once we have a sufficient number of particles to call something a gas rather than a bunch of particles, solving equations for each particle interaction becomes impractical, whereas the laws of thermodynamics work extremely well. The interactions of a single molecule within the gas are hopelessly complex and unpredictable, but the gas itself, comprising trillions of molecules, has many predictable properties.
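The gas analogy can be made concrete with a short simulation (a toy sketch of my own, not from the text): any single molecule's speed is unpredictable, but the average over a large ensemble is stable, which is precisely the regularity the laws of thermodynamics exploit.

```python
import random

# Toy sketch of the gas analogy: each "molecule" draws an unpredictable
# speed, yet the ensemble average converges to a predictable value.
# The speed distribution and units are arbitrary illustrations.
random.seed(0)

def molecule_speed():
    """One molecule: an individually unpredictable draw (true mean 1.0)."""
    return random.expovariate(1.0)

few = sum(molecule_speed() for _ in range(10)) / 10
many = sum(molecule_speed() for _ in range(100_000)) / 100_000

# 'few' scatters widely from run to run; 'many' sits close to the true
# mean of 1.0 -- the predictable property of the gas as a whole.
```

Tracking each of the 100,000 draws individually tells us almost nothing; the aggregate statistic is where the usable model lives.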
Similarly, biology, which is rooted in chemistry, uses its own models. It is often unnecessary to express higher-level results using the intricacies of the dynamics of the lower-level systems, although one has to thoroughly understand the lower level before moving to the higher one. For example, we can control certain genetic features of an animal by manipulating its fetal DNA without necessarily understanding all of the biochemical mechanisms of DNA, let alone the interactions of the atoms in the DNA molecule.
Often, the lower level is more complex. A pancreatic islet cell, for example, is enormously complicated, in terms of all its biochemical functions (most of which apply to all human cells, some to all biological cells). Yet modeling what a pancreas does-with its millions of cells-in terms of regulating levels of insulin and digestive enzymes, although not simple, is considerably less difficult than formulating a detailed model of a single islet cell.
The same issue applies to the levels of modeling and understanding in the brain, from the physics of synaptic reactions up to the transformations of information by neural clusters. In those brain regions for which we have succeeded in developing detailed models, we find a phenomenon similar to that involving pancreatic cells. The models are complex but remain simpler than the mathematical descriptions of a single cell or even a single synapse. As we discussed earlier, these region-specific models also require significantly less computation than is theoretically implied by the computational capacity of all of the synapses and cells.
Gilles Laurent of the California Institute of Technology observes, "In most cases, a system's collective behavior is very difficult to deduce from knowledge of its components.... [N]euroscience is ... a science of systems in which first-order and local explanatory schemata are needed but not sufficient." Brain reverse engineering will proceed by iterative refinement of both top-to-bottom and bottom-to-top models and simulations, as we refine each level of description and modeling.
Until very recently neuroscience was characterized by overly simplistic models limited by the crudeness of our sensing and scanning tools. This led many observers to doubt whether our thinking processes were inherently capable of understanding themselves. Peter D. Kramer writes, "If the mind were simple enough for us to understand, we would be too simple to understand it."50 Earlier, I quoted Douglas Hofstadter's comparison of our brain to that of a giraffe, the structure of which is not that different from a human brain but which clearly does not have the capability of understanding its own methods. However, recent success in developing highly detailed models at various levels-from neural components such as synapses to large neural regions such as the cerebellum-demonstrates that building precise mathematical models of our brains and then simulating these models with computation is a challenging but viable task once the data capabilities become available. Although models have a long history in neuroscience, it is only recently that they have become sufficiently comprehensive and detailed to allow simulations based on them to perform like actual brain experiments.
Subneural Models: Synapses and Spines
In an address to the annual meeting of the American Psychological Association in 2002, psychologist and neuroscientist Joseph LeDoux of New York University said,
If who we are is shaped by what we remember, and if memory is a function of the brain, then synapses-the interfaces through which neurons communicate with each other and the physical structures in which memories are encoded-are the fundamental units of the self....Synapses are pretty low on the totem pole of how the brain is organized, but I think they're pretty important....The self is the sum of the brain's individual subsystems, each with its own form of "memory," together with the complex interactions among the subsystems. Without synaptic plasticity-the ability of synapses to alter the ease with which they transmit signals from one neuron to another-the changes in those systems that are required for learning would be impossible.51
Although early modeling treated the neuron as the primary unit of transforming information, the tide has turned toward emphasizing its subcellular components. Computational neuroscientist Anthony J. Bell, for example, argues:
Molecular and biophysical processes control the sensitivity of neurons to incoming spikes (both synaptic efficiency and post-synaptic responsivity), the excitability of the neuron to produce spikes, the patterns of spikes it can produce and the likelihood of new synapses forming (dynamic rewiring), to list only four of the most obvious interferences from the subneural level. Furthermore, transneural volume effects such as local electric fields and the transmembrane diffusion of nitric oxide have been seen to influence, responsively, coherent neural firing, and the delivery of energy (blood flow) to cells, the latter of which directly correlates with neural activity. The list could go on. I believe that anyone who seriously studies neuromodulators, ion channels, or synaptic mechanism and is honest, would have to reject the neuron level as a separate computing level, even while finding it to be a useful descriptive level.52
Indeed, an actual brain synapse is far more complex than is described in the classic McCulloch-Pitts neural-net model. The synaptic response is influenced by a range of factors, including the action of multiple channels controlled by a variety of ionic potentials (voltages) and multiple neurotransmitters and neuromodulators. Considerable progress has been made in the past twenty years, however, in developing the mathematical formulas underlying the behavior of neurons, dendrites, synapses, and the representation of information in the spike trains (pulses by neurons that have been activated). Peter Dayan and Larry Abbott have recently written a summary of the existing nonlinear differential equations that describe a wide range of knowledge derived from thousands of experimental studies.53 Well-substantiated models exist for the biophysics of neuron bodies, synapses, and the action of feedforward networks of neurons, such as those found in the retina and optic nerves, and many other classes of neurons.
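For contrast, here is the McCulloch-Pitts abstraction itself, the simple threshold unit that real synapses so far exceed (a minimal sketch; the particular weights and threshold are illustrative):

```python
# A minimal McCulloch-Pitts neuron: binary inputs, fixed weights, and a
# hard threshold. As the text notes, real synapses are far richer than
# this classic abstraction.
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With weights (1, 1) and threshold 2, the unit computes logical AND;
# lowering the threshold to 1 would give OR.
and_gate = lambda a, b: mcculloch_pitts((a, b), (1, 1), 2)
```

Everything a network of such units does is determined by weights and thresholds alone; none of the channel dynamics, neuromodulators, or timing effects described above has any counterpart here.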
Attention to how the synapse works has its roots in Hebb's pioneering work. Hebb addressed the question, How does short-term (also called working) memory function? The brain region associated with short-term memory is the prefrontal cortex, although we now realize that different forms of short-term information retention have been identified in most other neural circuits that have been closely studied.
Most of Hebb's work focused on changes in the state of synapses to strengthen or inhibit received signals and on the more controversial reverberatory circuit in which neurons fire in a continuous loop.54 Another theory proposed by Hebb is a change in state of a neuron itself-that is, a memory function in the cell soma (body). The experimental evidence supports the possibility of all of these models. Classical Hebbian synaptic memory and reverberatory memory require a time delay before the recorded information can be used. In vivo experiments show that in at least some regions of the brain there is a neural response that is too fast to be accounted for by such standard learning models, and therefore could only be accomplished by learning-induced changes in the soma.55 Another possibility not directly anticipated by Hebb is real-time changes in the neuron connections themselves. Recent scanning results show rapid growth of dendrite spines and new synapses, so this must be considered an important mechanism. Experiments have also demonstrated a rich array of learning behaviors on the synaptic level that go beyond simple Hebbian models.
Synapses can change their state rapidly, but they then begin to decay slowly with continued stimulation, or in some cases a lack of stimulation, or many other variations.56 Although contemporary models are far more complex than the simple synapse models devised by Hebb, his intuitions have largely proved correct. In addition to Hebbian synaptic plasticity, current models include global processes that provide a regulatory function. For example, synaptic scaling keeps synaptic potentials from becoming zero (and thus being unable to be increased through multiplicative approaches) or becoming excessively high and thereby dominating a network. In vitro experiments have found synaptic scaling in cultured networks of neocortical, hippocampal, and spinal-cord neurons.57 Other mechanisms are sensitive to overall spike timing and the distribution of potential across many synapses. Simulations have demonstrated the ability of these recently discovered mechanisms to improve learning and network stability.
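The interplay between Hebbian strengthening and synaptic scaling can be sketched in a few lines (a toy model with parameters of my own choosing, not a reconstruction of any specific study): unchecked Hebbian growth diverges, while multiplicative scaling holds each neuron's total incoming synaptic strength at a fixed budget.

```python
import numpy as np

# Toy sketch: a Hebbian weight update combined with multiplicative
# synaptic scaling, the regulatory mechanism described above that keeps
# synaptic strength from collapsing to zero or growing without bound.
rng = np.random.default_rng(0)

def hebbian_step(w, pre, post, lr=0.1):
    """Strengthen each synapse in proportion to correlated pre/post activity."""
    return w + lr * np.outer(post, pre)

def synaptic_scale(w, target=1.0):
    """Multiplicatively rescale each neuron's incoming weights to a fixed total."""
    totals = w.sum(axis=1, keepdims=True)
    return w * (target / totals)

w = rng.uniform(0.1, 0.2, size=(3, 4))   # 3 postsynaptic x 4 presynaptic cells
for _ in range(100):
    pre = rng.random(4)                  # presynaptic activity
    post = w @ pre                       # postsynaptic response
    w = synaptic_scale(hebbian_step(w, pre, post))

# Pure Hebbian growth would diverge; with scaling, every neuron's
# incoming weights still sum to exactly the target budget of 1.0.
```

Because the scaling is multiplicative, the relative pattern learned by the Hebbian rule is preserved; only the overall gain is regulated, which is the stabilizing role attributed to synaptic scaling above.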
The most exciting new development in our understanding of the synapse is that the topology of the synapses and the connections they form are continually changing. Our first glimpse into the rapid changes in synaptic connections was revealed by an innovative scanning system that requires a genetically modified animal whose neurons have been engineered to emit a fluorescent green light. The system can image living neural tissue and has a sufficiently high resolution to capture not only the dendrites (interneuronal connections) but the spines: tiny projections that sprout from the dendrites and initiate potential synapses.
Neurobiologist Karel Svoboda and his colleagues at Cold Spring Harbor Laboratory on Long Island used the scanning system on mice to investigate networks of neurons that analyze information from the whiskers, a study that provided a fascinating look at neural learning. The dendrites continually grew new spines. Most of these lasted only a day or two, but on occasion a spine would remain stable. "We believe that the high turnover that we see might play an important role in neural plasticity, in that the sprouting spines reach out to probe different presynaptic partners on neighboring neurons," said Svoboda. "If a given connection is favorable, that is, reflecting a desirable kind of brain rewiring, then these synapses are stabilized and become more permanent. But most of these synapses are not going in the right direction, and they are retracted."58 Another consistent phenomenon that has been observed is that neural responses decrease over time if a particular stimulus is repeated. This adaptation gives greatest priority to new patterns of stimuli. Similar work by neurobiologist Wen-Biao Gan at New York University's School of Medicine on neuronal spines in the visual cortex of adult mice shows that this spine mechanism can hold long-term memories: "Say a 10-year-old kid uses 1,000 connections to store a piece of information. When he is 80, one-quarter of the connections will still be there, no matter how things change. That's why you can still remember your childhood experiences." Gan also explains, "Our idea was that you actually don't need to make many new synapses and get rid of old ones when you learn, memorize. You just need to modify the strength of the preexisting synapses for short-term learning and memory.
However, it's likely that a few synapses are made or eliminated to achieve long-term memory."59 The reason memories can remain intact even if three quarters of the connections have disappeared is that the coding method used appears to have properties similar to those of a hologram. In a hologram, information is stored in a diffuse pattern throughout an extensive region. If you destroy three quarters of the hologram, the entire image remains intact, although with only one quarter of the resolution. Research by Pentti Kanerva, a neuroscientist at Redwood Neuroscience Institute, supports the idea that memories are dynamically distributed throughout a region of neurons. This explains why older memories persist but nonetheless appear to "fade," because their resolution has diminished.
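The hologram-like robustness described here is easy to demonstrate: spread a pattern across many random traces, and the memory survives the loss of most of them, merely growing noisier (a toy illustration of distributed storage; Kanerva's actual sparse distributed memory model is considerably more elaborate).

```python
import numpy as np

# Toy sketch of distributed ("holographic") storage: a pattern is spread
# across many random measurements, so destroying most of them degrades
# resolution rather than erasing the memory.
rng = np.random.default_rng(1)

pattern = rng.choice([-1.0, 1.0], size=50)     # the stored "memory"
proj = rng.standard_normal((2000, 50))         # 2000 distributed traces
trace = proj @ pattern                         # store: every trace mixes all bits

def recall(keep):
    """Matched-filter readout using only a surviving fraction of the traces."""
    idx = rng.choice(2000, size=int(2000 * keep), replace=False)
    estimate = proj[idx].T @ trace[idx]        # each surviving trace votes
    return np.sign(estimate)

full = recall(1.00)      # all traces intact
damaged = recall(0.25)   # three quarters of the traces destroyed
# Both readouts recover the stored pattern; the damaged one is merely noisier.
```

Because no single trace holds any one bit, deleting 75 percent of them lowers the signal-to-noise ratio of the readout rather than deleting 75 percent of the memory, which mirrors why old memories fade in resolution instead of vanishing.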
Neuron Models
Researchers are also discovering that specific neurons perform special recognition tasks. An experiment with chickens identified brain-stem neurons that detect particular delays as sounds arrive at the two ears.60 Different neurons respond to different amounts of delay. Although there are many complex irregularities in how these neurons (and the networks they rely on) work, what they are actually accomplishing is easy to describe and would be simple to replicate. According to University of California at San Diego neuroscientist Scott Makeig, "Recent neurobiological results suggest an important role of precisely synchronized neural inputs in learning and memory."61
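What these delay-detecting neurons accomplish can indeed be replicated simply, as in this sketch of delay-line coincidence detection (the classic Jeffress-style account of interaural delay tuning; the spike times and delays below are invented for illustration):

```python
# Sketch of delay-tuned coincidence detection, in the spirit of the
# chicken brain-stem experiment described above: a neuron fires when a
# spike from one ear, delayed internally, coincides with a spike from
# the other ear.
def coincidence_neuron(left_spikes, right_spikes, internal_delay, window=0):
    """Count coincidences of internally delayed left-ear and right-ear spikes."""
    delayed = {t + internal_delay for t in left_spikes}
    return sum(1 for t in right_spikes
               if any(abs(t - d) <= window for d in delayed))

# A sound from the left reaches the right ear 3 time steps late:
left = [10, 50, 90]
right = [13, 53, 93]

# Only the neuron whose internal delay matches the interaural delay of 3
# responds; neurons tuned to other delays stay silent.
responses = {d: coincidence_neuron(left, right, d) for d in range(6)}
```

A bank of such units, each with a different internal delay, turns arrival-time differences into a place code: which neuron fires tells you where the sound came from.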
Electronic Neurons. A recent experiment at the University of California at San Diego's Institute for Nonlinear Science demonstrates the potential for electronic neurons to precisely emulate biological ones. Neurons (biological or otherwise) are a prime example of what is often called chaotic computing. Each neuron acts in an essentially unpredictable fashion. When an entire network of neurons receives input (from the outside world or from other networks of neurons), the signaling among them appears at first to be frenzied and random. Over time, typically a fraction of a second or so, the chaotic interplay of the neurons dies down and a stable pattern of firing emerges. This pattern represents the "decision" of the neural network. If the neural network is performing a pattern-recognition task (and such tasks constitute the bulk of the activity in the human brain), the emergent pattern represents the appropriate recognition.
So the question addressed by the San Diego researchers was: could electronic neurons engage in this chaotic dance alongside biological ones? They connected artificial neurons with real neurons from spiny lobsters in a single network, and their hybrid biological-nonbiological network performed in the same way (that is, chaotic interplay followed by a stable emergent pattern) and with the same type of results as an all-biological net of neurons. Essentially, the biological neurons accepted their electronic peers. This indicates that the chaotic mathematical model of these neurons was reasonably accurate.
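The settling process described above, in which seemingly disordered interplay gives way to a stable firing pattern that constitutes the recognition, can be illustrated with a classic attractor network (a standard textbook Hopfield model, not the San Diego group's hybrid lobster circuit; the sizes and corruption level are arbitrary):

```python
import numpy as np

# Illustration of a network settling from a noisy state into a stable
# pattern that represents a "recognition", using a classic Hopfield
# attractor network with Hebbian weights.
rng = np.random.default_rng(2)

patterns = rng.choice([-1, 1], size=(3, 64))           # stored memories
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                                  # no self-connections

state = patterns[0].copy()
flip = rng.choice(64, size=10, replace=False)
state[flip] *= -1                                       # corrupt the input

for _ in range(10):                                     # let the network settle
    state = np.sign(W @ state)
    state[state == 0] = 1

# The initially noisy activity relaxes onto stored pattern 0:
# the stable final state is the network's "decision".
```

The early iterations look noisy because many units flip at once, but the dynamics descend toward the nearest stored attractor, which is the stable emergent pattern the passage describes.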
Brain Plasticity