Part 8 (1/2)
In support of this distinction, Chalmers introduces a thought experiment involving what he calls zombies. A zombie is an entity that acts just like a person but simply does not have subjective experience-that is, a zombie is not conscious. Chalmers argues that since we can conceive of zombies, they are at least logically possible. If you were at a cocktail party and there were both "normal" humans and zombies, how would you tell the difference? Perhaps this sounds like a cocktail party you have attended.
Many people answer this question by saying they would interrogate individuals they wished to assess about their emotional reactions to events and ideas. A zombie, they believe, would betray its lack of subjective experience through a deficiency in certain types of emotional responses. But an answer along these lines simply fails to appreciate the assumptions of the thought experiment. If we encountered an unemotional person (such as an individual with certain emotional deficits, as is common in certain types of autism) or an avatar or a robot that was not convincing as an emotional human being, then that entity is not a zombie. Remember: According to Chalmers's assumption, a zombie is completely normal in his ability to respond, including the ability to react emotionally; he is just lacking subjective experience. The bottom line is that there is no way to identify a zombie, because by definition there is no apparent indication of his zombie nature in his behavior. So is this a distinction without a difference?
Chalmers does not attempt to answer the hard question but does provide some possibilities. One is a form of dualism in which consciousness per se does not exist in the physical world but rather as a separate ontological reality. According to this formulation, what a person does is based on the processes in her brain. Because the brain is causally closed, we can fully explain a person's actions, including her thoughts, through its processes. Consciousness then exists essentially in another realm, or at least is a property separate from the physical world. This explanation does not permit the mind (that is to say, the conscious property associated with the brain) to causally affect the brain.
Another possibility that Chalmers entertains, which is not logically distinct from his notion of dualism, and is often called panprotopsychism, holds that all physical systems are conscious, though a human is more conscious than, say, a light switch. I would certainly agree that a human brain has more to be conscious about than a light switch.
My own view, which is perhaps a subschool of panprotopsychism, is that consciousness is an emergent property of a complex physical system. In this view a dog is also conscious but somewhat less than a human. An ant has some level of consciousness, too, but much less than that of a dog. The ant colony, on the other hand, could be considered to have a higher level of consciousness than the individual ant; it is certainly more intelligent than a lone ant. By this reckoning, a computer that is successfully emulating the complexity of a human brain would also have the same emergent consciousness as a human.
Another way to conceptualize the concept of consciousness is as a system that has "qualia." So what are qualia? One definition of the term is "conscious experiences." That, however, does not take us very far. Consider this thought experiment: A neuroscientist is completely color-blind-not the sort of color-blind in which one mixes up certain shades of, say, green and red (as I do), but rather a condition in which the afflicted individual lives entirely in a black-and-white world. (In a more extreme version of this scenario, she has grown up in a black-and-white world and has never seen any colors. Bottom line, there is no color in her world.) However, she has extensively studied the physics of color-she is aware that the wavelength of red light is 700 nanometers-as well as the neurological processes of a person who can experience colors normally, and thus knows a great deal about how the brain processes color. She knows more about color than most people. If you wanted to help her out and explain what this actual experience of "red" is like, how would you do it?
Perhaps you would read her a section from the poem "Red" by the Nigerian poet Oluseyi Oluseun:

Red the colour of blood
the symbol of life
Red the colour of danger
the symbol of death.

Red the colour of roses
the symbol of beauty
Red the colour of lovers
the symbol of unity.

Red the colour of tomato
the symbol of good health
Red the colour of hot fire
the symbol of burning desire.
That actually would give her a pretty good idea of some of the associations people have made with red, and may even enable her to hold her own in a conversation about the color. ("Yes, I love the color red, it's so hot and fiery, so dangerously beautiful...") If she wanted to, she could probably convince people that she had experienced red, but all the poetry in the world would not actually enable her to have that experience.
Similarly, how would you explain what it feels like to dive into water to someone who has never touched water? We would again be forced to resort to poetry, but there is really no way to impart the experience itself. These experiences are what we refer to as qualia.
Many of the readers of this book have experienced the color red. But how do I know whether your experience of red is not the same experience that I have when I look at blue? We both look at a red object and state assuredly that it is red, but that does not answer the question. I may be experiencing what you experience when you look at blue, but we have both learned to call red things red. We could start swapping poems again, but they would simply reflect the associations that people have made with colors; they do not speak to the actual nature of the qualia. Indeed, congenitally blind people have read a great deal about colors, as such references are replete in literature, and thus they do have some version of an experience of color. How does their experience of red compare with the experience of sighted people? This is really the same question as the one concerning the woman in the black-and-white world. It is remarkable that such common phenomena in our lives are so completely ineffable as to make a simple confirmation, like one that we are experiencing the same qualia, impossible.
Another definition of qualia is the feeling of an experience. However, this definition is no less circular than our attempts at defining consciousness above, as the phrases "feeling," "having an experience," and "consciousness" are all synonyms. Consciousness and the closely related question of qualia are a fundamental, perhaps the ultimate, philosophical question (although the issue of identity may be even more important, as I will discuss in the closing section of this chapter).
So with regard to consciousness, what exactly is the question again? It is this: Who or what is conscious? I refer to "mind" in the title of this book rather than "brain" because a mind is a brain that is conscious. We could also say that a mind has free will and identity. The assertion that these issues are philosophical is itself not self-evident. I maintain that these questions can never be fully resolved through science. In other words, there are no falsifiable experiments that we can contemplate that would resolve them, not without making philosophical assumptions. If we were building a consciousness detector, Searle would want it to ascertain that it was squirting biological neurotransmitters. American philosopher Daniel Dennett (born in 1942) would be more flexible on substrate, but might want to determine whether or not the system contained a model of itself and of its own performance. That view comes closer to my own, but at its core is still a philosophical assumption.
Proposals have been regularly presented that purport to be scientific theories linking consciousness to some measurable physical attribute-what Searle refers to as the "mechanism for causing consciousness." American scientist, philosopher, and anesthesiologist Stuart Hameroff (born in 1947) has written that "cytoskeletal filaments are the roots of consciousness."2 He is referring to thin threads in every cell (including neurons but not limited to them) called microtubules, which give each cell structural integrity and play a role in cell division. His books and papers on this issue contain detailed descriptions and equations that explain the plausibility that the microtubules play a role in information processing within the cell. But the connection of microtubules to consciousness requires a leap of faith not fundamentally different from the leap of faith implicit in a religious doctrine that describes a supreme being bestowing consciousness (sometimes referred to as a "soul") to certain (usually human) entities. Some weak evidence is proffered for Hameroff's view, specifically the observation that the neurological processes that could support this purported cellular computing are stopped during anesthesia. But this is far from compelling substantiation, given that lots of processes are halted during anesthesia. We cannot even say for certain that subjects are not conscious when anesthetized. All we do know is that people do not remember their experiences afterward. Even that is not universal, as some people do remember-accurately-their experience while under anesthesia, including, for example, conversations by their surgeons. Called anesthesia awareness, this phenomenon is estimated to occur about 40,000 times a year in the United States.3

But even setting that aside, consciousness and memory are completely different concepts. As I have discussed extensively, if I think back on my moment-to-moment experiences over the past day, I have had a vast number of sensory impressions yet I remember very few of them. Was I therefore not conscious of what I was seeing and hearing all day? It is actually a good question, and the answer is not so clear.
English physicist and mathematician Roger Penrose (born in 1931) took a different leap of faith in proposing the source of consciousness, though his also concerned the microtubules-specifically, their purported quantum computing abilities. His reasoning, although not explicitly stated, seemed to be that consciousness is mysterious, and a quantum event is also mysterious, so they must be linked in some way.
Penrose started his analysis with Turing's theorems on unsolvable problems and Gödel's related incompleteness theorem. Turing's premise (which was discussed in greater detail in chapter 8) is that there are algorithmic problems that can be stated but that cannot be solved by a Turing machine. Given the computational universality of the Turing machine, we can conclude that these "unsolvable problems" cannot be solved by any machine. Gödel's incompleteness theorem has a similar result with regard to the ability to prove conjectures involving numbers. Penrose's argument is that the human brain is able to solve these unsolvable problems, and it is therefore capable of doing things that a deterministic machine such as a computer is unable to do. His motivation, at least in part, is to elevate human beings above machines. But his central premise-that humans can solve Turing's and Gödel's insoluble problems-is unfortunately simply not true.
A famous unsolvable problem called the busy beaver problem is stated as follows: Find the maximum number of 1s that a Turing machine with a certain number of states can write on its tape. So to determine the busy beaver of the number n, we build all of the Turing machines that have n states (which will be a finite number if n is finite) and then determine the largest number of 1s that these machines write on their tapes, excluding those Turing machines that get into an infinite loop. This is unsolvable because as we seek to simulate all of these n-state Turing machines, our simulator will get into an infinite loop when it attempts to simulate one of the Turing machines that does get into an infinite loop. However, it turns out that computers have nonetheless been able to determine the busy beaver function for certain ns. So have humans, but computers have solved the problem for far more ns than unassisted humans. Computers are generally better than humans at solving Turing's and Gödel's unsolvable problems.
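The construction just described can be made concrete with a short, illustrative sketch (in Python; the code, names, and step limit are my own and not from the original text). For n = 2 it sidesteps the halting problem with a fixed step limit, which happens to be safe at this tiny scale because every halting 2-state, 2-symbol machine stops within a few steps; for larger n no such general shortcut exists, which is precisely why the full problem is unsolvable.

```python
from itertools import product

def run(machine, step_limit=100):
    """Simulate a 2-state, 2-symbol Turing machine from a blank tape.
    Returns the count of 1s on the tape if the machine halts within
    step_limit steps, or None (treated here as non-halting)."""
    tape, pos, state = {}, 0, 'A'
    for _ in range(step_limit):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write                    # write the symbol
        pos += 1 if move == 'R' else -1      # move the head
        if nxt == 'H':                       # explicit halt state
            return sum(tape.values())
        state = nxt
    return None

# Build every 2-state machine: each of the four (state, symbol) cells
# chooses a bit to write, a direction, and a next state (A, B, or halt).
cells = [('A', 0), ('A', 1), ('B', 0), ('B', 1)]
actions = list(product([0, 1], 'LR', 'ABH'))  # 12 choices per cell

best = 0
for combo in product(actions, repeat=4):      # 12**4 = 20,736 machines
    ones = run(dict(zip(cells, combo)))
    if ones is not None:
        best = max(best, ones)

print(best)  # prints 4, the busy beaver value for n = 2
```

The brute force succeeds only because n = 2 is tiny; for slightly larger n, deciding which machines loop forever becomes enormously hard, and the function eventually outgrows any computable bound.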
Penrose linked these claimed transcendent capabilities of the human brain to the quantum computing that he hypothesized took place in it. According to Penrose, these neural quantum effects were somehow inherently not achievable by computers, so therefore human thinking has an inherent edge. In fact, common electronics uses quantum effects (transistors rely on quantum tunneling of electrons across barriers); quantum computing in the brain has never been demonstrated; human mental performance can be satisfactorily explained by classical computing methods; and in any event nothing bars us from applying quantum computing in computers. None of these objections has ever been addressed by Penrose. It was when critics pointed out that the brain is a warm and messy place for quantum computing that Hameroff and Penrose joined forces. Penrose found a perfect vehicle within neurons that could conceivably support quantum computing-namely, the microtubules that Hameroff had speculated were part of the information processing within a neuron. So the Hameroff-Penrose thesis is that the microtubules in the neurons are doing quantum computing and that this is responsible for consciousness.
This thesis has also been criticized, for example, by Swedish American physicist and cosmologist Max Tegmark (born in 1967), who determined that quantum events in microtubules could survive for only 10^-13 seconds, which is much too brief a period of time either to compute results of any significance or to affect neural processes. There are certain types of problems for which quantum computing would show superior capabilities to classical computing-for example, the cracking of encryption codes through the factoring of large numbers. However, unassisted human thinking has proven to be terrible at solving them, and cannot match even classical computers in this area, which suggests that the brain is not demonstrating any quantum computing capabilities. Moreover, even if such a phenomenon as quantum computing in the brain did exist, it would not necessarily be linked to consciousness.
You Gotta Have Faith

What a piece of work is a man! How noble in reason! How infinite in faculties! In form and moving, how express and admirable! In action how like an angel! In apprehension, how like a god! The beauty of the world! The paragon of animals! And yet, to me, what is this quintessence of dust?
-Hamlet, in Shakespeare's Hamlet
The reality is that these theories are all leaps of faith, and I would add that where consciousness is concerned, the guiding principle is "you gotta have faith"-that is, we each need a leap of faith as to what and who is conscious, and who and what we are as conscious beings. Otherwise we could not get up in the morning. But we should be honest about the fundamental need for a leap of faith in this matter and self-reflective as to what our own particular leap involves.
People have very different leaps, despite impressions to the contrary. Individual philosophical assumptions about the nature and source of consciousness underlie disagreements on issues ranging from animal rights to abortion, and will result in even more contentious future conflicts over machine rights. My objective prediction is that machines in the future will appear to be conscious and that they will be convincing to biological people when they speak of their qualia. They will exhibit the full range of subtle, familiar emotional cues; they will make us laugh and cry; and they will get mad at us if we say that we don't believe that they are conscious. (They will be very smart, so we won't want that to happen.) We will come to accept that they are conscious persons. My own leap of faith is this: Once machines do succeed in being convincing when they speak of their qualia and conscious experiences, they will indeed constitute conscious persons. I have come to my position via this thought experiment: Imagine that you meet an entity in the future (a robot or an avatar) that is completely convincing in her emotional reactions. She laughs convincingly at your jokes, and in turn makes you laugh and cry (but not just by pinching you). She convinces you of her sincerity when she speaks of her fears and longings. In every way, she seems conscious. She seems, in fact, like a person. Would you accept her as a conscious person?
If your initial reaction is that you would likely detect some way in which she betrays her nonbiological nature, then you are not keeping to the assumptions in this hypothetical situation, which established that she is fully convincing. Given that assumption, if she were threatened with destruction and responded, as a human would, with terror, would you react in the same empathetic way that you would if you witnessed such a scene involving a human? For myself, the answer is yes, and I believe the answer would be the same for most if not virtually all other people regardless of what they might assert now in a philosophical debate. Again, the emphasis here is on the word "convincing."
There is certainly disagreement on when or even whether we will encounter such a nonbiological entity. My own consistent prediction is that this will first take place in 2029 and become routine in the 2030s. But putting the time frame aside, I believe that we will eventually come to regard such entities as conscious. Consider how we already treat them when we are exposed to them as characters in stories and movies: R2D2 from the Star Wars movies, David and Teddy from the movie A.I., Data from the TV series Star Trek: The Next Generation, Johnny 5 from the movie Short Circuit, WALL-E from Disney's movie Wall-E, T-800-the (good) Terminator-in the second and later Terminator movies, Rachael the Replicant from the movie Blade Runner (who, by the way, is not aware that she is not human), Bumblebee from the movie, TV, and comic series Transformers, and Sonny from the movie I, Robot. We do empathize with these characters even though we know that they are nonbiological. We regard them as conscious persons, just as we do biological human characters. We share their feelings and fear for them when they get into trouble. If that is how we treat fictional nonbiological characters today, then that is how we will treat real-life intelligences in the future that don't happen to have a biological substrate.
If you do accept the leap of faith that a nonbiological entity that is convincing in its reactions to qualia is actually conscious, then consider what that implies: namely that consciousness is an emergent property of the overall pattern of an entity, not the substrate it runs on.
There is a conceptual gap between science, which stands for objective measurement and the conclusions we can draw thereby, and consciousness, which is a synonym for subjective experience. We obviously cannot simply ask an entity in question, "Are you conscious?" If we look inside its "head," biological or otherwise, to ascertain that, then we would have to make philosophical assumptions in determining what it is that we are looking for. The question as to whether or not an entity is conscious is therefore not a scientific one. Based on this, some observers go on to question whether consciousness itself has any basis in reality.
English writer and philosopher Susan Blackmore (born in 1951) speaks of the "grand illusion of consciousness." She acknowledges the reality of the meme (idea) of consciousness-in other words, consciousness certainly exists as an idea, and there are a great many neocortical structures that deal with the idea, not to mention words that have been spoken and written about it. But it is not clear that it refers to something real. Blackmore goes on to explain that she is not necessarily denying the reality of consciousness, but rather attempting to articulate the sorts of dilemmas we encounter when we try to pin down the concept. As British psychologist and writer Stuart Sutherland (1927-1998) wrote in the International Dictionary of Psychology, "Consciousness is a fascinating but elusive phenomenon; it is impossible to specify what it is, what it does, or why it evolved."4 However, we would be well advised not to dismiss the concept too easily as just a polite debate between philosophers-which, incidentally, dates back two thousand years to the Platonic dialogues. The idea of consciousness underlies our moral system, and our legal system in turn is loosely built on those moral beliefs. If a person extinguishes someone's consciousness, as in the act of murder, we consider that to be immoral, and with some exceptions, a high crime. Those exceptions are also relevant to consciousness, in that we might authorize police or military forces to kill certain conscious people to protect a greater number of other conscious people. We can debate the merits of particular exceptions, but the underlying principle holds true.
Assaulting someone and causing her to experience suffering is also generally considered immoral and illegal. If I destroy my property, it is probably acceptable. If I destroy your property without your permission, it is probably not acceptable, but not because I am causing suffering to your property, but rather to you as the owner of the property. On the other hand, if my property includes a conscious being such as an animal, then I as the owner of that animal do not necessarily have free moral or legal rein to do with it as I wish-there are, for example, laws against animal cruelty.
Because a great deal of our moral and legal system is based on protecting the existence of and preventing the unnecessary suffering of conscious entities, in order to make responsible judgments we need to answer the question as to who is conscious. That question is therefore not simply a matter for intellectual debate, as is evident in the controversy surrounding an issue like abortion. I should point out that the abortion issue can go somewhat beyond the issue of consciousness, as pro-life proponents argue that the potential for an embryo to ultimately become a conscious person is sufficient reason for it to be awarded protection, just as someone in a coma deserves that right. But fundamentally the issue is a debate about when a fetus becomes conscious.
Perceptions of consciousness also often affect our judgments in controversial areas. Looking at the abortion issue again, many people make a distinction between a measure like the morning-after pill, which prevents the implantation of an embryo in the uterus in the first days of pregnancy, and a late-stage abortion. The difference has to do with the likelihood that the late-stage fetus is conscious. It is difficult to maintain that a few-days-old embryo is conscious unless one takes a panprotopsychist position, but even in these terms it would rank below the simplest animal in terms of consciousness. Similarly, we have very different reactions to the maltreatment of great apes versus, say, insects. No one worries too much today about causing pain and suffering to our computer software (although we do comment extensively on the ability of software to cause us suffering), but when future software has the intellectual, emotional, and moral intelligence of biological humans, this will become a genuine concern.
Thus my position is that I will accept nonbiological entities that are fully convincing in their emotional reactions to be conscious persons, and my prediction is that the consensus in society will accept them as well. Note that this definition extends beyond entities that can pass the Turing test, which requires mastery of human language. The latter are sufficiently humanlike that I would include them, and I believe that most of society will as well, but I also include entities that evidence humanlike emotional reactions but may not be able to pass the Turing test-for example, young children.
Does this resolve the philosophical question of who is conscious, at least for myself and others who accept this particular leap of faith? The answer is: not quite. We've only covered one case, which is that of entities that act in a humanlike way. Even though we are discussing future entities that are not biological, we are talking about entities that demonstrate convincing humanlike reactions, so this position is still human-centric. But what about more alien forms of intelligence that are not humanlike? We can imagine intelligences that are as complex as or perhaps vastly more complex and intricate than human brains, but that have completely different emotions and motivations. How do we decide whether or not they are conscious?
We can start by considering creatures in the biological world that have brains comparable to those of humans yet evince very different sorts of behaviors. British philosopher David Cockburn (born in 1949) writes about viewing a video of a giant squid that was under attack (or at least it thought it was-Cockburn hypothesized that it might have been afraid of the human with the video camera). The squid shuddered and cowered, and Cockburn writes, "It responded in a way which struck me immediately and powerfully as one of fear. Part of what was striking in this sequence was the way in which it was possible to see in the behavior of a creature physically so very different from human beings an emotion which was so unambiguously and specifically one of fear."5 He concludes that the animal was feeling that emotion and he articulates the belief that most other people viewing that film would come to the same conclusion. If we accept Cockburn's description and conclusion, then we would have to add giant squids to our list of conscious entities. However, this has not gotten us very far either, because it is still based on our empathetic reaction to an emotion that we recognize in ourselves. It is still a self-centric or human-centric perspective.
If we step outside biology, nonbiological intelligence will be even more varied than intelligence in the biological world. For example, some entities may not have a fear of their own destruction, and may not have a need for the emotions we see in humans or in any biological creature. Perhaps they could still pass the Turing test, or perhaps they wouldn't even be willing to try.
We do in fact build robots today that do not have a sense of self-preservation to carry out missions in dangerous environments. They're not sufficiently intelligent or complex yet for us to seriously consider their sentience, but we can imagine future robots of this sort that are as complex as humans. What about them?
Personally, I would say that if I saw in such a device's behavior a commitment to a complex and worthy goal, and the ability to execute notable decisions and actions to carry out its mission, I would be impressed and probably become upset if it got destroyed. This is now perhaps stretching the concept a bit, in that I am responding to behavior that does not include many emotions we consider universal in people and even in biological creatures of all kinds. But again, I am seeking to connect with attributes that I can relate to in myself and other people. The idea of an entity totally dedicated to a noble goal and carrying it out, or at least attempting to do so, without regard for its own well-being is, after all, not completely foreign to human experience. In this instance we are also considering an entity that is seeking to protect biological humans or in some way advance our agenda.
What if this entity has its own goals, distinct from human ones, and is not carrying out a mission we would recognize as noble in our own terms? I might then attempt to see if I could connect with or appreciate some of its abilities in some other way. If it is indeed very intelligent, it is likely to be good at math, so perhaps I could have a conversation with it on that topic. Maybe it would appreciate math jokes.
But if the entity has no interest in communicating with me, and I don't have sufficient access to its actions and decision making to be moved by the beauty of its internal processes, does that mean that it is not conscious? I need to conclude that entities that do not succeed in convincing me of their emotional reactions, or that don't care to try, are not necessarily not conscious. It would be difficult to recognize another conscious entity without establishing some level of empathetic communication, but that judgment reflects my own limitations more than it does the entity under consideration. We thus need to proceed with humility. It is challenging enough to put ourselves in the subjective shoes of another human, so the task will be that much harder with intelligences that are extremely different from our own.
What Are We Conscious Of?

If we could look through the skull into the brain of a consciously thinking person, and if the place of optimal excitability were luminous, then we should see playing over the cerebral surface a bright spot with fantastic, waving borders constantly fluctuating in size and form, surrounded by a darkness more or less deep, covering the rest of the hemisphere.

-Ivan Petrovich Pavlov, 19136
Returning to the giant squid, we can recognize some of its apparent emotions, but much of its behavior is a mystery. What is it like being a giant squid? How does it feel as it squeezes its spineless body through a tiny opening? We don't even have the vocabulary to answer this question, given that we cannot even describe experiences that we do share with other people, such as seeing the color red or feeling water splash on our bodies.
But we don't have to go as far as the bottom of the ocean to find mysteries in the nature of conscious experiences-we need only consider our own. I know, for example, that I am conscious. I assume that you, the reader, are conscious also. (As for people who have not bought my book, I am not so sure.) But what am I conscious of? You might ask yourself the same question.