Consciousness is only marginally relevant to artificial intelligence (AI), because to most researchers in the field other problems seem more pressing.
However, there have been proposals for how consciousness would be accounted for in a complete computational theory of the mind, from theorists such as Dennett, Hofstadter, McCarthy, McDermott, Minsky, Perlis, Sloman, and Smith. One can extract from these speculations a sketch of a theoretical synthesis, according to which consciousness is the property a system has by virtue of modeling itself as having sensations and making free decisions. Critics such as Harnad and Searle have not succeeded in demolishing a priori this or any other computational theory, but no such theory can be verified or refuted unless and until AI succeeds in finding computational solutions to difficult problems such as vision, language, and locomotion.
Computationalism is the theory that the human brain is essentially a computer, although presumably not a stored-program, digital computer, like the kind Intel makes. Artificial intelligence (AI) is a field of computer science that explores computational models of problem-solving, where the problems to be solved are of the complexity of problems solved by human beings. An AI researcher need not be a computationalist, because they might believe that computers can do, computationally, things that brains do by noncomputational means. Most AI researchers are computationalists to some extent, even if they think digital computers and brains-as-computers compute things in different ways. When it comes to the problem of phenomenal consciousness, however, the AI researchers who care about the problem and believe that AI can solve it are a tiny minority, as we will see. Nonetheless, because I count myself in that minority, I will do my best to survey the work of my fellows and defend a version of the theory that I think represents that work fairly well.
Perhaps calling computationalism a “theory” is not exactly right; one might prefer “working hypothesis,” “assumption,” or “dogma.” The evidence for computationalism is not overwhelming, and some even believe it has been refuted, whether by a priori argument or by empirical evidence. But, in some form or other, the computationalist hypothesis underlies modern research in cognitive psychology, linguistics, and some kinds of neuroscience. That is, there wouldn’t be much point in considering formal or computational models of mind if it turned out that most of what the brain does is not computation at all but, say, some quantum-mechanical manipulation. Computationalism has proven to be a fertile working hypothesis, although those who reject it typically think of its fertility as similar to that of fungi, or of pod people from outer space.
Some computationalist researchers believe that the brain is nothing more than a computer. Many others are more cautious, and distinguish between modules that are quite likely to be purely computational (e.g., the vision system) and others that are less likely to be, such as the modules, or principles of brain organization, responsible for creativity or romantic love. There’s no need, in their view, to require that absolutely everything be explained in terms of computation. The brain could do some things computationally and other things by different means; if the parts or aspects of the brain responsible for these various tasks are more or less decoupled, we could gain significant insight into the pieces that computational models are good for, and leave the rest to other disciplines, such as philosophy and theology.
Perhaps the aspect of the brain that is most likely to be exempt from the computationalist hypothesis is its ability to produce consciousness, that is, to experience things. There are many different meanings of the word “conscious,” but I am talking here about the “Hard Problem,” the problem of explaining how it is that a physical system can have vivid experiences with seemingly intrinsic “qualities,” such as the redness of a tomato or the spiciness of a taco. These qualities usually go by their Latin name, qualia. We all know what we’re talking about when we talk about sensations, but they are notoriously undefinable. We all learn to attach a label such as “spicy” to certain tastes, but we really have no idea whether the sensation of spiciness to me is the same as the sensation of spiciness to you.
Perhaps tacos produce my “sourness” in you, and lemons produce my “spiciness” in you. We would never know because you have learned to associate the label “sour” with the quale of the experience you have when you eat lemons, which just happens to be very similar to the quale of the experience I have when I eat tacos. We can’t just tell each other what these qualia are like; the best we can do is talk about comparisons. But we agree on questions such as, Do tacos taste more like Szechuan chicken or more like lemons?
I focus on this problem because other aspects of consciousness raise no special problem for computationalism, as opposed to cognitive science generally. The purpose of consciousness, from an evolutionary perspective, is often held to have something to do with the allocation and organization of scarce cognitive resources. For a mental entity to be conscious is for it to be held in some globally accessible area. AI has made contributions to this idea, in the form of specific proposals about how such global access works, going under names such as the “blackboard model” or “agenda-based control.” One can evaluate these proposals by measuring how well they work, or how well they match human behavior, but there doesn’t seem to be any philosophical problem associated with them.
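To make the flavor of such proposals concrete, here is a minimal sketch, in Python, of a blackboard architecture with agenda-based control: independent “knowledge sources” read and post partial results on a globally accessible blackboard, and a priority-ordered agenda decides which source runs next. Every name and both toy knowledge sources are invented for illustration; they come from no particular AI system.

```python
import heapq

class Blackboard:
    """Globally accessible store of partial results: the 'workspace'
    whose contents are, on this view, the conscious ones."""
    def __init__(self):
        self.entries = {}

    def post(self, key, value):
        self.entries[key] = value

    def read(self, key):
        return self.entries.get(key)

def sensor_source(bb):
    # Knowledge source 1: posts a raw observation to the blackboard.
    bb.post("raw", "r-e-d t-o-m-a-t-o")

def parser_source(bb):
    # Knowledge source 2: refines the raw observation, if one is present.
    raw = bb.read("raw")
    if raw is not None:
        bb.post("parsed", raw.replace("-", ""))

def run(bb, agenda):
    # Agenda-based control: repeatedly pop the highest-priority
    # (lowest-numbered) knowledge source and let it read and write
    # the shared blackboard.
    while agenda:
        _priority, _tiebreak, source = heapq.heappop(agenda)
        source(bb)

bb = Blackboard()
agenda = [(0, 0, sensor_source), (1, 1, parser_source)]
heapq.heapify(agenda)
run(bb, agenda)
print(bb.entries)   # {'raw': 'r-e-d t-o-m-a-t-o', 'parsed': 'red tomato'}
```

The point of the sketch is that “global access,” so construed, is an ordinary data structure plus a control regime, exactly the sort of thing one can evaluate by how well it works.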
For phenomenal consciousness, the situation is very different. Computationalism seems to have nothing to say about it, simply because computers don’t have experiences. I can build an elaborate digital climate-control system for my house, one that keeps its occupants at a comfortable temperature, but the climate-control system never feels overheated or chilly. Its temperature sensors, scattered through the rooms, are implemented by ordinary physical mechanisms. The sensors produce signals that go to units that compute whether to turn on the furnace or the air conditioner, and the results of these computations close switches so that the furnace or air conditioner actually changes state. We can trace the whole path from temperature sensing to, say, turning off the furnace, and every step can be seen to be one of a series of straightforward physical events. Nowhere is one tempted to invoke conscious sensation as a causally effective element of the chain.
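The causal chain just described can be written down in a few lines, which makes its purely physical-computational character vivid. The following sketch is illustrative only; the setpoints, the class and function names, and the print statement standing in for a relay are all invented.

```python
from dataclasses import dataclass

@dataclass
class Room:
    name: str
    temperature_c: float   # what this room's sensor currently reads

def decide(temp_c, low=18.0, high=24.0):
    # The 'computation' step: compare the sensed temperature against
    # setpoints and choose a command. No sensation appears anywhere.
    if temp_c < low:
        return "furnace_on"
    if temp_c > high:
        return "ac_on"
    return "idle"

def actuate(command):
    # In a real system this step would close a relay; here we just
    # report which switch would change state.
    print(f"actuator: {command}")

# Trace the whole path, sensing -> computation -> switching, for a
# few rooms; each step is a plain physical/computational event.
for room in [Room("kitchen", 16.5), Room("study", 26.0), Room("hall", 21.0)]:
    actuate(decide(room.temperature_c))
```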
This is the prima facie case against computationalism, and a solid one it seems to be. The rest of this article is an attempt to dismantle it.
Drew McDermott, Yale University