
The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a 1980 article by American philosopher John Searle (1932– ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument.


1. Overview

Work in Artificial Intelligence (AI) has produced computer programs that can beat the world chess champion, control autonomous vehicles, complete our email sentences, and defeat the best human players on the television quiz show Jeopardy. AI has also produced programs with which one can converse in natural language, including customer service “virtual agents”, and Amazon’s Alexa and Apple’s Siri. Our experience shows that playing chess or Jeopardy, and carrying on a conversation, are activities that require understanding and intelligence. Does computer prowess at conversation and challenging games then show that computers can understand language and be intelligent? Will further development result in digital computers that fully match or even exceed human intelligence? Alan Turing (1950), one of the pioneer theoreticians of computing, believed the answer to these questions was “yes”. Turing proposed what is now known as ‘The Turing Test’: if a computer can pass for human in online chat, we should grant that it is intelligent. By the late 1970s some AI researchers claimed that computers already understood at least some natural language. In 1980 U.C. Berkeley philosopher John Searle introduced a short and widely-discussed argument intended to show conclusively that it is impossible for digital computers to understand language or think.

Searle argues that a good way to test a theory of mind, say a theory that holds that understanding can be created by doing such and such, is to imagine what it would be like to actually do what the theory says will create understanding. Searle (1999) summarized his Chinese Room Argument (hereinafter, CRA) concisely:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

Searle goes on to say, “The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.”
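To make the purely rule-governed character of the scenario concrete, here is a minimal illustrative sketch (not a program Searle or any AI researcher describes): a responder that pairs incoming symbol strings with outgoing symbol strings by matching alone. The particular characters and pairings are invented for illustration; the point is only that nothing in the procedure represents what the symbols mean.

```python
# Minimal sketch of rule-governed symbol manipulation (illustrative only).
# Input and output strings are treated as opaque tokens; the "rulebook"
# pairs shapes with shapes, and meaning appears nowhere in the procedure.
RULEBOOK = {
    "你吃过饭了吗": "吃过了，谢谢",   # hypothetical question/answer pairing
    "你叫什么名字": "我没有名字",     # hypothetical question/answer pairing
}

def room_operator(symbols_under_door: str) -> str:
    """Match the incoming string against the rulebook and emit the paired string."""
    return RULEBOOK.get(symbols_under_door, "请再说一遍")  # fallback: 'please repeat that'

if __name__ == "__main__":
    print(room_operator("你吃过饭了吗"))  # symbols in, symbols out
```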

Thirty years after introducing the CRA, Searle (2010) describes the conclusion in terms of consciousness and intentionality:

I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else. To put this point slightly more technically, the notion “same implemented program” defines an equivalence class that is specified independently of any specific physical realization. But such a specification necessarily leaves out the biologically specific powers of the brain to cause cognitive processes. A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker (p.17).

“Intentionality” is a technical term for a feature of mental and certain other things, namely being about something. Thus a desire for a piece of chocolate and thoughts about real Manhattan or fictional Harry Potter all display intentionality, as will be discussed in more detail in section 5.2 below.

Searle’s shift from machine understanding to consciousness and intentionality is not directly supported by the original 1980 argument. However the re-description of the conclusion indicates the close connection between understanding and consciousness in Searle’s later accounts of meaning and intentionality. Those who don’t accept Searle’s linking account might hold that running a program can create understanding without necessarily creating consciousness, and conversely a fancy robot might have dog level consciousness, desires, and beliefs, without necessarily understanding natural language.

In moving to discussion of intentionality Searle seeks to develop the broader implications of his argument. It aims to refute the functionalist approach to understanding minds, that is, the approach that holds that mental states are defined by their causal roles, not by the stuff (neurons, transistors) that plays those roles. The argument counts especially against that form of functionalism known as the Computational Theory of Mind that treats minds as information processing systems. As a result of its scope, as well as Searle’s clear and forceful writing style, the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear since the Turing Test. By 1991 computer scientist Pat Hayes had defined Cognitive Science as the ongoing research project of refuting Searle’s argument. Cognitive psychologist Steven Pinker (1997) pointed out that by the mid-1990s well over 100 articles had been published on Searle’s thought experiment – and that discussion of it was so pervasive on the Internet that Pinker found it a compelling reason to remove his name from all Internet discussion lists.

This interest has not subsided, and the range of connections with the argument has broadened. A search on Google Scholar for “Searle Chinese Room” limited to the period from 2010 through 2019 produced over 2000 results, including papers making connections between the argument and topics ranging from embodied cognition to theater to talk psychotherapy to postmodern views of truth and “our post-human future” – as well as discussions of group or collective minds and discussions of the role of intuitions in philosophy. In 2007 a game company took the name “The Chinese Room” in joking honor of “…Searle’s critique of AI – that you could create a system that gave the impression of intelligence without any actual internal smarts.” This wide range of discussion and implications is a tribute to the argument’s simple clarity and centrality.

2. Historical Background

2.1 Leibniz’ Mill

Searle’s argument has four important antecedents. The first of these is an argument set out by the philosopher and mathematician Gottfried Leibniz (1646–1716). This argument, often known as “Leibniz’ Mill”, appears as section 17 of Leibniz’ Monadology. Like Searle’s argument, Leibniz’ argument takes the form of a thought experiment. Leibniz asks us to imagine a physical system, a machine, that behaves in such a way that it supposedly thinks and has experiences (“perception”).

17. Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for. [Robert Latta translation]

Notice that Leibniz’s strategy here is to contrast the overt behavior of the machine, which might appear to be the product of conscious thought, with the way the machine operates internally. He points out that these internal mechanical operations are just parts moving from point to point, hence there is nothing that is conscious or that can explain thinking, feeling or perceiving. For Leibniz physical states are not sufficient for, nor constitutive of, mental states.

2.2 Turing’s Paper Machine

A second antecedent to the Chinese Room argument is the idea of a paper machine, a computer implemented by a human. This idea is found in the work of Alan Turing, for example in “Intelligent Machinery” (1948). Turing writes there that he wrote a program for a “paper machine” to play chess. A paper machine is a kind of program, a series of simple steps like a computer program, but written in natural language (e.g., English), and implemented by a human. The human operator of the paper chess-playing machine need not (otherwise) know how to play chess. All the operator does is follow the instructions for generating moves on the chess board. In fact, the operator need not even know that he or she is involved in playing chess – the input and output strings, such as “N–QB7” need mean nothing to the operator of the paper machine.
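The idea can be pictured as an instruction book that any executor, human or electronic, could follow step by step. The sketch below is purely illustrative (the “instructions” are invented and far too simple to play chess); it only shows that the executor manipulates the move strings as uninterpreted tokens.

```python
# Illustrative sketch of a "paper machine": a book of condition/action steps
# that an executor follows without knowing what the symbol strings are about.
# These particular instructions are invented and do not constitute a chess program.
INSTRUCTION_BOOK = [
    ("P-K4", "P-K4"),     # if this string comes in, write this string back out
    ("N-KB3", "N-QB3"),   # the executor need not know these are chess moves
]

def execute(book, incoming: str) -> str:
    """Scan the book for a matching condition and copy out its action verbatim."""
    for condition, action in book:
        if incoming == condition:
            return action
    return "no instruction applies"

print(execute(INSTRUCTION_BOOK, "N-KB3"))
```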

As part of the WWII project to decipher German military encryption, Turing had written English-language programs for human “computers”, as these specialized workers were then known, and these human computers did not need to know what the programs that they implemented were doing.

One reason the idea of a human-plus-paper machine is important is that it already raises questions about agency and understanding similar to those in the CRA. Suppose I am alone in a closed room and follow an instruction book for manipulating strings of symbols. I thereby implement a paper machine that generates symbol strings such as “N-KB3” that I write on pieces of paper and slip under the door to someone outside the room. Suppose further that prior to going into the room I don’t know how to play chess, or even that there is such a game. However, unbeknownst to me, in the room I am running Turing’s chess program and the symbol strings I generate are chess notation and are taken as chess moves by those outside the room. They reply by sliding the symbols for their own moves back under the door into the room. If all you see is the resulting sequence of moves displayed on a chess board outside the room, you might think that someone in the room knows how to play chess very well. Do I now know how to play chess? Or is it the system (consisting of me, the manuals, and the paper on which I manipulate strings of symbols) that is playing chess? If I memorize the program and do the symbol manipulations inside my head, do I then know how to play chess, albeit with an odd phenomenology? Does someone’s conscious states matter for whether or not they know how to play chess? If a digital computer implements the same program, does the computer then play chess, or merely simulate this?

By mid-century Turing was optimistic that the newly developed electronic computers themselves would soon be able to exhibit apparently intelligent behavior, answering questions posed in English and carrying on conversations. Turing (1950) proposed what is now known as the Turing Test: if a computer could pass for human in on-line chat, it should be counted as intelligent.

A third antecedent of Searle’s argument was the work of Searle’s colleague at Berkeley, Hubert Dreyfus. Dreyfus was an early critic of the optimistic claims made by AI researchers. In 1965, when Dreyfus was at MIT, he published a roughly hundred-page report titled “Alchemy and Artificial Intelligence”. Dreyfus argued that key features of human mental life could not be captured by formal rules for manipulating symbols. Dreyfus moved to Berkeley in 1968 and in 1972 published his extended critique, “What Computers Can’t Do”. Dreyfus’ primary research interests were in Continental philosophy, with its focus on consciousness, intentionality, and the role of intuition and the unarticulated background in shaping our understandings. Dreyfus identified several problematic assumptions in AI, including the view that brains are like digital computers, and, again, the assumption that understanding can be codified as explicit rules.

However by the late 1970s, as computers became faster and less expensive, some in the burgeoning AI community started to claim that their programs could understand English sentences, using a database of background information. The work of one of these researchers, Yale’s Roger Schank (Schank & Abelson 1977), came to Searle’s attention. Schank developed a technique called “conceptual representation” that used “scripts” to represent conceptual relations (related to Conceptual Role Semantics). Searle’s argument was originally presented as a response to the claim that AI programs such as Schank’s literally understand the sentences that they respond to.

2.3 The Chinese Nation

A fourth antecedent to the Chinese Room argument is a set of thought experiments involving myriad humans acting as a computer. In 1961 Anatoly Mickevich (pseudonym A. Dneprov) published “The Game”, a story in which a stadium full of 1400 math students is arranged to function as a digital computer (see Dneprov 1961 and the English translation listed at Mickevich 1961, Other Internet Resources). For 4 hours each repeatedly does a bit of calculation on binary numbers received from someone near them, then passes the binary result on to someone nearby. They learn the next day that they collectively translated a sentence from Portuguese into their native Russian. Mickevich’s protagonist concludes “We’ve proven that even the most perfect simulation of machine thinking is not the thinking process itself, which is a higher form of motion of living matter.”

Apparently independently, a similar consideration emerged in early discussion of functionalist theories of minds and cognition (see further discussion in section 5.3 below). Functionalists hold that mental states are defined by the causal role they play in a system (just as a door stop is defined by what it does, not by what it is made out of). Critics of functionalism were quick to turn its proclaimed virtue of multiple realizability against it. While functionalism was consistent with a materialist or biological understanding of mental states (arguably a virtue), it did not identify types of mental states (such as experiencing pain, or wondering about Oz) with particular types of neurophysiological states, as “type-type identity theory” did. In contrast with type-type identity theory, functionalism allowed sentient beings with different physiology to have the same types of mental states as humans – pains, for example. But it was pointed out that if extraterrestrial aliens, with some other complex system in place of brains, could realize the functional properties that constituted mental states, then presumably so could systems even less like human brains. The computational form of functionalism, which holds that the defining role of each mental state is its role in information processing or computation, is particularly vulnerable to this maneuver, since a wide variety of systems with simple components are computationally equivalent (see e.g., Maudlin 1989 for discussion of a computer built from buckets of water). Critics asked whether it was really plausible that these inorganic systems could have mental states or feel pain.

Daniel Dennett (1978) reports that in 1974 Lawrence Davis gave a colloquium at MIT in which he presented one such unorthodox implementation. Dennett summarizes Davis’ thought experiment as follows:

Let a functionalist theory of pain (whatever its details) be instantiated by a system the subassemblies of which are not such things as C-fibers and reticular systems but telephone lines and offices staffed by people. Perhaps it is a giant robot controlled by an army of human beings that inhabit it. When the theory’s functionally characterized conditions for pain are now met we must say, if the theory is true, that the robot is in pain. That is, real pain, as real as our own, would exist in virtue of the perhaps disinterested and businesslike activities of these bureaucratic teams, executing their proper functions.

In “Troubles with Functionalism”, also published in 1978, Ned Block envisions the entire population of China implementing the functions of neurons in the brain. This scenario has subsequently been called “The Chinese Nation” or “The Chinese Gym”. We can suppose that every Chinese citizen would be given a call-list of phone numbers, and at a preset time on implementation day, designated “input” citizens would initiate the process by calling those on their call-list. When any citizen’s phone rang, he or she would then phone those on his or her list, who would in turn contact yet others. No phone message need be exchanged; all that is required is the pattern of calling. The call-lists would be constructed in such a way that the patterns of calls implemented the same patterns of activation that occur between neurons in someone’s brain when that person is in a mental state – pain, for example. The phone calls play the same functional role as neurons causing one another to fire. Block was primarily interested in qualia, and in particular, whether it is plausible to hold that the population of China might collectively be in pain, while no individual member of the population experienced any pain, but the thought experiment applies to any mental states and operations, including understanding language.
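One schematic way to picture the scenario (an expository sketch, not Block’s own specification): treat each citizen as a simple threshold unit, where a phone call carries no message beyond the fact that it occurred, and the collective calling pattern mirrors a pattern of neural activation.

```python
# Expository sketch of the call-list idea: each "citizen" phones out only after
# receiving enough incoming calls. No message content is exchanged; only the
# pattern of calls matters. The wiring and thresholds below are invented.
from collections import defaultdict, deque

CALL_LISTS = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}   # whom each citizen calls
THRESHOLDS = {"A": 0, "B": 0, "C": 2, "D": 1}                # incoming calls needed to activate

def run(input_citizens):
    calls_received = defaultdict(int)
    activated, queue = set(), deque(input_citizens)          # designated "input" citizens start
    while queue:
        citizen = queue.popleft()
        if citizen in activated:
            continue
        activated.add(citizen)
        for target in CALL_LISTS[citizen]:
            calls_received[target] += 1
            if calls_received[target] >= THRESHOLDS[target]:
                queue.append(target)                         # enough calls: this citizen phones out
    return activated

print(sorted(run(["A", "B"])))   # the activation pattern plays the role of neurons firing
```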

Thus Block’s precursor thought experiment, as with those of Davis and Dennett, is a system of many humans rather than one. The focus is on consciousness, but to the extent that Searle’s argument also involves consciousness, the thought experiment is closely related to Searle’s. Cole (1984) tries to pump intuitions in the reverse direction by setting out a thought experiment in which each of his neurons is itself conscious, and fully aware of its actions including being doused with neurotransmitters, undergoing action potentials, and squirting neurotransmitters at its neighbors. Cole argues that his conscious neurons would find it implausible that their collective activity produced a consciousness and other cognitive competences, including understanding English, that the neurons lack. Cole suggests the intuitions of implementing systems are not to be trusted.

3. The Chinese Room Argument

In 1980 John Searle published “Minds, Brains and Programs” in the journal The Behavioral and Brain Sciences. In this article, Searle sets out the argument, and then replies to the half-dozen main objections that had been raised during his earlier presentations at various university campuses (see next section). In addition, Searle’s article in BBS was published along with comments and criticisms by 27 cognitive science researchers. These 27 comments were followed by Searle’s replies to his critics.

In the decades following its publication, the Chinese Room argument was the subject of a great deal of discussion. By 1984 Searle had presented the Chinese Room argument in a book, Minds, Brains and Science. In January 1990, the popular periodical Scientific American took the debate to a general scientific audience. Searle included the Chinese Room Argument in his contribution, “Is the Brain’s Mind a Computer Program?”, and Searle’s piece was followed by a responding article, “Could a Machine Think?”, written by philosophers Paul and Patricia Churchland. Soon thereafter Searle had a published exchange about the Chinese Room with another leading philosopher, Jerry Fodor (in Rosenthal (ed.) 1991).

The heart of the argument is Searle imagining himself following a symbol-processing program written in English (which is what Turing called “a paper machine”). The English speaker (Searle) sitting in the room follows English instructions for manipulating Chinese symbols, whereas a computer “follows” (in some sense) a program written in a computing language. The human produces the appearance of understanding Chinese by following the symbol-manipulating instructions, but does not thereby come to understand Chinese. Since a computer just does what the human does – manipulate symbols on the basis of their syntax alone – no computer, merely by following a program, comes to genuinely understand Chinese.

This narrow argument, based closely on the Chinese Room scenario, is specifically directed at a position Searle calls “Strong AI”. Strong AI is the view that suitably programmed computers (or the programs themselves) can understand natural language and actually have other mental capabilities similar to the humans whose behavior they mimic. According to Strong AI, these computers really play chess intelligently, make clever moves, or understand language. By contrast, “weak AI” is the much more modest claim that computers are merely useful in psychology, linguistics, and other areas, in part because they can simulate mental abilities. But weak AI makes no claim that computers actually understand or are intelligent. The Chinese Room argument is not directed at weak AI, nor does it purport to show that no machine can think – Searle says that brains are machines, and brains think. The argument is directed at the view that formal computations on symbols can produce thought.

We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows. Let L be a natural language, and let us say that a “program for L” is a program for conversing fluently in L. A computing system is any system, human or otherwise, that can run a program.

  1. If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
  2. I could run a program for Chinese without thereby coming to understand Chinese.
  3. Therefore Strong AI is false.

The first premise elucidates the claim of Strong AI. The second premise is supported by the Chinese Room thought experiment. The conclusion of this narrow argument is that running a program cannot endow the system with language understanding. (There are other ways of understanding the structure of the argument. It may be relevant to understand some of the claims as counterfactual: e.g. “there is a program” in premise 1 as meaning there could be a program, etc. On this construal the argument involves modal logic, the logic of possibility and necessity (see Damper 2006 and Shaffer 2009)).
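For readers who want the form displayed explicitly, one expository reconstruction (not Searle’s own notation) runs as follows, where R(x, p) means that system x runs program p and U(x, L) means that x understands language L; on the modal reading just mentioned, the necessity and possibility operators carry the weight, and assessing their strength is precisely what Damper 2006 and Shaffer 2009 discuss.

```latex
% Expository reconstruction of the narrow argument (one reading among several).
% R(x,p): system x runs program p.   U(x,L): x understands language L.
\begin{align*}
\text{(1)}\quad & \mathrm{StrongAI} \;\rightarrow\; \exists p\,\forall x\,\Box\bigl(R(x,p) \rightarrow U(x,\mathrm{Chinese})\bigr)\\
\text{(2)}\quad & \forall p\,\Diamond\bigl(R(\mathrm{me},p) \wedge \neg\, U(\mathrm{me},\mathrm{Chinese})\bigr)\\
\text{(3)}\quad & \therefore\; \neg\,\mathrm{StrongAI}
\end{align*}
```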

It is also worth noting that the first premise above attributes understanding to “the system”. Exactly what Strong AI supposes will acquire understanding when the program runs is crucial to the success or failure of the CRA. Schank 1978 has a title that claims his group’s computer, a physical device, understands, but in the body of the paper he claims that the program [“SAM”] is doing the understanding: SAM, Schank says “…understands stories about domains about which it has knowledge” (p. 133). As we will see in the next section (4), these issues about the identity of the understander (the CPU? the program? the system? something else?) quickly came to the fore for critics of the CRA. Searle’s wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation). That and related issues are discussed in section 5: The Larger Philosophical Issues.

4. Replies to the Chinese Room Argument

Criticisms of the narrow Chinese Room argument against Strong AI have often followed three main lines, which can be distinguished by how much they concede:

(1) Some critics concede that the man in the room doesn’t understand Chinese, but hold that nevertheless running the program may create comprehension of Chinese by something other than the room operator. These critics object to the inference from the claim that the man in the room does not understand Chinese to the conclusion that no understanding has been created. There might be understanding by a larger, smaller, or different, entity. This is the strategy of The Systems Reply and the Virtual Mind Reply. These replies hold that the output of the room might reflect real understanding of Chinese, but the understanding would not be that of the room operator. Thus Searle’s claim that he doesn’t understand Chinese while running the room is conceded, but his claim that there is no understanding of the questions in Chinese, and that computationalism is false, is denied.

(2) Other critics concede Searle’s claim that just running a natural language processing program as described in the CR scenario does not create any understanding, whether by a human or a computer system. But these critics hold that a variation on the computer system could understand. The variant might be a computer embedded in a robotic body, having interaction with the physical world via sensors and motors (“The Robot Reply”), or it might be a system that simulated the detailed operation of an entire human brain, neuron by neuron (“the Brain Simulator Reply”).

(3) Finally, some critics do not concede even the narrow point against AI. These critics hold that the man in the original Chinese Room scenario might understand Chinese, despite Searle’s denials, or that the scenario is impossible. For example, critics have argued that our intuitions in such cases are unreliable. Other critics have held that it all depends on what one means by “understand” – points discussed in the section on The Intuition Reply. Others (e.g. Sprevak 2007) object to the assumption that any system (e.g. Searle in the room) can run any computer program. And finally some have argued that if it is not reasonable to attribute understanding on the basis of the behavior exhibited by the Chinese Room, then it would not be reasonable to attribute understanding to humans on the basis of similar behavioral evidence (Searle calls this last the “Other Minds Reply”). The objection is that we should be willing to attribute understanding in the Chinese Room on the basis of the overt behavior, just as we do with other humans (and some animals), and as we would do with extra-terrestrial Aliens (or burning bushes or angels) that spoke our language. This position is close to Turing’s own, when he proposed his behavioral test for machine intelligence.

In addition to these responses specifically to the Chinese Room scenario and the narrow argument to be discussed here, some critics also independently argue against Searle’s larger claim, and hold that one can get semantics (that is, meaning) from syntactic symbol manipulation, including the sort that takes place inside a digital computer, a question discussed in the section below on Syntax and Semantics.

4.1 The Systems Reply

In the original BBS article, Searle identified and discussed several responses to the argument that he had come across in giving the argument in talks at various places. As a result, these early responses have received the most attention in subsequent discussion. What Searle 1980 calls “perhaps the most common reply” is the Systems Reply.

The Systems Reply (which Searle says was originally associated with Yale, the home of Schank’s AI work) concedes that the man in the room does not understand Chinese. But, the reply continues, the man is but a part, a central processing unit (CPU), in a larger system. The larger system includes the huge database, the memory (scratchpads) containing intermediate states, and the instructions – the complete system that is required for answering the Chinese questions. So the Systems Reply is that while the man running the program does not understand Chinese, the system as a whole does.

Ned Block was one of the first to press the Systems Reply, along with many others including Jack Copeland, Daniel Dennett, Douglas Hofstadter, Jerry Fodor, John Haugeland, Ray Kurzweil and Georges Rey. Rey (1986) says the person in the room is just the CPU of the system. Kurzweil (2002) says that the human being is just an implementer and of no significance (presumably meaning that the properties of the implementer are not necessarily those of the system). Kurzweil hews to the spirit of the Turing Test and holds that if the system displays the apparent capacity to understand Chinese “it would have to, indeed, understand Chinese” – Searle is contradicting himself in saying in effect, “the machine speaks Chinese but doesn’t understand Chinese”.

Margaret Boden (1988) raises considerations about levels of description. “Computational psychology does not credit the brain with seeing bean-sprouts or understanding English: intentional states such as these are properties of people, not of brains” (244). “In short, Searle’s description of the robot’s pseudo-brain (that is, of Searle-in-the-robot) as understanding English involves a category-mistake comparable to treating the brain as the bearer, as opposed to the causal basis, of intelligence”. Boden (1988) points out that the room operator is a conscious agent, while the CPU in a computer is not – the Chinese Room scenario asks us to take the perspective of the implementer, and not surprisingly fails to see the larger picture.

Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.

In some ways Searle’s response here anticipates later extended mind views (e.g. Clark and Chalmers 1998): if Otto, who suffers loss of memory, can regain those recall abilities by externalizing some of the information to his notebooks, then Searle arguably can do the reverse: by internalizing the instructions and notebooks he should acquire any abilities had by the extended system. And so Searle in effect concludes that since he doesn’t acquire understanding of Chinese by internalizing the external components of the entire system (e.g. he still doesn’t know what the Chinese word for hamburger means), understanding was never there in the partially externalized system of the original Chinese Room.

In his 2002 paper “The Chinese Room from a Logical Point of View”, Jack Copeland considers Searle’s response to the Systems Reply and argues that a homunculus inside Searle’s head might understand even though the room operator himself does not, just as modules in minds solve tensor equations that enable us to catch cricket balls. Copeland then turns to consider the Chinese Gym, and again appears to endorse the Systems Reply: “…the individual players [do not] understand Chinese. But there is no entailment from this to the claim that the simulation as a whole does not come to understand Chinese. The fallacy involved in moving from part to whole is even more glaring here than in the original version of the Chinese Room Argument”. Copeland denies that connectionism implies that a room of people can simulate the brain.

John Haugeland (2002) writes that Searle’s response to the Systems Reply is flawed: “…what he now asks is what it would be like if he, in his own mind, were consciously to implement the underlying formal structures and operations that the theory says are sufficient to implement another mind”. According to Haugeland, Searle’s failure to understand Chinese is irrelevant: he is just the implementer. The larger system implemented would understand – there is a level-of-description fallacy.

Shaffer 2009 examines modal aspects of the logic of the CRA and argues that familiar versions of the System Reply are question-begging. But, Shaffer claims, a modalized version of the System Reply succeeds because there are possible worlds in which understanding is an emergent property of complex syntax manipulation. Nute 2011 is a reply to Shaffer.

Stevan Harnad has defended Searle’s argument against Systems Reply critics in two papers. In his 1989 paper, Harnad writes “Searle formulates the problem as follows: Is the mind a computer program? Or, more specifically, if a computer program simulates or imitates activities of ours that seem to require understanding (such as communicating in language), can the program itself be said to understand in so doing?” (Note the specific claim: the issue is taken to be whether the program itself understands.) Harnad concludes: “On the face of it, [the CR argument] looks valid. It certainly works against the most common rejoinder, the ‘Systems Reply’….” Harnad appears to follow Searle in linking understanding and states of consciousness: Harnad 2012 (Other Internet Resources) argues that Searle shows that the core problem of conscious “feeling” requires sensory connections to the real world. (See sections below “The Robot Reply” and “Intentionality” for discussion.)

Finally some have argued that even if the room operator memorizes the rules and does all the operations inside his head, the room operator does not become the system. Cole (1984) and Block (1998) both argue that the result would not be identity of Searle with the system but much more like a case of multiple personality – distinct persons in a single head. The Chinese responding system would not be Searle, but a sub-part of him. In the CR case, one person (Searle) is an English monoglot and the other is a Chinese monoglot. The English-speaking person’s total unawareness of the meaning of the Chinese responses does not show that they are not understood. This line, of distinct persons, leads to the Virtual Mind Reply.

4.1.1 The Virtual Mind Reply

The Virtual Mind reply concedes, as does the Systems Reply, that the operator of the Chinese Room does not understand Chinese merely by running the paper machine. However the Virtual Mind reply holds that what is important is whether understanding is created, not whether the Room operator is the agent that understands. Unlike the Systems Reply, the Virtual Mind reply (VMR) holds that a running system may create new, virtual, entities that are distinct from the system as a whole and from sub-systems such as the CPU or operator. In particular, a running system might create a distinct agent that understands Chinese. This virtual agent would be distinct from both the room operator and the entire system. The psychological traits, including linguistic abilities, of any mind created by artificial intelligence will depend entirely upon the program and the Chinese database, and will not be identical with the psychological traits and abilities of a CPU or the operator of a paper machine, such as Searle in the Chinese Room scenario. According to the VMR, the mistake in the Chinese Room Argument is to take the claim of Strong AI to be “the computer understands Chinese” or “the System understands Chinese”. The claim at issue for AI should simply be “the running computer creates understanding of Chinese”.

Familiar models of virtual agents are characters in computer or video games and personal digital assistants, such as Apple’s Siri and Microsoft’s Cortana. These characters have various abilities and personalities, and the characters are not identical with the system hardware or program that creates them. A single running system might control two distinct agents, or physical robots, simultaneously, one of which converses only in Chinese and one of which converses only in English, and which otherwise manifest very different personalities, memories, and cognitive abilities. Thus the VM reply asks us to distinguish between minds and their realizing systems.
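The distinction the VM reply draws can be pictured with a toy sketch (hypothetical names and traits, not drawn from any actual system): one running process hosts two agents whose conversational traits belong to the program-plus-data that defines each agent, not to the hardware or to any operator.

```python
# Toy sketch: a single running system hosting two "virtual agents" whose traits
# differ from each other and from any property of the host hardware or operator.
class VirtualAgent:
    def __init__(self, name: str, language: str, greeting: str):
        self.name = name              # traits supplied by the program and its data
        self.language = language
        self.greeting = greeting

    def reply(self, utterance: str) -> str:
        # Placeholder for a full conversational program; enough to show distinctness.
        return f"[{self.name}, {self.language}] {self.greeting}"

ling = VirtualAgent("Ling", "Chinese", "你好")      # converses only in Chinese
emma = VirtualAgent("Emma", "English", "Hello")     # converses only in English

for agent in (ling, emma):
    print(agent.reply("Who are you?"))              # one system, two distinct responders
```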

Minsky (1980) and Sloman and Croucher (1980) suggested a Virtual Mind reply when the Chinese Room argument first appeared. In his widely-read 1989 paper “Computation and Consciousness”, Tim Maudlin considers minimal physical systems that might implement a computational system running a program. His discussion revolves around his imaginary Olympia machine, a system of buckets that transfers water, implementing a Turing machine. Maudlin’s main target is the computationalists’ claim that such a machine could have phenomenal consciousness. However in the course of his discussion, Maudlin considers the Chinese Room argument. Maudlin (citing Minsky, and Sloman and Croucher) points out a Virtual Mind reply that the agent that understands could be distinct from the physical system (414). Thus “Searle has done nothing to discount the possibility of simultaneously existing disjoint mentalities” (414–5).

Perlis (1992), Chalmers (1996) and Block (2002) have apparently endorsed versions of a Virtual Mind reply as well, as has Richard Hanley in The Metaphysics of Star Trek (1997). Penrose (2002) is a critic of this strategy, and Stevan Harnad scornfully dismisses such heroic resorts to metaphysics. Harnad defended Searle’s position in a “Virtual Symposium on Virtual Minds” (1992) against Patrick Hayes and Don Perlis. Perlis pressed a virtual minds argument derived, he says, from Maudlin. Chalmers (1996) notes that the room operator is just a causal facilitator, a “demon”, so that his states of consciousness are irrelevant to the properties of the system as a whole. Like Maudlin, Chalmers raises issues of personal identity – we might regard the Chinese Room as “two mental systems realized within the same physical space. The organization that gives rise to the Chinese experiences is quite distinct from the organization that gives rise to the demon’s [= room operator’s] experiences”(326).

Cole (1991, 1994) develops the reply and argues as follows: Searle’s argument requires that the agent of understanding be the computer itself or, in the Chinese Room parallel, the person in the room. However Searle’s failure to understand Chinese in the room does not show that there is no understanding being created. One of the key considerations is that in Searle’s discussion the actual conversation with the Chinese Room is always seriously underspecified. Searle was considering Schank’s programs, which can only respond to a few questions about what happened in a restaurant, all in the third person. But Searle wishes his conclusions to apply to any AI-produced responses, including those that would pass the toughest unrestricted Turing Test, i.e. they would be just the sort of conversations real people have with each other. If we flesh out the conversation in the original CR scenario to include questions in Chinese such as “How tall are you?”, “Where do you live?”, “What did you have for breakfast?”, “What is your attitude toward Mao?”, and so forth, it immediately becomes clear that the answers in Chinese are not Searle’s answers. Searle is not the author of the answers; his beliefs and desires, memories and personality traits (apart from his industriousness!) are not reflected in the answers, and in general Searle’s traits are causally inert in producing the answers to the Chinese questions. This suggests the following conditional is true: if there is understanding of Chinese created by running the program, the mind understanding the Chinese would not be the computer, whether the computer is human or electronic. The person understanding the Chinese would be a distinct person from the room operator, with beliefs and desires bestowed by the program and its database. Hence Searle’s failure to understand Chinese while operating the room does not show that understanding is not being created.

Cole (1991) offers an additional argument that the mind doing the understanding is neither the mind of the room operator nor the system consisting of the operator and the program: running a suitably structured computer program might produce answers to questions submitted in Chinese and also answers to questions submitted in Korean. Yet the Chinese answers might apparently display completely different knowledge and memories, beliefs and desires than the answers to the Korean questions – along with a denial that the Chinese answerer knows any Korean, and vice versa. Thus the behavioral evidence would be that there were two non-identical minds (one understanding Chinese only, and one understanding Korean only). Since these might have mutually exclusive properties, they cannot be identical, and ipso facto, cannot be identical with the mind of the implementer in the room. Analogously, a video game might include a character with one set of cognitive abilities (smart, understands Chinese) as well as another character with an incompatible set (stupid, English monoglot). These inconsistent cognitive traits cannot be traits of the XBOX system that realizes them. Cole argues that the implication is that minds generally are more abstract than the systems that realize them (see Mind and Body in the Larger Philosophical Issues section).

In short, the Virtual Mind argument is that, since the only evidence Searle provides that there is no understanding of Chinese is that he himself would not understand Chinese in the room, the Chinese Room Argument cannot refute a differently formulated but equally strong AI claim: that it is possible to create understanding using a programmed digital computer. Maudlin (1989) says that Searle has not adequately responded to this criticism.

Others however have replied to the VMR, including Stevan Harnad and mathematical physicist Roger Penrose. Penrose is generally sympathetic to the points Searle raises with the Chinese Room argument, and has argued against the Virtual Mind reply. Penrose does not believe that computational processes can account for consciousness, both on Chinese Room grounds, as well as because of limitations on formal systems revealed by Kurt Gödel’s incompleteness proof. (Penrose has two books on mind and consciousness; Chalmers and others have responded to Penrose’s appeals to Gödel.) In his 2002 article “Consciousness, Computation, and the Chinese Room” that specifically addresses the Chinese Room argument, Penrose argues that the Chinese Gym variation – with a room expanded to the size of India, with Indians doing the processing – shows it is very implausible to hold there is “some kind of disembodied ‘understanding’ associated with the person’s carrying out of that algorithm, and whose presence does not impinge in any way upon his own consciousness” (230–1). Penrose concludes the Chinese Room argument refutes Strong AI. Christian Kaernbach (2005) reports that he subjected the virtual mind theory to an empirical test, with negative results.

4.2 The Robot Reply

The Robot Reply concedes Searle is right about the Chinese Room scenario: it shows that a computer trapped in a computer room cannot understand language, or know what words mean. The Robot reply is responsive to the problem of knowing the meaning of the Chinese word for hamburger – Searle’s example of something the room operator would not know. It seems reasonable to hold that most of us know what a hamburger is because we have seen one, and perhaps even made one, or tasted one, or at least heard people talk about hamburgers and understood what they are by relating them to things we do know by seeing, making, and tasting. Given this is how one might come to know what hamburgers are, the Robot Reply suggests that we put a digital computer in a robot body, with sensors, such as video cameras and microphones, and add effectors, such as wheels to move around with, and arms with which to manipulate things in the world. Such a robot – a computer with a body – might do what a child does, learn by seeing and doing. The Robot Reply holds that such a digital computer in a robot body, freed from the room, could attach meanings to symbols and actually understand natural language. Margaret Boden, Tim Crane, Daniel Dennett, Jerry Fodor, Stevan Harnad, Hans Moravec and Georges Rey are among those who have endorsed versions of this reply at one time or another. The Robot Reply in effect appeals to “wide content” or “externalist semantics”. This can agree with Searle that syntax and internal connections in isolation from the world are insufficient for semantics, while holding that suitable causal connections with the world can provide content to the internal symbols.

About the time Searle was pressing the CRA, many in philosophy of language and mind were recognizing the importance of causal connections to the world as the source of meaning or reference for words and concepts. Hilary Putnam 1981 argued that a Brain in a Vat, isolated from the world, might speak or think in a language that sounded like English, but it would not be English – hence a brain in a vat could not wonder if it was a brain in a vat (because of its sensory isolation, its words “brain” and “vat” do not refer to brains or vats). The view that meaning was determined by connections with the world became widespread. Searle resisted this turn outward and continued to think of meaning as subjective and connected with consciousness.

A related view, that minds are best understood as embodied or embedded in the world, has gained many supporters since the 1990s, contra Cartesian solipsistic intuitions. Organisms rely on environmental features for the success of their behavior. So whether one takes a mind to be a symbol processing system, with the symbols getting their content from sensory connections with the world, or a non-symbolic system that succeeds by being embedded in a particular environment, the importance of things outside the head has come to the fore. Hence many are sympathetic to some form of the Robot Reply: a computational system might understand, provided it is acting in the world. E.g., Carter 2007 in a textbook on philosophy and AI concludes “The lesson to draw from the Chinese Room thought experiment is that embodied experience is necessary for the development of semantics.”

However Searle does not think that the Robot Reply to the Chinese Room argument is any stronger than the Systems Reply. All the sensors can do is provide additional input to the computer – and it will be just syntactic input. We can see this by making a parallel change to the Chinese Room scenario. Suppose the man in the Chinese Room receives, in addition to the Chinese characters slipped under the door, a stream of binary digits that appear, say, on a ticker tape in a corner of the room. The instruction books are augmented to use the numerals from the tape as input, along with the Chinese characters. Unbeknownst to the man in the room, the symbols on the tape are the digitized output of a video camera (and possibly other sensors). Searle argues that additional syntactic inputs will do nothing to allow the man to associate meanings with the Chinese characters. It is just more work for the man in the room.
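Searle’s point can be pictured by extending the earlier toy rulebook (again purely illustrative, with invented pairings): the digitized camera output arrives as just another token string, so the rules simply key on larger inputs while everything remains uninterpreted from the operator’s standpoint.

```python
# Illustrative extension of the toy rulebook: "sensor" input is just another
# uninterpreted token string; the rules now key on (characters, camera bits) pairs.
AUGMENTED_RULEBOOK = {
    ("你看见什么", "101101"): "我看见一只狗",     # invented pairing: characters + bitstring -> characters
    ("你看见什么", "000111"): "我什么也没看见",
}

def room_operator(characters: str, ticker_tape_bits: str) -> str:
    """Match the pair of incoming token strings and emit the paired output string."""
    return AUGMENTED_RULEBOOK.get((characters, ticker_tape_bits), "请再说一遍")

print(room_operator("你看见什么", "101101"))   # more input, same purely syntactic task
```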

Jerry Fodor, Hilary Putnam, and David Lewis were principal architects of the computational theory of mind that Searle’s wider argument attacks. In his original 1980 reply to Searle, Fodor allows Searle is certainly right that “instantiating the same program as the brain does is not, in and of itself, sufficient for having those propositional attitudes characteristic of the organism that has the brain.” But Fodor holds that Searle is wrong about the robot reply. A computer might have propositional attitudes if it has the right causal connections to the world – but those are not ones mediated by a man sitting in the head of the robot. We don’t know what the right causal connections are. Searle commits the fallacy of inferring from “the little man is not the right causal connection” to the conclusion that no causal linkage would succeed. There is considerable empirical evidence that mental processes involve “manipulation of symbols”; Searle gives us no alternative explanation (this is sometimes called Fodor’s “Only Game in Town” argument for computational approaches). In the 1980s and 1990s Fodor wrote extensively on what the connections must be between a brain state and the world for the state to have intentional (representational) properties, while also emphasizing that computationalism has limits because the computations are intrinsically local and so cannot account for abductive reasoning.

In a later piece, “Yin and Yang in the Chinese Room” (in Rosenthal 1991 pp.524–525), Fodor substantially revises his 1980 view. He distances himself from his earlier version of the robot reply, and holds instead that “instantiation” should be defined in such a way that the symbol must be the proximate cause of the effect – no intervening guys in a room. So Searle in the room is not an instantiation of a Turing Machine, and “Searle’s setup does not instantiate the machine that the brain instantiates.” He concludes: “…Searle’s setup is irrelevant to the claim that strong equivalence to a Chinese speaker’s brain is ipso facto sufficient for speaking Chinese.” Searle says of Fodor’s move, “Of all the zillions of criticisms of the Chinese Room argument, Fodor’s is perhaps the most desperate. He claims that precisely because the man in the Chinese room sets out to implement the steps in the computer program, he is not implementing the steps in the computer program. He offers no argument for this extraordinary claim.” (in Rosenthal 1991, p. 525)

In a 1986 paper, Georges Rey advocated a combination of the system and robot reply, after noting that the original Turing Test is insufficient as a test of intelligence and understanding, and that the isolated system Searle describes in the room is certainly not functionally equivalent to a real Chinese speaker sensing and acting in the world. In a 2002 second look, “Searle’s Misunderstandings of Functionalism and Strong AI”, Rey again defends functionalism against Searle, and in the particular form Rey calls the “computational-representational theory of thought – CRTT”. CRTT is not committed to attributing thought to just any system that passes the Turing Test (like the Chinese Room). Nor is it committed to a conversation manual model of understanding natural language. Rather, CRTT is concerned with intentionality, natural and artificial (the representations in the system are semantically evaluable – they are true or false, hence have aboutness). Searle saddles functionalism with the “black box” character of behaviorism, but functionalism cares how things are done. Rey sketches “a modest mind” – a CRTT system that has perception, can make deductive and inductive inferences, makes decisions on the basis of goals and representations of how the world is, and can process natural language by converting to and from its native representations. To explain the behavior of such a system we would need to use the same attributions needed to explain the behavior of a normal Chinese speaker.

If we flesh out the Chinese conversation in the context of the Robot Reply, we may again see evidence that the entity that understands is not the operator inside the room. Suppose we ask the robot system, in Chinese, “What do you see?”; we might get the answer “My old friend Shakey”, or “I see you!”. Whereas if we phone Searle in the room and ask the same question in English we might get “These same four walls” or “these damn endless instruction books and notebooks.” Again this is evidence that we have distinct responders here, an English speaker and a Chinese speaker, who see and do quite different things. If the giant robot goes on a rampage and smashes much of Tokyo, and all the while oblivious Searle is just following the program in his notebooks in the room, Searle is not guilty of homicide and mayhem, because he is not the agent committing the acts.

Tim Crane discusses the Chinese Room argument in his 1991 book, The Mechanical Mind. He cites the Churchlands’ luminous room analogy, but then goes on to argue that in the course of operating the room, Searle would learn the meaning of the Chinese: “…if Searle had not just memorized the rules and the data, but also started acting in the world of Chinese people, then it is plausible that he would before too long come to realize what these symbols mean.”(127). (Rapaport 2006 presses an analogy between Helen Keller and the Chinese Room.) Crane appears to end with a version of the Robot Reply: “Searle’s argument itself begs the question by (in effect) just denying the central thesis of AI – that thinking is formal symbol manipulation. But Searle’s assumption, none the less, seems to me correct … the proper response to Searle’s argument is: sure, Searle-in-the-room, or the room alone, cannot understand Chinese. But if you let the outside world have some impact on the room, meaning or ‘semantics’ might begin to get a foothold. But of course, this concedes that thinking cannot be simply symbol manipulation.” (129) The idea that learning grounds understanding has led to work in developmental robotics (a.k.a. epigenetic robotics). This AI research area seeks to replicate key human learning abilities, such as robots that are shown an object from several angles while being told in natural language the name of the object.

Margaret Boden 1988 also argues that Searle mistakenly supposes programs are pure syntax. But programs bring about the activity of certain machines: “The inherent procedural consequences of any computer program give it a toehold in semantics, where the semantics in question is not denotational, but causal.” (250) Thus a robot might have causal powers that enable it to refer to a hamburger.

Stevan Harnad also finds important our sensory and motor capabilities: “Who is to say that the Turing Test, whether conducted in Chinese or in any other language, could be successfully passed without operations that draw on our sensory, motor, and other higher cognitive capacities as well? Where does the capacity to comprehend Chinese begin and the rest of our mental competence leave off?” Harnad believes that symbolic functions must be grounded in “robotic” functions that connect a system with the world. And he thinks this counts against symbolic accounts of mentality, such as Jerry Fodor’s, and, one suspects, the approach of Roger Schank that was Searle’s original target. Harnad 2012 (Other Internet Resources) argues that the CRA shows that even with a robot with symbols grounded in the external world, there is still something missing: feeling, such as the feeling of understanding.

However Ziemke 2016 argues a robotic embodiment with layered systems of bodily regulation may ground emotion and meaning, and Seligman 2019 argues that “perceptually grounded” approaches to natural language processing (NLP) have the “potential to display intentionality, and thus after all to foster a truly meaningful semantics that, in the view of Searle and other skeptics, is intrinsically beyond computers’ capacity.”

4.3 The Brain Simulator Reply

Consider a computer that operates in quite a different manner than the usual AI program with scripts and operations on sentence-like strings of symbols. The Brain Simulator reply asks us to suppose instead the program simulates the actual sequence of nerve firings that occur in the brain of a native Chinese language speaker when that person understands Chinese – every nerve, every firing. Since the computer then works the very same way as the brain of a native Chinese speaker, processing information in just the same way, it will understand Chinese. Paul and Patricia Churchland have set out a reply along these lines, discussed below.
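As a rough picture of what simulating “every nerve, every firing” would involve, here is a generic threshold-unit update loop (an expository sketch; an actual neuron-level brain simulation would differ enormously in scale and biological detail):

```python
# Expository sketch: stepping a network of simple threshold "neurons".
# Only the shape of the computation is at issue; the numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 100
weights = rng.normal(0.0, 1.0, size=(n, n))        # stand-in "synaptic" strengths
state = (rng.random(n) < 0.1).astype(float)        # which units are currently firing

def step(state):
    """One update: a unit fires iff its weighted input exceeds a fixed threshold."""
    return (weights @ state > 1.0).astype(float)

for _ in range(10):
    state = step(state)
print(int(state.sum()), "of", n, "units firing after 10 steps")
```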

In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker’s brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese. (Note however that the basis for this claim is no longer simply that Searle himself wouldn’t understand Chinese – it seems clear that now he is just facilitating the causal operation of the system and so we rely on our Leibnizian intuition that water-works don’t understand (see also Maudlin 1989).) Searle concludes that a simulation of brain activity is not the real thing.

However, following Pylyshyn 1980, Cole and Foelber 1984, Chalmers 1996, we might wonder about hybrid systems. Pylyshyn writes:

If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that circuits caused you to make.

These cyborgization thought experiments can be linked to the Chinese Room. Suppose Otto has a neural disease that causes one of the neurons in his brain to fail, but surgeons install a tiny remotely controlled artificial neuron, a synron, alongside his disabled neuron. Otto’s artificial neuron is controlled by John Searle in the Chinese Room, unbeknownst to both Searle and Otto. Tiny wires connect the artificial neuron to the synapses on the cell-body of his disabled neuron. When the artificial neuron is stimulated by neurons that synapse on his disabled neuron, a light goes on in the Chinese Room. Searle then manipulates some valves and switches in accord with a program. That, via the radio link, causes Otto’s artificial neuron to release neurotransmitters from its tiny artificial vesicles. If Searle’s programmed activity causes Otto’s artificial neuron to behave just as his disabled natural neuron once did, the behavior of the rest of his nervous system will be unchanged. Alas, Otto’s disease progresses; more neurons are replaced by synrons controlled by Searle. Ex hypothesi the rest of the world will not notice the difference; will Otto? If so, when? And why?

Under the rubric “The Combination Reply”, Searle also considers a system with the features of all three of the preceding: a robot with a brain-simulating digital computer in its cranium, such that the system as a whole behaves indistinguishably from a human. Since the normal input to the brain is from sense organs, it is natural to suppose that most advocates of the Brain Simulator Reply have in mind such a combination of brain simulation, Robot, and Systems Reply. Some (e.g. Rey 1986) argue it is reasonable to attribute intentionality to such a system as a whole. Searle agrees that it would indeed be reasonable to attribute understanding to such an android system – but only as long as you don’t know how it works. As soon as you know the truth – it is a computer, uncomprehendingly manipulating symbols on the basis of syntax, not meaning – you would cease to attribute intentionality to it.

(One assumes this would be true even if it were one’s spouse, with whom one had built a life-long relationship, that was revealed to hide a silicon secret. Science fiction stories, including episodes of Rod Serling’s television series The Twilight Zone, have been based on such possibilities (the face of the beloved peels away to reveal the awful android truth); however, Steven Pinker (1997) mentions one episode in which the android’s secret was known from the start, but the protagonist developed a romantic relationship with the android.)

On its tenth anniversary the Chinese Room argument was featured in the general science periodical Scientific American. Leading the opposition to Searle’s lead article in that issue were philosophers Paul and Patricia Churchland. The Churchlands agree with Searle that the Chinese Room does not understand Chinese, but hold that the argument itself exploits our ignorance of cognitive and semantic phenomena. They raise a parallel case of “The Luminous Room” where someone waves a magnet and argues that the absence of resulting visible light shows that Maxwell’s electromagnetic theory is false. The Churchlands advocate a view of the brain as a connectionist system, a vector transformer, not a system manipulating symbols according to structure-sensitive rules. The system in the Chinese Room uses the wrong computational strategies. Thus they agree with Searle against traditional AI, but they presumably would endorse what Searle calls “the Brain Simulator Reply”, arguing that, as with the Luminous Room, our intuitions fail us when considering such a complex system, and it is a fallacy to move from part to whole: “… no neuron in my brain understands English, although my whole brain does.”

In his 1991 book Microcognition, Andy Clark holds that Searle is right that a computer running Schank’s program does not know anything about restaurants, “at least if by ‘know’ we mean anything like ‘understand’”. But Searle thinks that this would apply to any computational model, while Clark, like the Churchlands, holds that Searle is wrong about connectionist models. Clark’s interest is thus in the brain-simulator reply. The brain thinks in virtue of its physical properties. What physical properties of the brain are important? Clark answers that what is important about brains is that they have “variable and flexible substructures” which conventional AI systems lack. But that doesn’t mean computationalism or functionalism is false. It depends on what level you take the functional units to be. Clark defends “microfunctionalism” – one should look to a fine-grained functional description, e.g. neural net level. Clark cites William Lycan approvingly contra Block’s absent qualia objection – yes, there can be absent qualia, if the functional units are made large. But that does not constitute a refutation of functionalism generally. So Clark’s views are not unlike the Churchlands’, conceding that Searle is right about Schank and symbolic-level processing systems, but holding that he is mistaken about connectionist systems.

Similarly Ray Kurzweil (2002) argues that Searle’s argument could be turned around to show that human brains cannot understand – the brain succeeds by manipulating neurotransmitter concentrations and other mechanisms that are in themselves meaningless. In criticism of Searle’s response to the Brain Simulator Reply, Kurzweil says: “So if we scale up Searle’s Chinese Room to be the rather massive ‘room’ it needs to be, who’s to say that the entire system of a hundred trillion people simulating a Chinese Brain that knows Chinese isn’t conscious? Certainly, it would be correct to say that such a system knows Chinese. And we can’t say that it is not conscious anymore than we can say that about any other process. We can’t know the subjective experience of another entity….”

4.4 The Other Minds Reply

Related to the preceding is The Other Minds Reply: “How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers.”

Searle’s (1980) reply to this is very short:

The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn’t be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In ‘cognitive sciences’ one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.

Critics hold that if the evidence we have that humans understand is the same as the evidence we might have that a visiting extra-terrestrial alien understands, which is the same as the evidence that a robot understands, the presuppositions we may make in the case of our own species are not relevant, for presuppositions are sometimes false. For similar reasons, Turing, in proposing the Turing Test, is specifically worried about our presuppositions and chauvinism. If the reasons for the presuppositions regarding humans are pragmatic, in that they enable us to predict the behavior of humans and to interact effectively with them, perhaps the presupposition could apply equally to computers (similar considerations are pressed by Dennett, in his discussions of what he calls the Intentional Stance).

Searle raises the question of just what we are attributing in attributing understanding to other minds, saying that it is more than complex behavioral dispositions. For Searle the additional seems to be certain states of consciousness, as is seen in his 2010 summary of the CRA conclusions. Terry Horgan (2013) endorses this claim: “the real moral of Searle’s Chinese room thought experiment is that genuine original intentionality requires the presence of internal states with intrinsic phenomenal character that is inherently intentional…” But this tying of understanding to phenomenal consciousness raises a host of issues.

We attribute limited understanding of language to toddlers, dogs, and other animals, but it is not clear that we are ipso facto attributing unseen states of subjective consciousness – what do we know of the hidden states of exotic creatures? Ludwig Wittgenstein (the Private Language Argument) and his followers pressed similar points. Altered qualia possibilities, analogous to the inverted spectrum, arise: suppose I ask “what’s the sum of 5 and 7” and you respond “the sum of 5 and 7 is 12”, but as you heard my question you had the conscious experience of hearing and understanding “what is the sum of 10 and 14”, though you were in the computational states appropriate for producing the correct sum and so said “12”. Are there certain conscious states that are “correct” for certain functional states? Wittgenstein’s considerations appear to be that the subjective state is irrelevant, at best epiphenomenal, if a language user displays appropriate linguistic behavior. After all, we are taught language on the basis of our overt responses, not our qualia. The mathematical savant Daniel Tammet reports that when he generates the decimal expansion of pi to thousands of digits he experiences colors that reveal the next digit, but even here Tammet’s performance is likely not produced by the colors he experiences, but rather by unconscious neural computation. The possible importance of subjective states is further considered in the section on Intentionality, below.

In the 30 years since the CRA there has been philosophical interest in zombies – creatures that look like and behave just like normal humans, including linguistic behavior, yet have no subjective consciousness. A difficulty for claiming that subjective states of consciousness are crucial for understanding meaning will arise in these cases of absent qualia: we can’t tell the difference between zombies and non-zombies, and so on Searle’s account we can’t tell the difference between those that really understand English and those that don’t. And if you and I can’t tell the difference between those who understand language and zombies who behave like they do but don’t really, then neither can any selection factor in the history of human evolution – to predators, prey, and mates, zombies and true understanders, with the “right” conscious experience, have been indistinguishable. But then there appears to be a distinction without a difference. In any case, Searle’s short reply to the Other Minds Reply may be too short.

Descartes famously argued that speech was sufficient for attributing minds and consciousness to others, and infamously argued that it was necessary. Turing was in effect endorsing Descartes’ sufficiency condition, at least for intelligence, while substituting written for oral linguistic behavior. Since most of us use dialog as a sufficient condition for attributing understanding, and Searle’s argument holds that speech suffices for attributing understanding to humans but not to anything that does not share our biology, an account would appear to be required of what additionally is being attributed, and what can justify the additional attribution. Further, if being con-specific is key on Searle’s account, a natural question arises as to what circumstances would justify us in attributing understanding (or consciousness) to extra-terrestrial aliens who do not share our biology. Offending ETs by withholding attributions of understanding until after doing a post-mortem may be risky.

Hans Moravec, director of the Robotics laboratory at Carnegie Mellon University, and author of Robot: Mere Machine to Transcendent Mind, argues that Searle’s position merely reflects intuitions from traditional philosophy of mind that are out of step with the new cognitive science. Moravec endorses a version of the Other Minds reply. It makes sense to attribute intentionality to machines for the same reasons it makes sense to attribute it to humans; his “interpretative position” is similar to the views of Daniel Dennett. Moravec goes on to note that one of the things we attribute to others is the ability to make attributions of intentionality, and then we make such attributions to ourselves. It is such self-representation that is at the heart of consciousness. These capacities appear to be implementation independent, and hence possible for aliens and suitably programmed computers.

As we have seen, the reason that Searle thinks we can disregard the evidence in the case of robots and computers is that we know that their processing is syntactic, and this fact trumps all other considerations. Indeed, Searle believes this is the larger point that the Chinese Room merely illustrates. This larger point is addressed in the Syntax and Semantics section below.

4.5 The Intuition Reply

Many responses to the Chinese Room argument have noted that, as with Leibniz’ Mill, the argument appears to be based on intuition: the intuition that a computer (or the man in the room) cannot think or have understanding. For example, Ned Block (1980) in his original BBS commentary says “Searle’s argument depends for its force on intuitions that certain entities do not think.” But, Block argues, (1) intuitions sometimes can and should be trumped and (2) perhaps we need to bring our concept of understanding in line with a reality in which certain computer robots belong to the same natural kind as humans. Similarly Margaret Boden (1988) points out that we can’t trust our untutored intuitions about how mind depends on matter; developments in science may change our intuitions. Indeed, elimination of bias in our intuitions was precisely what motivated Turing (1950) to propose the Turing Test, a test that was blind to the physical character of the system replying to questions. Some of Searle’s critics in effect argue that he has merely pushed the reliance on intuition back, into the room.

For example, one can hold that despite Searle’s intuition that he would not understand Chinese while in the room, perhaps he is mistaken and does, albeit unconsciously. Hauser (2002) accuses Searle of Cartesian bias in his inference from “it seems to me quite obvious that I understand nothing” to the conclusion that I really understand nothing. Normally, if one understands English or Chinese, one knows that one does – but not necessarily. Searle lacks the normal introspective awareness of understanding – but this, while abnormal, is not conclusive.

Critics of the CRA note that our intuitions about intelligence, understanding and meaning may all be unreliable. With regard to meaning, Wakefield 2003, following Block 1998, defends what Wakefield calls “the essentialist objection” to the CRA, namely that a computational account of meaning is not analysis of ordinary concepts and their related intuitions. Rather we are building a scientific theory of meaning that may require revising our intuitions. As a theory, it gets its evidence from its explanatory power, not its accord with pre-theoretic intuitions (however Wakefield himself argues that computational accounts of meaning are afflicted by a pernicious indeterminacy (pp. 308ff)).

Other critics focusing on the role of intuitions in the CRA argue that our intuitions regarding both intelligence and understanding may also be unreliable, and perhaps incompatible even with current science. With regard to understanding, Steven Pinker, in How the Mind Works (1997), holds that “… Searle is merely exploring facts about the English word understand…. People are reluctant to use the word unless certain stereotypical conditions apply…” But, Pinker claims, nothing scientifically speaking is at stake. Pinker objects to Searle’s appeal to the “causal powers of the brain” by noting that the apparent locus of the causal powers is the “patterns of interconnectivity that carry out the right information processing”. Pinker ends his discussion by citing a science fiction story in which Aliens, anatomically quite unlike humans, cannot believe that humans think when they discover that our heads are filled with meat. The Aliens’ intuitions are unreliable – presumably ours may be so as well.

Clearly the CRA turns on what is required to understand language. Schank 1978 clarifies his claim about what he thinks his programs can do: “By ‘understand’, we mean SAM [one of his programs] can create a linked causal chain of conceptualizations that represent what took place in each story.” This is a nuanced understanding of “understanding”, whereas the Chinese Room thought experiment does not turn on a technical understanding of “understanding”, but rather intuitions about our ordinary competence when we understand a word like “hamburger”. Indeed by 2015 Schank distances himself from weak senses of “understand”, holding that no computer can “understand when you tell it something”, and that IBM’s WATSON “doesn’t know what it is saying”. Schank’s program may get links right, but arguably does not know what the linked entities are. Whether it does or not depends on what concepts are (see section 5.1). Furthermore it is possible that when it comes to attributing understanding of language we have different standards for different things – more relaxed for dogs and toddlers. Some things understand a language “un poco”. Searle (1980) concedes that there are degrees of understanding, but says that all that matters is that there are clear cases of no understanding, and AI programs are an example: “The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.”
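
Schank’s SAM was built on scripts and conceptual-dependency structures; the toy sketch below is not Schank’s code, but it illustrates the weak sense of “understand” he describes – creating a linked causal chain of events for a story. The event descriptions and the class name CausalChain are invented for illustration.

```python
class CausalChain:
    """A toy 'linked causal chain of conceptualizations' for a story."""

    def __init__(self):
        self.events = []          # ordered event descriptions
        self.links = []           # (cause_index, effect_index) pairs

    def add(self, event, caused_by=None):
        self.events.append(event)
        idx = len(self.events) - 1
        if caused_by is not None:
            self.links.append((caused_by, idx))
        return idx

    def why(self, idx):
        """Answer 'why did event idx happen?' by following a causal link back."""
        for cause, effect in self.links:
            if effect == idx:
                return self.events[cause]
        return "no recorded cause"

# "John went to a restaurant. He ordered a hamburger. He ate it."
chain = CausalChain()
e_enter = chain.add("John enters restaurant")
e_order = chain.add("John orders hamburger", caused_by=e_enter)
e_eat   = chain.add("John eats hamburger", caused_by=e_order)
print(chain.why(e_eat))    # -> "John orders hamburger"
```

On Searle’s view, getting such links right still leaves it entirely open whether the system knows what a hamburger is.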

Some defenders of AI are also concerned with how our understanding of understanding bears on the Chinese Room argument. In their paper “A Chinese Room that Understands” AI researchers Simon and Eisenstadt (2002) argue that whereas Searle refutes “logical strong AI”, the thesis that a program that passes the Turing Test will necessarily understand, Searle’s argument does not impugn “Empirical Strong AI” – the thesis that it is possible to program a computer that convincingly satisfies ordinary criteria of understanding. They hold however that it is impossible to settle these questions “without employing a definition of the term ‘understand’ that can provide a test for judging whether the hypothesis is true or false”. They cite W.V.O. Quine’s Word and Object as showing that there is always empirical uncertainty in attributing understanding to humans. On their view, the Chinese Room is a Clever Hans trick (Clever Hans was a horse who appeared to clomp out the answers to simple arithmetic questions, but it was discovered that Hans could detect unconscious cues from his trainer). Similarly, the man in the room doesn’t understand Chinese, and could be exposed by watching him closely. (Simon and Eisenstadt do not explain just how this would be done, or how it would affect the argument.) Citing the work of Rudolf Carnap, Simon and Eisenstadt argue that to understand is not just to exhibit certain behavior, but to use “intensions” that determine extensions, and that one can see in actual programs that they do use appropriate intensions. They discuss three actual AI programs, and defend various attributions of mentality to them, including understanding, and conclude that computers understand; they learn “intensions by associating words and other linguistic structure with their denotations, as detected through sensory stimuli”. And since we can see exactly how the machines work, “it is, in fact, easier to establish that a machine exhibits understanding than to establish that a human exhibits understanding….” Thus, they conclude, the evidence for empirical strong AI is overwhelming.

Similarly, Daniel Dennett in his original 1980 response to Searle’s argument called it “an intuition pump”, a term he came up with in discussing the CRA with Hofstadter. Sharvy 1983 echoes the complaint. Dennett’s considered view (2013) is that the CRA is “clearly a fallacious and misleading argument ….” (p. 320). Paul Thagard (2013) proposes that for every thought experiment in philosophy there is an equal and opposite thought experiment. Thagard holds that intuitions are unreliable, and the CRA is an example (and that in fact the CRA has now been refuted by the technology of autonomous robotic cars). Dennett has elaborated on concerns about our intuitions regarding intelligence. Dennett 1987 (“Fast Thinking”) expressed concerns about the slow speed at which the Chinese Room would operate, and he has been joined by several other commentators, including Tim Maudlin, David Chalmers, and Steven Pinker. The operator of the Chinese Room may eventually produce appropriate answers to Chinese questions. But slow thinkers are stupid, not intelligent – and in the wild, they may well end up dead. Dennett argues that “speed … is ‘of the essence’ for intelligence. If you can’t figure out the relevant portions of the changing environment fast enough to fend for yourself, you are not practically intelligent, however complex you are” (326). Thus Dennett relativizes intelligence to processing speed relative to the current environment.

Tim Maudlin (1989) disagrees. Maudlin considers the time-scale problem pointed to by other writers, and concludes, contra Dennett, that the extreme slowness of a computational system does not violate any necessary conditions on thinking or consciousness. Furthermore, Searle’s main claim is about understanding, not intelligence or being quick-witted. If we were to encounter extra-terrestrials that could process information a thousand times more quickly than we do, it seems that would show nothing about our own slow-poke ability to understand the languages we speak.

Steven Pinker (1997) also holds that Searle relies on untutored intuitions. Pinker endorses the Churchlands’ (1990) counterexample of an analogous thought experiment of waving a magnet and not generating light, noting that this outcome would not disprove Maxwell’s theory that light consists of electromagnetic waves. Pinker holds that the key issue is speed: “The thought experiment slows down the waves to a range to which we humans no longer see them as light. By trusting our intuitions in the thought experiment, we falsely conclude that rapid waves cannot be light either. Similarly, Searle has slowed down the mental computations to a range in which we humans no longer think of it as understanding (since understanding is ordinarily much faster)” (94–95). Howard Gardner, a supporter of Searle’s conclusions regarding the room, makes a similar point about understanding. Gardner addresses the Chinese Room argument in his book The Mind’s New Science (1985, 171–177). Gardner considers all the standard replies to the Chinese Room argument and concludes that Searle is correct about the room: “…the word understand has been unduly stretched in the case of the Chinese room ….” (175).

Thus several in this group of critics argue that speed affects our willingness to attribute intelligence and understanding to a slow system, such as that in the Chinese Room. The result may simply be that our intuitions regarding the Chinese Room are unreliable, and thus the man in the room, in implementing the program, may understand Chinese despite intuitions to the contrary (Maudlin and Pinker). Or it may be that the slowness marks a crucial difference between the simulation in the room and what a fast computer does, such that the man is not intelligent while the computer system is (Dennett).

Philosophy

Although the Chinese Room argument was originally presented in reaction to the statements of artificial intelligence researchers, philosophers have come to consider it as an important part of the philosophy of mind. It is a challenge to functionalism and the computational theory of mind, and is related to such questions as the mind–body problem, the problem of other minds, the symbol-grounding problem, and the hard problem of consciousness.

Strong AI

Searle identified a philosophical position he calls “strong AI”:

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

The definition depends on the distinction between simulating a mind and actually having a mind. Searle writes that “according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind.”

The claim is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder Herbert A. Simon declared that “there are now in the world machines that think, that learn and create”. Simon, together with Allen Newell and Cliff Shaw, after having completed the first “AI” program, the Logic Theorist, claimed that they had “solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind.” John Haugeland wrote that “AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves.”

Searle also ascribes a number of further claims to advocates of strong AI.

Strong AI as computationalism or functionalism

In more recent presentations of the Chinese room argument, Searle has identified “strong AI” as “computer functionalism” (a term he attributes to Daniel Dennett). Functionalism is a position in modern philosophy of mind that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately represent functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism.

Stevan Harnad argues that Searle’s depictions of strong AI can be reformulated as “recognizable tenets of computationalism, a position (unlike “strong AI”) that is actually held by many thinkers, and hence one worth refuting.” Computationalism is the position in the philosophy of mind which argues that the mind can be accurately described as an information-processing system.

Harnad spells out several such “tenets” of computationalism, each of which he takes Searle’s argument to target.

Strong AI vs. biological naturalism

Searle holds a philosophical position he calls “biological naturalism”: that consciousness and understanding require specific biological machinery that is found in brains. He writes that “brains cause minds” and that “actual human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains”. Searle argues that this machinery (known to neuroscience as the “neural correlates of consciousness”) must have some causal powers that permit the human experience of consciousness. Searle’s belief in the existence of these powers has been criticized.

Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, “we are precisely such machines”. Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using machinery that is non-computational. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding. However, without the specific machinery required, Searle does not believe that consciousness can occur.

Biological naturalism implies that one cannot determine if the experience of consciousness is occurring merely by examining how a system functions, because the specific machinery of the brain is essential. Thus, biological naturalism is directly opposed to both behaviorism and functionalism (including “computer functionalism” or “strong AI”). Biological naturalism is similar to identity theory (the position that mental states are “identical to” or “composed of” neurological events); however, Searle has specific technical objections to identity theory. Searle’s biological naturalism and strong AI are both opposed to Cartesian dualism, the classical idea that the brain and mind are made of different “substances”. Indeed, Searle accuses strong AI of dualism, writing that “strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn’t matter.”

Consciousness

Searle’s original presentation emphasized “understanding”—that is, mental states with what philosophers call “intentionality”—and did not directly address other closely related ideas such as “consciousness”. However, in more recent presentations, Searle has included consciousness as the real target of the argument.

Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.

— John R. Searle, Consciousness and Language, p. 16

David Chalmers writes, “it is fairly clear that consciousness is at the root of the matter” of the Chinese room.

Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever simulation inhabits the room.

Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese.

Applied ethics

[Figure: the combat information center aboard a warship, proposed as a real-life analog to the Chinese room.]

Patrick Hew used the Chinese Room argument to deduce requirements for military command and control systems if they are to preserve a commander’s moral agency. He drew an analogy between a commander in their command center and the person in the Chinese Room, and analyzed it under a reading of Aristotle’s notions of “compulsory” and “ignorance”. Information could be “down converted” from meaning to symbols, and manipulated symbolically, but moral agency could be undermined if there was inadequate ‘up conversion’ into meaning. Hew cited examples from the USS Vincennes incident.

Computer science

The Chinese room argument is primarily an argument in the philosophy of mind, and both major computer scientists and artificial intelligence researchers consider it irrelevant to their fields. However, several concepts developed by computer scientists are essential to understanding the argument, including symbol processing, Turing machines, Turing completeness, and the Turing test.

Strong AI vs. AI research

Searle’s arguments are not usually considered an issue for AI research. Stuart Russell and Peter Norvig observe that most AI researchers “don’t care about the strong AI hypothesis—as long as the program works, they don’t care whether you call it a simulation of intelligence or real intelligence.” The primary mission of artificial intelligence research is only to create useful systems that act intelligently, and it does not matter if the intelligence is “merely” a simulation.

Searle does not disagree that AI research can create machines that are capable of highly intelligent behavior. The Chinese room argument leaves open the possibility that a digital machine could be built that acts more intelligently than a person, but does not have a mind or intentionality in the same way that brains do.

Searle’s “strong AI” should not be confused with “strong AI” as defined by Ray Kurzweil and other futurists, who use the term to describe machine intelligence that rivals or exceeds human intelligence. Kurzweil is concerned primarily with the amount of intelligence displayed by the machine, whereas Searle’s argument sets no limit on this. Searle argues that even a superintelligent machine would not necessarily have a mind and consciousness.

Turing test


[Figure: the “standard interpretation” of the Turing Test, in which player C, the interrogator, tries to determine which of players A and B is a computer and which is a human, using only their responses to written questions. Image adapted from Saygin et al. 2000.]

The Chinese room implements a version of the Turing test. Alan Turing introduced the test in 1950 to help answer the question “can machines think?” In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.
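
The standard version can be thought of as a simple protocol: the judge exchanges written questions and answers with two hidden respondents over identical channels and then guesses which is the machine. The harness below is only a schematic rendering of that protocol; the judge and respondent functions are placeholders to be supplied.

```python
import random

def run_turing_test(judge, human_reply, machine_reply, questions):
    """Minimal harness for the standard Turing test setup.

    judge: given two full transcripts [(question, answer), ...], returns
    'A' or 'B' as its guess at which respondent is the machine.
    """
    machine_is_a = random.choice([True, False])       # hide the assignment
    reply_a = machine_reply if machine_is_a else human_reply
    reply_b = human_reply if machine_is_a else machine_reply
    transcript_a = [(q, reply_a(q)) for q in questions]
    transcript_b = [(q, reply_b(q)) for q in questions]
    guess = judge(transcript_a, transcript_b)
    return (guess == "A") == machine_is_a             # True if the judge was right

# Over many runs, a machine the judge cannot reliably identify
# (success near 50%) is said to pass the test.
```

A machine counts as passing only in this statistical sense: over many such runs, the judge’s guesses are no better than chance.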

Turing then considered each possible objection to the proposal “machines can think”, and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of “consciousness” or “understanding”. He did not believe this was relevant to the issues that he was addressing. He wrote:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.

To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.

Symbol processing


The Chinese room, like all modern computers, manipulates physical objects in order to carry out calculations and simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic.

Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using syntactic rules, without any knowledge of the symbols’ semantics (that is, their meaning).
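
The purely syntactic character of this manipulation can be made concrete with a toy rewrite table: token strings go in, other token strings come out, and nothing in the procedure refers to what the tokens mean. The rules below are invented placeholders, not a real conversational program.

```python
# A toy 'program': input pattern -> response, stated purely over uninterpreted tokens.
RULES = {
    ("你", "好", "吗"): ("我", "很", "好"),
    ("你", "是", "谁"): ("我", "是", "机", "器"),
}

def respond(symbols):
    """Look up an input tuple of tokens and return the rule's output tokens.

    The procedure compares shapes only; it never consults meanings.
    """
    return RULES.get(tuple(symbols), ("不", "懂"))

print("".join(respond(["你", "好", "吗"])))   # prints 我很好
```

Someone applying such a table, like the man in the room, can follow it flawlessly without knowing whether the exchange concerned greetings or the weather.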

Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the necessary machinery for “general intelligent action”, or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: “A physical symbol system has the necessary and sufficient means for general intelligent action.” The Chinese room argument does not refute this, because it is framed in terms of “intelligent action”, i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.

Chinese room and Turing completeness


The Chinese room has a design analogous to that of a modern computer. It has a Von Neumann architecture, which consists of a program (the book of instructions), some memory (the papers and file cabinets), a CPU that follows the instructions (the man), and a means to write symbols in memory (the pencil and eraser). A machine with this design is known in theoretical computer science as “Turing complete”, because it has the necessary machinery to carry out any computation that a Turing machine can do, and therefore it is capable of doing a step-by-step simulation of any other digital machine, given enough memory and time. Alan Turing writes, “all digital computers are in a sense equivalent.” The widely accepted Church–Turing thesis holds that any function computable by an effective procedure is computable by a Turing machine.
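
The architectural analogy can be made explicit: the rule book is the program, the paper is the memory, and the man is a dumb fetch–execute loop. The following sketch is offered only as an illustration under those assumptions; the three-instruction counter-machine language is an invented stand-in for whatever instruction set the room’s rule book uses.

```python
def run(program, memory, max_steps=10_000):
    """Fetch-execute loop for a tiny counter-machine.

    program : list of instructions, each one of
              ("INC", r)       - add 1 to register r
              ("DECJZ", r, k)  - if register r is 0 jump to line k, else subtract 1
              ("HALT",)
    memory  : dict register_name -> non-negative integer (the 'sheets of paper')
    The person playing CPU just reads the current instruction and obeys it.
    """
    pc = 0                                 # which line of the rule book we are on
    for _ in range(max_steps):
        instr = program[pc]
        if instr[0] == "HALT":
            return memory
        if instr[0] == "INC":
            memory[instr[1]] += 1
            pc += 1
        elif instr[0] == "DECJZ":
            _, r, k = instr
            if memory[r] == 0:
                pc = k
            else:
                memory[r] -= 1
                pc += 1
    raise RuntimeError("step budget exceeded")

# Add register 'a' into register 'b': repeat {a -= 1; b += 1} until a == 0.
program = [("DECJZ", "a", 3), ("INC", "b"), ("DECJZ", "z", 0), ("HALT",)]
print(run(program, {"a": 3, "b": 4, "z": 0}))   # -> {'a': 0, 'b': 7, 'z': 0}
```

Nothing in the loop changes if it is executed by a person with pencil and paper rather than by electronics, which is the sense in which the room can, in principle, simulate any other digital machine, given enough memory and time.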

The Turing completeness of the Chinese room implies that it can do whatever any other digital computer can do (albeit much, much more slowly). Thus, if the Chinese room does not or can not contain a Chinese-speaking mind, then no other digital computer can contain a mind. Some replies to Searle begin by arguing that the room, as described, cannot have a Chinese-speaking mind. Arguments of this form, according to Stevan Harnad, are “no refutation (but rather an affirmation)” of the Chinese room argument, because these arguments actually imply that no digital computers can have a mind.

There are some critics, such as Hanoch Ben-Yami, who argue that the Chinese room cannot simulate all the abilities of a digital computer, such as being able to determine the current time.

Complete argument

Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He presented the first version in 1984. The version given below is from 1990. The Chinese room thought experiment is intended to prove point A3.

He begins with three axioms: (A1) “Programs are formal (syntactic).” A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it does not know what they stand for or what they mean. For the program, the symbols are just physical objects like any others. (A2) “Minds have mental contents (semantics).” Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent. (A3) “Syntax by itself is neither constitutive of nor sufficient for semantics.” This is what the Chinese room thought experiment is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.

Searle posits that these lead directly to this conclusion: (C1) Programs are neither constitutive of nor sufficient for minds. This should follow without controversy from the first three: Programs don’t have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore no programs are minds.
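
One charitable way to schematize the derivation, reading A1 as saying that programs are purely syntactic and A3 as saying that pure syntax never suffices for semantics, is the following first-order sketch (the predicate letters are ours, not Searle’s):

```latex
\begin{align*}
\text{A1:}\quad & \forall x\,\bigl(\mathrm{Program}(x) \rightarrow \mathrm{PureSyntax}(x)\bigr)\\
\text{A2:}\quad & \forall x\,\bigl(\mathrm{Mind}(x) \rightarrow \mathrm{Semantics}(x)\bigr)\\
\text{A3:}\quad & \forall x\,\bigl(\mathrm{PureSyntax}(x) \rightarrow \neg\,\mathrm{Semantics}(x)\bigr)\\
\text{C1:}\quad & \forall x\,\bigl(\mathrm{Program}(x) \rightarrow \neg\,\mathrm{Mind}(x)\bigr)
  && \text{from A1, A3, and the contrapositive of A2}
\end{align*}
```

This is only a schematic reconstruction; Searle himself presents the argument informally.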

This much of the argument is intended to show that artificial intelligence can never produce a machine with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a different issue. Is the human brain running a program? In other words, is the computational theory of mind correct? He begins with an axiom that is intended to express the basic modern scientific consensus about brains and minds: (A4) Brains cause minds.

Searle claims that we can derive “immediately” and “trivially” that: (C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains. Brains must have something that causes a mind to exist. Science has yet to determine exactly what it is, but it must exist, because minds exist. Searle calls it “causal powers”. “Causal powers” is whatever the brain uses to create a mind. If anything else can cause a mind to exist, it must have “equivalent causal powers”. “Equivalent causal powers” is whatever else that could be used to make a mind.

And from this he derives the further conclusions: (C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program. This follows from C1 and C2: Since no program can produce a mind, and “equivalent causal powers” produce minds, it follows that programs do not have “equivalent causal powers.” (C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program. Since programs do not have “equivalent causal powers”, “equivalent causal powers” produce minds, and brains produce minds, it follows that brains do not use programs to produce minds.

Refutations of Searle’s argument take many different forms (see below). Computationalists and functionalists reject A3, arguing that “syntax” (as Searle describes it) can have “semantics” if the syntax has the right functional structure. Eliminative materialists reject A2, arguing that minds don’t actually have “semantics” — that thoughts and other mental phenomena are inherently meaningless but nevertheless function as if they had meaning.

Replies

Replies to Searle’s argument may be classified according to what they claim to show.

Some of the arguments (robot and brain simulation, for example) fall into multiple categories.

Systems and virtual mind replies: finding the mind

These replies attempt to answer the question: since the man in the room doesn’t speak Chinese, where is the “mind” that does? These replies address the key ontological issues of mind vs. body and simulation vs. reality. All of the replies that identify the mind in the room are versions of “the system reply”.

The basic version of the system reply argues that it is the “whole system” that understands Chinese. While the man understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese. “Here, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part” Searle explains. The fact that a certain man does not understand Chinese is irrelevant, because it is only the system as a whole that matters.

Searle notes that (in this simple version of the reply) the “system” is nothing more than a collection of ordinary physical objects; it grants the power of understanding and consciousness to “the conjunction of that person and bits of paper” without making any effort to explain how this pile of objects has become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the reply, unless they are “under the grip of an ideology”. In order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information processing “system”, and does not require anything resembling the actual biology of the brain.

Searle then responds by simplifying this list of physical objects: he asks what happens if the man memorizes the rules and keeps track of everything in his head? Then the whole system consists of just one object: the man himself. Searle argues that if the man does not understand Chinese then the system does not understand Chinese either because now “the system” and “the man” both describe exactly the same object.

Critics of Searle’s response argue that the program has allowed the man to have two minds in one head. If we assume a “mind” is a form of information processing, then the theory of computation can account for two computations occurring at once, namely (1) the computation for universal programmability (which is the function instantiated by the person and note-taking materials independently from any particular program contents) and (2) the computation of the Turing machine that is described by the program (which is instantiated by everything including the specific program). The theory of computation thus formally explains the open possibility that the second computation in the Chinese Room could entail a human-equivalent semantic understanding of the Chinese inputs. The focus belongs on the program’s Turing machine rather than on the person’s. However, from Searle’s perspective, this argument is circular. The question at issue is whether consciousness is a form of information processing, and this reply requires that we make that assumption.

More sophisticated versions of the systems reply try to identify more precisely what “the system” is and they differ in exactly how they describe it. According to these replies, the “mind that speaks Chinese” could be such things as: the “software”, a “program”, a “running program”, a simulation of the “neural correlates of consciousness”, the “functional system”, a “simulated mind”, an “emergent property”, or “a virtual mind” (described below).

Marvin Minsky suggested a version of the system reply known as the “virtual mind reply”. The term “virtual” is used in computer science to describe an object that appears to exist “in” a computer (or computer network) only because software makes it appear to exist. The objects “inside” computers (including files, folders, and so on) are all “virtual”, except for the computer’s electronic components. Similarly, Minsky argues, a computer may contain a “mind” that is virtual in the same sense as virtual machines, virtual communities and virtual reality.

To clarify the distinction between the simple systems reply given above and virtual mind reply, David Cole notes that two simulations could be running on one system at the same time: one speaking Chinese and one speaking Korean. While there is only one system, there can be multiple “virtual minds,” thus the “system” cannot be the “mind”.
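
Cole’s point can be pictured concretely: one and the same interpreter (the “system”) can host two different rule tables, each with its own conversational state, yielding two distinct simulated speakers. The class and rule tables below are illustrative assumptions in the spirit of the toy table sketched earlier, not a serious model of a mind.

```python
class VirtualSpeaker:
    """A 'virtual mind' is here just a rule table plus its own conversation state."""

    def __init__(self, rules):
        self.rules = rules
        self.history = []                     # state private to this virtual speaker

    def reply(self, utterance):
        self.history.append(utterance)
        return self.rules.get(utterance, "...")

# One physical system (this process) hosting two virtual speakers.
chinese = VirtualSpeaker({"你好": "你好！", "再见": "再见！"})
korean  = VirtualSpeaker({"안녕하세요": "안녕하세요!", "안녕히 가세요": "안녕히 가세요!"})

print(chinese.reply("你好"))          # the 'Chinese' virtual speaker answers
print(korean.reply("안녕하세요"))      # the 'Korean' one answers, on the same hardware
```

Because the single system hosts both, the friend of the virtual mind reply concludes that whatever understands Chinese here cannot simply be identified with the system as a whole.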

Searle responds that such a mind is, at best, a simulation, and writes: “No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched.” Nicholas Fearn responds that, for some things, simulation is as good as the real thing. “When we call up the pocket calculator function on a desktop computer, the image of a pocket calculator appears on the screen. We don’t complain that ‘it isn’t really a calculator’, because the physical attributes of the device do not matter.” The question is, is the human mind like the pocket calculator, essentially composed of information? Or is the mind like the rainstorm, something other than a computer, and not realizable in full by a computer simulation? For decades, this question of simulation has led AI researchers and philosophers to consider whether the term “synthetic intelligence” is more appropriate than the common description of such intelligences as “artificial.”

These replies provide an explanation of exactly who it is that understands Chinese. If there is something besides the man in the room that can understand Chinese, Searle cannot argue that (1) the man does not understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who make this reply, shows that Searle’s argument fails to prove that “strong AI” is false.

These replies, by themselves, do not provide any evidence that strong AI is true, however. They do not show that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing Test. Searle argues that, if we are to consider Strong AI remotely plausible, the Chinese Room is an example that requires explanation, and it is difficult or impossible to explain how consciousness might “emerge” from the room or how the system would have consciousness. As Searle writes “the systems reply simply begs the question by insisting that the system must understand Chinese” and thus is dodging the question or hopelessly circular.

Robot and semantics replies: finding the meaning

As far as the person in the room is concerned, the symbols are just meaningless “squiggles.” But if the Chinese room really “understands” what it is saying, then the symbols must get their meaning from somewhere. These arguments attempt to connect the symbols to the things they symbolize. These replies address Searle’s concerns about intentionality, symbol grounding and syntax vs. semantics.

Robot reply

Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. This would allow a “causal connection” between the symbols and things they represent. Hans Moravec comments: “If we could graft a robot to a reasoning program, we wouldn’t need a person to provide the meaning anymore: it would come from the physical world.” Searle’s reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean. Searle writes “he doesn’t see what comes into the robot’s eyes.” (See Mary’s room for a similar thought experiment.)

Derived meaning

Some respond that the room, as Searle describes it, is connected to the world: through the Chinese speakers that it is “talking” to and through the programmers who designed the knowledge base in his file cabinet. The symbols Searle manipulates are already meaningful, they’re just not meaningful to him. Searle says that the symbols only have a “derived” meaning, like the meaning of words in books. The meaning of the symbols depends on the conscious understanding of the Chinese speakers and the programmers outside the room. The room, like a book, has no understanding of its own.

Commonsense knowledge / contextualist reply

Some have argued that the meanings of the symbols would come from a vast “background” of commonsense knowledge encoded in the program and the filing cabinets. This would provide a “context” that would give the symbols their meaning. Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert Dreyfus has also criticized the idea that the “background” can be represented symbolically.

To each of these suggestions, Searle’s response is the same: no matter how much knowledge is written into the program and no matter how the program is connected to the world, he is still in the room manipulating symbols according to rules. His actions are syntactic and this can never explain to him what the symbols stand for. Searle writes “syntax is insufficient for semantics.”

However, for those who accept that Searle’s actions simulate a mind, separate from his own, the important question is not what the symbols mean to Searle, what is important is what they mean to the virtual mind. While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through the cameras and other sensors that roboticists can supply.

Brain simulation and connectionist replies: redesigning the room

These arguments are all versions of the systems reply that identify a particular kind of system as being important; they identify some special technology that would create conscious understanding in a machine. (The “robot” and “commonsense knowledge” replies above also specify a certain kind of system as being important.)

Brain simulator reply

Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker. This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain. Searle replies that such a simulation does not reproduce the important features of the brain—its causal and intentional states. Searle is adamant that “human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains.” Moreover, he argues:

[I]magine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn’t understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the “neuron firings” in his imagination.

Two variations on the brain simulator reply are the China brain and the brain-replacement scenario.

China brain

What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the connections between axons and dendrites? In this version, it seems obvious that no individual would have any understanding of what the brain might be saying. It is also obvious that this system would be functionally equivalent to a brain, so if consciousness is a function, this system would be conscious.

Brain replacement scenario

In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure (either gradually or all at once). Searle’s critics argue that there would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins. (See Ship of Theseus for a similar thought experiment.)
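
The replacement procedure can be pictured as swapping units one at a time while preserving each unit’s input–output function. The sketch below is a bare-bones illustration under that assumption; BiologicalNeuron and SiliconNeuron are placeholder names, and the philosophically interesting question – whether anything about experience changes – is of course not captured by any such check.

```python
class BiologicalNeuron:
    def __init__(self, threshold):
        self.threshold = threshold

    def fire(self, total_input):
        return total_input >= self.threshold

class SiliconNeuron:
    """A replacement unit built to reproduce the original's input-output function."""
    def __init__(self, original):
        self.threshold = original.threshold    # copy the functional parameters

    def fire(self, total_input):
        return total_input >= self.threshold

brain = [BiologicalNeuron(t) for t in (0.2, 0.5, 0.9)]
probe_inputs = [0.1, 0.4, 0.6, 1.0]

# Replace one neuron at a time, checking that behavior is unchanged at each step.
for i in range(len(brain)):
    replacement = SiliconNeuron(brain[i])
    assert all(replacement.fire(x) == brain[i].fire(x) for x in probe_inputs)
    brain[i] = replacement

print(all(isinstance(n, SiliconNeuron) for n in brain))   # -> True: fully replaced
```

The dispute is over whether conscious awareness could nonetheless fade or vanish at some point in this loop even though every functional check passes.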

Connectionist replies

Closely related to the brain simulator reply, this claims that a massively parallel connectionist architecture would be capable of understanding.

Combination reply

This response combines the robot reply with the brain simulation reply, arguing that a brain simulation connected to the world through a robot body could have a mind.

Many mansions / wait till next year reply

Better technology in the future will allow computers to understand. Searle agrees that there may be designs that would cause a machine to have conscious understanding, but he considers this point irrelevant: his target is the claim that running a program is, by itself, sufficient for understanding.

These arguments (and the robot or commonsense knowledge replies) identify some special technology that would help create conscious understanding in a machine. They may be interpreted in two ways: either they claim (1) this technology is required for consciousness, the Chinese room does not or cannot implement this technology, and therefore the Chinese room cannot pass the Turing test or (even if it did) it would not have conscious understanding. Or they may be claiming that (2) it is easier to see that the Chinese room has a mind if we visualize this technology as being used to create it.

In the first case, where features like a robot body or a connectionist architecture are required, Searle claims that strong AI (as he understands it) has been abandoned. The Chinese room has all the elements of a Turing complete machine, and thus is capable of simulating any digital computation whatsoever. If Searle’s room cannot pass the Turing test then there is no other digital technology that could pass the Turing test. If Searle’s room could pass the Turing test, but still does not have a mind, then the Turing test is not sufficient to determine if the room has a “mind”. Either way, it denies one or the other of the positions Searle thinks of as “strong AI”, proving his argument.

The brain arguments in particular deny strong AI if they assume that there is no simpler way to describe the mind than to create a program that is just as mysterious as the brain was. He writes “I thought the whole idea of strong AI was that we don’t need to know how the brain works to know how the mind works.” If computation does not provide an explanation of the human mind, then strong AI has failed, according to Searle.

Other critics hold that the room as Searle described it does, in fact, have a mind, however they argue that it is difficult to see—Searle’s description is correct, but misleading. By redesigning the room more realistically they hope to make this more obvious. In this case, these arguments are being used as appeals to intuition (see next section).

In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block’s Blockhead argument suggests that the program could, in theory, be rewritten into a simple lookup table of rules of the form “if the user writes S, reply with P and goto X”. At least in principle, any program can be rewritten (or “refactored”) into this form, even a brain simulation. In the blockhead scenario, the entire mental state is hidden in the letter X, which represents a memory address—a number associated with the next rule. It is hard to visualize that an instant of one’s conscious experience can be captured in a single large number, yet this is exactly what “strong AI” claims. On the other hand, such a lookup table would be ridiculously large (to the point of being physically impossible), and the states could therefore be extremely specific.
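
To make Block’s construction concrete, here is a minimal sketch of a conversation program reduced to a lookup table; the rules, states and replies are invented purely for illustration, and the “goto X” of each rule is simply the next state number.

```python
# Minimal, purely illustrative sketch of a Blockhead-style lookup table.
# Each rule maps (state, user_input) -> (reply, next_state); the "goto X"
# in Block's formulation is the next_state number. A real table covering
# every possible conversation would be astronomically large.

RULES = {
    (0, "hello"): ("Hi there. What would you like to talk about?", 1),
    (1, "the weather"): ("It has been unseasonably warm lately.", 2),
    (1, "philosophy"): ("Ah, the Chinese Room again?", 3),
    (2, "goodbye"): ("Goodbye!", 0),
}

def blockhead(state: int, user_input: str) -> tuple[str, int]:
    """Look up the reply and the next state; rules are never 'understood',
    only matched by their literal form."""
    return RULES.get((state, user_input), ("I have no rule for that.", state))

if __name__ == "__main__":
    state = 0
    for line in ["hello", "philosophy", "what do you think?"]:
        reply, state = blockhead(state, line)
        print(f"> {line}\n{reply}")
```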

Searle argues that however the program is written or however the machine is connected to the world, the mind is being simulated by a simple step-by-step digital machine (or machines). These machines are always just like the man in the room: they understand nothing and do not speak Chinese. They are merely manipulating symbols without knowing what they mean. Searle writes: “I can have any formal program you like, but I still understand nothing.”

Speed and complexity: appeals to intuition

The following arguments (and the intuitive interpretations of the arguments above) do not directly explain how a Chinese speaking mind could exist in Searle’s room, or how the symbols he manipulates could become meaningful. However, by raising doubts about Searle’s intuitions they support other positions, such as the system and robot replies. These arguments, if accepted, prevent Searle from claiming that his conclusion is obvious by undermining the intuitions that his certainty requires.

Several critics believe that Searle’s argument relies entirely on intuitions. Ned Block writes “Searle’s argument depends for its force on intuitions that certain entities do not think.” Daniel Dennett describes the Chinese room argument as a misleading “intuition pump” and writes “Searle’s thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the ‘obvious’ conclusion from it.”

Some of the arguments above also function as appeals to intuition, especially those that are intended to make it seem more plausible that the Chinese room contains a mind, which can include the robot, commonsense knowledge, brain simulation and connectionist replies. Several of the replies above also address the specific issue of complexity. The connectionist reply emphasizes that a working artificial intelligence system would have to be as complex and as interconnected as the human brain. The commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be “an extraordinarily supple, sophisticated, and multilayered system, brimming with ‘world knowledge’ and meta-knowledge and meta-meta-knowledge”, as Daniel Dennett explains.

Many of these critiques emphasize speed and complexity of the human brain, which processes information at 100 billion operations per second (by some estimates). Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require “filing cabinets” of astronomical proportions. This brings the clarity of Searle’s intuition into doubt.
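
A rough back-of-the-envelope calculation, using illustrative figures of our own choosing rather than measured values, conveys the scale these critics have in mind.

```python
# Back-of-the-envelope estimate of how long the man in the room would take.
# All figures are rough, illustrative assumptions; published estimates of
# brain throughput vary by several orders of magnitude.

brain_ops_per_second = 1e11      # ~100 billion operations/second, as cited above
seconds_of_thought = 2           # time a person might take to answer a question
manual_ops_per_second = 1.0      # one hand-executed rule application per second

total_ops = brain_ops_per_second * seconds_of_thought
seconds_needed = total_ops / manual_ops_per_second
years_needed = seconds_needed / (60 * 60 * 24 * 365)

print(f"{years_needed:,.0f} years")   # ~6,300 years at these figures
# Counting individual synaptic events instead (often estimated at 1e14 or
# more per second) pushes the total into the millions of years.
```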

An especially vivid version of the speed and complexity reply is from Paul and Patricia Churchland. They propose this analogous thought experiment: “Consider a dark room containing a man holding a bar magnet or charged object. If the man pumps the magnet up and down, then, according to Maxwell’s theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will thus be luminous. But as all of us who have toyed with magnets or charged balls well know, their forces (or any other forces for that matter), even when set in motion produce no luminance at all. It is inconceivable that you might constitute real luminance just by moving forces around!” The Churchlands’ point is that the man would have to wave the magnet up and down something like 450 trillion times per second in order to see anything.

Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our intuitions. He writes “Some have made a cult of speed and timing, holding that, when accelerated to the right speed, the computational may make a phase transition into the mental. It should be clear that is not a counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting up to the right degree of ‘complexity.’)”

Searle argues that his critics are also relying on intuitions; however, his opponents’ intuitions have no empirical basis. He writes that, in order to consider the “system reply” as remotely plausible, a person must be “under the grip of an ideology”. The system reply only makes sense (to Searle) if one assumes that any “system” can have consciousness, just by virtue of being a system with the right behavior and functional parts. This assumption, he argues, is not tenable given our experience of consciousness.

Other minds and zombies: meaninglessness

Several replies argue that Searle’s argument is irrelevant because his assumptions about the mind and consciousness are faulty. Searle believes that human beings directly experience their consciousness, intentionality and the nature of the mind every day, and that this experience of consciousness is not open to question. He writes that we must “presuppose the reality and knowability of the mental.” The replies below question whether Searle is justified in using his own experience of consciousness to determine that it is more than mechanical symbol processing. In particular, the other minds reply argues that we cannot use our experience of consciousness to answer questions about other minds (even the mind of a computer), the eliminative materialist reply argues that Searle’s own personal consciousness does not “exist” in the sense that Searle thinks it does, and the epiphenomena replies question whether we can make any argument at all about something like consciousness which cannot, by definition, be detected by any experiment.

The “Other Minds Reply” points out that Searle’s argument is a version of the problem of other minds, applied to machines. There is no way we can determine if other people’s subjective experience is the same as our own. We can only study their behavior (i.e., by giving them our own Turing test). Critics of Searle argue that he is holding the Chinese room to a higher standard than we would hold an ordinary person.

Nils Nilsson writes “If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I’m willing to credit him with real thought.”

Alan Turing anticipated Searle’s line of argument (which he called “The Argument from Consciousness”) in 1950 and made the other minds reply. He noted that people never consider the problem of other minds when dealing with each other, writing that “instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.” The Turing test simply extends this “polite convention” to machines. Turing did not intend to solve the problem of other minds (for machines or people) and did not think we needed to.

Several philosophers argue that consciousness, as Searle describes it, does not exist. This position is sometimes referred to as eliminative materialism: the view that consciousness is not a concept that can “enjoy reduction” to a strictly mechanical (i.e. material) description, but rather is a concept that will simply be eliminated once the way the material brain works is fully understood, in just the same way as the concept of a demon has already been eliminated from science rather than enjoying reduction to a strictly mechanical description. On this view, our experience of consciousness is, as Daniel Dennett describes it, a “user illusion”. Other mental properties, such as original intentionality (also called “meaning”, “content”, and “semantic character”), are also commonly regarded as special properties related to beliefs and other propositional attitudes. Eliminative materialism maintains that propositional attitudes such as beliefs and desires, among other intentional mental states that have content, do not exist. If eliminative materialism is the correct scientific account of human cognition, then the assumption of the Chinese room argument that “minds have mental contents (semantics)” must be rejected.

Stuart Russell and Peter Norvig argue that if we accept Searle’s description of intentionality, consciousness, and the mind, we are forced to accept that consciousness is epiphenomenal: that it “casts no shadow” i.e. is undetectable in the outside world. They argue that Searle must be mistaken about the “knowability of the mental”, and in his belief that there are “causal properties” in our neurons that give rise to the mind. They point out that, by Searle’s own description, these causal properties cannot be detected by anyone outside the mind, otherwise the Chinese Room could not pass the Turing test—the people outside would be able to tell there was not a Chinese speaker in the room by detecting their causal properties. Since they cannot detect causal properties, they cannot detect the existence of the mental. In short, Searle’s “causal properties” and consciousness itself are undetectable, and anything that cannot be detected either does not exist or does not matter.

Mike Alder makes the same point, which he calls the “Newton’s Flaming Laser Sword Reply”. He argues that the entire argument is frivolous, because it is non-verificationist: not only is the distinction between simulating a mind and having a mind ill-defined, but it is also irrelevant because no experiments were, or even can be, proposed to distinguish between the two.

Daniel Dennett provides this extension to the “epiphenomena” argument. Suppose that, by some mutation, a human being is born that does not have Searle’s “causal properties” but nevertheless acts exactly like a human being. (This sort of animal is called a “zombie” in thought experiments in the philosophy of mind.) This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. Therefore, if Searle is right, it is most likely that human beings (as we see them today) are actually “zombies”, who nevertheless insist they are conscious. It is impossible to know whether we are all zombies or not. Even if we are all zombies, we would still believe that we are not.

Searle disagrees with this analysis and argues that “the study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don’t … what we wanted to know is what distinguishes the mind from thermostats and livers.” He takes it as obvious that we can detect the presence of consciousness and dismisses these replies as being off the point.

Other replies

Margaret Boden argued in her paper “Escaping from the Chinese Room” that even if the person in the room does not understand Chinese, it does not follow that there is no understanding in the room: the person at least understands the rule book (written in English) used to produce the output responses.

In popular culture

The Chinese room argument is a central concept in Peter Watts’s novels Blindsight and (to a lesser extent) Echopraxia. Greg Egan illustrates the concept succinctly (and somewhat horrifically) in his 1990 short story Learning to Be Me, in his collection Axiomatic.

It is a central theme in the video game Zero Escape: Virtue’s Last Reward, and ties into the game’s narrative.

A similar human computer is imagined in Liu Cixin’s novel The Three-Body Problem, described thus by Philip Steiner: “a massive human-computer by instrumentalizing millions of soldiers [who] take the role of signal input and signal output and are instructed to perform different logical circuits, like an AND gate and an OR gate”.

In Season 4 of the American crime drama Numb3rs there is a brief reference to the Chinese room.

The Chinese Room is also the name of a British independent video game development studio best known for experimental first-person games such as Everybody’s Gone to the Rapture and Dear Esther.

In the 2016 video game The Turing Test, the Chinese Room thought experiment is explained to the player by an AI.

5. The Larger Philosophical Issues

5.1 Syntax and Semantics

Searle believes the Chinese Room argument supports a larger point, which explains the failure of the Chinese Room to produce understanding. Searle argued that programs implemented by computers are just syntactical. Computer operations are “formal” in that they respond only to the physical form of the strings of symbols, not to the meaning of the symbols. Minds on the other hand have states with meaning, mental contents. We associate meanings with the words or signs in language. We respond to signs because of their meaning, not just their physical appearance. In short, we understand. But, and according to Searle this is the key point, “Syntax is not by itself sufficient for, nor constitutive of, semantics.” So although computers may be able to manipulate syntax to produce appropriate responses to natural language input, they do not understand the sentences they receive or output, for they cannot associate meanings with the words.
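
To fix ideas, here is a toy illustration, entirely our own and with deliberately meaningless rule contents, of what responding only to the physical form of symbol strings looks like: the sketch below matches and rewrites tokens purely by their shape, and any meaning would have to be supplied by someone outside the system.

```python
# A toy purely syntactic rewrite system: rules fire on the literal form of
# the tokens, never on what (if anything) they mean. Any semantics has to be
# imposed from outside, e.g. by reading "MAMA" as negation and "P"/"Q" as
# sentences -- the system itself is indifferent to that interpretation.

REWRITE_RULES = [
    (("MAMA", "MAMA"), ()),            # delete any adjacent pair "MAMA MAMA"
    (("P", "SPLORT", "Q"), ("Q",)),    # from "P SPLORT Q", keep only "Q"
]

def rewrite_once(tokens: list[str]) -> list[str] | None:
    """Apply the first rule whose left-hand side matches anywhere; return the
    rewritten token list, or None if no rule matches."""
    for lhs, rhs in REWRITE_RULES:
        n = len(lhs)
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) == lhs:
                return tokens[:i] + list(rhs) + tokens[i + n:]
    return None

tokens = ["MAMA", "MAMA", "P", "SPLORT", "Q"]
while (step := rewrite_once(tokens)) is not None:
    tokens = step
print(tokens)   # ['Q'] -- derived without the program 'knowing' what Q says
```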

Searle (1984) presents a three-premise argument that because syntax is not sufficient for semantics, programs cannot produce minds.

  1. Programs are purely formal (syntactic).
  2. Human minds have mental contents (semantics).
  3. Syntax by itself is neither constitutive of, nor sufficient for, semantic content.
  4. Therefore, programs by themselves are not constitutive of nor sufficient for minds.

The Chinese Room thought experiment itself is the support for the third premise. The claim that syntactic manipulation is not sufficient for meaning or thought is a significant issue, with wider implications than AI, or attributions of understanding. Prominent theories of mind hold that human cognition generally is computational. In one form, it is held that thought involves operations on symbols in virtue of their physical properties. On an alternative connectionist account, the computations are on “subsymbolic” states. If Searle is right, not only Strong AI but also these main approaches to understanding human cognition are misguided.
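
The logical skeleton of the argument can be set out schematically; the abbreviations below are our own shorthand, with “⊆” read as “are among” and “⇒” read as “is by itself sufficient for, or constitutive of”.

```latex
\[
\begin{array}{lll}
\text{P1:} & \mathrm{Programs} \subseteq \mathrm{Syntax}
           & \text{(programs are purely formal / syntactic)}\\
\text{P2:} & \mathrm{Minds} \subseteq \mathrm{Semantics}
           & \text{(minds have mental contents)}\\
\text{P3:} & \mathrm{Syntax} \not\Rightarrow \mathrm{Semantics}
           & \text{(syntax does not suffice for semantics)}\\
\hline
\text{C:}  & \mathrm{Programs} \not\Rightarrow \mathrm{Minds}
           & \text{(programs do not suffice for minds)}
\end{array}
\]
```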

As we have seen, Searle holds that the Chinese Room scenario shows that one cannot get semantics from syntax alone. In a symbolic logic system, a kind of artificial language, rules are given for syntax. A semantics, if any, comes later. The logician specifies the basic symbol set and some rules for manipulating strings to produce new ones. These rules are purely syntactic – they are applied to strings of symbols solely in virtue of their syntax or form. A semantics, if any, for the symbol system must be provided separately. And if one wishes to show that interesting additional relationships hold between the syntactic operations and semantics, such as that the symbol manipulations preserve truth, one must provide sometimes complex meta-proofs to show this. So on the face of it, semantics is quite independent of syntax for artificial languages, and one cannot get semantics from syntax alone. “Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or interpretation, or semantics) except insofar as someone outside the system gives it to them” (Searle 1989, 45).

Searle’s identification of meaning with interpretation in this passage is important. Searle’s point is clearly true of the causally inert formal systems of logicians. A semantic interpretation has to be given to those symbols by a logician. When we move from formal systems to computational systems, the situation is more complex. As many of Searle’s critics (e.g. Cole 1984, Dennett 1987, Boden 1988, and Chalmers 1996) have noted, a computer running a program is not the same as “syntax alone”. A computer is an enormously complex electronic causal system. State changes in the system are physical. One can interpret the physical states, e.g. voltages, as syntactic 1’s and 0’s, but the intrinsic reality is electronic and the syntax is “derived”, a product of interpretation. The states are syntactically specified by programmers, but when implemented in a running machine they are electronic states of a complex causal system embedded in the real world. This is quite different from the abstract formal systems that logicians study. Dennett notes that no “computer program by itself” (Searle’s language) – e.g. a program lying on a shelf – can cause anything, even simple addition, let alone mental states. The program must be running. Chalmers (1996) offers a parody in which it is reasoned that recipes are syntactic, syntax is not sufficient for crumbliness, cakes are crumbly, so implementation of a recipe is not sufficient for making a cake. Implementation makes all the difference; an abstract entity (recipe, program) determines the causal powers of a physical system embedded in the larger causal nexus of the world.

Dennett (1987) sums up the issue: “Searle’s view, then, comes to this: take a material object (any material object) that does not have the power of causing mental phenomena; you cannot turn it in to an object that does have the power of producing mental phenomena simply by programming it – reorganizing the conditional dependencies of transitions between its states.” Dennett’s view is the opposite: programming “is precisely what could give something a mind”. But Dennett claims that in fact it is “empirically unlikely that the right sorts of programs can be run on anything but organic, human brains” (325–6).

A further related complication is that it is not clear that computers perform syntactic operations in quite the same sense that a human does – it is not clear that a computer understands syntax or syntactic operations. A computer does not know that it is manipulating 1’s and 0’s. A computer does not recognize that its binary data strings have a certain form, and thus that certain syntactic rules may be applied to them, unlike the man inside the Chinese Room. Inside a computer, there is nothing that literally reads input data, or that “knows” what symbols are. Instead, there are millions of transistors that change states. A sequence of voltages causes operations to be performed. We humans may choose to interpret these voltages as binary numerals and the voltage changes as syntactic operations, but a computer does not interpret its operations as syntactic or any other way. So perhaps a computer does not need to make the move from syntax to semantics that Searle objects to; it needs to move from complex causal connections to semantics. Furthermore, perhaps any causal system is describable as performing syntactic operations – if we interpret a light square as logical “0” and a dark square as logical “1”, then a kitchen toaster may be described as a device that rewrites logical “0”s as logical “1”s. But there is no philosophical problem about getting from syntax to breakfast.

In the 1990s, Searle began to use considerations related to these to argue that computational views are not just false, but lack a clear sense. Computation, or syntax, is “observer-relative”, not an intrinsic feature of reality: “…you can assign a computational interpretation to anything” (Searle 2002b, p. 17), even the molecules in the paint on the wall. Since nothing is intrinsically computational, one cannot have a scientific theory that reduces the mental, which is not observer-relative, to computation, which is. “Computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon. This is an obvious point. I should have seen it ten years ago, but I did not.” (Searle 2002b, p.17, originally published 1993).

Critics note that walls are not computers; unlike a wall, a computer goes through state-transitions that are counterfactually described by a program (Chalmers 1996, Block 2002, Haugeland 2002). In his 2002 paper, Block addresses the question of whether a wall is a computer (in reply to Searle’s charge that anything that maps onto a formal system is a formal system, whereas minds are quite different). Block denies that whether or not something is a computer depends entirely on our interpretation. Block notes that Searle ignores the counterfactuals that must be true of an implementing system. Haugeland (2002) makes the similar point that an implementation will be a causal process that reliably carries out the operations – and they must be the right causal powers. Block concludes that Searle’s arguments fail, but he concedes that they “do succeed in sharpening our understanding of the nature of intentionality and its relation to computation and representation” (78).
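
One way to picture the point about counterfactuals is a toy contrast of our own devising, not drawn from Block or Haugeland: a genuine implementation supplies a transition function that settles what the system would do for every possible state and input, whereas a record of what some object happened to do supports no such counterfactuals.

```python
# Illustrative contrast between a counterfactual-supporting implementation and
# a mere record of what happened. The parity machine below answers "what
# would the next state be?" for every state/input pair; the trace only says
# what one particular run did, which is all an arbitrary wall-sized pattern
# of molecule movements gives us.

PARITY_TRANSITIONS = {        # a genuine (tiny) program: full transition table
    ("even", 0): "even",
    ("even", 1): "odd",
    ("odd", 0): "odd",
    ("odd", 1): "even",
}

def step(state: str, bit: int) -> str:
    """Counterfactual-supporting: defined for every state and every input."""
    return PARITY_TRANSITIONS[(state, bit)]

# A mere trace of one run: it happens to map onto the parity machine, but it
# says nothing about what would have happened on different inputs.
OBSERVED_TRACE = [("even", 1, "odd"), ("odd", 1, "even"), ("even", 0, "even")]

print(step("odd", 0))                          # 'odd' -- answerable for any query
print([(s, b) for s, b, _ in OBSERVED_TRACE])  # only these cases were observed
```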

Rey (2002) also addresses Searle’s arguments that syntax and symbols are observer-relative properties, not physical. Searle infers this from the fact that syntactic properties (e.g. being a logical “1”) are not defined in physics; however, Rey holds that it does not follow that they are observer-relative. Rey argues that Searle also misunderstands what it is to realize a program. Rey endorses Chalmers’ reply to Putnam: a realization is not just a structural mapping, but involves causation, supporting counterfactuals. “This point is missed so often, it bears repeating: the syntactically specifiable objects over which computations are defined can and standardly do possess a semantics; it’s just that the semantics is not involved in the specification.” States of a person have their semantics in virtue of computational organization and their causal relations to the world. Rey concludes: Searle “simply does not consider the substantial resources of functionalism and Strong AI.” (222) A plausibly detailed story would defuse negative conclusions drawn from the superficial sketch of the system in the Chinese Room.

John Haugeland (2002) argues that there is a sense in which a processor must intrinsically understand the commands in the programs it runs: it executes them in accord with the specifications. “The only way that we can make sense of a computer as executing a program is by understanding its processor as responding to the program prescriptions as meaningful” (385). Thus operation symbols have meaning to a system. Haugeland goes on to draw a distinction between narrow and wide systems. He argues that data can have semantics in the wide system that includes representations of external objects produced by transducers. In passing, Haugeland makes the unusual claim, argued for elsewhere, that genuine intelligence and semantics presuppose “the capacity for a kind of commitment in how one lives” which is non-propositional – that is, love (cp. Steven Spielberg’s 2001 film Artificial Intelligence: AI).

To Searle’s claim that syntax is observer-relative, that the molecules in a wall might be interpreted as implementing the Wordstar program (an early word processing program) because “there is some pattern in the molecule movements which is isomorphic with the formal structure of Wordstar” (Searle 1990b, p. 27), Haugeland counters that “the very idea of a complex syntactical token … presupposes specified processes of writing and reading….” The tokens must be systematically producible and retrievable. So no random isomorphism or pattern somewhere (e.g. on some wall) is going to count, and hence syntax is not observer-relative.

With regard to the question of whether one can get semantics from syntax, William Rapaport has for many years argued for “syntactic semantics”, a view in which understanding is a special form of syntactic structure in which symbols (such as Chinese words) are linked to concepts, themselves represented syntactically. Others believe we are not there yet. AI futurist Ray Kurzweil (author of The Age of Spiritual Machines) holds in a 2002 follow-up book that it is a red herring to focus on traditional symbol-manipulating computers. Kurzweil agrees with Searle that existent computers do not understand language – as evidenced by the fact that they can’t engage in convincing dialog. But that failure does not bear on the capacity of future computers based on different technology. Kurzweil claims that Searle fails to understand that future machines will use “chaotic emergent methods that are massively parallel”. This claim appears to be similar to that of connectionists, such as Andy Clark, and the position taken by the Churchlands in their 1990 Scientific American article.

Apart from Haugeland’s claim that processors understand program instructions, Searle’s critics can agree that computers no more understand syntax than they understand semantics, although, like all causal engines, a computer has syntactic descriptions. And while it is often useful to programmers to treat the machine as if it performed syntactic operations, it is not always so: sometimes the characters programmers use are just switches that make the machine do something, for example, make a given pixel on the computer display turn red, or make a car transmission shift gears. Thus it is not clear that Searle is correct when he says a digital computer is just “a device which manipulates symbols”. Computers are complex causal engines, and syntactic descriptions are useful in order to structure the causal interconnections in the machine. AI programmers face many tough problems, but one can hold that they do not have to get semantics from syntax. If they are to get semantics, they must get it from causality.

Two main approaches have developed that explain meaning in terms of causal connections. The internalist approaches, such as Schank’s and Rapaport’s conceptual representation approaches, and also Conceptual Role Semantics, hold that a state of a physical system gets its semantics from causal connections to other states of the same system. Thus a state of a computer might represent “kiwi” because it is connected to “bird” and “flightless” nodes, and perhaps also to images of prototypical kiwis. The state that represents the property of being “flightless” might get its content from a Negation-operator modifying a representation of “capable of airborne self-propulsion”, and so forth, to form a vast connected conceptual network, a kind of mental dictionary.
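
Here is a minimal sketch, with node names and link types invented for illustration, of the sort of network such internalist accounts describe; on these views a node’s content just is its position in the web of connections.

```python
# A toy conceptual network in the spirit of internalist / conceptual-role
# accounts: a concept's content is fixed by its connections to other concepts
# in the same system, not by anything outside it. Node and link names are
# purely illustrative.

CONCEPT_NET = {
    "kiwi":       {"is-a": ["bird"], "property": ["flightless"]},
    "bird":       {"is-a": ["animal"], "property": ["feathered"]},
    "flightless": {"negation-of": ["capable-of-airborne-self-propulsion"]},
}

def conceptual_role(concept: str, depth: int = 2) -> set[str]:
    """Collect the concepts reachable from `concept` within `depth` links;
    on this view, that neighbourhood *is* the concept's content."""
    reached, frontier = set(), {concept}
    for _ in range(depth):
        nxt = set()
        for c in frontier:
            for linked in CONCEPT_NET.get(c, {}).values():
                nxt.update(linked)
        reached |= nxt
        frontier = nxt
    return reached

print(conceptual_role("kiwi"))
# {'bird', 'flightless', 'animal', 'feathered',
#  'capable-of-airborne-self-propulsion'}
```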

Externalist approaches developed by Dennis Stampe, Fred Dretske, Hilary Putnam, Jerry Fodor, Ruth Millikan, and others, hold that states of a physical system get their content through causal connections to the external reality they represent. Thus, roughly, a system with a KIWI concept is one that has a state it uses to represent the presence of kiwis in the external environment. This kiwi-representing state can be any state that is appropriately causally connected to the presence of kiwis. Depending on the system, the kiwi representing state could be a state of a brain, or of an electrical device such as a computer, or even of a hydraulic system. The internal representing state can then in turn play a causal role in determining the behavior of the system. For example, Rey (1986) endorses an indicator semantics along the lines of the work of Dennis Stampe (1977) and Fodor’s Psychosemantics. These semantic theories that locate content or meaning in appropriate causal relations to the world fit well with the Robot Reply. A computer in a robot body might have just the causal connections that could allow its inner syntactic states to have the semantic property of representing states of things in its environment.

Thus there are at least two families of theories (and marriages of the two, as in Block 1986) about how semantics might depend upon causal connections. Both of these attempt to provide accounts that are substance neutral: states of suitably organized causal systems can have content, no matter what the systems are made of. On these theories a computer could have states that have meaning. It is not necessary that the computer be aware of its own states and know that they have meaning, nor that any outsider appreciate the meaning of the states. On either of these accounts meaning depends upon the (possibly complex) causal connections, and digital computers are systems designed to have states that have just such complex causal dependencies. It should be noted that Searle does not subscribe to these theories of semantics. Instead, Searle’s discussions of linguistic meaning have often centered on the notion of intentionality.

5.2 Intentionality

Intentionality is the property of being about something, having content. In the 19th Century, psychologist Franz Brentano re-introduced this term from Medieval philosophy and held that intentionality was the “mark of the mental”. Beliefs and desires are intentional states: they have propositional content (one believes that p, one desires that p, where sentences that represent propositions substitute for “p”). Searle’s views regarding intentionality are complex; of relevance here is that he makes a distinction between the original or intrinsic intentionality of genuine mental states, and the derived intentionality of language. A written or spoken sentence only has derivative intentionality insofar as it is interpreted by someone. It appears that on Searle’s view, original intentionality can at least potentially be conscious. Searle then argues that the distinction between original and derived intentionality applies to computers. We can interpret the states of a computer as having content, but the states themselves do not have original intentionality. Many philosophers endorse this intentionality dualism, including Sayre (1986) and even Fodor (2009), despite Fodor’s many differences with Searle.

In a section of her 1988 book, Computer Models of the Mind, Margaret Boden notes that intentionality is not well-understood – reason to not put too much weight on arguments that turn on intentionality. Furthermore, insofar as we understand the brain, we focus on informational functions, not unspecified causal powers of the brain: “…from the psychological point of view, it is not the biochemistry as such which matters but the information-bearing functions grounded in it.” (241) Searle sees intentionality as a causal power of the brain, uniquely produced by biological processes. Dale Jacquette 1989 argues against a reduction of intentionality – intentionality, he says, is an “ineliminable, irreducible primitive concept.” However most AI sympathizers have seen intentionality, aboutness, as bound up with information, and non-biological states can bear information as well as can brain states. Hence many responders to Searle have argued that he displays substance chauvinism, in holding that brains understand but systems made of silicon with comparable information processing capabilities cannot, even in principle. Papers on both sides of the issue appeared, such as J. Maloney’s 1987 paper “The Right Stuff”, defending Searle, and R. Sharvy’s 1983 critique, “It Ain’t the Meat, it’s the Motion”. AI proponents such as Kurzweil (1999, see also Richards 2002) have continued to hold that AI systems can potentially have such mental properties as understanding, intelligence, consciousness and intentionality, and will exceed human abilities in these areas.

Other critics of Searle’s position take intentionality more seriously than Boden does, but deny his dualistic distinction between original and derived intentionality. Dennett (1987, e.g.) argues that all intentionality is derived, in that attributions of intentionality – to animals, other people, and even ourselves – are instrumental and allow us to predict behavior, but they are not descriptions of intrinsic properties. As we have seen, Dennett is concerned about the slow speed of things in the Chinese Room, but he argues that once a system is working up to speed, it has all that is needed for intelligence and derived intentionality – and derived intentionality is the only kind that there is, according to Dennett. A machine can be an intentional system because intentional explanations work in predicting the machine’s behavior. Dennett also suggests that Searle conflates intentionality with awareness of intentionality. In his syntax-semantic arguments, “Searle has apparently confused a claim about the underivability of semantics from syntax with a claim about the underivability of the consciousness of semantics from syntax” (336). The emphasis on consciousness forces us to think about things from a first-person point of view, but Dennett 2017 continues to press the claim that this is a fundamental mistake if we want to understand the mental.

We might also worry that Searle conflates meaning and interpretation, and that Searle’s original or underived intentionality is just second-order intentionality, a representation of what an intentional object represents or means. Dretske and others have seen intentionality as information-based. One state of the world, including a state in a computer, may carry information about other states in the world, and this informational aboutness is a mind-independent feature of states. Hence it is a mistake to hold that conscious attributions of meaning are the source of intentionality.

Others have noted that Searle’s discussion has shown a shift over time from issues of intentionality and understanding to issues of consciousness. Searle links intentionality to awareness of intentionality, in holding that intentional states are at least potentially conscious. In his 1996 book, The Conscious Mind, David Chalmers notes that although Searle originally directs his argument against machine intentionality, it is clear from later writings that the real issue is consciousness, which Searle holds is a necessary condition of intentionality. It is consciousness that is lacking in digital computers. Chalmers uses thought experiments to argue that it is implausible that one system has some basic mental property (such as having qualia) that another system lacks, if it is possible to imagine transforming one system into the other, either gradually (as replacing neurons one at a time by digital circuits), or all at once, switching back and forth between flesh and silicon.

A second strategy regarding the attribution of intentionality is taken by critics who in effect argue that intentionality is an intrinsic feature of states of physical systems that are causally connected with the world in the right way, independently of interpretation (see the preceding Syntax and Semantics section). Fodor’s semantic externalism is influenced by Fred Dretske, but they come to different conclusions with regard to the semantics of states of computers. Over a period of years, Dretske developed an historical account of meaning or mental content that would preclude attributing beliefs and understanding to most machines. Dretske (1985) agrees with Searle that adding machines don’t literally add; we do the adding, using the machines. Dretske emphasizes the crucial role of natural selection and learning in producing states that have genuine content. Human built systems will be, at best, like Swampmen (beings that result from a lightning strike in a swamp and by chance happen to be a molecule by molecule copy of some human being, say, you) – they appear to have intentionality or mental states, but do not, because such states require the right history. AI states will generally be counterfeits of real mental states; like counterfeit money, they may appear perfectly identical but lack the right pedigree. But Dretske’s account of belief appears to make it distinct from conscious awareness of the belief or intentional state (if that is taken to require a higher order thought), and so would apparently allow attribution of intentionality to artificial systems that can get the right history by learning.

Howard Gardiner endorses Zenon Pylyshyn’s criticisms of Searle’s view of the relation of brain and intentionality, as supposing that intentionality is somehow a stuff “secreted by the brain”, and Pylyshyn’s own counter-thought experiment in which one’s neurons are replaced one by one with integrated circuit workalikes (see also Cole and Foelber (1984) and Chalmers (1996) for exploration of neuron replacement scenarios). Gardiner holds that Searle owes us a more precise account of intentionality than Searle has given so far, and until then it is an open question whether AI can produce it, or whether it is beyond its scope. Gardiner concludes with the possibility that the dispute between Searle and his critics is not scientific, but (quasi?) religious.

5.3 Mind and Body

Several critics have noted that there are metaphysical issues at stake in the original argument. The Systems Reply draws attention to the metaphysical problem of the relation of mind to body. It does this in holding that understanding is a property of the system as a whole, not the physical implementer. The Virtual Mind Reply holds that minds or persons – the entities that understand and are conscious – are more abstract than any physical system, and that there could be a many-to-one relation between minds and physical systems. (Even if everything is physical, in principle a single body could be shared by multiple minds, and a single mind could have a sequence of bodies over time.) Thus larger issues about personal identity and the relation of mind and body are in play in the debate between Searle and some of his critics.

Searle’s view is that the problem of the relation of mind and body “has a rather simple solution. Here it is: Conscious states are caused by lower level neurobiological processes in the brain and are themselves higher level features of the brain” (Searle 2002b, p. 9). In his early discussion of the CRA, Searle spoke of the causal powers of the brain. Thus his view appears to be that brain states cause consciousness and understanding, and “consciousness is just a feature of the brain” (ibid). However, as we have seen, even if this is true it begs the question of just whose consciousness a brain creates. Roger Sperry’s split-brain experiments suggest that perhaps there can be two centers of consciousness, and so in that sense two minds, implemented by a single brain. While both display at least some language comprehension, only one (typically created by the left hemisphere) controls language production. Thus many current approaches to understanding the relation of brain and consciousness emphasize connectedness and information flow (see e.g. Dehaene 2014).

Consciousness and understanding are features of persons, so it appears that Searle accepts a metaphysics in which I, my conscious self, am identical with my brain – a form of mind-brain identity theory. This very concrete metaphysics is reflected in Searle’s original presentation of the CR argument, in which Strong AI was described by him as the claim that “the appropriately programmed computer really is a mind” (Searle 1980). This is an identity claim, and has odd consequences. If A and B are identical, any property of A is a property of B. Computers are physical objects. Some computers weigh 6 lbs and have stereo speakers. So the claim that Searle called Strong AI would entail that some minds weigh 6 lbs and have stereo speakers. However it seems clear that while humans may weigh 150 pounds, human minds do not weigh 150 pounds. This suggests that neither bodies nor machines can literally be minds. Such considerations support the view that minds are more abstract than brains, and if so that at least one version of the claim that Searle calls Strong AI, the version that says that computers literally are minds, is metaphysically untenable on the face of it, apart from any thought-experiments.

Searle’s CR argument was thus directed against the claim that a computer is a mind, that a suitably programmed digital computer understands language, or that its program does. Searle’s thought experiment appeals to our strong intuition that someone who did exactly what the computer does would not thereby come to understand Chinese. As noted above, many critics have held that Searle is quite right on this point – no matter how you program a computer, the computer will not literally be a mind and the computer will not understand natural language. But if minds are not physical objects this inability of a computer to be a mind does not show that running an AI program cannot produce understanding of natural language, by something other than the computer (See section 4.1 above.)

Functionalism is a theory of the relation of minds to bodies that was developed in the two decades prior to Searle’s CRA. Functionalism is an alternative to the identity theory that is implicit in much of Searle’s discussion, as well as to the dominant behaviorism of the mid-Twentieth Century. If functionalism is correct, there appears to be no intrinsic reason why a computer couldn’t have mental states. Hence the CRA’s conclusion that a computer is intrinsically incapable of mental states is an important consideration against functionalism. Julian Baggini (2009, 37) writes that Searle “came up with perhaps the most famous counter-example in history – the Chinese room argument – and in one intellectual punch inflicted so much damage on the then dominant theory of functionalism that many would argue it has never recovered.”

Functionalists hold that a mental state is what a mental state does – the causal (or “functional”) role that the state plays determines what state it is. A functionalist might hold that pain, for example, is a state that is typically caused by damage to the body, is located in a body-image, and is aversive. Functionalists distance themselves both from behaviorists and identity theorists. In contrast with the former, functionalists hold that the internal causal processes are important for the possession of mental states. Thus functionalists may agree with Searle in rejecting the Turing Test as too behavioristic. In contrast with identity theorists (who might e.g. hold “pain is identical with C-fiber firing”), functionalists hold that mental states might be had by a variety of physical systems (or non-physical, as in Cole and Foelber 1984, in which a mind changes from a material to an immaterial implementation, neuron by neuron). Thus while an identity theorist will identify pain with certain neuron firings, a functionalist will identify pain with something more abstract and higher level, a functional role that might be had by many different types of underlying system.
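
The role/realizer picture can be sketched in code; the classes and the crude “pain” role below are a purely illustrative toy of our own, not anyone’s official analysis of pain. The role is specified by its typical causes and effects, and quite different substrates can occupy it.

```python
# Illustrative sketch of the functionalist role/realizer distinction: "pain"
# is specified by what it typically does (caused by damage, causes aversion),
# and quite different substrates can occupy that role. Class names are ours.
from typing import Protocol

class PainRole(Protocol):
    """The functional role: what any realizer of 'pain' must do."""
    def register_damage(self, location: str) -> None: ...
    def in_pain(self) -> bool: ...
    def avoids(self, location: str) -> bool: ...

class CarbonCreature:
    """Realizes the pain role in (stylized) neural tissue."""
    def __init__(self) -> None:
        self._hurting: set[str] = set()
    def register_damage(self, location: str) -> None:
        self._hurting.add(location)
    def in_pain(self) -> bool:
        return bool(self._hurting)
    def avoids(self, location: str) -> bool:
        return location in self._hurting

class SiliconController:
    """Realizes the same role in software state: same causes, same effects."""
    def __init__(self) -> None:
        self._damage_flags: dict[str, bool] = {}
    def register_damage(self, location: str) -> None:
        self._damage_flags[location] = True
    def in_pain(self) -> bool:
        return any(self._damage_flags.values())
    def avoids(self, location: str) -> bool:
        return self._damage_flags.get(location, False)

for agent in (CarbonCreature(), SiliconController()):
    agent.register_damage("left hand")
    print(type(agent).__name__, agent.in_pain(), agent.avoids("left hand"))
```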

Functionalists accuse identity theorists of substance chauvinism. However, functionalism remains controversial: functionalism is vulnerable to the Chinese Nation type objections discussed above, and functionalists notoriously have trouble explaining qualia, a problem highlighted by the apparent possibility of an inverted spectrum, where qualitatively different states might have the same functional role (e.g. Block 1978, Maudlin 1989, Cole 1990).

Computationalism is the sub-species of functionalism that holds that the important causal role of brain processes is information processing. Milkowski 2017 notes that computational approaches have been fruitful in cognitive science; he surveys objections to computationalism and concludes that the majority target a strawman version. However Jerry Fodor, an early proponent of computational approaches, argues in Fodor 2005 that key mental processes, such as inference to the best explanation, which depend on non-local properties of representations, cannot be explained by computational modules in the brain. If Fodor is right, understanding language and interpretation appear to involve global considerations such as linguistic and non-linguistic context and theory of mind and so might resist computational explanation. If so, we reach Searle’s conclusion on the basis of different considerations.

Searle’s 2010 statement of the conclusion of the CRA has it showing that computational accounts cannot explain consciousness. There has been considerable interest in the decades since 1980 in determining what does explain consciousness, and this has been an extremely active research area across disciplines. One interest has been in the neural correlates of consciousness. This bears directly on Searle’s claim that consciousness is intrinsically biological and not computational or information processing. There is no definitive answer yet, though some recent work on anesthesia suggests that consciousness is lost when cortical (and cortico-thalamic) connections and information flow are disrupted (e.g. Hudetz 2012, a review article).

In general, if the basis of consciousness is confirmed to be at the relatively abstract level of information flow through neural networks, it will be friendly to functionalism, and if it turns out to be lower and more biological (or sub-neuronal), it will be friendly to Searle’s account.

These controversial biological and metaphysical issues bear on the central inference in the Chinese Room argument. From the intuition that in the CR thought experiment he would not understand Chinese by running a program, Searle infers that there is no understanding created by running a program. Clearly, whether that inference is valid or not turns on a metaphysical question about the identity of persons and minds. If the person understanding is not identical with the room operator, then the inference is unsound.

5.4 Simulation, duplication and evolution

In discussing the CRA, Searle argues that there is an important distinction between simulation and duplication. No one would mistake a computer simulation of the weather for weather, or a computer simulation of digestion for real digestion. Searle concludes that it is just as serious a mistake to confuse a computer simulation of understanding with understanding.

On the face of it, there is generally an important distinction between a simulation and the real thing. But two problems emerge. It is not clear that the distinction can always be made. Hearts are biological if anything is. Are artificial hearts simulations of hearts? Or are they functional duplicates of hearts, hearts made from different materials? Walking is normally a biological phenomenon performed using limbs. Do those with artificial limbs walk? Or do they simulate walking? Do robots walk? If the properties that are needed to be a certain kind of thing are high-level properties, anything sharing those properties will be a thing of that kind, even if it differs in its lower level properties. Chalmers (1996) offers a principle governing when simulation is replication. Chalmers suggests that, contra Searle and Harnad (1989), a simulation of X can be an X, namely when the property of being an X is an organizational invariant, a property that depends only on the functional organization of the underlying system, and not on any other details.

Copeland (2002) argues that the Church-Turing thesis does not entail that the brain (or every machine) can be simulated by a universal Turing machine, for the brain (or other machine) might have primitive operations that are not simple clerical routines that can be carried out by hand. (An example might be that human brains likely display genuine low-level randomness, whereas computers are carefully designed not to do that, and so computers resort to pseudo-random numbers when apparent randomness is needed.) Sprevak 2007 raises a related point. Turing’s 1938 Princeton thesis described such machines (“O-machines”). O-machines are machines that include functions of natural numbers that are not Turing-machine computable. If the brain is such a machine, then, says Sprevak: “There is no possibility of Searle’s Chinese Room Argument being successfully deployed against the functionalist hypothesis that the brain instantiates an O-machine….” (120).

Copeland discusses the simulation / duplication distinction in connection with the Brain Simulator Reply. He argues that Searle correctly notes that one cannot infer from X simulates Y, and Y has property P, to the conclusion that X has property P, for arbitrary P. But Copeland claims that Searle himself commits the simulation fallacy in extending the CR argument from traditional AI to apply against computationalism. Since a conditional and its contrapositive are logically equivalent, the fallacy is just as much a fallacy in contrapositive form: X simulates Y, X does not have P, therefore Y does not have P, where P is “understands Chinese”. The faulty step is: the CR operator S simulates a neural net N; it is not the case that S understands Chinese; therefore it is not the case that N understands Chinese. Copeland also notes results by Siegelmann and Sontag (1994) showing that some connectionist networks cannot be simulated by a universal Turing Machine (in particular, where connection weights are real numbers).

There is another problem with the simulation-duplication distinction, arising from the process of evolution. Searle wishes to see original intentionality and genuine understanding as properties only of certain biological systems, presumably the product of evolution. Computers merely simulate these properties. At the same time, in the Chinese Room scenario, Searle maintains that a system can exhibit behavior just as complex as human behavior, simulating any degree of intelligence and language comprehension that one can imagine, and simulating any ability to deal with the world, yet not understand a thing. He also says that such behaviorally complex systems might be implemented with very ordinary materials, for example with tubes of water and valves.

This creates a biological problem, beyond the Other Minds problem noted by early critics of the CR argument. While we may presuppose that others have minds, evolution makes no such presuppositions. The selection forces that drive biological evolution select on the basis of behavior. Evolution can select for the ability to use information about the environment creatively and intelligently, as long as this is manifest in the behavior of the organism. If there is no overt difference in behavior in any set of circumstances between a system that understands and one that does not, evolution cannot select for genuine understanding. And so it seems that on Searle’s account, minds that genuinely understand meaning have no advantage over creatures that merely process information, using purely computational processes. Thus a position that implies that simulations of understanding can be just as biologically adaptive as the real thing leaves us with a puzzle about how and why systems with “genuine” understanding could evolve. Original intentionality and genuine understanding become epiphenomenal.

6. Conclusion

As we have seen, since its appearance in 1980 the Chinese Room argument has sparked discussion across disciplines. Despite the extensive discussion there is still no consensus as to whether the argument is sound. At one end we have Julian Baggini’s (2009) assessment that Searle “came up with perhaps the most famous counter-example in history – the Chinese room argument – and in one intellectual punch inflicted so much damage on the then dominant theory of functionalism that many would argue it has never recovered.” At the other end, philosopher Daniel Dennett (2013, p. 320) concludes that the Chinese Room argument is “clearly a fallacious and misleading argument”. Hence the question of whether the argument proves limits on the aspirations of Artificial Intelligence, or on computational accounts of mind, remains open.

Meanwhile work in artificial intelligence and natural language processing has continued. The CRA led Stevan Harnad and others on a quest for “symbol grounding” in AI. Many in philosophy (Dretske, Fodor, Millikan) worked on naturalistic theories of mental content. Speculation about the nature of consciousness continues in many disciplines. And computers have moved from the lab to the pocket and the wrist.

At the time of Searle’s construction of the argument, personal computers were very limited hobbyist devices. Weizenbaum’s ‘Eliza’ and a few text ‘adventure’ games were played on DEC computers; these included limited parsers. More advanced parsing of language was limited to computer researchers such as Schank. Much changed in the next quarter century; billions now use natural language to interrogate and command virtual agents via computers they carry in their pockets. Has the Chinese Room argument moderated claims by those who produce AI and natural language systems? Some manufacturers linking devices to the “internet of things” make modest claims: appliance manufacturer LG says the second decade of the 21st century brings the “experience of conversing” with major appliances. That may or may not be the same as conversing. Apple is less cautious than LG in describing the capabilities of its “virtual personal assistant” application called ‘Siri’: Apple says of Siri that “It understands what you say. It knows what you mean.” IBM is quick to claim its much larger ‘Watson’ system is superior in language abilities to Siri. In 2011 Watson beat human champions on the television game show ‘Jeopardy’, a feat that relies heavily on language abilities and inference. IBM goes on to claim that what distinguishes Watson is that it “knows what it knows, and knows what it does not know.” This appears to be claiming a form of reflexive self-awareness or consciousness for the Watson computer system. Thus the claims of strong AI now are hardly chastened, and if anything some are stronger and more exuberant. At the same time, as we have seen, many others believe that the Chinese Room Argument showed once and for all that at best computers can simulate human cognition.

Though separated by three centuries, Leibniz and Searle had similar intuitions about the systems they consider in their respective thought experiments, Leibniz’ Mill and the Chinese Room. In both cases they consider a complex system composed of relatively simple operations, and note that it is impossible to see how understanding or consciousness could result. These simple arguments do us the service of highlighting the serious problems we face in understanding meaning and minds. The many issues raised by the Chinese Room argument may not be settled until there is a consensus about the nature of meaning, its relation to syntax, and about the biological basis of consciousness. There continues to be significant disagreement about what processes create meaning, understanding, and consciousness, as well as what can be proven a priori by thought experiments.

Bibliography