Now you should recall … that if A is identical to B, then there is no possible situation where you can have A without B or vice versa. If we can find a case where all of the artificial intelligence operations are present (the reception of certain inputs, the syntactic manipulation of symbols, and the production of certain outputs), but we do not have real intelligence (the semantic understanding necessary for real thinking), then we will have shown that intelligence, which the dualist claims to be a feature of mind, cannot be reduced to and identified with artificial intelligence. Is there such a case? Yes, there is.
John Searle has offered the following situation known as the Chinese Room:
Imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you (like me) do not understand a word of Chinese, but that you are given a rule book in English for manipulating the Chinese symbols. The rules specify the manipulations of symbols purely formally, in terms of their syntax, not their semantics. So the rule might say: “Take a squiggle-squiggle out of basket number one and put it next to a squoggle-squoggle sign from basket number two.” Now suppose that some other Chinese symbols are passed into the room and that you are given further rules for passing back Chinese symbols out of the room. Suppose that unknown to you, the symbols passed into the room are called “questions” by the people outside the room, and the symbols you pass back out of the room are called “answers to the questions.” Suppose, furthermore, that the programmers are so good at designing the programs and that you are so good at manipulating the symbols, that very soon your answers are indistinguishable from those of a native Chinese speaker. There you are locked in your room shuffling your Chinese symbols and passing out Chinese symbols in response to incoming Chinese symbols.
Now the point of the story is simply this: By virtue of implementing a formal computer program, from the point of view of an outside observer you behave exactly as if you understood Chinese, but all the same you don’t understand a word of Chinese.
To a person outside, the Chinese room with the person inside would simulate a computer: the room receives input and gives output in a way that makes it appear that the room understands Chinese.
But, of course, all the room does is imitate mental understanding; it does not possess it. Computers are just like the Chinese room. They imitate mental operations, but they do not really exemplify them. Computers and their programs are not minds, because they fail to have consciousness, intentionality, and understanding of real semantic contents. Computers merely imitate minds; thus, this objection fails.
— J. P. Moreland and Gary Habermas, from Beyond Death, p. 93
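The purely syntactic procedure Searle describes can be sketched in a few lines of code. This is a minimal illustration, not anything from the excerpt: the symbol names and the rule table are invented placeholders standing in for the "squiggles" and the English rule book.

```python
# A sketch of the Chinese Room's "rule book" as a lookup table.
# All symbol names below are hypothetical placeholders, not real Chinese.
RULE_BOOK = {
    "SQUIGGLE SQUOGGLE": "SQUAGGLE",      # a "question" paired with an "answer"
    "SQUAGGLE SQUIGGLE": "SQUOGGLE SQUOGGLE",
}

def chinese_room(incoming: str) -> str:
    """Match the incoming symbols against the rule book purely by their
    form and pass back the paired symbols. No step in this function
    consults what any symbol means."""
    return RULE_BOOK.get(incoming, "SQUIGGLE")  # default reply for unmatched input

print(chinese_room("SQUIGGLE SQUOGGLE"))  # → SQUAGGLE
```

Within its tiny domain, the function produces the "right" answers to an outside observer while containing no semantics at all, which is exactly the gap between syntax and understanding that the passage trades on.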