EDIT: Here's a summary of the argument from the Internet Encyclopedia of Philosophy.
(A1) "Programs are formal (syntactic)."
A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it doesn't know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.
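The kind of purely syntactic processing described above can be sketched in a few lines. Everything here is invented for illustration (the rules, the phrases, the fallback reply); the point is only that nothing in the program represents what any symbol means:

```python
# A minimal sketch of purely syntactic symbol manipulation (hypothetical
# rule book, not Searle's). The program matches input symbol strings
# against rules and emits output symbols. To the program these are opaque
# tokens: "你好" could just as well be "XYZZY".

RULES = {
    "你好": "你好！",
    "你会说中文吗？": "会，我说得很好。",
}

def respond(symbols: str) -> str:
    # Pure lookup and substitution: shape in, shape out.
    # No part of this function knows what the symbols stand for.
    return RULES.get(symbols, "请再说一遍。")

print(respond("你好"))  # emits symbols the program attaches no meaning to
```

A reader who knows Chinese sees a greeting and a reply; the program sees only that one string matched a key in a table.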
(A2) "Minds have mental contents (semantics)."
Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.
(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."
This is what the Chinese room argument is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one, and nothing, in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.
Searle posits that these lead directly to this conclusion:
(C1) "Programs are neither constitutive of nor sufficient for minds."
This should follow without controversy from the first three: Programs don't have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore programs are not minds.
The conclusion follows from these premises without controversy, but should we agree to all of the premises? The most common argument against the conclusion of the thought experiment is to try to show that it is possible to get to semantics from syntax, but that's not the only possible line of attack. There are a number of assumptions necessary to support Searle's experiment.
To arrive at these premises of the thought experiment Searle has to assert that there are two ways to have a conversation in Chinese (or indeed any language): 1) to understand the language or 2) to use a program which manipulates symbols. Here are the minimum assumptions needed to support this assertion:
1. Understanding is not a form of symbol manipulation. This must be assumed because, while the definition of a program as something that manipulates symbols is clear, this thought experiment gives no clear definition of what it means to understand something. We have to rely on our own self-reporting that the process going on when we have a conversation is in fact “understanding.”
2. Humans can understand things and can manipulate symbols. We claim to be able to do both of these things, but without being able to prove that these two processes are different we have to assume that they’re different and that we’re not just referring to the same process by two different names.
3. These are the only two ways to have a conversation. In theory we should be able to prove that it’s possible to have a conversation by manipulating symbols; we’re assuming that the process of “understanding” is a second way; and we must also assume that there are no other possible ways (otherwise they would have to be considered as well). This naturally raises the question: why only two?
4. Humans are capable of conducting a conversation using either process: we can either understand what’s being said, or we can follow rules to manipulate symbols to produce a conversation. More importantly, it’s supposed that we can internalize all the rules of symbol manipulation and carry them out ourselves.
5. If we did internalize all the rules and carried out a conversation that way, we wouldn’t claim that we were in fact understanding any concepts or meanings. This assumption draws the most attention, since 1) it seems wildly unrealistic that anyone could actually learn enough rules to have a conversation this way, and 2) the conclusion, that we wouldn’t claim to understand anything, doesn’t obviously follow. Humans do not have perfect introspective access to the inner processes of their brains, so it’s not clear that we wouldn’t mistake following rules for understanding.
Using these assumptions it’s possible to construct the Chinese Room experiment and come to Searle’s conclusions. But it’s also clear that we don’t have to accept these assumptions: they’re neither self-evident nor supported by empirical evidence. To support them we would need a definition of understanding that made it clearly incompatible with symbol manipulation, one that included some element beyond symbols, memory, or mechanical processes. It’s not clear that there’s a non-dualistic element that would satisfy this requirement. Barring that, any conclusions would have to wait for experimental evidence showing that the necessary assumptions are correct.
However, if for the moment we do accept all these assumptions, we can come to several conclusions besides Searle’s that are equally valid. For example:
1. If we accept that a human can have a conversation using only symbol manipulation (but wouldn’t understand anything they were saying) or can have a conversation using the process of understanding, then we have to accept that there’s no way, from an observer’s standpoint, to tell the difference. If we had two “English rooms,” one with Searle in it and another with a program containing all the rules necessary to carry out a Searle-like English conversation, it would be impossible to tell the two rooms apart. In fact we could even suppose that they would have the exact same conversation. Both would claim to be “the real Searle,” both would claim to “understand” everything you’re saying, and both (we can imagine) would appear to become increasingly agitated when we pointed out that they’re behaving exactly how we would expect a symbol-manipulating program to behave.
2. There’s no way to tell whether any particular human has suffered some brain injury that prevents them from understanding things, or was born lacking this ability. If it’s possible to instead just follow rules to manipulate symbols, they could fall back on that process whenever the “understanding” process was unavailable.
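The observer's predicament in point 1 above can be sketched as follows. The dialogue, the rule book, and the two functions are all invented for illustration; the second function is just a stand-in for whatever "understanding" is, which by hypothesis produces the same conversational behavior:

```python
# Two "rooms" answering the same probes. From outside, only the
# input/output mapping is visible, so the transcripts are identical.

RULEBOOK = {
    "Are you the real Searle?": "Of course I am.",
    "Do you understand me?": "I understand every word.",
}

def rule_room(utterance: str) -> str:
    # Pure symbol manipulation: no semantics anywhere in this function.
    return RULEBOOK.get(utterance, "Could you rephrase that?")

def understanding_room(utterance: str) -> str:
    # Placeholder for a genuine understander. The thought experiment's
    # assumptions grant that it converses exactly as the rule book does.
    return rule_room(utterance)

probes = list(RULEBOOK) + ["What is it like to be you?"]
# The observer's transcript cannot distinguish the two rooms.
assert all(rule_room(p) == understanding_room(p) for p in probes)
```

Nothing about the transcript tells the observer which room, if either, contains understanding; that is exactly the indistinguishability the assumptions force on us.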
Of course these seem ridiculous, but they’re equally valid conclusions to draw from the assumptions of the Chinese Room thought experiment. In fact many people would claim that all conclusions we can draw from these premises are ridiculous, and that this is therefore not a useful thought experiment. It may yet be a useful one, since its premises can only be tested by empirical means, and interestingly the only people currently working to test these assumptions are trying to build exactly the kind of machines that Searle claims are impossible.