The Assumptions of Searle’s Chinese Room Thought Experiment


User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Fri Oct 14, 2011 6:26 pm UTC

Here's the Wikipedia page for Searle's Chinese room thought experiment, which is one of the more common examples given for how human consciousness is different from, or impossible for, machines. The experiment can be reduced to the following argument:

EDIT: Here's a summary of the argument from the Internet Encyclopedia of Philosophy.
(A1) "Programs are formal (syntactic)."
A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it doesn't know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.
(A2) "Minds have mental contents (semantics)."
Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.
(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."
This is what the Chinese room argument is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics.
Searle posits that these lead directly to this conclusion:

(C1) "Programs are neither constitutive of nor sufficient for minds."
This should follow without controversy from the first three: Programs don't have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore programs are not minds.


The conclusion is a non-controversial claim which follows from these premises, but should we agree to all of the premises? The most common argument against the conclusion of the thought experiment is to try to prove that it is possible to get to semantics from syntax, but that's obviously not the only possible argument. There are a number of assumptions necessary to support Searle's experiment.

To arrive at these premises of the thought experiment Searle has to assert that there are two ways to have a conversation in Chinese (or indeed any language): 1) to understand the language or 2) to use a program which manipulates symbols. Here are the minimum assumptions needed to support this assertion:

1. Understanding is not a form of symbol manipulation. This must be assumed because, while the definition of a program as something that manipulates symbols is clear, there is no clear definition given in this thought experiment of what it means to understand something. We have to rely on our self-reporting that the process going on when we have a conversation is in fact "understanding."
2. Humans can understand things and can manipulate symbols. We claim to be able to do both of these things, but without being able to prove that these two processes are different we have to assume that they’re different and that we’re not just referring to the same process by two different names.
3. These are the only two ways to have a conversation. In theory we should be able to prove that it's possible to have a conversation by manipulating symbols; we're assuming that the process of "understanding" is a second way; and we should also assume that there are no other possible ways (otherwise they would have to be considered as well). This naturally raises the question "why only two?"
4. Humans are capable of conducting a conversation using either process – we can either understand what's being said, or we can follow rules to manipulate symbols to create a conversation. More importantly, it's supposed that we can internalize all the rules of symbol manipulation and carry them out that way.
5. If we do internalize all the rules and carry out a conversation in that way, we wouldn't claim that we were in fact understanding any concepts or meanings. This assumption draws the most attention, since 1) it seems wildly unrealistic that anyone could actually learn enough rules to have a conversation in this way and 2) the conclusion, that we wouldn't claim to understand anything, doesn't obviously follow. Humans do not have perfect access to the inner processes of their brains, so it's not clear that we wouldn't mistake following rules for having understanding.

Using these assumptions it's possible to construct the Chinese Room experiment and come to Searle's conclusions. But it's also obvious that we don't have to accept these assumptions; they're neither self-evident nor supported by empirical evidence. To support them we would need a definition of understanding that made it clear it was incompatible with symbol manipulation – one that included some element beyond symbols, memory, or mechanical processes. It's not clear that there's a non-dualistic element that would satisfy this requirement. Barring that, any conclusions would have to wait for experimental evidence to show that the necessary assumptions are correct.

However, if for the moment we do accept all these assumptions, we can come to several conclusions besides Searle's that are equally valid. For example:

1. If we accept that a human can have a conversation using only symbol manipulation (but wouldn't understand anything they were saying) or can have a conversation using the process of understanding, then we have to accept that there's no way, from an observer's standpoint, to tell the difference. If we have two "English rooms," one with Searle in it and another with a program with all the rules necessary to carry out a Searle-like English conversation, it would be impossible to tell the two rooms apart. In fact we could even assume that they would have the exact same conversation. Both would claim to be "the real Searle," both would claim to "understand" everything you're saying, and both (we can imagine) would appear to become increasingly agitated when we pointed out that they're behaving exactly how we would expect a symbol-manipulating program to behave.
2. There’s no way to tell if any particular humans have suffered some brain injury that prevents them from understanding things, or were born lacking this ability. If they have the possibility to instead just follow rules to manipulate symbols then they could fall back on this process if the “understanding” process was unavailable.

Of course these seem ridiculous, but they're equally valid conclusions to draw from the assumptions of the Chinese room thought experiment. In fact many people would claim that all conclusions we can draw from these premises are ridiculous, and that this is therefore not a useful thought experiment. It may still be a useful experiment, though, in that its premises can only be tested by empirical means - and interestingly, the only people currently working to test those assumptions are the ones trying to build exactly the kind of machines that Searle claims are impossible.
Last edited by TrlstanC on Thu Oct 27, 2011 8:26 pm UTC, edited 3 times in total.

User avatar
Jplus
Posts: 1721
Joined: Wed Apr 21, 2010 12:29 pm UTC
Location: Netherlands

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Jplus » Fri Oct 14, 2011 9:14 pm UTC

I'm not sure whether your abstract is entirely faithful (especially the three premises), but in any case I think there are other reasons why the Chinese Room Argument is deeply problematic. These two did the job for me:

Circular reasoning: one of the assumptions is that humans can process language because they are conscious. This in itself is already questionable, and besides it's generally unwise to start making assumptions about the thing you want to explain (i.e. consciousness). In this case, Searle starts with the idea that humans are special and ends up concluding that computers can't be like humans.

Category mistake: from Searle's own inability to understand Chinese, he concludes that a computer can't understand Chinese. But he should be comparing the computer to the entire room, since Searle himself is only a part of the symbol-manipulation machinery. For the entire room, you could definitely argue that it understands Chinese.

In fact, because of the latter argument Searle is also falling prey to the homunculus fallacy. For to draw a conclusion from specific properties of Searle-in-the-room, there would have to be an alternative version of the room that matches humans in which the contained person (i.e. the homunculus) does understand what they're doing.

I think none of what I'm saying is new, and you could probably find most of it on the wikipedia page that you linked to. Personally, I'm not taking any of Searle's publications seriously anymore.
"There are only two hard problems in computer science: cache coherence, naming things, and off-by-one errors." (Phil Karlton and Leon Bambrick)

coding and xkcd combined

(Julian/Julian's)

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26726
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby gmalivuk » Fri Oct 14, 2011 10:18 pm UTC

Did you have a specific question in mind or something when you started this thread?
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
stylus
Posts: 3
Joined: Thu Oct 13, 2011 4:52 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby stylus » Fri Oct 14, 2011 11:49 pm UTC

I agree that the conclusions you posted follow from the assumptions you listed, but I disagree that they are an argument against Searle's proposal.

TrlstanC wrote:1. If we accept that a human can have a conversation using only symbol manipulation (but wouldn't understand anything they were saying) or can have a conversation using the process of understanding, then we have to accept that there's no way, from an observer's standpoint, to tell the difference. If we have two "English rooms," one with Searle in it and another with a program with all the rules necessary to carry out a Searle-like English conversation, it would be impossible to tell the two rooms apart. In fact we could even assume that they would have the exact same conversation. Both would claim to be "the real Searle," both would claim to "understand" everything you're saying, and both (we can imagine) would appear to become increasingly agitated when we pointed out that they're behaving exactly how we would expect a symbol-manipulating program to behave.


I disagree that conclusion number 1 is ridiculous. To describe your proposed thought experiment further (in order to get it clear in my head): we have a John Searle room, which consists of someone who doesn't speak English and a huge rule book containing all the ways John Searle would react to anything anyone said to him at that particular moment. Someone would post through the slot "I don't believe you are actually understanding anything I'm saying to you" and the non-English speaker would look up the appropriate response in his rule book and reply "Don't be stupid, of course I am understanding what you are saying to me" (but of course the non-English speaker doesn't know what is going on at all). You then put the question to the real John Searle, who of course reacts in exactly the same way.

The way I see it, the John Searle room is exactly the same concept as the Chinese room, just with a bigger rulebook. I see no reason why you theoretically couldn't build one. And I see nothing paradoxical about the idea that both real and fake John Searle claim understanding, but that only the real John Searle actually experiences it.

2. There’s no way to tell if any particular humans have suffered some brain injury that prevents them from understanding things, or were born lacking this ability. If they have the possibility to instead just follow rules to manipulate symbols then they could fall back on this process if the “understanding” process was unavailable.


Again, I don't have any problem with conclusion number 2. There is no way to determine that some humans aren't unconscious zombies either, but as far as most people go that doesn't make consciousness a ridiculous idea.

By the way as JPlus mentioned I do have doubts about Searle's Chinese room for other reasons, I'm just not sure I agree with the reasons that you have posted.

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby TrlstanC » Sat Oct 15, 2011 2:50 pm UTC

The reason I say the conclusions are ridiculous is because the hypothetical situation is so far outside our normal experience that either 1) we can't trust our intuition (based on normal experience) as a guide or 2) they may even be impossible. I think this is the worst kind of abuse of a hypothetical situation: we can clearly picture it in our minds, but we have to take for granted a couple of unrealistic assumptions, and once we do that it's easy to accept other assumptions that are unnecessary and end up in circular logic, or that just muddy the reasoning.

For example, Searle imagines that a person moving around symbols, i.e., executing a computer program using large physical bits, could carry out a somewhat normal conversation in Chinese. But the speed at which a computer would do the necessary symbol manipulation and the speed at which a human could differ by many orders of magnitude; it would take a person years, if not hundreds of years, to compose a single response. He even goes on to assume that he could learn all the rules, which again would probably take several lifetimes, if it's even possible. Once he makes, and we accept, these completely unrealistic assumptions, he can add in a few more assumptions without changing the "unrealisticness" of the scenario. For example, there's no evidence to suppose that we should accept any of the assumptions I spelled out above, but most discussions of the Chinese room don't even question them. And in fact, assumption #1, that there exist "understanding" and "symbol manipulation" as separate processes, is part of the conclusion that Searle is trying to argue for. Of course the hypothetical situation is going to support your claims when it includes an assumption like that.

A really good hypothetical situation that would help us would be one that demonstrates what it means to understand something i.e., what's going on in the machine we call our brain that allows us to understand things, and then compare that with symbol manipulation to show how they're different and incompatible. I don't know if such an example is possible, but it would seem like that would be the best way to approach this kind of example.

User avatar
An Enraged Platypus
Posts: 293
Joined: Sat Mar 19, 2011 10:17 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby An Enraged Platypus » Sat Oct 15, 2011 6:00 pm UTC

Jplus wrote:
Circular reasoning: one of the assumptions is that humans can process language because they are conscious. This in itself is already questionable, and besides it's generally unwise to start making assumptions about the thing you want to explain (i.e. consciousness). In this case, Searle starts with the idea that humans are special and ends up concluding that computers can't be like humans.


The bolded section says everything you need to know about every argument purporting to show that machines cannot be conscious, or that the conscious mind is not a machine. Every one I have ever encountered sooner or later begs the question as to whether consciousness differs from all other kinds of processes. Dan Dennett's counter to the zombie argument (available at http://eripsa.org/files/dennett%20zombies.pdf) definitively knocks out absent-qualia / "magical consciousness" stories and arguments with that target, which in my opinion consequently knocks out qualia inversion. Mary and her black-and-white room were famously disowned by their creator; it's high time the rest of the world abandoned the idea that any part of consciousness is "special" too.
We consider every day a plus/To spend it with a platypus/We're always so ecstatic/'Cause he's semi-aquatic!

- Phineas & Ferb

User avatar
stylus
Posts: 3
Joined: Thu Oct 13, 2011 4:52 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby stylus » Sat Oct 15, 2011 10:01 pm UTC

TrlstanC wrote:For example, Searle imagines that a person moving around symbols, i.e., executing a computer program using large physical bits, could carry out a somewhat normal conversation in Chinese. But the speed at which a computer would do the necessary symbol manipulation and the speed at which a human could differ by many orders of magnitude; it would take a person years, if not hundreds of years, to compose a single response. He even goes on to assume that he could learn all the rules, which again would probably take several lifetimes, if it's even possible. Once he makes, and we accept, these completely unrealistic assumptions, he can add in a few more assumptions without changing the "unrealisticness" of the scenario.


I don't think the complexity of the program is relevant though.

Couldn't you do a "Beginner's Chinese Room" experiment in which the rules were as follows?

1. Someone asks your name in Chinese
- you reply "My name is X" in Chinese

2. Someone says anything else to you in Chinese
- you reply "I'm sorry, I don't understand you" in Chinese.

You could get 2 people to carry out this program in 2 different ways: by explaining the meaning behind the Chinese words (understanding) or by just telling them to repeat certain sounds when they hear certain sounds (symbol manipulation).
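To make the symbol-manipulation version concrete, here's a minimal sketch in Python (the romanized strings are invented placeholders standing in for the Chinese sentences, not anything from the experiment): the function matches the input against a fixed pattern and emits a canned reply, without consulting meaning anywhere.

Code: Select all

# A sketch of the "Beginner's Chinese Room" as pure symbol manipulation.
# The strings are hypothetical placeholders; to the program they are opaque tokens.

NAME_QUESTION = "ni jiao shenme mingzi?"      # stands in for "What is your name?"
NAME_REPLY = "wo jiao X."                     # stands in for "My name is X."
FALLBACK_REPLY = "duibuqi, wo ting bu dong."  # stands in for "Sorry, I don't understand."

def beginners_chinese_room(heard: str) -> str:
    """Rule 1: if the input matches the name question, give the name reply.
    Rule 2: otherwise give the fallback. No meaning is consulted anywhere."""
    if heard.strip().lower() == NAME_QUESTION:
        return NAME_REPLY
    return FALLBACK_REPLY

print(beginners_chinese_room("ni jiao shenme mingzi?"))  # -> "wo jiao X."
print(beginners_chinese_room("ni xihuan shenme?"))       # -> fallback reply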

Wouldn't such a thought experiment be equivalent to Searle's room? And surely the assumption that one person has understanding and the other doesn't is a reasonable one.

pizzazz
Posts: 487
Joined: Fri Mar 12, 2010 4:44 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby pizzazz » Sat Oct 15, 2011 10:28 pm UTC

But then even if you understand those two sets of phrases, you don't "understand" Chinese in general. You certainly couldn't tell the difference between any two sentences in Chinese unless one was "What is your name?"

When we discussed this in my humanities class last year, I pointed out that, assuming conversations can be of nontrivial length, you quickly reach the point where such a machine would be larger than the entire universe. That is to say, if one were actually conducting a Turing Test, the "Chinese Room" theory is one that you could falsify in a finite amount of time. However, the linguistics professor leading our discussion said this wasn't necessarily the case, that for some reason the room's size did not have to increase past a certain point, but didn't explain why.
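As a back-of-the-envelope sketch of that growth argument (the per-turn vocabulary and the 10^80 atom figure are rough illustrative numbers, not anything from the discussion itself): a lookup table keyed on the whole conversation history needs one entry per possible history, which grows exponentially with the number of turns.

Code: Select all

# Rough growth of a lookup table keyed on the full conversation history.
# V is a hypothetical count of distinct sentences per turn; T is the number of turns.
V = 10_000                   # illustrative guess, not a measured figure
ATOMS_IN_UNIVERSE = 10**80   # common order-of-magnitude estimate

for T in (1, 5, 10, 20, 25):
    entries = V ** T         # one table entry per possible conversation history
    verdict = "at or above" if entries >= ATOMS_IN_UNIVERSE else "below"
    print(f"{T:2d} turns -> {entries:.1e} entries ({verdict} ~1e80 atoms)")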

aoeu
Posts: 325
Joined: Fri Dec 31, 2010 4:58 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby aoeu » Sat Oct 15, 2011 11:41 pm UTC

It's obvious that Searle can't dictate the objective truth from his armchair. I'm under the impression that he constructed the Chinese Room as a response to other schools of thought, to make the point that he can't be reasonably proven wrong.
Last edited by aoeu on Sun Oct 16, 2011 3:48 am UTC, edited 1 time in total.

User avatar
Griffin
Posts: 1363
Joined: Sun Apr 08, 2007 7:46 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Griffin » Sun Oct 16, 2011 2:44 am UTC

I was always under the impression the major flaw in his argument was the assumption that an AI could only do strict rule-based symbol manipulation - in essence, his theorem applies to chatbots never gaining sentience, perhaps, but not to the wider scope of "all possible AI".

Note, his "Chinese room" is incredibly primitive. It may arguably be Turing complete, but Turing completeness is not the be-all and end-all of computing. There are things even a hypothetical universal Turing machine can't do that a modern-day computer can - because a modern-day computer is more than just a processor and memory and straightforward calculations. Heck, Turing completeness may not even be a necessary component for a decent AI, and it's surely not a sufficient one.
Bdthemag: "I don't always GM, but when I do I prefer to put my player's in situations that include pain and torture. Stay creative my friends."

Bayobeasts - the Pokemon: Orthoclase project.

User avatar
An Enraged Platypus
Posts: 293
Joined: Sat Mar 19, 2011 10:17 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby An Enraged Platypus » Sun Oct 16, 2011 9:49 am UTC

stylus wrote:
TrlstanC wrote:For example, Searle imagines that a person moving around symbols, i.e., executing a computer program using large physical bits, could carry out a somewhat normal conversation in Chinese. But the speed at which a computer would do the necessary symbol manipulation and the speed at which a human could differ by many orders of magnitude; it would take a person years, if not hundreds of years, to compose a single response. He even goes on to assume that he could learn all the rules, which again would probably take several lifetimes, if it's even possible. Once he makes, and we accept, these completely unrealistic assumptions, he can add in a few more assumptions without changing the "unrealisticness" of the scenario.


I don't think the complexity of the program is relevant though.

Couldn't you do a "Beginner's Chinese Room" experiment in which the rules were as follows?

1. Someone asks your name in Chinese
- you reply "My name is X" in Chinese

2. Someone says anything else to you in Chinese
- you reply "I'm sorry, I don't understand you" in Chinese.

You could get 2 people to carry out this program in 2 different ways: by explaining the meaning behind the Chinese words (understanding) or by just telling them to repeat certain sounds when they hear certain sounds (symbol manipulation).

Wouldn't such a thought experiment be equivalent to Searle's room? And surely the assumption that one person has understanding and the other doesn't is a reasonable one.



I disagree. You might be playing off the assumption that the actors already have some language; the actor whom you claim "understands meaning" already had English. Who's to say that his understanding of English is not fundamentally about assertability conditions - that speaking any language is not about following a rule for when you can say X and when you can say Y? There are some heavy-duty arguments to this effect, famously the ones stemming from Kripke's analysis of Wittgenstein's Philosophical Investigations. Even if you go by truth-conditions as the basis of language, you're still begging the question that normal language isn't a complex of such instances of rule-following, whose baseline processes are equivalent to "symbol manipulation".

Or in other words,
2. Humans can understand things and can manipulate symbols. We claim to be able to do both of these things, but without being able to prove that these two processes are different we have to assume that they’re different and that we’re not just referring to the same process by two different names.
is deeply questionable, and amounts to licence to beg the question.
We consider every day a plus/To spend it with a platypus/We're always so ecstatic/'Cause he's semi-aquatic!

- Phineas & Ferb

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Technical Ben » Sun Oct 16, 2011 12:03 pm UTC

Can the Chinese room "learn" anything new? That seems to be a problem in the thought experiment's construction. A person has the ability to learn, so shouldn't a learning algorithm/program be a better comparison than a fixed list or program?

I.e., I can compare a person with a saw to a motor-driven saw in that they have exactly the same outcome in cutting down a tree; the two are not similar in underlying processes, however.
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

User avatar
Jplus
Posts: 1721
Joined: Wed Apr 21, 2010 12:29 pm UTC
Location: Netherlands

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Jplus » Sun Oct 16, 2011 12:16 pm UTC

Nothing prevents Searle-in-the-room from writing down new rules (for later use) as part of his symbol manipulation, right?
"There are only two hard problems in computer science: cache coherence, naming things, and off-by-one errors." (Phil Karlton and Leon Bambrick)

coding and xkcd combined

(Julian/Julian's)

billyswong
Posts: 41
Joined: Mon Nov 16, 2009 3:56 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby billyswong » Sun Oct 16, 2011 1:38 pm UTC

This topic reminds me of what I thought when I first heard the Chinese Room story. It is just SOOO wrong.

The error is the fallacy of composition. Yeah, the "human operator" may not understand Chinese, but the whole system *could* understand Chinese. Just like nobody would say an individual brain cell can "read" Chinese, but the whole brain composed of such brain cells could.

Never understood why so many people accept the Chinese Room argument.

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby TrlstanC » Sun Oct 16, 2011 2:16 pm UTC

Technical Ben wrote:Can the Chinese room "learn" anything new? That seems to be a problem in the thought experiments construction. A person has the ability to learn. So should a learning algorithm/program not be a better comparison than a fixed list or program?

Jplus wrote:Nothing prevents Searle-in-the-room to write down new rules (for later use) as part of his symbol manipulation, right?

Right, the original thought experiment talks about having paper, pencils, and filing cabinets in addition to all of the original rules, the idea being that these tools would be sufficient to replicate all of the actions of a computer, including memory, i.e., learning.

billyswong wrote:The error is fallacy of composition. Yeah, the "human operator" may not understand Chinese, but the whole system *could* understand Chinese.

This is certainly true, but to me the real question is "what does it mean to 'understand' something?" And can we show that it isn't just a form of symbol manipulation? Searle assumes that the human mind is a certain kind of machine that's capable of understanding, and that a Turing machine is a different kind of computer that only manipulates symbols (and then goes on to show that if that's true then Turing machines can't understand concepts), but I can't see any good reason to make the original assumption - i.e., why is 'understanding' some new kind of process? It could easily just be explained as "the name we use to describe the kind of symbol manipulations humans do subconsciously when we think and talk about concepts" or something similar.

The same argument can be used for virtually any cognitive concept, like consciousness or intelligence, if we accept that humans might not (and probably don't) have the ability to perfectly describe what's actually going on in our brains/minds. Consciousness is just what we call a certain process that we can't fully describe 'from the inside'. Instead of assuming that all cognitive concepts are different things that we need to piece together, we can instead think of them as the different names we use to describe different types of the same underlying process. Figuring out what kind of underlying process that actually is (or whether it is actually more than one process) is more of an empirical question, exactly the kind of question AI researchers are working on, and one that Searle completely ignores in his thought experiment.

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Technical Ben » Sun Oct 16, 2011 5:32 pm UTC

billyswong wrote:This topic reminds me of what I thought when I first heard the Chinese Room story. It is just SOOO wrong.

The error is the fallacy of composition. Yeah, the "human operator" may not understand Chinese, but the whole system *could* understand Chinese. Just like nobody would say an individual brain cell can "read" Chinese, but the whole brain composed of such brain cells could.

Never understood why so many people accept the Chinese Room argument.


Kind of one of the points I wanted to comment on, but thought not to. I.e., a "room with a full rule set of Chinese" is a massive undertaking; the character list itself is massive. So you've jumped the hurdles already and no longer have a "symbol manipulation" calculation. I.e., if you've got infinite monkeys, you could type Shakespeare (in Chinese) if you wanted. :P

TrlstanC, I've never seen the example with "learning" in it. The "Chinese room" has never been described to me as one where someone can write down new rules; I thought you could only use the existing lists?

To me, "understanding" seems to have a process of comparison (IE turning machines) but also knowledge of mechanism. The knowledge of how something works, can also be constructed from further comparisons I guess. But it's the difference between having multiplication tables (symbol manipulation) or doing the calculations yourself (understanding what multiplication is).
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby morriswalters » Mon Oct 17, 2011 12:48 am UTC

The experiment presumes the person has no knowledge of Chinese. If he doesn't understand anything, he can't change the rules. The question could be "is the mind more than language?" A more apt question might be: can the room talk to itself?

billyswong
Posts: 41
Joined: Mon Nov 16, 2009 3:56 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby billyswong » Mon Oct 17, 2011 12:57 am UTC

Technical Ben wrote:
billyswong wrote:This topic reminds me of what I thought when I first heard the Chinese Room story. It is just SOOO wrong.

The error is the fallacy of composition. Yeah, the "human operator" may not understand Chinese, but the whole system *could* understand Chinese. Just like nobody would say an individual brain cell can "read" Chinese, but the whole brain composed of such brain cells could.

Never understood why so many people accept the Chinese Room argument.


Kind of one of the points I wanted to comment on, but thought not to. I.e., a "room with a full rule set of Chinese" is a massive undertaking; the character list itself is massive. So you've jumped the hurdles already and no longer have a "symbol manipulation" calculation. I.e., if you've got infinite monkeys, you could type Shakespeare (in Chinese) if you wanted. :P

TrlstanC, I've never seen the example with "learning" in it. The "Chinese room" has never been described to me as one where someone can write down new rules; I thought you could only use the existing lists?

To me, "understanding" seems to involve a process of comparison (i.e., Turing machines) but also knowledge of mechanism. The knowledge of how something works can also be constructed from further comparisons, I guess. But it's the difference between having multiplication tables (symbol manipulation) and doing the calculations yourself (understanding what multiplication is).


In order for the Chinese Room to participate in a natural, stateful conversation, the rules must tell the "operator" to jot down notes (the system must have memory) and to refer to those notes later on when checking the "Chinese table". However, it need not "create" new rules by itself.

If the Chinese Room only needs to provide stateless replies in Chinese, then no "learning" mechanism is required. And if the maximum length of each input is restricted, then the "table" size is finite. If not, then the rules have to be more "intelligent" than looking up a single table, and it becomes more of a software programming issue than a hardware/operator issue.
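Here's a minimal sketch of that stateless/stateful distinction (the rules and the romanized replies are invented placeholders for illustration): the stateless room is a fixed input-to-output mapping, while the stateful room also keeps and consults a pad of notes, which is all the memory the operator needs.

Code: Select all

# Hypothetical sketch: a stateless room vs. a stateful room with "notes".
# Rule tables and reply strings are invented for illustration only.

STATELESS_TABLE = {
    "ni hao": "ni hao",                      # greeting -> greeting
    "ni jiao shenme mingzi?": "wo jiao X.",  # name question -> canned reply
}

def stateless_room(heard: str) -> str:
    # The same input always yields the same output; nothing is remembered.
    return STATELESS_TABLE.get(heard, "wo ting bu dong.")

class StatefulRoom:
    def __init__(self) -> None:
        self.notes: list[str] = []           # the operator's scratch paper

    def reply(self, heard: str) -> str:
        self.notes.append(heard)             # rule: always jot the input down
        if self.notes.count(heard) > 3:      # rule: react to repetition
            return "ni yijing wen guo le."   # stands in for "you already asked that"
        return stateless_room(heard)

room = StatefulRoom()
for _ in range(5):
    print(room.reply("ni hao"))              # after a few repeats the reply changes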

billyswong
Posts: 41
Joined: Mon Nov 16, 2009 3:56 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby billyswong » Mon Oct 17, 2011 1:03 am UTC

Some further elaboration:

If the human operator is allowed to jot down notes, then nothing prevents the rulebook from instructing him to write macros, or even complete programs, on the paper. The rulebook could then tell the human operator to act as an interpreter of that program code. The "human operator" may not know what new rules should be written in response to the ongoing conversation, but the "rules" could.
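A minimal sketch of that idea (the rule format and strings are invented for illustration): the operator blindly executes whatever instructions the rulebook lists for the input he's handed, and some instructions simply tell him to write a brand-new rule into the book, so the system extends itself while the operator understands none of it.

Code: Select all

# Hypothetical sketch: a rulebook that instructs the operator to write new rules.
# The operator is a dumb interpreter; only the (growing) rulebook determines behaviour.

rulebook = {
    "ni hao": [("REPLY", "ni hao")],
    "wo jiao Li.": [
        # On hearing an introduction, write a brand-new rule for later use...
        ("ADD_RULE", "ni renshi Li ma?", [("REPLY", "renshi, Li gen wo shuo guo hua.")]),
        # ...and also reply now.
        ("REPLY", "hen gaoxing renshi ni, Li."),
    ],
}

def operator(heard: str) -> str:
    """Executes instructions without knowing what any symbol means."""
    out = ""
    for instruction in rulebook.get(heard, [("REPLY", "wo ting bu dong.")]):
        if instruction[0] == "ADD_RULE":
            _, trigger, new_instructions = instruction
            rulebook[trigger] = new_instructions  # the book grows; the operator learns nothing
        elif instruction[0] == "REPLY":
            out = instruction[1]
    return out

print(operator("ni renshi Li ma?"))  # before the introduction: fallback reply
print(operator("wo jiao Li."))       # this input triggers the ADD_RULE instruction
print(operator("ni renshi Li ma?"))  # now answered by the freshly written rule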

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby morriswalters » Mon Oct 17, 2011 2:01 am UTC

What if, instead of a Chinese person pushing paper through a slot, there were another identical room, with identical software and an identical man? Seed it with a question and start it. What type of conversation would you get? Or am I missing the point?

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Technical Ben » Mon Oct 17, 2011 8:32 am UTC

Yep, billyswong, that does not solve the problem.
Without a "learning mechanism" it's practically a "Chinese dictionary" and nothing more. Even with a conversation going on, I could compare it to a "Chinese record player". IE, If recorded in advance, I could set up a audio recording of someone speaking Chinese, and play it back. Now, it is possible to have a perfectly coherent conversation with that recording (assuming I guessed the replies). However, the recording offers no insight into how the human mind, let alone the mechanical action of language, works. This shows the danger of allowing such a thought experiment to give us conclusions on the mechanism of action. We need a much better model to fit the processes going on with language or learning or human brains.

If you allow for a learning mechanism, well, you no longer have a simple list or recording. You have a full program (a learning and self modifying one at that). I don't know if that is still in the scope of the original thought experiment, or if it breaks it?
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby TrlstanC » Mon Oct 17, 2011 1:29 pm UTC

Technical Ben wrote:If you allow for a learning mechanism, well, you no longer have a simple list or recording. You have a full program (a learning and self modifying one at that). I don't know if that is still in the scope of the original thought experiment, or if it breaks it?

No, that's definitely within the scope of the thought experiment as Searle lays it out. His goal is to show that you can have a completely mechanical process that will replicate a conversation, and be just as fluent in giving answers (or presumably even asking questions) as any native speaker - but show that that process will lack any "understanding." To show this, he includes himself as the agent inside the room (instead of some machine or CPU) and gives himself the rules for manipulating the symbols of a language he doesn't understand (Chinese). The point of including a person in the room is so that there will be someone to judge whether there is any understanding going on or not. Searle accepts that with a sufficiently complex machine (including memory, etc.) and enough rules you could replicate the human ability to have a conversation, but he also assumes that there's another way to do it, which is called "understanding" and isn't just "symbol manipulation."

This has the obvious flaw of relying on a person to judge what understanding is, and there's no proof that we can; in fact Searle doesn't even attempt to define "understanding." If we could define "understanding" then I don't think we would need a hypothetical situation; we could just look at the room (or a computer, or any machine) and say "yes, there's understanding going on there" or "no, there isn't."

User avatar
The Great Hippo
Swans ARE SHARP
Posts: 7357
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby The Great Hippo » Mon Oct 17, 2011 4:58 pm UTC

TrlstanC wrote:The point of including a person in the room is so that there will be someone to judge whether there is any understanding going on or not. Searle accepts that with a sufficiently complex machine (including memory, etc.) and enough rules you could replicate the human ability to have a conversation, but he also assumes that there's another way to do it, which is called "understanding" and isn't just "symbol manipulation."

...

This has the obvious flaw of relying on a person to judge what understanding is, and there's no proof that we can; in fact Searle doesn't even attempt to define "understanding." If we could define "understanding" then I don't think we would need a hypothetical situation; we could just look at the room (or a computer, or any machine) and say "yes, there's understanding going on there" or "no, there isn't."
Isn't that the problem people are pointing out, here? That without a functional definition of 'understanding', it relies on the argument of "humans are special, machines aren't"? Particularly when what you're saying amounts to "humans are special, and it's because of that specialness that only humans can test for specialness"?

If the only way to test for X is to have someone look at it--and X is the trait which allows us to recognize X in the first place--doesn't it seem reasonable that X might, in fact, be bullshit?

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Technical Ben » Mon Oct 17, 2011 6:15 pm UTC

TrlstanC wrote:
Spoiler:
Technical Ben wrote:If you allow for a learning mechanism, well, you no longer have a simple list or recording. You have a full program (a learning and self modifying one at that). I don't know if that is still in the scope of the original thought experiment, or if it breaks it?

No, that's definitely within the scope of the thought experiment as Searle lays it out. His goal is to show that you can have a completely mechanical process that will replicate a conversation, and be just as fluent in giving answers (or presumably even asking questions) as any native speaker - but show that that process will lack any "understanding." To show this, he includes himself as the agent inside the room (instead of some machine or CPU) and gives himself the rules for manipulating the symbols of a language he doesn't understand (Chinese). The point of including a person in the room is so that there will be someone to judge whether there is any understanding going on or not. Searle accepts that with a sufficiently complex machine (including memory, etc.) and enough rules you could replicate the human ability to have a conversation, but he also assumes that there's another way to do it, which is called "understanding" and isn't just "symbol manipulation."

This has the obvious flaw of relying on a person to judge what understanding is, and there's no proof that we can; in fact Searle doesn't even attempt to define "understanding." If we could define "understanding" then I don't think we would need a hypothetical situation; we could just look at the room (or a computer, or any machine) and say "yes, there's understanding going on there" or "no, there isn't."


Well then, it's a failure to understand the goalposts of the problem. A single page in the thought experiment understands nothing. A single "mechanism" understands nothing. The mechanism could be a human or a clockwork symbol-sorting machine; neither has any "understanding", they just function. However, we describe the whole system or room as having understanding. To change from the human to the room is to move the goalposts. You have to keep the description to only one, or only the other, for the conclusion to hold.

Can a complex language be simulated via physical processes? Well, it is, every day, in the physical brain. I don't think it matters if a brain is made of neurons or transistors or clockwork; the underlying process is the same (assuming objects larger than those under QM effects - and even then, theoretically we could make a QM cog or transistor). If a brain has understanding, a sufficiently complex Chinese room could too. The assumption is hidden in the phrase "has all the rules", as it assumes the room is already complex enough to pass the test. Thus the possible fallacy.
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby infernovia » Tue Oct 18, 2011 2:39 am UTC

"Understanding" in terms of the chinese room is just efficient processing that you are not aware of. What it needs to feel "genuine" is for it to associate language with what it is representing, which is why the chinese room experiment feels so artifical to us: the ability of language to evoke such mental associations are either gone or extremely diluted.

radams
Posts: 90
Joined: Fri May 14, 2010 12:49 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby radams » Tue Oct 18, 2011 5:32 pm UTC

Searle anticipated many of these objections in the original paper.

He considers the objection "While the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story." His answer is to think of a person who does all the symbol manipulation in their head. This person can carry out conversations in Chinese, but still does not understand Chinese, because they do not know that 'squiggle squiggle' [sic] means hamburger.

Searle says he does not know whether it is possible for a machine to be intelligent, but "In order to produce intentionality the system would have to duplicate the causal powers of the brain and that simply instantiating a formal program would not be sufficient for that." He's not explicit about what these 'causal powers' are, which is a shame, because they are (in my opinion) the crux of his argument. However, he does say this, about a computer program that simulates the physical makeup of a human brain:

"The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only
the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the
brain, namely its causal properties, its ability to produce intentional states."

So when he talks about 'causal properties', he's not talking about (say) non-determinism or the ability to modify one's own code/structure. He really does seem to believe that mental states are something produced by some process in the brain, separate from the process of neurons triggering other neurons to fire.
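For concreteness, here's a toy sketch of what "simulating only the formal structure of the sequence of neuron firings" could amount to (the three-unit wiring, weights, and threshold are invented for illustration): a step function that says which units fire next given which fired now, with nothing else about the biology represented.

Code: Select all

# Hypothetical toy network: only the pattern of which units fire when is simulated.
# The wiring, weights, and threshold are invented for illustration.

WEIGHTS = {          # WEIGHTS[j][i]: influence on unit j of unit i having fired
    0: {1: 0.0, 2: 0.0},
    1: {0: 1.0, 2: 0.0},
    2: {0: 0.3, 1: 1.0},
}
THRESHOLD = 0.9

def step(firing: set[int]) -> set[int]:
    """Given the set of units firing now, return the set firing next."""
    nxt = set()
    for j, inputs in WEIGHTS.items():
        drive = sum(w for i, w in inputs.items() if i in firing)
        if drive >= THRESHOLD:
            nxt.add(j)
    return nxt

state = {0}          # unit 0 fires first (say, a sensory input)
for t in range(3):
    print(f"t={t}: firing {sorted(state)}")
    state = step(state)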

The debate between Searle and his critics often turned to dualism (the idea that the brain and mind are two separate objects); but I think the question is rather one of essentialism: are mental states things, or processes? Searle assumes that they are things. We know, by direct experience, that brains produce them, although we don't yet know how; we know (or at least believe) that moving paper around does not produce them; therefore, there cannot be a program P such that any implementation of P has mental states.

On the other hand, if mental states are processes, then the difficulty disappears. It's very counterintuitive to think of them as processes, because that's not at all what mental states feel like, and it's natural to believe that we have perfect knowledge of our own mental states. But the more we learn about psychology and neuroscience, the more that seems like an illusion.

Here's a link to the original paper, for convenience:

http://journals.cambridge.org/action/di ... id=6573580

Of all of the objections given there, Wilensky's has the most force in my opinion. Suppose I were to carry out, in my head, all the operations of a program that can pass the Turing test in Chinese, imagining them being performed by shuffling paper. Then I would indeed be host to a subsystem that is genuinely intelligent and understands Chinese. I would not know that that is what my subsystem is doing - likewise, my subsystem would not know that he is being implemented via shuffled imaginary paper, just as we did not know (for most of human history) that we were implemented by neurons sending each other electrical signals.

Searle's reply to Wilensky is disappointing; in my opinion, he does not seem to have understood Wilensky's objection.

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Technical Ben » Tue Oct 18, 2011 6:28 pm UTC

infernovia wrote:"Understanding" in terms of the Chinese room is just efficient processing that you are not aware of. What it needs to feel "genuine" is for it to associate language with what it is representing, which is why the Chinese room experiment feels so artificial to us: the ability of language to evoke such mental associations is either gone or extremely diluted.


Yep. Even the example of doing the symbol manipulation in your head is answered by this point. If you relate something to the symbol manipulation outside the "room", then you have understanding; if it's constrained to the room, then it's a mechanical response. If our Chinese room has pictures of apples and hamburgers and can eat and work and cook, then it can apply the words to actions. It would also need to "plan", or construct virtual responses for future use, to be able to perform those actions. These things are absent from the thought experiment, and many other things required for understanding are absent as well. A good example is "if you cannot model it, you cannot understand it" - perhaps the Chinese room needs to be able to self-examine and model before it can be called "understanding"?

Plus "monkeys typing Shakespeare" seems to apply. The Chinese room assumes we have all the possible replies in Chinese. How many possible phrases are there in a language? Infinite?! It's practically granting infinite monkeys and infinite typewriters to crunch the replies out in Chinese. However, the real world only allows one monkey (the human mind in the singular) to comprehend the words.
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby morriswalters » Tue Oct 18, 2011 7:21 pm UTC

It's not important to think about how the experiment could be done. The point is that language without a reason is insufficient. Why did we evolve it? To the Chinese room the only reason to speak is to respond to inputs. Lacking that input it waits. The experiment, like the Turing test, is designed to fool the Chinese speakers who talk to the room, not to think. The new iPhone app Siri would be an example. Does Siri think?

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby TrlstanC » Tue Oct 18, 2011 8:21 pm UTC

morriswalters wrote: To the Chinese room the only reason to speak is to respond to inputs. Lacking that input it waits.


That's an interesting point. I would argue that "understanding" is just the name we give to a certain process, and that this process could probably be duplicated by symbol manipulation (if that's not the way we actually do it already). But is it possible to have a machine that can respond to any question as if it were a native speaker, yet not actually include this process - one that just fakes it? I wouldn't think so. But if this means that the Chinese Room (assume it's just an intelligent machine; skip the human inside) understands what's being asked, would it just sit and wait for a question before answering? Or, if it actually did understand what was going on, would it start to ask its own questions? Or just start to write messages to itself? It may be that this is another reason that Searle's thought experiment is impossible the way he describes it.

User avatar
Griffin
Posts: 1363
Joined: Sun Apr 08, 2007 7:46 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Griffin » Wed Oct 19, 2011 12:11 pm UTC

The reason Searle's system lacks understanding is that it has really poor IO.

Give it some mobility, some cameras, and rules for processing non-linguistic inputs, and it can start having novel experiences - and these novel experiences will allow it to come to conclusions beyond the confines of the conversation. Think of it like this:

If a guy walks around in a robot suit, seeing everything and hearing everything, observing how chinese is used, asking people questions about what "that" object is, and then using the information gathered to hold a fluent conversation - isn't that understanding?

Language exists to mirror experience - without a way for the "room" to gain independent experience, the range of what it can "understand" is limited, and the understanding would be shallow. Certainly not anything we would associate with "true understanding".
Bdthemag: "I don't always GM, but when I do I prefer to put my player's in situations that include pain and torture. Stay creative my friends."

Bayobeasts - the Pokemon: Orthoclase project.

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Technical Ben » Wed Oct 19, 2011 8:13 pm UTC

That's the funny thing with such a concept or model. It could tell us "what is the grand unified theory of everything" if we could ask the question in Chinese. Because it "knows every response". It's not got any "understanding" of physics though. :lol:
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby TrlstanC » Wed Oct 19, 2011 8:39 pm UTC

Technical Ben wrote:That's the funny thing with such a concept or model. It could tell us "what is the grand unified theory of everything" if we could ask the question in Chinese. Because it "knows every response". It's not got any "understanding" of physics though. :lol:


I doubt Searle would agree with that. The idea of the thought experiment is that the person moving around characters in the room could give the same (or same kind of) responses as a native Chinese speaker.

Which is actually a really surprising assumption to make in the first place, if you're trying to show that symbol manipulation isn't the same as understanding (however he wants to define that). As far as I can tell the only reason to make that assumption is to make the hypothetical situation interesting.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby morriswalters » Wed Oct 19, 2011 8:55 pm UTC

Could the system ever give a response to a question which required knowledge not accounted for in its program?

Outchanter
Posts: 669
Joined: Mon Dec 17, 2007 8:40 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Outchanter » Wed Oct 19, 2011 9:22 pm UTC

It's an interesting thought experiment, but the conclusion that the man's brain doesn't "understand" Chinese on its own is about as enlightening as saying that the individual neurons in your brain don't "understand" English. It's a team effort.

Also symbol manipulation makes people think of words, but there'd actually need to be a lot of math involved (and probably a random number generator) to get anywhere close to simulating a real human. In particular, the idea that you could just look up an input sentence in a giant table and spout an appropriate output sentence is absurd - for one thing, a system like that would have no memory. You could say "hi" a dozen times in succession and it would cheerfully respond "hi" every time instead of getting annoyed like a real human would.

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby TrlstanC » Wed Oct 19, 2011 11:59 pm UTC

Here's a link to the Stanford Encyclopedia of Philosophy, which has a great summary of the thought experiment (one of the most famous hypotheticals, in fact). And here's Searle's concise version:
Spoiler:
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.


Outchanter wrote: In particular, the idea that you could just look up an input sentence in a giant table and spout an appropriate output sentence is absurd - for one thing, a system like that would have no memory. You could say "hi" a dozen times in succession and it would cheerfully respond "hi" every time instead of getting annoyed like a real human would.
The hypothetical is about symbol manipulation, not look-up tables (which it should be obvious would fail the test).

Outchanter wrote:It's an interesting thought experiment, but the conclusion that the man's brain doesn't "understand" Chinese on its own is about as enlightening as saying that the individual neurons in your brain don't "understand" English. It's a team effort.
Searle is trying to show that no matter how much symbol manipulation is going on, or how complex this kind of computer is, it will never be capable of understanding. The point of putting a person inside is so that there will be someone in there who can judge if understanding is going on or not. But the worst thing about this "experiment" is that you don't even need to use any logic to get to the conclusion: if you look at the requirements to set up the hypothetical, one of the unstated assumptions is that understanding is not symbol manipulation, so of course it'll be possible to reach that conclusion.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby infernovia » Thu Oct 20, 2011 2:27 am UTC

morriswalters wrote:The point is that language without a reason is insufficient. Why did we evolve it?


Yes, this was the point of my post, thank you for posting it so clearly. Really, most of AI should just be called algorithm class...

Technical Ben
Posts: 2986
Joined: Tue May 27, 2008 10:42 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby Technical Ben » Thu Oct 20, 2011 9:13 am UTC

TrlstanC wrote:
Technical Ben wrote:That's the funny thing with such a concept or model. It could tell us "what is the grand unified theory of everything" if we could ask the question in Chinese. Because it "knows every response". It's not got any "understanding" of physics though. :lol:


I doubt Searle would agree with that. The idea of the thought experiment is that the person moving around characters in the room could give the same (or same kind of) responses as a native Chinese speaker.

Which is actually a really surprising assumption to make in the first place, if you're trying to show that symbol manipulation isn't the same as understanding (however he wants to define that). As far as I can tell the only reason to make that assumption is to make the hypothetical situation interesting.


Sorry, I'm lost there. If you ask it "what is 2+2?" does it understand addition? If you ask it "what is your favourite ice cream" does it understand flavour?
It's all physics and stamp collecting.
It's not a particle or a wave. It's just an exchange.

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby TrlstanC » Thu Oct 20, 2011 1:50 pm UTC

Technical Ben wrote:Sorry, I'm lost there. If you ask it "what is 2+2?" does it understand addition? If you ask it "what is your favourite ice cream" does it understand flavour?

That's the question we're trying to answer. Certainly a machine can answer the question "what is 2+2", and most of us would say that it doesn't understand either the question or the answer. As the questions get progressively more complicated we don't know if it's possible to have a machine that could answer them without understanding the question, but the assumption generally is that eventually we'll be able to build a machine that can answer the questions using some process.

Searle's Chinese room is probably the most famous thought experiment that tries to address this issue, but I think that it's deeply flawed, which is unfortunate since it's used so often to make all kinds of similar arguments. The Chinese room is trying to show that symbol manipulation isn't the same as understanding and can never add up to it either. But to get to that conclusion Searle has to make a bunch of assumptions to set up the thought experiment. Assumptions like "it's possible to have a conversation just using symbol manipulation" and "symbol manipulation and understanding are two different ways to have a conversation." These are exactly the kinds of questions people are considering in hypotheticals like this, but they're already baked in from the beginning.

We should come up with an alternative to the Chinese room that avoids these kinds of assumptions, or maybe we'll realize that these questions can only be answered empirically, and we'll just have to try and build the kinds of machines that we're interested in discussing.

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26726
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby gmalivuk » Thu Oct 20, 2011 1:56 pm UTC

TrlstanC wrote:The reason I say the conclusions are ridiculous is because the hypothetical situation is so far outside our normal experience that...they may even be impossible.
In this way, it's like some of the annoyingly frequent questions posed to physicists about relativity. "Imagine a perfectly rigid rod many lightyears long. [blahblahblah] Therefore isn't relativity wrong?" The person simply fails to understand that they implicitly assumed their conclusion the moment they started.
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experim

Postby TrlstanC » Thu Oct 20, 2011 3:06 pm UTC

gmalivuk wrote:In this way, it's like some of the annoyingly frequent questions posed to physicists about relativity. "Imagine a perfectly rigid rod many lightyears long. [blahblahblah] Therefore isn't relativity wrong?" The person simply fails to understand that they implicitly assumed their conclusion the moment they started.

Exactly. One of the things that make thought experiments useful, and certainly one of the things that make them "popular", is that they're easy to understand. But if we need to make assumptions that undermine the very idea we're trying to explain in order to make the experiment easy to understand, that doesn't do any good. Relativity is just a concept that's difficult to understand; if we try to create a thought experiment that simplifies it, then there's a good chance that we're not actually talking about relativity anymore. The same goes for a lot of famous hypotheticals in philosophy, like "Mary the color scientist" - if the hypothetical includes an assumption that's impossible, there's good reason to doubt the conclusions we draw from it. There are just some concepts we can't trust to our intuition; we have to actually go out and test them. Things like quantum mechanics and relativity definitely fall into this camp, and I think things like AI should too.

