The Assumptions of Searle’s Chinese Room Thought Experiment

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Thu Nov 09, 2017 8:37 pm UTC

I recently stumbled on this Talk at Google where Searle discusses the Chinese Room at length. It's a great review of the argument, and a number of good, if typical, questions are raised at the end.
Here's the Wikipedia page for Searle's Chinese room thought experiment. The experiment is part of the following argument:

A. Programs use syntax to manipulate symbols
B. Minds have semantics
C. Syntax by itself is not sufficient for semantics
Therefore programs are not minds.

The more I think about the topic, the more I agree with Searle, although I do have a problem with a couple of assumptions he seems to make but doesn't state clearly. For example, when he talks about different ways of studying the brain, or about which animals may or may not have consciousness, he seems to assume that consciousness is something created by a relatively large and complex structure. In the best example we have, humans, a relatively large and complex brain with billions of neurons seems to be required for consciousness.

But this seems problematic: if a brain can't become conscious just by running a program, then consciousness must be some physical characteristic, and that physical characteristic must be shared by the component parts. For example, if we use the analogy of an electromagnet, we wouldn't expect that running a simulation of an electromagnet would create actual magnetism. But while we need a relatively large and correctly wired piece of equipment to create electromagnetism that way, we also know that the magnetism is the result of the characteristics of the individual particles.

Or to put it another way, we might not need a structure like a brain to create consciousness; it's just that we need something relatively large and complex to harness the physical characteristics of its component parts in a useful way. Taking this idea to the extreme, we end up with hylopathism, and if we combine that idea with the fact that a program must be run on something, then while programs might not be conscious, anything a program runs on probably is. And the running of the program might create experiences very much in the same way that the wiring of our brain creates our experiences. For example, maybe our brain uses electron tunneling to create the sensation of red, and the same mechanism in modern CPUs means the computer is having an experience of "twinkly red" whenever any program is running?

User avatar
Pfhorrest
Posts: 3967
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Thu Nov 09, 2017 10:44 pm UTC

Your conclusion there is basically my point of view as well. Anti-emergentist reasoning like yours successfully (IMO) concludes that any kind of phenomenal consciousness (as distinct from mere access consciousness) that may exist in human brains must have precursors in the constituent components of those brains. So either there is no such thing as phenomenal consciousness at all, or else everything has some degree of it, in a kind of pan-proto-psychism or hylopathism, as you say. Thought experiments like Mary's Room show that there is some kind of phenomenal consciousness, inasmuch as first-person experience of being a thing undergoing a process (e.g. being a brain perceiving redness) is not imparted by any amount of third-person knowledge about things undergoing such processes (e.g. studying what brains do in response to their eyes' response to red light); or as I like to say for a more visceral example, no amount of studying sexology will teach you what it's like to have sex yourself.

So the conclusion to draw is that there is a first-person, phenomenally "conscious" experience for everything, about which there isn't really much more to say, per se. The interesting distinction to draw between e.g. humans and rocks is the nature of the kind of thing, which is given by the function of the thing, which determines its experience every bit as much as it determines its behavior: experience is the input to the function that defines a thing, behavior is the output from it. (And every behavior is something else's experience, and vice versa: they're all just interactions, seen from either the first or third person, as the subject or as the object.) Functionality is what makes human consciousness notable and interesting: access consciousness is where all the interesting questions are. Saying that everything has a first-person experience as a subject isn't really any more interesting or substantive a statement than saying everything has a third-person behavior as an object: okay, but what is its behavior, what is its experience, in short what is its function? That's what really matters. Functionalism.

The Chinese Room argument is often put forth to dispute functionalism, but I think it fails at that in an important way. It successfully (IMO) proves that syntax is not sufficient for semantics, but it doesn't disprove functionalism, because the supposedly (and I'd agree) not-conscious room is not functionally equivalent to an actual Chinese speaker. You can hand an actual Chinese speaker a picture of a duck on a lake and ask them (in Chinese) "What kind of bird is on the water?" and they can answer; the room cannot, because while it contains a person with eyes that can see the picture, and that person (with his books) could tell you (the Chinese equivalent of) that ducks are a kind of bird and a lake is a body of water, he cannot connect the words for "duck", "bird", "lake", or "water" to the images in the picture. That connection is where the semantics come from: knowing that a symbol signifies some experiential phenomenon.

But we can in principle build programs that can do that, even though those programs still at their base only manipulate symbols, by translating experiential phenomena into huge arrays of symbols. A digital photo is a visual image translated into a bunch of numbers, and identifying patterns in such numbers and connecting them to more abstract symbols is what machine vision is all about. I feel a little unsure of how to translate this into a direct contradiction of any of Searle's premises, but it's like saying "Computers only do logical operations on boolean values. You can't do division using only logical operations on boolean values. Arithmetic involves doing division. Therefore computers can't do arithmetic." It looks on the surface like all the premises are cogent and the inferences valid, but the conclusion is clearly false, so something somewhere in there is wrong; and whatever it is, that's the same thing that's wrong with Searle's formal argument.
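
To make that concrete, here's a toy sketch in Python (mine, purely for illustration, not anything from Searle) of the premise that actually fails in that parody argument: division really can be built out of nothing but boolean/bitwise operations and shifts, which is more or less what hardware does.

[code]
# Toy sketch (illustrative only): division built from nothing but boolean/bitwise
# operations and shifts, which is why the parody premise "you can't do division
# using only logical operations" is the one that fails. Unsigned integers assumed.

def bitwise_sub(a, b):
    """Subtract b from a (a >= b) using only XOR, AND, NOT and shifts."""
    while b != 0:
        borrow = (~a) & b        # positions that need to borrow
        a = a ^ b                # subtract without borrows
        b = borrow << 1          # propagate the borrows
    return a

def bitwise_divide(dividend, divisor):
    """Shift-and-subtract long division; returns (quotient, remainder)."""
    if divisor == 0:
        raise ZeroDivisionError
    quotient, remainder = 0, 0
    for i in range(dividend.bit_length() - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)   # bring down next bit
        if remainder >= divisor:                               # a comparator is also just gates
            remainder = bitwise_sub(remainder, divisor)
            quotient |= 1 << i
    return quotient, remainder

print(bitwise_divide(1000, 7))   # (142, 6), same as divmod(1000, 7)
[/code]

The moral being that "it only does logical operations" doesn't obviously put arithmetic out of reach, any more than "it only manipulates symbols" obviously puts semantics out of reach.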
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Fri Nov 10, 2017 2:27 am UTC

Saying that everything has a first-person experience as a subject isn't really any more interesting or substantive a statement than saying everything has a third-person behavior as an object: okay, but what is its behavior, what is its experience, in short what is its function? That's what really matters. Functionalism.


There was a part in the talk, which I thought was really interesting and hadn't heard stated explicitly before, about the difference between something being subjective (or objective) in an ontological sense versus an epistemic sense. And when we think about the possibility that everything could have a subjective experience, then the question is why people are different. From a behavioral perspective our consciousness obviously can have a very big effect, whereas even if rocks or ants had conscious experiences they wouldn't be able to "do" anything with them. The experience would be there, but they wouldn't be wired up or made up in a way that lets those subjective experiences cause differences in behavior.

Of course rocks and minerals do have lots of characteristics that are objectively true and do affect their "behavior", such as it is. So the question I have is: why does consciousness need to be another characteristic of these things? Let's say we have iron atoms; they have all kinds of objective characteristics, their mass and density and the way they interact with electrons and magnetism, etc. Now let's say they also have conscious experiences, and maybe even that human brains use this characteristic of the iron in their blood to create our human experience of consciousness. Why would this conscious characteristic have to be new or additional? Why couldn't it be the subjective experience that corresponds with one of the objective characteristics, say magnetism, for example?

When an atom of iron interacts with a magnetic field we can observe its objective behavior, but that same field could be causing a subjective experience of consciousness as well. And in fact, if we think that our conscious experiences have some effect on our behavior, then we'd want consciousness to have an objective effect of some sort on the world, right? If the neurons in our brain are experiencing consciousness and that subjective experience is causing a difference in the way they fire or interact, that's an objective change. And unless we've completely missed some other kind of physical force that acts on the neurons in our brains, that would mean that consciousness is interacting via an existing force we already know of. Or to put it another way, that it's the subjective experience of an objective force we already recognize.

Personally, once I accept the Chinese Room argument's conclusion that programs can't be conscious, I can only think of two eventual conclusions: either consciousness is the subjective experience of a known physical force, or it's the subjective experience of a physical force we haven't discovered yet.

User avatar
Pfhorrest
Posts: 3967
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Fri Nov 10, 2017 3:40 am UTC

I wouldn't say that (phenomenal) consciousness is the subjective experience of a physical force; it's not like, as in your example, magnetism is consciousness, or anything like that. Rather it's the subjective experience of all the physical stuff happening to the physical thing having that experience, including in some cases (like human brains) physical stuff the physical thing is doing to itself (which is where it starts to get interesting).

I would actually prefer not to use the term "consciousness" for what is called "phenomenal consciousness" at all, and reserve that for the functional characteristic called "access consciousness", which is a form of reflexive function (the brain doing stuff to itself, experiencing itself, and experiencing being done-unto by itself). Calling the-having-of-subjective-experiences "consciousness" feels a little bit like calling the quantum nondeterminism of an electron its "free will". (In that case too, I would say that free will properly speaking is just a functional characteristic, nothing really metaphysical at all; a function highly analogous to access consciousness, in fact). So when you ask "why does consciousness need to be another characteristic of these things?", I'd say it's just not -- access consciousness is just a complex function built up from perfectly ordinary physical functions, and phenomenal consciousness is just the subjective experience of being done-unto in ordinary physical ways.

I find it especially interesting to combine this with Whitehead's ontology of "occasions of experience". Given the foregoing model of both behavior and experience being just different perspectives on interactions (the perspective of the subject and the perspective of the object), those being just ordinary physical interactions barring any reason to think there's any other kind, all of which boil down ultimately to exchanges of gauge bosons, I think it's justified to literally identify Whiteheadian "occasions of experience" with those bosons. Occasions of visual experience? The literal photons hitting your eye. Occasions of auditory or tactile experience? The literal photons mediating electrostatic repulsion between you and the air / whatever you're touching. Occasions of olfactory or gustatory experience? The literal photons mediating the chemical interactions between your taste buds /olfactory bulbs and whatever you're tasting or smelling. This unifies "materialism" and "idealism" into a single physicalist phenomenalism, quite parallel to the unification of functionalism and panpsychism we already seem to agree on.

All we are directly aware of, the fundamental building blocks of the reality we know, are the occasions of subjective experience we are subject to, but those are identical to the physical particles we are interacting with, and all the rest of physics as necessary to explain why those patterns of particles interact with us that way is implied by the experiencing of those patterns of experience. But we ourselves are not some kind of special entity beyond the world we experience, but just another arrangement of the same kind of stuff that we're experiencing, all of which in turn has some kind of experience itself.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Fri Nov 10, 2017 2:41 pm UTC

I wouldn't say that (phenomenal) consciousness is the subjective experience of a physical force; it's not like, as in your example, magnetism is consciousness, or anything like that.


Maybe it is, maybe it isn't? The fact that we have no idea what causes conscious experiences makes it hard to say either way. I just thought that Searle's talk about the difference between ontological and epistemic meaning was an interesting way to look at the question.

The literal photons hitting your eye. Occasions of auditory or tactile experience? The literal photons mediating electrostatic repulsion between you and the air / whatever you're touching. Occasions of olfactory or gustatory experience? The literal photons mediating the chemical interactions between your taste buds /olfactory bulbs and whatever you're tasting or smelling. This unifies "materialism" and "idealism" into a single physicalist phenomenalism, quite parallel to the unification of functionalism and panpsychism we already seem to agree on.


Are you saying that consciousness happens when photons actually hit the eye? That seems like it's unlikely to be true given that there's a lot of information in our visual field that we're not conscious of and/or that we actually get wrong. For example, all kinds of optical illusions are caused by the way the neurons in the eye and nervous system process information, but we're not consciously aware of all those steps, we're only aware of the experience at some later step.

User avatar
Pfhorrest
Posts: 3967
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Fri Nov 10, 2017 5:18 pm UTC

I think you are overlooking the distinction I keep emphasizing between phenomenal and access consciousness. Photons hitting your eye are not access consciousness, which is the real, important sense of consciousness, the one that meshes with our everyday use of the term the way you just used it ("consciously aware"). But so-called phenomenal consciousness (which is what Searle et al. seem interested in) is just (on my account) the experience of being a thing interacting with other things, and gauge bosons like photons are the constituent elements of all such interactions, and so the "occasions of experience" in the sense Whitehead uses that phrase.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

SuicideJunkie
Posts: 157
Joined: Sun Feb 22, 2015 2:40 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby SuicideJunkie » Fri Nov 10, 2017 6:28 pm UTC

You can hand an actual Chinese speaker a picture of a duck on a lake and ask them (in Chinese) "What kind of bird is on the water?" and they can answer; the room cannot, because while it contains a person with eyes that can see the picture, and that person (with his books) could tell you (the Chinese equivalent of) that ducks are a kind of bird and a lake is a body of water, he cannot connect the words for "duck", "bird", "lake", or "water" to the images in the picture.
I imagine the answer from the Chinese Room would be "Sorry, I've been blind since birth. Can you describe it for me?"
You'd get the same problems from an actual Chinese speaker who happened to be as blind as the Room. (While it does contain physical eyes in most constructions, they're a mechanism for thinking, not for seeing the environment)

In order to properly add vision to the Chinese Room, you'd have to put vastly more work into the instructions, and probably pass the image in as a bitmap to correspond with the existing speech-as-text input channel.
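
Something like this toy sketch (my own framing, not part of Searle's setup) is what I mean: the picture would arrive as just another string of symbols on the same channel as the text, and the rulebook would have to cover it.

[code]
# Toy sketch: serializing a picture into the room's only input format, a string
# of symbols. The "image", the question text, and the format are all made up.

question = "What kind of bird is on the water?"   # imagine this written in hanzi

duck_on_lake = [          # tiny 4x8 "bitmap"; 1 = duck, 0 = water/sky
    [0, 0, 0, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]

def serialize(image):
    """Flatten the bitmap into the same kind of symbol string as the text input."""
    return "IMG:" + ";".join("".join(str(px) for px in row) for row in image)

# Everything the rulebook ever sees is one undifferentiated stream of symbols:
room_input = serialize(duck_on_lake) + "|" + question
print(room_input)
[/code]

The "vastly more work" is then all the extra rules needed to relate those pixel symbols to the word symbols.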

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Fri Nov 10, 2017 8:07 pm UTC

I think you are overlooking the distinction I keep emphasizing between phenomenal and access consciousness.


I've just never seen a good argument for why it should be called "access consciousness" as opposed to "access to consciousness". I think everyone would agree that phenomenal consciousness is consciousness, or more accurately might just say that it's the only kind of consciousness.

Take the experiment discussed here that looks at the difference between the sharpness of experience vs. the ability to name and report the parts of that experience:

Block's definitions of these two types of consciousness leads us to the conclusion that a non-computational process can present us with phenomenal consciousness of the forms of the letters, while we can imagine an additional computational algorithm for extracting the names of the letters from their form (this is why computer programs can perform character recognition). The ability of a computer to perform character recognition does not imply that it has phenomenal consciousness or that it need share our ability to be consciously aware of the forms of letters that it can algorithmically match to their names.


I understand it as saying that the "additional computational algorithm for extracting the names" is what would be called "access consciousness"? But that doesn't sound like consciousness to me. I might be consciously aware of the output of that algorithm, but I'm not aware of how it works, i.e. it's not part of my consciousness.
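
To make that "algorithm for extracting the names of the letters from their form" concrete, here's a toy sketch (mine, not from the linked article): naive template matching, which maps a pixel pattern to a letter name without anything that looks like awareness of the form.

[code]
# Toy character recognition by template matching (illustration only; real OCR
# is far more involved). The templates and the input glyph are made up.

TEMPLATES = {
    "T": ["###",
          ".#.",
          ".#."],
    "L": ["#..",
          "#..",
          "###"],
}

def match_score(glyph, template):
    """Count how many cells of a 3x3 glyph agree with a template."""
    return sum(g == t for glyph_row, tmpl_row in zip(glyph, template)
                      for g, t in zip(glyph_row, tmpl_row))

def name_letter(glyph):
    """Return the name of the best-matching template."""
    return max(TEMPLATES, key=lambda name: match_score(glyph, TEMPLATES[name]))

print(name_letter(["###",
                   ".#.",
                   ".#."]))   # -> "T": a form is mapped to a name, nothing more
[/code]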

User avatar
Pfhorrest
Posts: 3967
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Fri Nov 10, 2017 9:04 pm UTC

Regardless of the aptness of the names, the meaning of what I said before hinges on distinguishing between the things they named. Let's call them something else and just leave the word "consciousness" out of it to avoid confusion. There are two concepts: one of them is the what-it's-like experience of being a thing of some kind (call that "P"), and the other is a kind of reflexive functionality that gives a thing access to information about itself (call that "A"). These can, of course both apply at once, and in the case of humans almost uncontroversially do: there is a what-it's-like-to-be-a-brain-with-reflexive-functionality-like-access-to-information-about-itself experience.

What I was saying before was that the photons, constituting as they do the elementary components of all the interactions anything is undergoing, likewise constitute the elementary components of the what-it's-like experience of being something undergoing that. But that doesn't mean that we have reflexive access to the information about every interaction with every photon we're undergoing.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Fri Nov 10, 2017 10:02 pm UTC

But that doesn't mean that we have reflexive access to the information about every interaction with every photon we're undergoing.

Ahhh, ok! I get what you're saying, you're right I was getting hung up on the name. And I think that's a really interesting point because it raises some good questions about consciousness. If we start with a bit of a recap:

  • Consciousness is a physical process that happens in the brain
  • Since the brain is made up of neurons, their structure undoubtedly plays a role in either how we experience consciousness or what we experience
  • Neurons are made up of molecules, which are made up of atoms, which are made up of subatomic particles, etc.
  • Unless consciousness is a physical characteristic that's completely unlike any other physical thing, there's something about the nature of all those particles that creates consciousness
  • But there's nothing unique about the particles in my brain, they're the same kinds of particles that make up everything else
  • Question One: where do the particles that create my mind/conscious experience end and the non-my-consciousness particles that make up the rest of the world start?
  • Even if we assume that some particular large scale structure of the brain is required for consciousness (as Searle seems to do), we can just rearrange the question a bit
  • We don't have conscious access to information about every photon our brains are interacting with, or even every neuron, and there are even large parts of our brain doing things that we don't have access to
  • But a defining characteristic of our consciousness is that it's unified. We don't experience separate consciousnesses happening in different parts of our brain, or even a graininess or "pixelation" of a consciousness made up of many smaller parts; we also don't experience an edge or limitation to our consciousness, we can experience as much information as can get in.
  • Question Two: At some point there's a neuron that's part of the system creating my consciousness, which is connected to a neuron that's not. Where is this boundary, and why can't we find it?
  • Question Two Rephrased: What keeps my consciousness, which is a unified creation of a bunch of neurons, from "leaking" into other nearby neurons? Or at a more basic level, how can I have access to the consciousness created by the atoms in my brain, but not the atoms in my skull or eyes that are interacting with them?

The easy way to ignore these questions is to reject the conclusion of the Chinese Room, but I can't find any reasonable way to do that that doesn't raise even more difficult questions. If minds can't be (matter independent) programs, then minds have to be dependent on the characteristics of the matter they're made of, but also seem to work in ways unlike anything else we've observed.

User avatar
Pfhorrest
Posts: 3967
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Fri Nov 10, 2017 10:33 pm UTC

If I understand you correctly, I think both of those questions are essentially questions about the nitty-gritty details of neurology. I don't have access to information about the state of a random calcium ion in my left femur because there is no structure that conveys that information to the part of my brain that I am having the experience of being. The boundary between that part of my brain and the rest of my brain or the rest of my body beyond that is just defined by whatever the physical neurology happens to be.

Kind of taking a step back, an image I hold in my mind to help understand this model goes something like this. Draw a diagram of every interaction between everything, represented by arrows flowing between points. The arrows flowing out of a point are the behaviors of that thing, which constitute all of its objective properties: a thing just is what it does. The arrows flowing into a point are the experiences of that thing, the qualia or the like. The arrows are essentially describing the flow of information, and the map of in-arrows to out-arrows of a point describes that thing's function: how it behaves in response to different experiences. Every kind of thing can be mapped this way, but most things have non-reflexive functions: something does something to them, they experience that and do something to something else in response.

Where it gets interesting is when a thing's function becomes reflexive, especially in particular ways. A human brain's function is highly reflexive, to the point that most of the arrows coming in or out of it bend around back to itself, and so our experience is not just of the world, but largely of ourselves, including how we're experiencing and reacting to the world; and likewise much of our (mental) behavior is not upon the world directly, but upon ourselves, changing our own state so that we react differently to our experience of the world. That's the interesting thing about human consciousness. If you stripped away that self-awareness and self-control and simplified the diagram down to just input from experience of the world leading straight to behavioral outputs to the world, you'd strip away everything that makes us "conscious" in an interesting way, even though there would still be some technical experience of the world, which we would be completely unaware we were having.
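
As a toy sketch of that diagram (my own wording, using Python just to be concrete): a thing is a mapping from in-arrows (experiences) to out-arrows (behaviors), and a reflexive thing routes some of its own output back into its own state.

[code]
# Toy sketch of the arrows picture: a thing is a function from experiences
# (in-arrows) to behaviors (out-arrows); a reflexive thing also feeds part of
# its output back into its own state. Class and string names are made up.

class Rock:
    """Non-reflexive: behavior depends only on the incoming interaction."""
    def react(self, experience):
        return f"absorbs {experience}"

class Brainish:
    """Reflexive: arrows bend back, so reactions depend on and change its own state."""
    def __init__(self):
        self.self_model = []                      # information it keeps about itself

    def react(self, experience):
        self.self_model.append(experience)        # an arrow looping back onto itself
        if len(self.self_model) > 1:
            return f"reacts to {experience}, aware it just experienced {self.self_model[-2]}"
        return f"reacts to {experience}"

rock, brain = Rock(), Brainish()
for e in ["red light", "loud noise"]:
    print(rock.react(e))     # same kind of output every time
    print(brain.react(e))    # output shaped by its record of its own prior experience
[/code]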
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby ucim » Fri Nov 10, 2017 11:09 pm UTC

TrlstanC wrote:Consciousness is a physical process that happens in the brain
No. Consciousness is the result of a physical process that happens in the brain. Important difference. Consciousness is not a process.

TrlstanC wrote:Unless consciousness is a physical characteristic that's completely unlike any other physical thing, there's something about the nature of all those particles that creates consciousness
Consciousness doesn't come from the particles, it comes from the relationship between the particles. As a (simple) analogy, an orbit can't exist without two particles to be in orbit around each other. But the orbit is not embodied in the particles; it is a relationship between two (specific) particles. All particles have the capability of being in such a relationship, but not all of them are, and there's nothing intrinsically different about the particles that are not.

TrlstanC wrote:Question Two: At some point there's a neuron that's part of the system creating my consciousness, which is connected to a neuron that's not. Where is this boundary, and why can't we find it?
Neurons don't create consciousness. The relationship between neurons is what does it. The boundary, if there is one, would be in whether these neurons are in that relationship with those neurons. This is a subject of neurology that is being explored, but we don't have much yet to go on.

As to the Chinese room, the problem with it is that consciousness embodies experience, and the Chinese room does not have the experience of the things it's conversing about. It might know the Chinese character for baseball, but it has never tasted a hot dog, never heard the roar of the crowd, and has never played ball with anybody. It doesn't grok baseball. Figure out how to get the Chinese room to experience baseball and you'll be well on your way towards convincing me there's no difference. But until then, that's a world of difference right there.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Sat Nov 11, 2017 1:59 am UTC

I don't think I'd be confident making any conclusions about what consciousness is or isn't, given how little we know about it; from an objectively epistemic perspective we know essentially nothing.

No. Consciousness is the result of a physical process that happens in the brain. Important difference. Consciousness is not a process.


"Process" is a pretty vague category of things, virtually anything that changes and has some result could be a process of some sort.

Consciousness doesn't come from the particles, it comes from the relationship between the particles.


Maybe? But then the orbit example sounds a lot like a process to me? And of course, a lot of things are the result of a relationship between particles. Or from another perspective, nothing exists by itself: what qualities does any particle have that aren't defined by its interaction or relationship with something else?

Neurons don't create consciousness. The relationship between neurons is what does it.


Again, how can we be sure? Maybe there's a single consciousness-causing neuron in all of us, and everything else just feeds it information? That's a testable hypothesis, but we can't say for sure whether it's true or not because we don't know enough about consciousness.

Figure out how to get the Chinese room to experience baseball and you'll be well on your way towards convincing me there's no difference.


Maybe someone wants to try and convince you there's no difference, but the point Searle was making (and I agree with) is that there is a difference.

User avatar
Pfhorrest
Posts: 3967
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Sat Nov 11, 2017 2:39 am UTC

FWIW my position, that I thought Tristan agreed with, is that the Room WOULD be no different from a conscious being IF IT WERE functionally the same (which it’s not), because anything beyond mere functionality that is needed for consciousness is something shared by all things, not something special about human brains.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Sat Nov 11, 2017 1:05 pm UTC

I would say that even if the Chinese Room were functionally equivalent to a person, it wouldn't be the same as a human, for two reasons:

  • Defining what counts as the room is somewhat arbitrary. If we can't even figure out what parts of a human are necessary for consciousness, then I don't think we could say what parts of a room were either.
  • The chinese room argument concludes that programs are not minds, and while consciousness is required for a mind, I'm not sure it's sufficient. I could imagine something that accomplished the same thing as a mind, and was also conscious, but wasn't equivalent.

User avatar
doogly
Dr. The Juggernaut of Touching Himself
Posts: 5232
Joined: Mon Oct 23, 2006 2:31 am UTC
Location: Somerville, MA
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby doogly » Fri Nov 17, 2017 3:31 pm UTC

The Chinese room argument is the purest form of begging the question.
LE4dGOLEM: What's a Doug?
Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood.

Keep waggling your butt brows Brothers.
Or; Is that your eye butthairs?

User avatar
Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7312
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Zamfir » Fri Nov 17, 2017 7:56 pm UTC

I have never understood how the guy is supposed to change anything about the argument. He's clearly crucial - if we imagine a silicon CPU in the room instead of the guy, it's not The Chinese Room Argument anymore. It's another argument. But if the guy was only pedalling to provide power to the room, then that surely is the same non-Chinese-Room argument, with the guy as superfluous decoration. What if he's replacing a single connection in the CPU, by pushing a button whenever a light comes on? Does it matter whether the button pusher speaks Chinese?

Supposedly, there is some point where he starts to matter, where his presence makes a difference compared to a thought experiment about a generic silicon machine that somehow passes a Turing test. But I don't see it, at all. He just seems like a superfluous distraction all the way.

User avatar
Sizik
Posts: 1159
Joined: Wed Aug 27, 2008 3:48 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Sizik » Sat Nov 18, 2017 1:46 am UTC

Taking the thought experiment at face value, it only demonstrates that CPU chips don't "understand" Chinese, even if they're running a Chinese-speaking strong AI program that passes the Turing test.
gmalivuk wrote:
King Author wrote:If space (rather, distance) is an illusion, it'd be possible for one meta-me to experience both body's sensory inputs.
Yes. And if wishes were horses, wishing wells would fill up very quickly with drowned horses.

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby ucim » Sat Nov 18, 2017 3:42 am UTC

Neurons don't understand Chinese either, even when they are part of a brain belonging to a native Chinese speaker.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
Pfhorrest
Posts: 3967
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Sat Nov 18, 2017 3:53 am UTC

I think a more illustrative modification to the thought experiment is to imagine that the guy in the room just memorizes his rulebooks, and then you let him out of the room. I at least would say that the guy still does not speak Chinese, because although he knows the relations between a bunch of hanzi, he doesn’t know what any single hanzi means, in terms of the phenomenal world. They are all just empty symbols to him. That is why the room also does not speak Chinese. Not because of anything metaphysically wrong with the substrate running the program, but because the program is itself deficient. Those rule books wouldn’t teach a human Chinese, so why would we expect them to teach a (manually executed) computer it?
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
ucim
Posts: 5634
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby ucim » Sat Nov 18, 2017 4:42 am UTC

If the guy "memorized the rulebooks" and wanted a hot dog, could he order one in Chinese? If the waiter said that they didn't have any, would the person be able to "apply the rulebooks" and figure out what he said?

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
Pfhorrest
Posts: 3967
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Pfhorrest » Sat Nov 18, 2017 4:59 am UTC

ucim wrote:If the guy "memorized the rulebooks" and wanted a hot dog, could he order one in Chinese?

As I recall Searle's setup, nope, because the guy has no means by which to connect his feeling of want for a hot dog to any hanzi (that's Chinese characters in case not everyone knows that). He only knows that he's supposed to reply to certain strings of hanzi with certain other strings of hanzi according to certain rules, but those hanzi don't connect to anything else besides other hanzi. And that's why I think Searle's thought experiment only proves something trivial (syntax is not semantics) and not the substantial thing he wanted to prove (computers can't think). If the books the guy had did include means to connect hanzi to other things, to give referents to the symbols -- like if he had picture books and speak-and-spell books and scratch-and-sniff books and whatever -- then I would say that to memorize those books just would be to have learned Chinese, and the Room (with the guy in it having to look a bunch of things up in his non-memorized books) as a whole does understand Chinese, and so would a computer (with appropriate artificial senses available to it) running the same program as laid out in the books as well.

If the waiter said that they didn't have any, would the person be able to "apply the rulebooks" and figure out what he said?

He would probably understand that the waiter was saying the negation of the question he had asked, since that's a purely syntactic relation, but he wouldn't know what the question he had asked meant (and so, I guess, wouldn't have been able to ask it, since he wouldn't know what to say to convey his desire for a hot dog).
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

morriswalters
Posts: 6939
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby morriswalters » Sat Nov 18, 2017 12:11 pm UTC

If the guy "memorized the rulebooks" and wanted a hot dog, could he order one in Chinese? If the waiter said that they didn't have any, would the person be able to "apply the rulebooks" and figure out what he said?
Can your smart phone learn Chinese by using Google Translate?

User avatar
Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7312
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Zamfir » Sat Nov 18, 2017 1:21 pm UTC

I think a more illustrative modification to the thought experiment is to imagine that the guy in the room just memorizes his rulebooks, and then you let him out of the room.

Human beings just cannot memorize large (computer-sized) amounts of precise, context-free data. Let alone perform exact operations on them in their head. Phone numbers already strain our capacity in this regard.

The Chinese Room is at least conceivable, apart from the glacial speed. If 'glacial' is even the right word - the room might, perhaps, do something like one operation per minute. That's a million years for every second of a desktop computer. Our hypothetical Turing-test AI program could be much more demanding than that desktop can handle. The remaining lifetime of the sun might well be too short for the Chinese Room to formulate a single answer.
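
Rough numbers, just to show where that kind of figure comes from (the throughput assumptions here are mine, not measurements):

[code]
# Back-of-the-envelope check; the ops-per-second figures are assumptions for
# illustration only.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

room_ops_per_sec = 1 / 60                    # one hand-executed rule per minute
for desktop_ops_per_sec in (1e9, 1e11):      # roughly one core vs. a whole modern chip
    slowdown = desktop_ops_per_sec / room_ops_per_sec
    room_years_per_desktop_second = slowdown / SECONDS_PER_YEAR
    print(f"{desktop_ops_per_sec:.0e} ops/s  ->  "
          f"{room_years_per_desktop_second:,.0f} room-years per desktop-second")

# Prints roughly 1,900 and 190,000 room-years per desktop-second, i.e. somewhere
# between millennia and a million-year ballpark, depending on what you assume.
[/code]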

Such considerations are more than technicalities, I think. The guy and the room and the 'rulebooks' are appeals to our intuition, but we do not have much intuition here. We find it misleadingly easy to imagine someone who learns a book by heart, and then has conversations he doesn't understand by applying rules from the book.

User avatar
chridd
Has a vermicelli title
Posts: 779
Joined: Tue Aug 19, 2008 10:07 am UTC
Location: ...Earth, I guess?
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby chridd » Thu Nov 23, 2017 4:39 am UTC

The assumption that I have issue with is the assumption that the question of whether a computer (or the Chinese room) understands things or is conscious is actually meaningful and has a truth value. (I don't think that what Searle is calling "observer-independent intelligence" is actually a meaningful concept.)

Words like "conscious", "understand", "intelligent", etc. are words humans have come up with to describe our experiences on the surface planet Earth, in a time period from sometime after humans developed language until the present day. In those conditions, things generally fall into either the category "has a brain that works in the particular way human brains work" or the category "lacks certain important abilities that humans have", and we say that things in the former category are conscious, can understand stuff, and can be intelligent, and things in the latter category aren't. If we're talking about whether something understands stuff, whether to say that it does understand stuff or to say that it doesn't understand stuff, we're making the assumption that it falls into one of those two categories, and that assumption doesn't hold in the Chinese room or in the case where we achieve strong AI.

We could expand the definitions of those words such that strong AI is or isn't intelligent, but then we're talking about how we should expand a definition, not about whether strong AI "really" understands stuff. If we want to know how we should expand the definition, then probably the relevant question is "Why does it matter?". If it matters because we want to treat conscious beings a certain way (e.g. giving them rights, considering them morally) then it's not a question about consciousness, it's a question about what we care about—do we care about AIs like we do humans? Maybe some people do and others don't; there might not be anything objective or observer-independent that can resolve this. Or maybe practical concerns will push us in a certain direction (e.g. we have to care, otherwise the robots will revolt, and we really don't want that; or maybe their desires and ways of modeling desires are so different from us that caring doesn't really make sense).
~ chri d. d. /tʃɹɪ.di.di/ (Phonotactics, schmphonotactics) · they (for now, at least) · Forum game scores
mittfh wrote:I wish this post was very quotable...
flicky1991 wrote:In both cases the quote is "I'm being quoted too much!"

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Thu Nov 30, 2017 8:51 pm UTC

And that's why I think Searle's thought experiment only proves something trivial (syntax is not semantics) and not the substantial thing he wanted to prove (computers can't think)


I believe he assumes that syntax is not semantics, or at least he uses the two words as if they mean different things. The Chinese Room part of the argument shows that syntax is not sufficient for semantics, i.e. if you have a program that's a bunch of syntax and you want to make it conscious, it's impossible to do that by just adding more syntax, or more complicated syntax, or the right kind of syntax. And while the argument is often described in terms of computers (because that's what we usually use to run programs), it's really an argument about programs. I believe Searle would say that some computers can be conscious (or think, if consciousness is required for thinking), because humans are a kind of computer - we can perform computations - and we can obviously think.

Human beings just cannot memorize large (computer-sized) amounts of precise, context-free data


But computers can, and all the person is really doing in the Chinese Room is acting like an inspector; their role of actually moving stuff around could be trivially automated by things that are clearly not conscious. The only reason the person has to be both the inspector and the actor carrying out the actions is to prevent people from arguing that there was some secret consciousness happening somewhere the person couldn't detect. By having the person do all the actions, the argument limits the places that consciousness could be happening to just them.

If we could actually make the Chinese Room, that would be an amazing achievement: it would essentially be a simulation of consciousness, which would imply not only that we understand what consciousness is, but that we understand it so well we can create a perfect simulation of it. And I think that actually highlights a way that the Chinese Room fails. It's a fantastic argument that I appreciate more the more I think about it, but I believe it was originally constructed as an implicit or explicit counterargument to the Turing test. And in that I think it fails, because the Turing test assumes that we don't know how consciousness works; if we did, we wouldn't need a test, we could just have a consciousness detector of some sort, or a program that reads MRI scans and tells you if the subject is conscious. But without a test like that we have to rely on other means, and right now the only way we test if something is conscious is by seeing if it acts like us. We know we're conscious and we believe consciousness is critical to how we act, so if something acts like us, we treat it like it's conscious. Turing just took that fact and applied it to the situation in which a machine acts like us, and even taking the Chinese Room into account I think the Turing test is a great test, because there are only two possibilities for passing a really robust Turing test:

1. We don't know how consciousness works but we've made a machine that acts conscious
2. We do know how consciousness works and we've made a machine that we know is merely simulating consciousness

In the second case we don't need the Turing test, and in the first it's far and away the safer choice, from an ethical perspective, to assume the machine is actually conscious than to assume it's not. And if consciousness has any evolutionary advantage at all (i.e. it's better than other forms of information manipulation), then there's good reason to think that creating consciousness might be the easiest way to make something act conscious.

Words like "conscious", "understand", "intelligent", etc. are words humans have come up with to describe our experiences on the surface planet Earth


I agree that we generally don't use a very good definition of consciousness, and that's probably because we usually use causal or physical definitions, and we don't know what the physical causes of consciousness are, so we can't do that. But that doesn't mean all definitions are impossible; for example, something like "being able to experience and remember pain and pleasure" seems like a fairly robust definition to me, at least for now.

elasto
Posts: 3125
Joined: Mon May 10, 2010 1:53 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby elasto » Fri Dec 01, 2017 7:19 am UTC

I don't really get why that person thinks 'remembering' is a key part of consciousness. If I torture someone then wipe their memory, that doesn't mean they weren't conscious. That feels like it'd remain true even if the delay between the torture and the wipe were made very small.

So we're left with the other part of their definition 'experiences pain and pleasure', but that seems vaguely tautological.

User avatar
Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7312
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Zamfir » Fri Dec 01, 2017 8:27 am UTC

If I torture someone then wipe their memory, that doesn't mean they weren't conscious. That feels like it'd remain true even if the delay between the torture and the wipe were made very small.

"wipe their memory" already accepts that a memory is formed, right? Suppose that the "memory wipe" tool acts really fast (under 100 milliseconds perhaps), basically preventing memory formation at all. It's not obvious that such person would be conscious, in our normal sense of the word.

If you look at sleep and dreams, then memory looks crucial to our concept of consciousness. Nightmares are not that far from your thought experiment - harrowing tortures that are not remembered. We only care about them to the extent that we wake up in the middle, and remember parts of them.

User avatar
chridd
Has a vermicelli title
Posts: 779
Joined: Tue Aug 19, 2008 10:07 am UTC
Location: ...Earth, I guess?
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby chridd » Fri Dec 01, 2017 9:38 am UTC

TrlstanC wrote:
Words like "conscious", "understand", "intelligent", etc. are words humans have come up with to describe our experiences on the surface planet Earth


I agree that we generally don't use a very good definition of consciousness, and that's probably because we usually use causal or physical definitions, and we don't know what the physical causes of consciousness are, so we can't do that. But that doesn't mean all definitions are impossible; for example, something like "being able to experience and remember pain and pleasure" seems like a fairly robust definition to me, at least for now.
You can make a definition and start using it, but then you don't know how much of the reasoning and intuition from before the definition still applies. Maybe under your definition there are beings that (for example) are conscious but have the same moral worth as a rock, or beings that are not conscious but are morally equivalent to a human. Consider: From my understanding of your definition, any AI based on reinforcement learning, no matter how simple, is conscious; but if we discover or create a being that doesn't care about how things were in the past or are in the present, and only cares about how things will be in the future (this is how a common algorithm used in chess programs works, for example), then it would not be conscious. (Also the definition of consciousness seems more like a definition of sentience to me—though my arguments here also apply to sentience. I don't think emotion is necessary for consciousness.)
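
For concreteness, here's roughly the kind of trivially simple reinforcement-learning agent I have in mind (a sketch of my own, with made-up state and action names): it receives reward signals and keeps a record of them, which arguably satisfies the letter of "experiences and remembers pain and pleasure".

[code]
# Minimal tabular Q-learning agent (illustration only). It receives reward
# ("pleasure"/"pain") signals and stores their traces in a value table, which
# arguably meets the letter of "experiences and remembers pain and pleasure"
# while plainly not being conscious in any intuitive sense.

import random
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.0          # GAMMA = 0: values converge to remembered rewards
q = defaultdict(float)           # the agent's "memory" of past pain and pleasure

def update(state, action, reward, next_state, actions):
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

actions = ["touch_stove", "eat_snack"]
for _ in range(100):
    a = random.choice(actions)
    reward = -1.0 if a == "touch_stove" else 1.0   # "pain" vs. "pleasure"
    update("kitchen", a, reward, "kitchen", actions)

print(q[("kitchen", "touch_stove")], q[("kitchen", "eat_snack")])
# roughly -1.0 and 1.0: remembered pain and pleasure, and nothing else going on
[/code]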

But I don't think the disagreement is just about definitions. Both you and Searle (and plenty of other people talking about consciousness) seem to be assuming that consciousness is something fundamental, which is something I disagree with. I think our brains are just complex circuits, without anything extra to make them conscious; we've labeled certain types of circuits as conscious and intelligent, but from the universe's "point of view" there isn't anything special about those circuits as opposed to, say, the circuits in your calculator. Human brains tend to like to model human brains as fundamentally different from non-brains, but that doesn't mean that they actually are fundamentally different. The model that human consciousness is something fundamental does work in everyday life, just like the model that the earth is flat and gravity's constant and there's a constant "up" does, but just like the flat earth model fails when we go into space or travel long distances, the fundamental consciousness model fails when we achieve strong AI.

I think asking whether strong AI will be conscious is like asking what direction is up and down in space, outside a planet's gravity. If space travel becomes the norm, maybe people will find a way to use "up" and "down"—perhaps some universal arbitrary convention, like calling the direction the north pole is pointing up, or perhaps something relative to the orientation of the spaceship, or based on artificial gravity in the ship—but that's more a practical choice than a matter of finding what "up" actually is, and simply not using the term in that context is also reasonable. Importantly, whatever our choice, there won't be a direction that satisfies all the conditions we intuitively expect "up" to satisfy; finding "up" in that context isn't going to tell us what we might want it to tell us (e.g., gravity might not be pointing down). Likewise, if strong AI comes, maybe people will find a way to use words like "conscious" and "understand", but the important thing is that strong AI won't satisfy our intuitions about what a conscious thing is like, and won't satisfy our intuition about what a non-conscious thing is like, and deciding whether a machine is conscious isn't going to tell us much about things like whether and how we should care about it or whether it can be reasoned with or whether it can perform tasks that humans can. And trying to find a definition now may end up with something that won't be useful (like deciding that "down" means "towards Earth" even when you're on Mars).
Last edited by chridd on Fri Dec 01, 2017 9:43 am UTC, edited 1 time in total.
~ chri d. d. /tʃɹɪ.di.di/ (Phonotactics, schmphonotactics) · they (for now, at least) · Forum game scores
mittfh wrote:I wish this post was very quotable...
flicky1991 wrote:In both cases the quote is "I'm being quoted too much!"

elasto
Posts: 3125
Joined: Mon May 10, 2010 1:53 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby elasto » Fri Dec 01, 2017 9:42 am UTC

Zamfir wrote:"wipe their memory" already accepts that a memory is formed, right? Suppose that the "memory wipe" tool acts really fast (under 100 milliseconds perhaps), basically preventing memory formation at all. It's not obvious that such person would be conscious, in our normal sense of the word.

Imagine if we tortured that person and they were flailing their limbs and screaming out in pain. Would the fact that within 100ms of stopping the torture they had no memory of it mean they weren't conscious during? I dunno. You might be right, but I can't say I'd bet my life on it. If it looks like a duck etc.

Thinking about it some more, I feel like memory is a crucial component of identity and the concept of self, but those are distinct from the concept of consciousness. To be conscious you merely need to be able to subjectively experience the world: to be aware of your own feelings; there is no necessity for a memory of those feelings to persist.

If you look at sleep and dreams, then memory looks crucial to our concept of consciousness. Nightmares are not that far from your thought experiment - harrowing tortures that are not remembered. We only care about them to the extent that we wake up in the middle, and remember parts of them.

I think the fact that if you wake someone up during a nightmare they are conscious of it right there and then is consistent with the idea that they were in fact conscious throughout; they just can't ordinarily remember that they were.

User avatar
Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7312
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Zamfir » Fri Dec 01, 2017 10:31 am UTC

I think you're making this more black-or-white than it is. Dreaming sleep is clearly different from deep unconsciousness, like a coma. But it is also rather different from being awake.

As an example: many people, especially children, do exactly what you describe while sleeping. They flail their limbs in terror, even shout and scream. Quite often, they will not wake up, and are even difficult to rouse during the night terrors. If people have such agonizing, seemingly painful episodes during the day, it is considered as a serious medical issue. They might get therapy or medication. If it happens during sleep, it is only considered a problem if it leads to lack of rest. Most people do not suffer during the day from their dreaming panics, and then the issue is mostly ignored.

morriswalters
Posts: 6939
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby morriswalters » Fri Dec 01, 2017 1:07 pm UTC

I'm unsure of the relevance of this, but I post it without comment. There are established cases of people who can't form new memories at all. They appear to be fully human, and they do have established personalities. They have short-term memories but no long-term ones.

elasto
Posts: 3125
Joined: Mon May 10, 2010 1:53 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby elasto » Fri Dec 01, 2017 2:08 pm UTC

Zamfir wrote:I think you're making this more black-or-white than it is. Dreaming sleep is clearly different from deep unconsciousness, like a coma. But it is also rather different from being awake.

Agreed, consciousness is a continuum. However, it's a black-and-white definition we are discussing ("person experiences and remembers feelings") so by necessity it's worth discussing edge cases to decide if that is a good definition.

As an example: many people, especially children, do exactly what you describe while sleeping. They flail their limbs in terror, even shout and scream. Quite often, they will not wake up, and are even difficult to rouse during the night terrors. If people have such agonizing, seemingly painful episodes during the day, it is considered as a serious medical issue. They might get therapy or medication. If it happens during sleep, it is only considered a problem if it leads to lack of rest. Most people do not suffer during the day from their dreaming panics, and then the issue is mostly ignored.

Agreed, but I'm not sure how much that informs us as to whether the above definition is correct.

Imagine you have to have a surgical procedure and there are two anaesthetics available: the first knocks you out in the conventional way but has a 10% chance of killing you. The second has no chance of killing you, but you remain fully conscious during the surgery; instead, you are unable to form any memories during that time.

Would you choose the second on the grounds that you aren't forming memories so aren't conscious? I remain unconvinced...

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 25817
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby gmalivuk » Fri Dec 01, 2017 4:26 pm UTC

Zamfir wrote:many people, especially children, do exactly what you describe while sleeping. They flail their limbs in terror, even shout and scream. Quite often, they will not wake up, and are even difficult to rouse during the night terrors. If people have such agonizing, seemingly painful episodes during the day, it is considered as a serious medical issue. They might get therapy or medication. If it happens during sleep, it is only considered a problem if it leads to lack of rest. Most people do not suffer during the day from their dreaming panics, and then the issue is mostly ignored.
Okay, but observations that support the already accepted fact that sleeping is different from being awake don't imply anything about consciousness, unless we start with the premise that sleepers are unconscious. And they only imply anything about the connection between memory and consciousness if we start with the assumption that lack of memory is why we treat night-terrors differently from similar waking experiences.

TrlstanC wrote:
Human beings just cannot memorize large (computer-sized) amounts of precise, context-free data

But computers can, and all the person is really doing in the Chinese Room is acting like an inspector, their role of actually moving stuff around is trivially automated by things that are clearly not conscious. The only reason the person had to both be the inspector and the actor carrying out the actions is to prevent people from arguing that there was some secret consciousness happening somewhere the person couldn't detect. By having the person do all the actions this limits the places that consciousness could be happening to just them.
Sure, that's how the traditional Room argument works, but the point about human memory was in response to a question about whether that human would know Chinese if they memorized the rulebooks.

The Chinese Room part of the argument shows that syntax is not sufficient for semantics, ie. if you have a program that's a bunch of syntax and you want to make it conscious, it's impossible to do that by just adding more syntax or more complicated syntax or the right kind of syntax.
I would say that if you're adding stuff that results in the Chinese Room being able to carry on a fluent, human-like conversation in Chinese, then you didn't solely add syntax. Within a human brain, both semantics and syntax amount to kinds of connections between different things, so you can't just declare that every internal connection is purely syntactic.
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Wed Dec 06, 2017 3:36 pm UTC

I would say that if you're adding stuff that results in the Chinese Room being able to carry on a fluent, human-like conversation in Chinese, then you didn't solely add syntax


I'd agree: given the construction of the Chinese Room, if a person could "operate" it smoothly just from memory, then I think that would have to imply it's more than merely memorizing a program. And that kind of highlights a benefit of consciousness: it's likely to be a very efficient way to manipulate at least some kinds of data. If we imagine the Chinese Room not as a machine but as a program that can communicate naturally in Chinese, then we can think about the different ways different computers might run that program. In the original argument the human is the computer that runs the program, with the addition of printing, paper, and pen, which are used as the computer's memory. We could also imagine rewriting the same program in a different language, say C, and running it on a very large supercomputer. If we deconstructed that program and went through it operation by operation, we could check each step to see if it required consciousness to work. Assuming such a program was possible, we wouldn't see a need for any step to feel good or bad about anything, or to have any conscious experience; in fact, each step would look almost exactly the same: some electrons get pushed around through a transistor, a wire, some RAM, etc. Whether the computer has a conscious experience of pain or pleasure, or nothing at all, when those electrons move has no bearing on the outcome of the program.
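
For instance, here's a drastically simplified sketch in C of what one "step" of such a program might look like (the rule table and symbol names are made up purely for illustration, not taken from Searle): every operation reduces to a comparison and a copy, and nothing about any individual step requires a conscious experience to produce the output.

```c
/* Hypothetical, drastically simplified sketch of one "step" of a
 * Chinese-Room-style program. The rulebook and symbols are invented
 * for illustration; the point is only that each operation is a
 * comparison and a copy. */
#include <stdio.h>
#include <string.h>

struct rule {
    const char *input;   /* symbol sequence received */
    const char *output;  /* symbol sequence to hand back */
};

/* Toy rulebook: just a lookup from one opaque token to another. */
static const struct rule rulebook[] = {
    { "symbol_A", "symbol_X" },
    { "symbol_B", "symbol_Y" },
};

/* One step: scan the rulebook, emit the matching output (or a default). */
static const char *step(const char *input)
{
    for (size_t i = 0; i < sizeof rulebook / sizeof rulebook[0]; i++) {
        if (strcmp(rulebook[i].input, input) == 0)
            return rulebook[i].output;
    }
    return "symbol_unknown";
}

int main(void)
{
    /* Each call is just bits being compared and moved around. */
    printf("%s\n", step("symbol_A"));
    printf("%s\n", step("symbol_B"));
    return 0;
}
```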

But if we tried to rewrite that program in a language that a human could run, we would have to compress it enormously, since we have neither the capacity to store that much information nor the ability to carry out the required number of operations per second. So, for example, instead of a giant database of how good or bad we feel about every experience, one that has to be constantly re-sorted as new memories are formed and old ones are reconsidered, we could attach every memory to a conscious experience of pleasure or pain. We could also store memories as relationships to each other, and to our perspective on the world, rather than as absolutes. This would allow the human to run an equivalent program while using the biological machinery we have to store and retrieve lots of data efficiently. In essence, we'd be substituting semantic relationships for some of the syntax; the program would change to make use of different hardware more effectively while still producing equivalent outputs.
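
Just to make that concrete, here's a made-up sketch (the fields and names are mine, not anything Searle or anyone else has proposed) of memories stored by their links to each other, each carrying a pleasure/pain value rather than sitting in an absolute table:

```c
/* Invented sketch of memories stored as a small graph: each memory
 * carries a pleasure/pain value and is reached by following links to
 * related memories, not by looking up an absolute index. */
#include <stdio.h>

#define MAX_LINKS 4

struct memory {
    const char *label;                 /* what the memory is "of" */
    double valence;                    /* pleasure (+) or pain (-) */
    struct memory *links[MAX_LINKS];   /* related memories */
    int link_count;
};

static void link_memories(struct memory *a, struct memory *b)
{
    if (a->link_count < MAX_LINKS) a->links[a->link_count++] = b;
    if (b->link_count < MAX_LINKS) b->links[b->link_count++] = a;
}

int main(void)
{
    struct memory burn  = { "touched a hot stove", -0.9, {0}, 0 };
    struct memory stove = { "the stove in the kitchen", 0.0, {0}, 0 };

    link_memories(&burn, &stove);

    /* Recall works by following links between memories. */
    printf("\"%s\" is linked to \"%s\" (valence %.1f)\n",
           stove.label, stove.links[0]->label, stove.links[0]->valence);
    return 0;
}
```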

Imagine you have to have a surgical procedure and there are two anaesthetics available: the first knocks you out in the conventional way but has a 10% chance of killing you. The second has no chance of killing you, but you remain fully conscious during the surgery; instead, you are unable to form any memories during that time.


This is a good thought experiment, but I think it misses how absolutely fundamental memory is to our experience and our ideas about ourselves. There are certainly cases where people have limited short-term memory, or lose long-term memories, or have memories blocked. But I don't think we're aware of anyone who has simply had no memory at all. In fact, I think it would be hard to distinguish a person who was unconscious from a person who was conscious but had zero memory of any kind: no recall of the past or even of a moment before, and no carrying forward of any information at all. I suspect a person like that would act very much as if they were unconscious. And while I think it would be terrible to undergo surgery even if I forgot it afterwards, I'm having trouble imagining what it would be like to experience surgery but be unable to relate it to anything else: to have no memory of who I am, or of what pain was like during my life or a moment ago, and probably no ability to form an idea of what the future will be like, or even of what the future is. In fact, it seems entirely possible that the drugs that make us unconscious are doing just that: cutting off all memory completely.

To be conscious you merely need to be able to subjectively experience the world: to be aware of your own feelings; there is no necessity for a memory of those feelings to persist.


I want to be clear that I'm not trying to describe the underlying physical causes of consciousness. I think there is some physical thing which allows for a conscious experience, and it's entirely possible that that thing can cause just the experience of light or sound, without pain or pleasure or memory. But I also think that when we eventually discover what that physical thing is, we'll start thinking of human consciousness as a subset of that larger class of things. It's the same way that we think of magnets as special even though virtually every physical interaction in our lives is caused by the electromagnetic force: even though the underlying causes are the same, what's important to us is when they're aligned or used in particular ways. Here are a few examples I've used to convince myself:

  • Imagine that some theories of panpsychism are correct and conscious experiences are a universal feature of matter: they're just some other kind of fundamental force, or the subjective experience of some force we already know about. A rock could be constantly experiencing some mishmash of light and sound and heat, but it doesn't have organs to trigger those experiences in an organized way, or a nervous system to channel and control them. They'd just be popping off at random, depending on whatever unrelated physical interactions happened to be occurring. I don't think we'd want to call that experience the same as what we call human consciousness. That isn't necessarily how things are, but given our understanding it's at least a possibility, so any definition of consciousness we have now should work with that possibility. Whether subjective experience turns out to be very common or very rare, the thing we end up calling consciousness will probably be the unique way that humans use it, which I think has to include at least memory and pain/pleasure.
  • Memory carries with it a few requirements that I think are easy to ignore. It has to have a (physical) effect, because I don't think there's any way to carry information forward in time without making some kind of physical change in the world. And for the memory to be recallable, it has to be able to interact with something in a meaningful way: the conscious rock from the example above might have a bunch of atoms that have stored a string of binary in their polarities, but unless that data can be retrieved somehow, I don't think we'd want to call it a memory. Memory also has to involve some sort of relationship. Let's say the rock could only experience pain and pleasure, but didn't have any experience of space or time or anything else: without some other reference, how is experiencing new pleasure different from remembering old pleasure? In humans our memories are a complex combination of conscious experiences; I don't think that kind of variety is required for consciousness, but I do think some minimum level of complexity is required to store and retrieve memories, and it seems like it would have to be more than just pain and pleasure. I don't know what the limits or possibilities of that additional requirement are, but I do think memory implies some additional kind of reference.
  • Imagine I was born without the ability to experience pain and pleasure, but could still form memories and have other subjective experiences. I don't think I would act conscious at all. I might have some basic instinctual responses, but even basic learning would seem impossible, and I doubt I'd be able to form my subjective experiences into any kind of idea about the world, or myself, or any relationships at all. Pain and pleasure are the tools that allow us to piece together experiences in a useful way, and while they're very simple, through a recursive process of building up meaning on top of meaning they're what holds together our entire experience of the world. Without those values to make sense of subjective experience, I think anything like consciousness that I'd have would be basically just random flashes of light and sound and pressure. It would be like being the rock in a panpsychism universe: the underlying physical effects would be there, but they wouldn't be organized in the way we recognize as human consciousness.

User avatar
chridd
Has a vermicelli title
Posts: 779
Joined: Tue Aug 19, 2008 10:07 am UTC
Location: ...Earth, I guess?
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby chridd » Wed Dec 06, 2017 9:32 pm UTC

TrlstanC wrote:I think there is some physical thing which allows for a conscious experience
This is the assumption I'm objecting to—I think the only physical thing that allows for conscious experience is the fact that physics allows for complex circuits to exist. Consciousness isn't fundamental. Consciousness doesn't need something special to exist. Our brains aren't tapping into some fundamental force of consciousness. Our brains are just circuits that process and store data—just like a computer can. Our brains put things like themselves into a category, and labeled that category "consciousness", but it's only our brains that split up the universe that way, that divide the universe into things that are conscious and things that are not conscious—there's nothing different about conscious and unconscious beings at a fundamental physical level.

At the very least, whether this is true or not, this is one possibility we need to consider when reasoning about these things.
~ chri d. d. /tʃɹɪ.di.di/ (Phonotactics, schmphonotactics) · they (for now, at least) · Forum game scores
mittfh wrote:I wish this post was very quotable...
flicky1991 wrote:In both cases the quote is "I'm being quoted too much!"

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Wed Dec 06, 2017 10:37 pm UTC

This is the assumption I'm objecting to—I think the only physical thing that allows for conscious experience is the fact that physics allows for complex circuits to exist


Well, that's the exact opposite conclusion from Searle's argument. So to defend that point of view I think you're going to have to show that somehow the Chinese Room (or its equivalent) is conscious? A lot of people have tried, and I'm not aware of any that I believe succeeded.

elasto
Posts: 3125
Joined: Mon May 10, 2010 1:53 am UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby elasto » Thu Dec 07, 2017 12:16 am UTC

TrlstanC wrote:Well, that's the exact opposite conclusion from Searle's argument. So to defend that point of view I think you're going to have to show that somehow the Chinese Room (or its equivalent) is conscious? A lot of people have tried, and I'm not aware of any that I believe succeeded.

I could be wrong, but I think all the Chinese Room shows is that it's possible for something to converse convincingly without needing to be conscious. It doesn't say anything at all about systems constructed on a different basis - in particular, systems with tight feedback loops that effect some kind of 'self-awareness'.

The Chinese Room probably isn't conscious but I am less sure that Google's Deepmind isn't.

And if the Chinese Room were tweaked such that, instead of all conversations being enumerated, it had rules for altering its own data and rules based on external interactions, well, it might well be conscious too.

User avatar
TrlstanC
Flexo
Posts: 370
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Thu Dec 07, 2017 1:00 am UTC

It doesn't say anything at all about systems constructed on a different basis


That's right, and I think Searle would absolutely agree that there are some computers that can run a program to speak Chinese fluently and that are conscious. It's just that those computers are humans: they're built in such a way that they create or utilize consciousness, and the program they run takes advantage of that fact.

If we want to apply that same logic to any other kind of machine, that's totally possible. In fact, it should be much easier to figure out whether an electronic (or mechanical) computer running a program is conscious or not: we can just check the program for the points where it gets feedback from the conscious machinery. It's like a program doing image recognition: the program doesn't create the ability to capture images; there has to be a camera attached at some point. We can look at the program and see at which points it gets information back from the camera, and if those checks pass, we know the camera is on and providing data.
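
As a rough sketch of what I mean (the camera function and its behaviour here are invented for illustration, not any real driver API), the only place outside data enters a program like this is a single call you can point to and inspect:

```c
/* Invented sketch of "look where the program gets feedback from the
 * hardware": the only place external image data enters this program
 * is read_camera_frame(), so that's the point to inspect to see
 * whether a camera is really attached and providing data. */
#include <stdio.h>
#include <stdbool.h>

/* Stand-in for a driver call; a real system would talk to hardware here. */
static bool read_camera_frame(unsigned char *buffer, int size)
{
    /* Pretend no camera is attached: the call fails. */
    (void)buffer;
    (void)size;
    return false;
}

int main(void)
{
    unsigned char frame[64];

    if (read_camera_frame(frame, sizeof frame)) {
        printf("Camera is attached and providing data.\n");
        /* ...image-recognition steps would go here... */
    } else {
        printf("No feedback from the camera: nothing to recognize.\n");
    }
    return 0;
}
```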

If a machine has been programmed to be conscious, and it's acting conscious, then we just look at its program and see where it's getting the feedback from the conscious part of the computer. But like a camera, the program can't create consciousness; it can only utilize the physical thing that exists already. At least, that's the best interpretation of Searle's argument I can come up with. And it's entirely possible that someone could show that we've got the Chinese Room wrong, that it really could be secretly conscious somehow. But so far, at least, I haven't seen any convincing argument for that.

User avatar
chridd
Has a vermicelli title
Posts: 779
Joined: Tue Aug 19, 2008 10:07 am UTC
Location: ...Earth, I guess?
Contact:

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby chridd » Thu Dec 07, 2017 1:07 am UTC

TrlstanC wrote:Well, that's the exact opposite conclusion from Searle's argument. So to defend that point of view I think you're going to have to show that somehow the Chinese Room (or its equivalent) is conscious? A lot of people have tried, and I'm not aware of any that I believe succeeded.
My position, though, isn't so much that the Chinese Room or AI is conscious—nor that it isn't. My position towards the question of AI consciousness is more like the position ignostics have towards the question of whether a god exists. I don't think "consciousness" is really a useful concept when applied outside the context of purely distinguishing people who are awake from those who are sleeping/comatose/fainting/etc. (and those contexts don't generally use the word "consciousness", just "conscious"), and I think most philosophical arguments that involve "consciousness" at all, regardless of which way they're arguing, are starting from incorrect assumptions. In some sense, I think the premise that there's a thing called consciousness is wrong; consciousness is meaningful in some contexts, but the Chinese Room isn't one of those contexts.

TrlstanC wrote:If a machine has been programmed to be conscious, and it's acting conscious, then we just look at its program and see where it's getting the feedback from the conscious part of the computer. But like a camera, the program can't create consciousness; it can only utilize the physical thing that exists already.
No. Consciousness isn't a physical thing that exists already.

elasto wrote:And if the Chinese Room were tweaked such that, instead of all conversations being enumerated, it had rules for altering its own data and rules based on external interactions, well, it might well be conscious too.
The Chinese Room isn't all conversations being enumerated; it's more like performing long division, but way more complicated (performing long division doesn't require a table enumerating all possible pairs of numbers and their quotient). And the Chinese Room does allow for storing data, by writing it down in the person's notes.
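
To give a toy illustration of what I mean by rules rather than a table (my own sketch; the numbers are arbitrary): long division works by applying one small rule over and over to whatever digits come in, with no enumeration of answers stored anywhere.

```c
/* Long division done digit by digit: no table of all dividend/divisor
 * pairs is stored; the same small rule is applied repeatedly. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *dividend = "98765432123456789";  /* arbitrary example */
    int divisor = 7;
    int remainder = 0;

    printf("%s / %d = ", dividend, divisor);
    for (size_t i = 0; i < strlen(dividend); i++) {
        int current = remainder * 10 + (dividend[i] - '0');
        printf("%d", current / divisor);   /* next digit of the quotient */
        remainder = current % divisor;     /* carried into the next step */
    }
    printf(" remainder %d\n", remainder);
    return 0;
}
```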
~ chri d. d. /tʃɹɪ.di.di/ (Phonotactics, schmphonotactics) · they (for now, at least) · Forum game scores
mittfh wrote:I wish this post was very quotable...
flicky1991 wrote:In both cases the quote is "I'm being quoted too much!"

