The Assumptions of Searle’s Chinese Room Thought Experiment


Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby TrlstanC » Thu Dec 07, 2017 2:54 am UTC

I'd say that being awake and being conscious are correlated, but they're definitely not the same thing. Otherwise I don't think we'd need words for dreaming or sleepwalking.

And as far as being useful, I'd say that consciousness is, in a sense, the only thing that matters. To see why, imagine a pill that, if you took it, would end all subjective experience forever. Your consciousness would go away, but your body would keep working exactly the same. From the outside you'd seem completely unchanged, but subjectively there'd be nothing.

I think we'll eventually discover that's physically impossible, but suppose it could actually work like that. From your perspective, how would that pill be different from suicide? What would it cost to convince you to take the pill? To me it sounds the same as death, which implies that the only important thing about being alive is being conscious.

At some point in the future we're going to have the ability to make machines that act conscious. Before that happens we should have a pretty good idea of whether that's a good thing to do or not. Also, we're going to keep making machines that do all sorts of other stuff really well, and I think worrying about whether those machines are going to accidentally become conscious because they're really complex is probably missing the point.


Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby SuicideJunkie » Thu Dec 07, 2017 8:26 pm UTC

I'm pretty sure that a lot of people would take the trade fairly easily.
People give their lives for their children all the time. And an option where they still get to have you around (although you don't get to have them around) is even better than that.


My interpretation of chridd's description, with computers:
Alive is powered on; consciousness is like playing a game.
There are many devices out there that can't play games. There are many that can, but happen not to do so. There are also a bunch that are playing stupid games worthy of derision.
There isn't a fine line between running the screen saver (which generates a pretty, dream-like state), drowsily "playing" Progress Quest, and actively being a bot in Counter-Strike.

E.g.: humans have been playing Civilization, but other animals are puttering around with Flappy Bird and Cookie Clicker, and we readily dismiss those as not real games/consciousness.
Defining what counts as a game is tricky too.


Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby elasto » Thu Dec 07, 2017 8:47 pm UTC

TrlstanC wrote:At some point in the future we're going to have the ability to make machines that act conscious. Before that happens we should have a pretty good idea of whether that's a good thing to do or not. Also, we're going to keep making machines that do all sorts of other stuff really well, and I think worrying about whether those machines are going to accidentally become conscious because they're really complex is probably missing the point.

I don't know. I think we'll have the ability to make an accidentally conscious program far earlier than we will understand how to do so deliberately. I also think it might not be until years after creating said code that we realise it was actually conscious all along. I think it's definitely worth considering because there's an off-chance we could theoretically be torturing conscious beings right this very second.


Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby ucim » Thu Dec 07, 2017 9:04 pm UTC

elasto wrote:I think it's definitely worth considering because there's an off-chance we could theoretically be torturing conscious beings right this very second.
...and there's a not-so-off chance that they will soon be able to exact their revenge.

Jose


Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby chridd » Thu Dec 07, 2017 11:40 pm UTC

SuicideJunkie wrote:My interpretation of chridd's description, with computers:
Alive is powered on; consciousness is like playing a game.
There are many devices out there that can't play games. There are many that can, but happen not to do so. There are also a bunch that are playing stupid games worthy of derision.
There isn't a fine line between running the screen saver (which generates a pretty, dream-like state), drowsily "playing" Progress Quest, and actively being a bot in Counter-Strike.

E.g.: humans have been playing Civilization, but other animals are puttering around with Flappy Bird and Cookie Clicker, and we readily dismiss those as not real games/consciousness.
Defining what counts as a game is tricky too.
Pretty much. I'd add to that: the Counter-Strike bot is most likely programmed to think about games and care about games. As a result, it may end up thinking that the world is divided cleanly into games and non-games (because it's only programmed to think about games and non-games), and caring about whether any particular thing is or isn't a game. If it ends up doing philosophy, it may then end up building its philosophy on the idea that the universe is divided into games and non-games, that this distinction is fundamental, and that for whatever game-like edge case it can come up with, there's an objective answer to the question of "Is this a game?". None of this, though, means that the universe is in fact divided cleanly into games and non-games—that the bot thinks this way is just a result of its programming.

Humans are "programmed" to care about humans, and to think that the world is divided cleanly into humans and non-humans. That doesn't mean that it is divided that way, though.

TrlstanC wrote:And as far as being useful, I'd say that consciousness is, in a sense, the only thing that matters.
"Consciousness matters" is basically just another way of saying "we care about consciousness", which is perfectly consistent with the idea that the distinction between consciousness and non-consciousness is an artifact of how humans think about stuff rather than something which is objectively true. Just because we care about something doesn't mean that it's a coherent concept. (Perhaps the answer to "Should we build strong AI?" is "No, because it'll break humans' model of the world".)

If we care about consciousness, though, the question probably isn't really about consciousness, but rather about what we care about. This is sort of like the XY Problem: "I want to know whether I should care about robots. I know! I'll try to answer 'Are robots conscious?' and use that to answer my original question." But maybe determining whether robots are conscious isn't a good way to determine whether we should care about them. Maybe it'll turn out that certain robots do technically fit our definition of "conscious", but we don't really care about them, or vice versa. (As an aside: One of my thoughts a few years ago when reading a discussion about whether fetuses are people was, "Why can't we just say they're people who don't have a right to life?" Also this "maybe things will fit our definition but we won't care about them" is basically my main objection to the idea of objective morality.)

If we reframe the question as "Do we care about robots?", then that gives us additional possibilities. Maybe some people care about robots and others don't, and there's no objective fact that can resolve this. Or maybe it'll turn out that there's a practical reason why it's necessary or impossible to care about robots as we do humans, and any philosophical arguments are moot.


Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby ucim » Fri Dec 08, 2017 12:24 am UTC

chridd wrote:If we reframe the question as "Do we care about robots?"
But the question is "Should we care about robots?", and it's a moving target because we keep on building new kinds of robots (including robots, such as social networks, that incorporate people as a component). So we really can't answer the "x" directly, and need an x/y question to give some stability to the answer.

Jose


Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby Yakk » Fri Dec 08, 2017 4:45 pm UTC

My large problem with Searle's Chinese Room is that it imagines something we know not to be conscious -- a room with some books in it -- pretends that room is capable of acts we'd consider conscious, and then pretends that is an argument.

To me, it is like holding up some water and blood and charcoal, arranging them in some Rube Goldberg way, and pretending that is an analogy of the brain. The Rube Goldberg contraption is obviously not conscious, so neither is the brain.

A "Chinese Room" as described in some descriptions of the Chinese Room is a computing device that requires more mass than the entire universe to store its state. A consciousness-emulating "Chinese Room" is no more the dusty room with scraps of paper and someone following rules they look up out of books than a human brain is a [stick with a cotton head dipping into water](https://en.wikipedia.org/wiki/Drinking_bird).

---

The electromagnet argument, that a simulation of an electromagnet is not electromagnetic, depends on how well you simulate it. Suppose we make a box with a simulated electromagnet in it, and its interface to the world is picking up iron filings. The simulation's job is to pick up the iron filings as if it were an electromagnet, to turn itself off and on with a switch, and in general to act like an electromagnet.

That simulated electromagnet is pretty much an electromagnet.

I guess the difference is, you can *predict* what an electromagnet will do in a simplified version of reality without actually simulating an electromagnet to that level of fidelity. And insofar as your simplified model of the world and the magnet is accurate, you can correctly predict what happens.

So one could imagine a system that *predicts* what a conscious being does based on a model of its experiences without that process itself being conscious. You could even feed it a simplified environment (an abstraction of sense experience), and if your abstraction was rich enough, generate the "correct" responses and fool everyone.
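
Here's a toy sketch of that simulate-vs-predict distinction. The physics is deliberately fake, and the function names and constants are mine; the point is only that one code path steps through the dynamics while the other jumps straight to an answer from a simplified model.

```python
def simulate_pickup(current_amps: float, distance_m: float) -> bool:
    """'Simulate': step a (toy) iron filing through fake dynamics."""
    pos, velocity = distance_m, 0.0
    for _ in range(1000):                          # crude fixed-step integration
        pull = current_amps / (pos ** 2 + 1e-9)    # made-up attraction law
        velocity += (pull - 9.8) * 0.001           # net of gravity, dt = 1 ms
        pos -= velocity * 0.001                    # positive velocity -> toward magnet
        if pos <= 0:
            return True                            # filing reached the magnet
    return False

def predict_pickup(current_amps: float, distance_m: float) -> bool:
    """'Predict': skip the dynamics entirely, use a rule of thumb."""
    return current_amps / (distance_m ** 2 + 1e-9) > 9.8

# Both answer the same question; only one "does the physics".
print(simulate_pickup(5.0, 0.1), predict_pickup(5.0, 0.1))     # True True
print(simulate_pickup(0.01, 0.5), predict_pickup(0.01, 0.5))   # False False
```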

elasto wrote:Imagine you have to have a surgical procedure and there are two anaesthetics available: the first knocks you out in the conventional way but has a 10% chance of killing you. The second has no chance of killing you, but you remain fully conscious during the surgery; instead, you are unable to form any memories during that time.

Would you choose the second on the grounds that you aren't forming memories so aren't conscious? I remain unconvinced...

So, there is reason to believe that some forms of anaesthetic (A) prevent memory formation and (B) paralyze you.

And in some cases, (A) fails and (B) does not.

https://en.wikipedia.org/wiki/Anesthesia_awareness

What's more, some anaesthetics leave the patient able to respond to instructions to some limited extent, and to move and respond to stimuli in a way that seems to indicate pain, while unable to form memories.


Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby elasto » Fri Dec 08, 2017 8:59 pm UTC

Yakk wrote:What's more, some anaesthetics leave the patient able to respond to instructions to some limited extent, and to move and respond to stimuli in a way that seems to indicate pain, while unable to form memories.

That's worrisome. It's possible to imagine someone in that situation either being conscious or unconscious - think of a sleepwalker responding to an instruction to return to bed. My point was that I personally would worry about being under the effect of such an anaesthetic because I might still be conscious...

Thinking about it, I suppose the concepts of consciousness and free will are pretty intertwined, and free will would seem to require memory - at least for the duration of the subjective reflection over the decision. How can you make a conscious decision if you can't consciously remember the stimuli triggering the decision during that decision (even if heavily compressed via, say, being converted into an emotion)?

Additionally, my hunch is consciousness is not possible without feedback loops (if you are self-aware you are also aware of your self-awareness) and what is a feedback loop if not a form of memory?
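
Here's a minimal sketch of what I mean (the update rule and its 0.9/0.1 weights are an arbitrary toy of mine, not a model of anything biological): the state carried around the loop is the memory.

```python
# A feedback loop is a recurrence: the next state depends on the
# current state, so past inputs persist in it. That persistence is
# a (very crude) form of memory.

def step(state: float, stimulus: float) -> float:
    return 0.9 * state + 0.1 * stimulus   # leaky integrator

state = 0.0
for stimulus in [1.0, 1.0, 0.0, 0.0, 0.0]:
    state = step(state, stimulus)
    print(round(state, 3))   # 0.1, 0.19, 0.171, 0.154, 0.139
```

The early stimuli are long gone by the last iteration, yet they still shape the state - the loop "remembers" them.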

So, on reflection, I guess I agree that some form of memory is required for consciousness - though it could take a form totally unlike what human beings experience and so be totally incomprehensible to us - yet still consist of subjective qualia nonetheless.

Maybe some versions of the Chinese Room could be conscious after all...


Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby ericgrau » Sat Dec 09, 2017 12:59 pm UTC

I don't know about the feasibility of A.I., but the thought experiment seems extremely weak.

The paper filing cabinet itself could have A.I. The filing cabinet (and/or the pencil and paper used for writing data files) may fill a city and take thousands of years for a human running it to respond to a single query, but it could still have A.I. The thought experiment seems to presuppose that an inanimate object doesn't have A.I., which is your first natural response to a filing cabinet. But the whole thought experiment is trying to prove or disprove A.I. in a man-made object, so it's circular.
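
For scale, here's the arithmetic with numbers I've made up purely for illustration:

```python
# Rough arithmetic for how slow a hand-operated, city-sized rule
# system would be. Both inputs below are assumptions, not measurements.
lookups_per_reply = 10 ** 10   # assume ten billion rule applications per reply
seconds_per_lookup = 30        # walk to a drawer, read a card, write a result

seconds_per_year = 60 * 60 * 24 * 365
years = lookups_per_reply * seconds_per_lookup / seconds_per_year
print(f"~{years:,.0f} years per reply")   # ~9,513 years
```

Tweak the guesses and you get anywhere from centuries to millions of years, but the qualitative point survives: mind-bogglingly slow, and yet nothing about the slowness itself rules out A.I.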


Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Postby gmalivuk » Sun Dec 10, 2017 2:46 am UTC

elasto wrote:I think all the Chinese Room shows is that it's possible for something to converse convincingly without needing to be conscious.

No, the Chinese Room argument more or less assumes that (and then concludes that therefore passing a linguistic Turing test isn't sufficient to demonstrate consciousness).

