
Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 07, 2017 2:54 am UTC
by TrlstanC
I'd say that being awake and being conscious are correlated, but they're definitely not the same thing. Otherwise I don't think we'd need words for dreaming or sleepwalking.

And as far as being useful, I'd say that consciousness is, in a sense, the only thing that matters. To see why, imagine a pill that, if you took it, would end all subjective experience forever. Your consciousness would go away, but your body would keep working exactly the same. From the outside you'd seem completely the same, but subjectively there'd be nothing.

I think we'll eventually discover that's physically impossible, but in principle it could actually work like that. From your perspective, how would that pill be different from suicide? What would it cost to convince you to take the pill? To me, it sounds the same as death, which implies that the only important thing about being alive is being conscious.

At some point in the future we're going to have the ability to make machines that act conscious. Before that happens we should have a pretty good idea of whether that's a good thing to do or not. Also, we're going to keep making machines that do all sorts of other stuff really well, and I think worrying about whether those machines are going to accidentally become conscious because they're really complex is probably missing the point.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 07, 2017 8:26 pm UTC
by SuicideJunkie
I'm pretty sure that a lot of people would take the trade fairly easily.
People give their lives for their children all the time. And an option where they still get to have you around (although you don't get to have them around) is even better than that.


My interpretation of chridd's description, with computers:
Alive is powered on, consciousness is like playing a game.
There are many devices that can't play games out there. There are many that can, but happen to not do so. There are also a bunch that are playing stupid games worthy of derision.
There isn't a fine line between running the screen saver which generates a pretty dream-like state, drowsily "playing" progress quest, and actively being a bot in counterstrike.

eg: Humans have been playing Civilization, but other animals are puttering around with flappy bird and cookie clicker, and we readily dismiss those as not real games/consciousness.
Defining what counts as a game is tricky too.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 07, 2017 8:47 pm UTC
by elasto
TrlstanC wrote:At some point in the future we're going to have the ability to make machines that act conscious. Before that happens we should have a pretty good idea of whether that's a good thing to do or not. Also, we're going to keep making machines that do all sorts of other stuff really well, and I think worrying about whether those machines are going to accidentally become conscious because they're really complex is probably missing the point.

I don't know. I think we'll have the ability to make an accidentally conscious program far earlier than we will understand how to do so deliberately. I also think it might not be until years after creating said code that we realise it was actually conscious all along. I think it's definitely worth considering because there's an off-chance we could theoretically be torturing conscious beings right this very second.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 07, 2017 9:04 pm UTC
by ucim
elasto wrote:I think it's definitely worth considering because there's an off-chance we could theoretically be torturing conscious beings right this very second.
...and there's a not-so-off chance that they will soon be able to exact their revenge.

Jose

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 07, 2017 11:40 pm UTC
by chridd
SuicideJunkie wrote:My interpretation of chridd's description, with computers:
Alive is powered on, consciousness is like playing a game.
There are many devices that can't play games out there. There are many that can, but happen to not do so. There are also a bunch that are playing stupid games worthy of derision.
There isn't a fine line between running the screen saver which generates a pretty dream-like state, drowsily "playing" progress quest, and actively being a bot in counterstrike.

eg: Humans have been playing Civilization, but other animals are puttering around with flappy bird and cookie clicker, and we readily dismiss those as not real games/consciousness.
Defining what counts as a game is tricky too.
Pretty much. I'd add to that: the Counterstrike bot is most likely programmed to think about games and care about games. As a result, it may end up thinking that the world is divided cleanly into games and non-games (because it's only programmed to think about games and non-games), and caring about whether any particular thing is or isn't a game. If it ends up doing philosophy, it may then end up building its philosophy on the idea that the universe is divided into games and non-games, that this distinction is fundamental, and that, for whatever game-like edge case it can come up with, there's an objective answer to the question "Is this a game?". None of this, though, means that the universe is in fact divided cleanly into games and non-games—that the bot thinks this way is just a result of its programming.

Humans are "programmed" to care about humans, and to think that the world is divided cleanly into humans and non-humans. That doesn't mean that it is divided that way, though.

TrlstanC wrote:And as far as being useful, I'd say that consciousness is, in a sense, the only thing that matters.
"Consciousness matters" is basically just another way of saying "we care about consciousness", which is perfectly consistent with the idea that the distinction between consciousness and non-consciousness is an artifact of how humans think about stuff rather than something which is objectively true. Just because we care about something doesn't mean that it's a coherent concept. (Perhaps the answer to "Should we build strong AI?" is "No, because it'll break humans' model of the world".)

If we care about consciousness, though, the question probably isn't really about consciousness, but rather about what we care about. This is sort of like the XY Problem: "I want to know whether I should care about robots. I know! I'll try to answer 'Are robots conscious?' and use that to answer my original question." But maybe determining whether robots are conscious isn't a good way to determine whether we should care about them. Maybe it'll turn out that certain robots do technically fit our definition of "conscious", but we don't really care about them, or vice versa. (As an aside: One of my thoughts a few years ago when reading a discussion about whether fetuses are people was, "Why can't we just say they're people who don't have a right to life?" Also this "maybe things will fit our definition but we won't care about them" is basically my main objection to the idea of objective morality.)

If we reframe the question as "Do we care about robots?", then that gives us additional possibilities. Maybe some people care about robots and others don't, and there's no objective fact that can resolve this. Or maybe it'll turn out that there's a practical reason why it's necessary or impossible to care about robots as we do humans, and any philosophical arguments are moot.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Fri Dec 08, 2017 12:24 am UTC
by ucim
chridd wrote:If we reframe the question as "Do we care about robots?"
But the question is "Should we care about robots?", and it's a moving target because we keep on building new kinds of robots (including robots such as social networks that incorporate people as a component). So we really can't answer the "x" directly, and need an x/y question to give some stability to the answer.

Jose

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Fri Dec 08, 2017 4:45 pm UTC
by Yakk
My large problem with Searle's Chinese Room is that it imagines something we know not to be conscious -- a room with some books in it -- and pretends that room is capable of acts we'd consider conscious. And pretends that is an argument.

To me, it is like holding up some water and blood and charcoal, arranging it in some Rube Goldberg way, and pretending that is an analogy of the brain. The Rube Goldberg contraption is obviously not conscious, so neither is the brain.

A "Chinese Room" as described in some versions of the argument is a computing device that requires more mass than the entire universe to store its state. A consciousness-emulating "Chinese Room" is no more the dusty room with scraps of paper and someone following rules they look up out of books than a human brain is a stick with a cotton head dipping into water (https://en.wikipedia.org/wiki/Drinking_bird).

---

The electromagnet argument, that a simulation of an electromagnet is not electromagnetic, depends on how well you simulate it. Suppose we make a box with a simulated electromagnet in it, and its interface is picking up iron filings. The simulation's job is to pick up the iron filings as if it was an electromagnet, have the ability to turn itself off and on with a switch, and in general act like an electromagnet.

That simulated electromagnet is pretty much an electromagnet.

I guess the difference is, you can *predict* what an electromagnet will do in a simplified version of reality without actually simulating an electromagnet to that level of fidelity. And insofar as your simplified model of the world and magnet is accurate, you can correctly predict what happens.

So one could imagine a system that *predicts* what a conscious being does based on a model of its experiences, without that process itself being conscious. You could even feed it a simplified environment (an abstraction of sense experience) and, if your abstraction was rich enough, generate the "correct" responses and fool everyone.

elasto wrote:Imagine you have to have a surgical procedure and there are two anaesthetics available: The first knocks you out in the conventional way but has a 10% chance of killing you. The second has no chance of killing you but you remain fully conscious during the surgery, instead you are unable to form any memories during the time.

Would you choose the second on the grounds that you aren't forming memories so aren't conscious? I remain unconvinced...

So, there is reason to believe that some forms of anaesthetic (A) prevent memory formation and (B) paralyze you.

And in some cases, (A) fails and (B) does not.

https://en.wikipedia.org/wiki/Anesthesia_awareness

What's more, some anaesthetics result in the patient being able to respond to instructions to some limited extent, and to move and respond to stimuli in a way that seems to indicate pain, while being unable to form memories.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Fri Dec 08, 2017 8:59 pm UTC
by elasto
Yakk wrote:What's more, some anaesthetics result in the patient being able to respond to instructions to some limited extent, and to move and respond to stimuli in a way that seems to indicate pain, while being unable to form memories.

That's worrisome. It's possible to imagine someone in that situation either being conscious or unconscious - think of a sleepwalker responding to an instruction to return to bed. My point was that I personally would worry about being under the effect of such an anaesthetic because I might still be conscious...

Thinking about it, I suppose the concepts of consciousness and free will are pretty intertwined, and free will would seem to require memory - at least for the duration of the subjective reflection over the decision. How can you make a conscious decision if you can't consciously remember the stimuli triggering the decision during that decision (even if heavily compressed via, say, being converted into an emotion)?

Additionally, my hunch is consciousness is not possible without feedback loops (if you are self-aware you are also aware of your self-awareness) and what is a feedback loop if not a form of memory?

So, on reflection, I guess I agree that some form of memory is required for consciousness - though it could take a form totally unlike what human beings experience and so be totally incomprehensible to us - yet still consist of subjective qualia nonetheless.

Maybe some versions of the Chinese Room could be conscious after all...

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Sat Dec 09, 2017 12:59 pm UTC
by ericgrau
I don't know about the feasibility of A.I., but the thought experiment seems extremely weak.

The paper filing cabinet itself could have A.I. The filing cabinet (and/or pencil & paper for writing data files) may fill a city and take thousands of years for a human running it to respond to a single query, but it could still have A.I. The experiment seems to presuppose that an inanimate object doesn't have A.I., which is your first natural response to a filing cabinet. But the whole thought experiment is trying to prove or disprove A.I. in a man-made object, so it's circular.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Sun Dec 10, 2017 2:46 am UTC
by gmalivuk
elasto wrote:
I think all the Chinese Room shows is that it's possible for something to converse convincingly without needing to be conscious.

No, the Chinese Room argument more or less assumes that (and then concludes that therefore passing a linguistic Turing test isn't sufficient to demonstrate consciousness).

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Wed Dec 20, 2017 3:38 pm UTC
by TrlstanC
gmalivuk wrote:No, the Chinese Room argument more or less assumes that (and then concludes that therefore passing a linguistic Turing test isn't sufficient to demonstrate consciousness).


Despite the fact that it seems to argue against the Turing test, I don't think that's actually the point of the Chinese Room argument, since, as you point out, it assumes that it's possible to converse convincingly without being conscious. We're not actually sure whether that's possible. It's like assuming that it's possible to calculate the input of SHA-256 from its output, and then concluding that it's therefore not a secure function. The conclusion can't be something that we assumed to be true.
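The asymmetry the SHA-256 analogy leans on is easy to demonstrate: computing the hash forward is cheap, while the only known general way to recover an input is brute-force search, which is infeasible for realistic input spaces. A toy sketch (the tiny three-letter search space is artificial, chosen only so the search terminates):

```python
import hashlib
import itertools
import string

# Forward direction: hashing is a single cheap call.
digest = hashlib.sha256(b"hello").hexdigest()

def toy_preimage_search(target_hex, max_len=3):
    """Brute-force a preimage over short lowercase strings.

    Illustration only: even 6 lowercase letters is ~300 million
    candidates, and real inputs are unbounded."""
    for n in range(1, max_len + 1):
        for tup in itertools.product(string.ascii_lowercase, repeat=n):
            candidate = "".join(tup).encode()
            if hashlib.sha256(candidate).hexdigest() == target_hex:
                return candidate
    return None

# Succeeds only because we restricted the space to 3 lowercase letters.
found = toy_preimage_search(hashlib.sha256(b"abc").hexdigest())
```

The analogy in the post is that assuming the hard direction is solvable (inverting the hash, or conversing convincingly without consciousness) and then drawing conclusions from that assumption proves nothing about whether it is actually solvable.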

As far as we know the only way to process the huge amount of data required, fast enough, to pass a robust Turing test is to actually be conscious. If we wanted to be more confident that a machine that passed the Turing test was conscious, we could impose additional constraints, like size and energy usage, on it as well. That would limit the possible ways to solve the problem, and it increases the chances that actually being conscious is the only viable solution.

The problem I have with the conclusion of the argument, that "programs are neither constitutive of nor sufficient for minds" is that "program" is never defined. Particularly, what is the dependency of a program on the hardware that will run it? Is a program necessarily hardware independent? Is it something that's defined by inputs of data in any format, and it's output of data? Or are two programs that run on completely different hardware, but achieve the same result, actually different programs?

It seems like a difficult question because I can't think of a definition of "program" that would definitely rule out the difference between a sleeping and awake human. In some sense, isn't a person programmed to wake up at a certain point and then react to stimuli in certain ways, using whatever it is that causes conscious experiences? Before and after waking up the person has the capability for consciousness, but isn't actually conscious until the "awake" program runs. Of course, that's just speculation since we don't know what causes us to become conscious when we wake up. In fact we don't even know whether consciousness still exists or not while we're asleep.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Wed Dec 20, 2017 9:40 pm UTC
by morriswalters
One possible way of seeing the point is to consider what the Chinese Room is doing when it isn't talking?

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 1:22 am UTC
by chridd
TrlstanC wrote:As far as we know the only way to process the huge amount of data required, fast enough, to pass a robust Turing test is to actually be conscious. If we wanted to be more confident that a machine that passed the Turing test was conscious, we could impose additional constraints, like size and energy usage, on it as well. That would limit the possible ways to solve the problem, and it increases the chances that actually being conscious is the only viable solution.
But you're still making the assumption that I think makes the Chinese room flawed—that there is some consciousness out there to tap into, that there is something more to our brains than just data processing.

To make my point more explicit, I think the real question here is vitalism vs. physicalism*.

In vitalism, there's some fundamental thing that makes us conscious—whether that's called "consciousness" or a "soul" or whatever. If vitalism is true, then even if you could make something that behaves exactly like a human, that stores all the information that we would store in our memory, that has variables representing happiness, sadness, etc. that it reacts to the way a human would, but does not use this fundamental thing (i.e., a philosophical zombie), there would still be a difference between that and a real human, because whether it uses this fundamental "consciousness" thing is significant.

In physicalism, consciousness as vitalists think about it—consciousness as you are thinking about it when making these arguments—consciousness as Searle is thinking about it—simply doesn't exist at all. Rather, humans are merely machines that take inputs (sight, sound, touch, etc.), calculate the action that's most likely to keep them alive and help them reproduce (using a complicated and imperfect algorithm), and tell muscles to move based on those calculations. It so happens that if this brain gets the input "Are you conscious", it produces the answer "Yes", but that doesn't mean anything about the universe. If physicalism is true, then there isn't really any difference between a human and a philosophical zombie—both would not be tapping into consciousness because there is no consciousness to tap into—and, likewise, there wouldn't be a difference between a human and a computer that emulates a human, unless we arbitrarily define there to be one.

(Do you at least understand the difference between these two, and agree that the way you're thinking about things is vitalist?)

I think the natural mode of thought for a human is vitalism, but that the way the world actually works is physicalism. (I don't think vitalism is logically impossible—an example of a vitalist system is a typical video game world, where there aren't any physical circuits in the world controlling character actions—but I think the evidence points to our world being physicalist.) And I think the problem with the Chinese room is that it's really an argument about vitalism vs. materialism, but it implicitly assumes that vitalism is true from the start (or relies on our intuitions, which assume vitalism). You seem to be arguing that, assuming vitalism, something made of silicon could tap into the fundamental consciousness, which I think is a valid point but not a sound one since I don't think vitalism is true.

I'm generally averse to the word "consciousness" because basically any time I see the word used in a philosophical context, the person using it is assuming vitalism is true. On the other hand, I'm not quite saying "consciousness" doesn't exist, for a few reasons: I think that the vitalism model, like the flat Earth and Newtonian physics, is something that, while it isn't really true, mostly works in everyday situations; I think people are likely to interpret "humans aren't conscious" as meaning something it doesn't; and because I'm in favor of defining words such that they refer to what does exist, even if it's different from how we might expect at a fundamental level (e.g., I'd say that whatever I do when I put my hand on a wall is touching it, even if there's space in between the atoms).

* or maybe it's called mechanism?

Edit to add: I think there's also the separate issue that AI probably won't think or act like a human, even if it becomes as intelligent or more intelligent than a human.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 2:18 am UTC
by ucim
chridd wrote:It so happens that if this brain [(in the physicalist version)] gets the input "Are you conscious", it produces the answer "Yes", but that doesn't mean anything about the universe.
I don't know if you are conscious, but I do know that I am. It is however impossible for me to convince you directly. The best one can do is infer based on observed similarities to oneself.

But the fact that experience is strictly a first person thing does not make it not real.

Jose

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 2:46 am UTC
by chridd
ucim wrote:
chridd wrote:It so happens that if this brain [(in the physicalist version)] gets the input "Are you conscious", it produces the answer "Yes", but that doesn't mean anything about the universe.
I don't know if you are conscious, but I do know that I am. It is however impossible for me to convince you directly. The best one can do is infer based on observed similarities to oneself.

But the fact that experience is strictly a first person thing does not make it not real.
(You know that you think you're conscious...)

I do think there's something which we're experiencing that we call consciousness. I just don't think that it's fundamental to the universe, or works the way vitalists think it does—but that the way brains work makes it seem like something fundamental. So I guess the real reason why I'm not saying "consciousness doesn't exist" is that what I'm really saying is "many fundamental assumptions that vitalists make about the nature of consciousness are wrong".

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 2:55 am UTC
by ucim
chridd wrote:I do think there's something which we're experiencing that we call consciousness. I just don't think that it's fundamental to the universe...
What does it mean to be "fundamental to the universe"? We are talking about something that is inherently self-referential.

Jose

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 3:25 am UTC
by chridd
ucim wrote:What does it mean to be "fundamental to the universe"? We are talking about something that is inherently self-referential.
Not fully explainable in terms of other laws of physics, particles, etc.

For instance, gears aren't fundamental to the universe, because there aren't any laws of the universe that say anything about gears specifically; the fact that gear-shaped objects in the right configuration turn each other is just something that arises naturally out of laws of physics that have nothing to do with gears. The question is whether consciousness (and other human-specific things like understanding and emotion and thought) is like a gear in that sense; do the laws of the universe say anything about consciousness etc., or is the fact that we're having this discussion simply what happens when laws governing electrons moving are applied to a specific type of complex circuit?

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 5:45 am UTC
by ucim
chridd wrote:Not fully explainable in terms of other laws of physics, particles, etc.
Ok. I'll go with consciousness is not fundamental to the universe. It seems very close to the statement that vitalism is incorrect.

It is also not a demonstrable thing... that is, there doesn't seem to be a way to measure it or to detect it. It's not even a defined thing... we don't know what the "it" is that we're trying to detect. This is because it is fundamentally a reflexive thing - a "first person" thing. It is an experience. It is the color red, as opposed to the spectrum of red light. It is the sound that nobody hears if there's nobody there to hear it.

Jose

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 11:40 am UTC
by elasto
chridd wrote:
ucim wrote:What does it mean to be "fundamental to the universe"?

Not fully explainable in terms of other laws of physics, particles, etc.

For instance, gears aren't fundamental to the universe, because there aren't any laws of the universe that say anything about gears specifically; the fact that gear-shaped objects in the right configuration turn each other is just something that arises naturally out of laws of physics that have nothing to do with gears.

I think the term you're searching for is that consciousness is emergent: an emergent property is one that is not a property of any component of a system, but is still a feature of that system as a whole.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 5:29 pm UTC
by TrlstanC
chridd wrote:
ucim wrote:What does it mean to be "fundamental to the universe"? We are talking about something that is inherently self-referential.
Not fully explainable in terms of other laws of physics, particles, etc.


As ucim said, the question of whether consciousness exists or not is something that everyone can answer. I personally can't see any way that physicalism can be true, because it denies the existence of actual subjective experiences. Searle talks about this as the difference between something being subjective/objective in an epistemic sense (is it true) or an ontological sense (does it exist). I know that when my brain gets the input "are you conscious" I have a subjective experience of that, and I have a subjective experience of the response. Also, it's entirely possible for someone who's conscious to get the input "are you conscious" and have no meaningful response at all (babies, animals, etc.). Whether the response is (objectively) correct isn't important; what's important is that the subjective experience actually (ontologically) exists.

Right now, there's only one actual subjective experience in the ontological sense, and that's consciousness, which makes comparisons with other phenomena very difficult. As for whether consciousness is fundamental or not, I think there are really only three options:

  • Consciousness is the subjective experience of an unknown fundamental force
  • Consciousness is the subjective experience of a known fundamental force
  • Consciousness is the subjective experience that results from the interaction of multiple forces (which would make it like most objective phenomena, including emergent properties)

Of course, if we ever figure out how to reduce the number of fundamental forces down to one, then the last two are the same. Logically I don't think we can rule any of those options out. But given the possibilities, it seems more likely that as we learn more about consciousness we'll discover that it, or some important part of it, is fundamental.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 6:03 pm UTC
by Pfhorrest
TrlstanC wrote:Physicalism [...] denies the existence of actual subjective experiences.

That's actually debated amongst physicalists. See, for the most prominent example, Galen Strawson, who I think you will like.

Like me, he holds phenomenal consciousness* to be just the subjective experience of being a thing, and all things to be physical (thus making physicalism true), and all things, not just access-conscious* things like humans, to have such a subjective experience of being that kind of (physical) thing they are, with the qualities of each thing's subjective experience depending on its objective function. So our full human consciousness is the subjective experience of being the kind of functionally complex things we are, but everything else has some (usually non-noteworthy) subjective experience of being whatever it is too, and if we were to build something functionally similar to a human brain, its subjective experience would thereby become similar to the subjective experience of being a human.

*(Phenomenal consciousness as in the experiential thing, as distinguished from access consciousness which is the functional thing. Two different senses of the word "consciousness" here. Not two different accounts of what the same thing is. Two different things referred to by the same name. That's important to keep in mind.)

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 7:36 pm UTC
by doogly
elasto wrote:
chridd wrote:
ucim wrote:What does it mean to be "fundamental to the universe"?

Not fully explainable in terms of other laws of physics, particles, etc.

For instance, gears aren't fundamental to the universe, because there aren't any laws of the universe that say anything about gears specifically; the fact that gear-shaped objects in the right configuration turn each other is just something that arises naturally out of laws of physics that have nothing to do with gears.

I think the term you're searching for is that consciousness is emergent: an emergent property is one that is not a property of any component of a system, but is still a feature of that system as a whole.

People can use "emergent" in both ways though. In physics, an emergent behavior is one that is present at a higher level but is most definitely caused by things at the lower level. If you want to explain what goes on in a baseball game, you use Baseball Laws, not Newton's Laws, but nobody would ascribe to Baseball Laws some ontology which makes them operate independently of, or "downwardly causal" to, Newton's Laws.

Some people do want to think of Consciousness in that weird way, which does extreme violence to everything we know to be true, but they have a soft spot for consciousness and a bit of physics envy. They will use the word "emergent" to mean that it is not *capable* of explanation in terms of the constituent matter, whereas physicists would use it to mean not *efficiently explainable* in terms of the constituent matter.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Thu Dec 21, 2017 8:00 pm UTC
by elasto
Yes. Emergence is still somewhat magical though.

Imagine a high res picture which renders the numeral '4'.

The '4' is emergent: The picture has a property which no single pixel has. In fact, you could remove 1% of the pixels at random and most of the time the picture would still be entirely recognisable as a '4'.

Likewise, you could remove 1% of your neurons at random and you'd most likely still be conscious and probably not even notice the difference.

It's us that ascribes these 'higher levels of meaning' - whether it's atoms that happen to be part of a baseball game, a numeral, or a conscious brain. The atoms chug along under their set of simple laws; it's the meaning of the arrangement of atoms that is magically emergent.
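The pixel example above can be sketched directly: draw a blocky '4', delete 1% of its ink pixels at random, and check that nearly all of the glyph survives. (The glyph bitmap, scale factor, and overlap measure here are all made up for illustration.)

```python
import random

# A blocky 5x7 '4', scaled up 10x so it has enough pixels
# for a 1% deletion to be meaningful.
GLYPH = [
    "#...#",
    "#...#",
    "#####",
    "....#",
    "....#",
    "....#",
    "....#",
]
SCALE = 10

# Expand each glyph cell into a SCALE x SCALE block of ink pixels.
ink = {(r * SCALE + dr, c * SCALE + dc)
       for r, row in enumerate(GLYPH)
       for c, ch in enumerate(row) if ch == "#"
       for dr in range(SCALE) for dc in range(SCALE)}

# Keep all but 1% of the ink pixels, chosen at random.
random.seed(0)
damaged = set(random.sample(sorted(ink), k=len(ink) - len(ink) // 100))

# 99% of the ink survives, so the '4' remains recognisable
# even though no individual pixel "is" the 4.
overlap = len(damaged & ink) / len(ink)
```

The '4'-ness lives in the arrangement, not in any pixel, which is exactly the sense of "emergent" being discussed: a property of the whole that no component has.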

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Fri Dec 22, 2017 4:50 pm UTC
by Yakk
Physicalism doesn't deny the existence of subjective experiences.

If consciousness is what information feels like being processed, then rocks are conscious (in that there is lots of information processing going on). We happen to be both conscious and self-conscious (we are aware of our own consciousness).

If we are conscious all the way down, what we call our human consciousness is our self-consciousness at a high level.

Everything is experiencing everything; when information processing is arranged in a particular manner that permits it to mostly know itself, it has an experience not unlike our own.

No need for anything other than physics. In this case, physics is just the medium in which the consciousness is embedded. And a sufficiently powerful computer simulation can create consciousness without direct correspondence to physical matter.

Re: The Assumptions of Searle’s Chinese Room Thought Experiment

Posted: Tue Feb 27, 2018 11:09 pm UTC
by Dr34m(4+(h3r
TrlstanC wrote:But a defining characteristic of our consciousness is that it's unified. We don't experience separate consciousnesses that are happening in different parts of our brain, or even experience a graininess or "pixelation" of a consciousness that's made up of many smaller parts.


A healthy person doesn't.

Pfhorrest wrote:Where it gets interesting is when a thing's function becomes reflexive, especially in particular ways. A human brain's function is highly reflexive, to the point that most of the arrows coming in or out of it bend around back to itself, and so our experience is not just of the world, but largely of ourselves, including how we're experiencing and reacting to the world; and likewise much of our (mental) behavior is not upon the world directly, but upon ourselves, changing our own state so that we react differently to our experience of the world. That's the interesting thing about human consciousness. If you stripped away that self-awareness and self-control and simplified the diagram down to just input from experience of the world leading straight to behavioral outputs to the world, you'd strip away everything that makes us "conscious" in an interesting way, even though there would still be some technical experience of the world, which we would be completely unaware we were having.


So basically, self-consciousness is the distortion in a feedback loop? Or maybe a kind of solve et coagula thing of both the distortion and the way it is re-processed into data from noise, or the stable confluence of the two. In that case we seem to arrive at a kind of straight epiphenomenalism or even total physicalism in which phenomenological experiences are just the relationship between irreducible terms in a total system and the heuristics used to parse them back into the form of whatever they originally emerged from.