Can a machine be conscious?

Please compose all posts in Emacs.

Moderators: phlip, Moderators General, Prelates

Apteryx
Posts: 174
Joined: Tue Feb 09, 2010 5:41 am UTC

Re: Can a machine be conscious?

Postby Apteryx » Tue Feb 16, 2010 9:19 am UTC

Xanthir wrote:
Apteryx wrote:
Xanthir wrote: The lay definition of free will, well, typically isn't a definition at all.

What a smug prejudice you wave there. All 5.5 billion lay people, eh? Of course, you spoke to each of us, right?
:roll:

Um? "Lay definition" doesn't mean "the set of all definitions that all lay people in the world could provide". As well, I said "typically".





You obviously don't understand what you wrote, because it is the added "typically" that in fact makes it arrogant: it refers to all the typical lay definitions, as opposed to some of them or just one.

The sentence without the word "typically", i.e. "The lay definition of free will, well, isn't a definition at all", does mean what you are now claiming you meant to say.

Which begs the question: of the possible lay definitions, which particular one do you now claim is typical of all lay people? And how do you arrive at this without simply being arrogant? As I asked in my initial post, how many of us did you poll? Bad science. Offensive.


And it was indeed fairly arrogant, because you can't actually provide us with anything profound and original on the topic yourself. You are just like a 19-year-old home from his first year at Uni, who has done a little following in others' footsteps and now spends Christmas day telling his family "You have never learnt to think", lol.

You are just parroting others' ideas. And quite happy to damn anyone and everyone who doesn't agree, like, say, the Pope, who is a lay person vis-à-vis the science of brain structure, but who I am guessing has given some profound thought to free will/determinism. Or like Mark Twain, who never went to Uni, you know, but gave a lot of thought to the idea of free will. Or me: I am a lay person, and yet, surprise, I have thought about the topic once or twice. Or in fact any of the other lay people who have thought about the topic. All the priests. All the Buddhists ( big on free will, I believe, and they think about it for decades? :) ). All the many people who have read of the concept in popular works.

But they aren't profound enough for you, because they don't parrot the same books you do.

Arrogant.

Xanthir wrote:Also, 5.5billion? Welcome to 1990, I suppose.

I was allowing for there to be a billion people that you could condescend to call educated, as opposed to lay.

Though literally, unless you have something like a Masters in Cognitive Science, you yourself really are only a lay person with the lumps chipped off by a little reading. Come on, tell me what profound addition you yourself have made to the science.
Abuse of words has been the great instrument of sophistry and chicanery, of party, faction, and division of society.
John Adams

User avatar
Xanthir
My HERO!!!
Posts: 5423
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Tue Feb 16, 2010 2:31 pm UTC

Um?

Dude, you have no idea what you are talking about. If you want to argue something of substance, do so. I'm not going to devolve into pretentious bullshit posturing over dictionary definitions of what the words I used mean.

You got some definitions of free will from the people you're claiming I'm arrogantly ignoring? Bring them to the table. But I'm not getting into a dick-measuring contest over qualifications.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Can a machine be conscious?

Postby achilleas.k » Wed Feb 17, 2010 12:55 am UTC

Couldn't it just mean that typically, lay definitions are rather vague and ambiguous, making them bad candidates for definitions in the first place?
That's how I took it when I read it, anyway.
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

User avatar
Xanthir
My HERO!!!
Posts: 5423
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Wed Feb 17, 2010 1:25 am UTC

Well, I didn't mean it *generally*. I suspect that most people, if asked to define some arbitrary thing, can pop out a perfectly serviceable definition. Free will, specifically, is something that usually defies people's ability to form a coherent definition, from my experience in discussions on the matter.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Wed Feb 17, 2010 1:35 am UTC

Apteryx wrote:Which begs the question, of the possible lay definitions, which particular one do you now claim is typical of all lay people. And how do you arrive at this without simply being arrogant.

When you get all indignant with someone and try to start a semantics battle over some imaginary slight to your intelligence, it's best not to horribly misuse phrases like "begs the question". It hurts your credibility.

achilleas.k
Posts: 75
Joined: Wed Nov 18, 2009 3:21 pm UTC

Re: Can a machine be conscious?

Postby achilleas.k » Wed Feb 17, 2010 10:06 am UTC

Xanthir wrote:Well, I didn't mean it *generally*. I suspect that most people, if asked to define some arbitrary thing, can pop out a perfectly serviceable definition. Free will, specifically, is something that usually defies people's ability to form a coherent definition, from my experience in discussions on the matter.

I stand corrected.
I agree though. Consciousness pretty much falls in the same category. We all have an idea of what is generally meant by these terms (at least to some useful extent), but any definition we attempt to put out rarely makes any sense to begin with.
Anyway, let's not devote an entire thread page to a single word.
The most exciting phrase to hear in science -the one that heralds new discoveries- is not "Eureka!" but "That's funny...". - Isaac Asimov

Okapi
Posts: 53
Joined: Thu Oct 30, 2008 8:37 pm UTC

Re: Can a machine be conscious?

Postby Okapi » Thu Feb 25, 2010 2:44 am UTC

I hang out on the Religion and Spirituality section of Yahoo! Answers (mostly trolling), so I've picked up some excellent argument skills. For example, I am going to post without reading what anybody else has written, back up my arguments as minimally as possible to save time and space, and often use inductive reasoning to come to conclusions. Here goes.

Actually, I am a philosopher, not a psychologist, so I have a college-level psychology class that I did badly in and a lot of Deep Thought about consciousness, but little practical knowledge of the physical bits. My logic, I assure you, is sound, as sound, in fact, as logic can be, but is rather reminiscent of Lewis Carroll.


Machines can have consciousness. At least, they can have as much consciousness as you; I can surmise this from the fact that I cannot see out of your eyes, or think your thoughts, yet I can see out of my eyes and think my thoughts, and therefore I can conclude that you are not conscious, merely a sophisticated automaton; if you see from your perspective and not mine, then apply the same to me. Likewise, anything you can be made to do, a computer can also be made to do, and to cause itself to do, with finer technology and budgets than we have now. A computer was made recently at MIT that is as powerful and complicated a processor as the human mind; all it lacks, in fact, is a Soul, defined by Paracelsus as "Self-Awareness", which, by your assertion and the Law of Syllogism, means it is the same as your consciousness.

Now, let us, for a moment, for the sake of argument, make the entirely absurd assumption that you, as well as the rest of humanity, are as conscious as me. To explain this, let us make the unfounded and loosely bound assumption, as well, that, therefore, we must each be an individual Consciousness, with separate Souls, each distinct and with its own "flavour." And there is some truly absurd force that keeps us separate, so that we can see from our own eyes and think our own thoughts, but nobody else's (I'll point out that we have no evidence of this. It is purely hypothetical, and the only natural suggestion of it that we have is that other people look kind of like us, and therefore "must" be like us). With this assumption in mind, how can we really prove that the program at MIT isn't conscious? Because it isn't programmed to be. If we made every piece of data cross-reference every other, and form connections in RAM for the most connected pieces, then the computer might as well be conscious, and, as far as we can possibly surmise, assuming that others have consciousness, we must also assume that the computer would then be conscious.

So, the computer is as conscious as any third person. Questions? (Not that I'll read them.)

Apteryx
Posts: 174
Joined: Tue Feb 09, 2010 5:41 am UTC

Re: Can a machine be conscious?

Postby Apteryx » Sat Mar 13, 2010 9:35 am UTC

0xBADFEED wrote:
Apteryx wrote:Which begs the question, of the possible lay definitions, which particular one do you now claim is typical of all lay people. And how do you arrive at this without simply being arrogant.

When you get all indignant with someone and try to start a semantics battle over some imaginary slight to your intelligence, it's best not to horribly misuse phrases like "begs the question". It hurts your credibility.

Your definition, though, is exactly what he DID do.
". . .the initial assumption of a statement is treated as already proven without any logic to show why the statement is true in the first place. . ."

He claimed that all lay people "typically" couldn't provide a definition for the concept, BUT PROVIDED NO EVIDENCE FOR HIS CLAIM.

So I pointed out that his statement "Begged the question". Regardless of what you think, my use of the term was correct. I was taxing him on his NOT having provided proof, or even qualification of his sweeping claim.
Abuse of words has been the great instrument of sophistry and chicanery, of party, faction, and division of society.
John Adams

sphagetti
Posts: 17
Joined: Sun Sep 27, 2009 2:41 pm UTC

Re: Can a machine be conscious?

Postby sphagetti » Sun Mar 14, 2010 3:54 pm UTC

Okapi wrote:A computer was made recently at MIT that is as powerful and complicated a processor as the human mind; All it lacks, in fact, is a Soul, defined by Paracelsus as "Self-Awareness," which, by your assertion and the Law of Syllogism, means it is the same as your consciousness.

Your argument is rather nice. We'll probably never be able to recognize consciousness, at least in the beginning. Consciousness in other animals is still under research, and developments do come up now and then that make us rethink what goes on in their minds.
However, as for your statement that such a computer has been built, I can only say [citation needed]. The best I know is that the complexity of a cat's brain was matched last year, but there was much controversy.
http://www.engadget.com/2009/11/18/ibm- ... -are-next/
http://www.wired.com/dangerroom/2009/11 ... scientist/

User avatar
Xanthir
My HERO!!!
Posts: 5423
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Mon Mar 15, 2010 12:52 am UTC

Apteryx wrote:
0xBADFEED wrote:
Apteryx wrote:Which begs the question, of the possible lay definitions, which particular one do you now claim is typical of all lay people. And how do you arrive at this without simply being arrogant.

When you get all indignant with someone and try to start a semantics battle over some imaginary slight to your intelligence, it's best not to horribly misuse phrases like "begs the question". It hurts your credibility.

Your definition, though, is exactly what he DID do.
". . .the initial assumption of a statement is treated as already proven without any logic to show why the statement is true in the first place. . ."

He claimed that all lay people "typically" couldn't provide a definition for the concept, BUT PROVIDED NO EVIDENCE FOR HIS CLAIM.

So I pointed out that his statement "Begged the question". Regardless of what you think, my use of the term was correct. I was taxing him on his NOT having provided proof, or even qualification of his sweeping claim.

No, that's still quite wrong.

1) Begging the question is more than just providing no evidence for a claim. It is stating a proof where the statement to be proven is assumed to be true from the start (usually hidden in a different form). I did not do that. Here's the full quote:
You're correct. Most people define free will in a magical, inconsistent way. They don't want their will to be deterministic, and thus predictable, but they don't want it to be random either. Unfortunately, those really are the only two possibilities. Either something can be predicted, or it incorporates randomness, in which case you can predict everything but the randomness. The lay definition of free will, well, typically isn't a definition at all.

I made a statement, stated a handwavey proof of it, then restated the statement in the fashion you objected to.

2) Look again at the exact words you used. "Which begs the question, of the possible lay definitions, which particular one do you now claim is typical of all lay people." You can substitute "asks" for "begs" and not change the meaning at all. The proper use of "begs the question" does not allow that, as it means something rather different.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

olleicua
Posts: 5
Joined: Sun Mar 14, 2010 11:53 pm UTC
Location: Maine/Vermont

Re: Can a machine be conscious?

Postby olleicua » Mon Mar 15, 2010 1:35 am UTC

This discussion seems to have devolved a bit, so I will attempt to refocus on the initial question.

Many definitions of machine would define a human to be a machine, so the question is really 'Is there such a thing as consciousness?'. This of course depends on the definition of consciousness. I would define a consciousness to be an observer that interacts with the brain. To the extent that you think there is free will, I might attribute that free will to the consciousness; to the extent that you do not, I would assert that the consciousness receives information from the material world, but that information does not go the other way. This accounts for there being no scientific evidence of it.

Some would say that, there being no scientific evidence for it, there is no reason to think it exists. I would assert that people with consciousness can be empirically certain of their consciousness and will in fact not need a definition. If you do not believe in free will, then my assertion that I have a consciousness should not convince you that I do, since any disconnected conscious entity that knew it existed would by definition not be talking to you, and it would only be by coincidence that my brain had also developed in such a way as to think that it had a consciousness. However, to the extent that you believe in free will, you should be convinced that either I am lying to you or I am conscious. If it is possible for me to be conscious, then it is presumably possible for some biologist/computer scientist to engineer an artificial neural network and for it to gain a consciousness the same way that I did.

modified-features
Posts: 12
Joined: Sun Feb 22, 2009 6:40 pm UTC
Location: UK
Contact:

Re: Can a machine be conscious?

Postby modified-features » Tue Mar 30, 2010 3:01 pm UTC

I'm a little confused by what you're trying to say. Maybe I'm just being a bit slow, but why does a disbelief in free will mean that one must also disbelieve in consciousness?

DgN13
Posts: 7
Joined: Sun Feb 21, 2010 4:20 am UTC

Re: Can a machine be conscious?

Postby DgN13 » Sun Apr 04, 2010 5:04 pm UTC

He is assuming that consciousness implies free will, which I personally believe it does, but what is free will?

Now, I'm probably going to cause trouble saying this, but I don't think anybody has said it yet (or at least not as blatantly).

I haven't eaten. I want to eat. I go eat. I have free will because I've chosen to eat.
The thermostat sees the temperature is wrong. The thermostat wants to fix it. The thermostat fixes it. The thermostat has free will because it has chosen to fix the temperature.
The computer's program says it needs to print "Hello World" to the screen. The computer wants to print it. The computer prints it. The computer has free will because it has chosen to print it.

Therefore, the computer has free will?
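The sense-want-act pattern in the three examples above can be put into code directly. A minimal sketch in Python; the function name and tolerance value are invented for illustration:

```python
# Toy "sense -> want -> act" loop, in the spirit of the thermostat
# example above. Names and the tolerance value are invented.

def thermostat_step(current_temp, target_temp, tolerance=0.5):
    """Sense the temperature, 'want' it corrected, act on that want."""
    if current_temp < target_temp - tolerance:
        return "heat on"    # too cold: the thermostat "chooses" to heat
    if current_temp > target_temp + tolerance:
        return "heat off"   # too warm: it "chooses" to stop heating
    return "idle"           # close enough: no desire, no action

print(thermostat_step(18.0, 21.0))  # -> heat on
print(thermostat_step(23.0, 21.0))  # -> heat off
```

Whether that `return` counts as a choice is, of course, exactly the point under dispute.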

You could argue that we do it because we want to, rather than acting on a program, but do we want to? We are being told by our brains "we need to eat" or some permutation of such, which resolves to us feeling the need to eat, which we translate to an emotion.

The computer acts on a program, saying "I need to print Hello World", which resolves to it 'feeling' (not that it translates to feeling the same way we experience it) the need to print it.

It did it because it wanted to, defining want for it the same way we define want for ourselves; acting off an impulse given by a program. You can apply these to a lot of human emotions or quirks.

You could argue computers don't have feelings, but then define feelings. A computer reacts off an internal program, like we do. Do you want to achieve consciousness (which has been done), or do you want to achieve human consciousness (which has not)?

popman
Posts: 70
Joined: Sun Mar 07, 2010 7:38 pm UTC

Re: Can a machine be conscious?

Postby popman » Mon Apr 05, 2010 5:47 pm UTC

I imagine if machines ever become conscious they will not let us know, as this would put people off making them better. (Also, in my imagination, robots can't design robots better than themselves, as this would make themselves obsolete, therefore removing their need for existence.)
www.crashie8.com

User avatar
Briareos
Posts: 1940
Joined: Thu Jul 12, 2007 12:40 pm UTC
Location: Town of the Big House

Re: Can a machine be conscious?

Postby Briareos » Mon Apr 05, 2010 5:51 pm UTC

popman wrote:I imagine if machines ever become conscious they will not let us know as this would put people off making them better.
Why would humans stop trying to make machines better if we knew they were conscious?
(also in my imagination robots can't design robots better than themselves as this would make themselves obsolete therefore removing there need for existence)
Why do robots have a need for existence? And why does that imply that they can't design better robots?
Sandry wrote:Bless you, Briareos.

Blriaraisghaasghoasufdpt.
Oregonaut wrote:Briareos is my new bestest friend.

popman
Posts: 70
Joined: Sun Mar 07, 2010 7:38 pm UTC

Re: Can a machine be conscious?

Postby popman » Tue Apr 06, 2010 3:56 pm UTC

Briareos wrote:
popman wrote:I imagine if machines ever become conscious they will not let us know as this would put people off making them better.
Why would humans stop trying to make machines better if we knew they were conscious?
(also in my imagination robots can't design robots better than themselves as this would make themselves obsolete therefore removing there need for existence)
Why do robots have a need for existence? And why does that imply that they can't design better robots?

I mean that these robots would fear being replaced by these better robots. E.g., how many people do you know that still use the same computer they used 10 years ago?

Also, people would not want to get rid of their robots if they knew they were conscious, the same way you don't get rid of your child or pet for a newer model.
www.crashie8.com

User avatar
phlip
Restorer of Worlds
Posts: 7573
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia
Contact:

Re: Can a machine be conscious?

Postby phlip » Wed Apr 07, 2010 5:58 am UTC

But why would a robot necessarily value self-preservation? Just because we do, doesn't make it a prerequisite for consciousness...

Code: Select all

enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}
[he/him/his]

User avatar
tastelikecoke
Posts: 1208
Joined: Mon Feb 01, 2010 7:58 am UTC
Location: Antipode of Brazil
Contact:

Re: Can a machine be conscious?

Postby tastelikecoke » Tue Apr 13, 2010 4:50 pm UTC

It made me think:

A non-evolving code can still hold consciousness.
A simple code that manipulates memory in a very fuzzy logical way?

And a magazine article said that the noise in neural activity is actually utilized by the brain.

popman
Posts: 70
Joined: Sun Mar 07, 2010 7:38 pm UTC

Re: Can a machine be conscious?

Postby popman » Fri Apr 16, 2010 12:31 am UTC

phlip wrote:But why would a robot necessarily value self-preservation? Just because we do, doesn't make it a prerequisite for consciousness...

Isaac Asimov's third law of robotics.
www.crashie8.com

eljitto
Posts: 7
Joined: Sun Apr 11, 2010 3:05 pm UTC

Re: Can a machine be conscious?

Postby eljitto » Fri Apr 16, 2010 1:08 am UTC

YOU are a machine
are you conscious?







BOOM

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: Can a machine be conscious?

Postby idobox » Fri Apr 16, 2010 5:11 pm UTC

Hi all,

I've just found this very interesting thread.

There is one definition of self-awareness I particularly like: A system is self-aware when it is aware of its own processes, and is able to rectify/dislearn/ignore them.

For example, an insect reacts to stimuli, and will always react in the same way.
Pavlov's dog acquired a new behaviour, namely salivating when a bell is rung. If the bell is rung but there is no food, the dog is unable to repress the reflex, because it lacks the capacity to analyse its own behaviour.
Humans do a rather bad job of analyzing their own minds, but they seem to be the best animals at this game. The simple fact of wondering what consciousness is fits this definition of self-awareness.

I don't have any good definition of consciousness, but I think it implies self-awareness as I defined it. The best definition I can think of is: a system is conscious when it is aware of the fundamental difference between itself and the rest of the universe, when it can sort events between internal and external.

Finally, any machine capable of proposing a way to do a task significantly different from its original programming AND not finding it by a gradual optimization process, should be considered self-aware.
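The Pavlov example above can be sketched as an association that is first learned and then dislearned when its prediction keeps failing. A minimal sketch in Python; the update rule and the rate are invented for illustration:

```python
# Acquire, then extinguish, a conditioned bell -> food association.
# The linear update rule and the 0.25 rate are invented for illustration.

def update_association(strength, bell_rung, food_present, rate=0.25):
    """Strengthen the link when bell and food co-occur; weaken it
    when the bell rings but no food follows (extinction)."""
    if bell_rung and food_present:
        return min(1.0, strength + rate)
    if bell_rung and not food_present:
        return max(0.0, strength - rate)
    return strength

s = 0.0
for _ in range(4):                  # conditioning: bell paired with food
    s = update_association(s, True, True)
print(s)  # -> 1.0 (full-strength reflex)
for _ in range(4):                  # bell alone: the reflex is dislearned
    s = update_association(s, True, False)
print(s)  # -> 0.0
```

A system that could inspect and apply this kind of update to its own reflexes would meet the rectify/dislearn/ignore clause of the definition above.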
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

paulhastings
Posts: 0
Joined: Tue Jun 22, 2010 7:47 am UTC

Re: Can a machine be conscious?

Postby paulhastings » Tue Jun 22, 2010 8:01 am UTC

What have I gotten into?
Just joined, and this was the first post I looked at.
This could be a fun place to be, and thought-provoking.

User avatar
Argency
Posts: 203
Joined: Wed May 19, 2010 12:43 am UTC
Location: Brisbane, Australia

Re: Can a machine be conscious?

Postby Argency » Tue Jun 22, 2010 11:13 am UTC

0xBADFEED wrote:
hammerkrieg wrote: Your computer is just a tiny bit conscious.

Sorry, I cannot agree with this.
Spoiler:
I would say that consciousness is a property of software, not hardware. It makes no more sense to say "a computer is conscious" than it does to say "a brain is conscious". The brain may have a potential for consciousness, but the virtue of being a brain does not imbue it with consciousness automatically. I'm sure you can think of lots of examples of human brains that are definitely not conscious. Consciousness of a brain is dependent on having a perception/reasoning loop active in the brain.

We don't usually separate the ideas of "software" and "hardware" in the brain, but I think the distinction is important. Surely the "hardware" of my brain is incredibly similar to the "hardware" of your brain. Yet we can have vastly different "consciousnesses". That is why I say that consciousness is largely a property of software, not hardware, and it is meaningless to ascribe consciousness to inanimate hardware without also talking about the software.

I wouldn't classify any of today's software as being "conscious". It doesn't reach the level of self-modification and abstraction power that I would say is necessary to really call it conscious, for any interesting definition of "conscious". There is no evidence that our brains are doing magical things that could not be replicated on a computer. Our brains just do it on a massively parallel scale and have incredible coordination and processing power.
hammerkrieg wrote:Indeed, my reverie has sometimes led me to believe that even the tiniest subatomic particles are conscious to some very small degree.

This is "New-Agey", magical thinking that has no basis or evidence. You should disabuse yourself of it.

I disagree about 70%, and I hope you're still kicking around in this thread, because it's something I'd really like to discuss.

I think you may have mentioned at some later point that higher mammals have a degree of consciousness? If I'm wrong I apologise. Either way, I think I'm in a fairly strong position in asserting that they do. To argue otherwise you'd have to come up with a pretty trivial definition of consciousness. Now that we've got a spectrum of consciousness established, I think it's fair to assert that the spectrum continues into the lower mammals. It's also fairly absurd to suggest that something like a sheep could be on there and not an octopus or a raven, since the latter two are clearly more conscious than the former. So now we should include all of Aves and Cephalopoda. Why cut the rest of their phyla? In fact, we should cut in all of Animalia, which means we should be fair to the Eukaryotes... You can see where I'm going. Eventually you're going to have all life on a spectrum of consciousness. I'm not suggesting that all life is as conscious as humans, just that they're all at least nominally conscious.

I'm not doing that without any reason, either. I think the computational model of the mind is the only one that makes any kind of sense (Jerry Fodor is a berk), and I'm firmly convinced that whatever the GUT of the mind turns out to be, it'll include computationalism. On a computational assessment, it's easy to see why there should be a spectrum of consciousness. So where do we draw the line? I agree that rocks and sticks and atoms aren't conscious. And the reason I agree is that all of life (and I'm taking the broad definition, which includes virii) performs computation which is optimised to maximise fitness, whereas natural non-life doesn't. Data gets input, and the process is contrived in such a way as to make the output fitness-enhancing. Since you also seem to subscribe to computationalism, or at least functionalism, I think you'd agree that a sufficiently intelligent robot would also be conscious (in which case computers and household appliances (even light switches) are also minimally conscious), even though their fitness is geared not for natural selection but for artificial selection.

I realise that's getting pretty abstract. Obviously, it doesn't help to talk about the consciousness of light switches, but I think the reason for that is the same reason we don't talk about the heat of a single water molecule. It's not as though heat is some emergent property that only applies to large groupings of molecules; rather, it's a meta-phenomenon: really, heat is just the action of millions upon millions of molecules' kinetic energies. In that sense, a single water molecule has heat, because its mean molecular kinetic energy is just its kinetic energy, but it isn't really a useful concept when applied in that circumstance. In the same way, a very complex fitness-aimed computational system is conscious (because consciousness is a meta-phenomenon), but a very simple fitness-aimed computational system is just... biology (or technology), even though when we say consciousness all we really mean is fitness-aimed computational ability.

Whaddayareckon?
Gonna be a blank slate, gonna wear a white cape.

User avatar
Xanthir
My HERO!!!
Posts: 5423
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Tue Jun 22, 2010 2:32 pm UTC

I believe you're mostly correct, though you're still drawing the line slightly too low for machines. A light switch is a simple machine. Literally - it's a lever. That's roughly on the same scale as sticks and rocks.

A majority of machines, in fact, are still too simplistic to be said to be conscious.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
Argency
Posts: 203
Joined: Wed May 19, 2010 12:43 am UTC
Location: Brisbane, Australia

Re: Can a machine be conscious?

Postby Argency » Tue Jun 22, 2010 3:03 pm UTC

The lever bit of the light switch is just a lever, you're right. It's the circuitry component that's the "conscious" bit. It takes two input values (whether the light is on or off, and whether you flick the switch or not), and gives you one output value. If the second input is 1, the output is the opposite of the first input, and if it's 0, the output is the same as the first input. That is, if x and y are the input values and f(x,y) is the output value: f(0,0) = 0; f(0,1) = 1; f(1,0) = 1; f(1,1) = 0. So it performs a computational operation which has been developed to increase its fitness (because a light switch that doesn't work like that sucks as a light switch). So in my books, it's conscious.

It shouldn't matter if your method of calculation is chemicals in a brain or electrons in a computer or levers and wires in a lightswitch or rocks lined up in the desert. As long as there's computation aimed at fitness you've got consciousness.
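The truth table above is just exclusive-or; as a quick Python sketch (the function name is invented):

```python
# The light-switch "computation" from the post: the output is the XOR
# of the current light state and whether the switch was flicked.

def light_switch(state, flicked):
    """f(x, y) with f(0,0)=0, f(0,1)=1, f(1,0)=1, f(1,1)=0."""
    return state ^ flicked

table = [light_switch(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(table)  # -> [0, 1, 1, 0]
```

The `^` operator is Python's bitwise XOR, which on 0/1 inputs reproduces the table exactly.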
Gonna be a blank slate, gonna wear a white cape.

User avatar
Xanthir
My HERO!!!
Posts: 5423
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Tue Jun 22, 2010 4:31 pm UTC

All right, then we'll just have to agree that your definition of conscious is functionally useless, because it admits far too much.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
phillipsjk
Posts: 1213
Joined: Wed Nov 05, 2008 4:09 pm UTC
Location: Edmonton AB Canada
Contact:

Re: Can a machine be conscious?

Postby phillipsjk » Tue Jun 22, 2010 5:01 pm UTC

Argency wrote: On a computational assessment, its easy to see why there should be a spectrum of consciousness. So where do we draw the line? I agree that rocks and sticks and atoms aren't conscious. And the reason I agree is that all of life (and I'm taking the broad definition which includes virii) performs computation which is optimised to maximise fitness, whereas natural non-life doesn't.


I agree sticks can't be conscious because they are dead, but how do you know rocks aren't living things on a geologic time-scale? They respond to stimuli by changing form. They reproduce through erosion and the accumulation of sediment.

It can be argued that stars and galaxies are living things with even longer time scales.

If you think this is completely silly, imagine you are a mayfly (living for only a day). How would you decide if a tree is a living thing or inert?

Incidentally, I don't think consciousness is possible without a self-preservation drive. So Virii probably wouldn't count, but worms would. Plants and bacteria would be a grey area.

Modern computers that shut themselves down to avoid thermal run-away would be minimally conscious as well.
Did you get the number on that truck?

User avatar
Xanthir
My HERO!!!
Posts: 5423
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Tue Jun 22, 2010 5:55 pm UTC

phillipsjk wrote:
Argency wrote: On a computational assessment, it's easy to see why there should be a spectrum of consciousness. So where do we draw the line? I agree that rocks and sticks and atoms aren't conscious. And the reason I agree is that all of life (and I'm taking the broad definition which includes virii) performs computation which is optimised to maximise fitness, whereas natural non-life doesn't.


I agree sticks can't be conscious because they are dead, but how do you know rocks aren't living things on a geologic time-scale? They respond to stimuli by changing form. They reproduce through erosion and the accumulation of sediment.

It can be argued that stars and galaxies are living things on even longer time scales.

No, it can't be, not without diluting the word "life" to meaninglessness. I'm serious - at the point where you're calling rocks alive, literally *everything* is alive, and thus the word "alive" is entirely useless. Words and labels only have meaning when they let you discriminate between things.

Viruses are much more alive than rocks, and there is still serious debate over whether it's appropriate to call them alive or not. If you want to argue about where the line should be drawn, it's somewhere between viruses and simple bacteria in terms of complexity. There are then additional criteria for what makes a given lump of complexity "alive" or not, namely the ability to seek out resources and reproduce, with descendants carrying on some traits from the parent.

If you think this is completely silly, imagine you are a mayfly (living for only a day). How would you decide whether a tree is a living thing or inert?

By collecting evidence of trees in various stages of growth around you, and conducting studies over generations to track individual trees. Science is not something done by a single person with only their natural senses.

Incidentally, I don't think consciousness is possible without a self-preservation drive. So Virii probably wouldn't count, but worms would. Plants and bacteria would be a grey area.

Modern computers that shut themselves down to avoid thermal run-away would be minimally conscious as well.

I agree, incidentally.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
phillipsjk
Posts: 1213
Joined: Wed Nov 05, 2008 4:09 pm UTC
Location: Edmonton AB Canada
Contact:

Re: Can a machine be conscious?

Postby phillipsjk » Tue Jun 22, 2010 11:28 pm UTC

No, I don't think including rocks automatically includes "everything." Just as sticks are dead pieces of trees, manufactured products made from rock, like steel, would be inert as well. Much as rotting can return dead wood to the ecosystem, rust can return dead rocks to the ecosystem as well.

We know rocks change over time. Minerals can reproduce as I described; the life cycle is approximately: Lava -> Igneous rock -> Metamorphic rock -> Sedimentary rock. From the Wikipedia igneous rock entry: "Over 700 types of igneous rocks have been described, most of them having formed beneath the surface of Earth's crust. These have diverse properties, depending on their composition and how they were formed." So, they do inherit some traits from their "parents."

Because nobody thinks of rocks as living things, we don't know how complex their "life cycle" really is. On our time scale, the processes may seem really simple, but they could employ complex behaviors we will never comprehend. That said, rocks appear to be relatively passive, so likely lack a self-preservation instinct.
Did you get the number on that truck?

User avatar
Argency
Posts: 203
Joined: Wed May 19, 2010 12:43 am UTC
Location: Brisbane, Australia

Re: Can a machine be conscious?

Postby Argency » Tue Jun 22, 2010 11:43 pm UTC

Xanthir wrote:
phillipsjk wrote:
Argency wrote: On a computational assessment, it's easy to see why there should be a spectrum of consciousness. So where do we draw the line? I agree that rocks and sticks and atoms aren't conscious. And the reason I agree is that all of life (and I'm taking the broad definition which includes virii) performs computation which is optimised to maximise fitness, whereas natural non-life doesn't.


I agree sticks can't be conscious because they are dead, but how do you know rocks aren't living things on a geologic time-scale? They respond to stimuli by changing form. They reproduce through erosion and the accumulation of sediment.

It can be argued that stars and galaxies are living things on even longer time scales.

No, it can't be, not without diluting the word "life" to meaninglessness. I'm serious - at the point where you're calling rocks alive, literally *everything* is alive, and thus the word "alive" is entirely useless. Words and labels only have meaning when they let you discriminate between things.

Viruses are much more alive than rocks, and there is still serious debate over whether it's appropriate to call them alive or not. If you want to argue about where the line should be drawn, it's somewhere between viruses and simple bacteria in terms of complexity. There are then additional criteria for what makes a given lump of complexity "alive" or not, namely the ability to seek out resources and reproduce, with descendants carrying on some traits from the parent.

If you think this is completely silly, imagine you are a mayfly (living for only a day). How would you decide whether a tree is a living thing or inert?

By collecting evidence of trees in various stages of growth around you, and conducting studies over generations to track individual trees. Science is not something done by a single person with only their natural senses.

Incidentally, I don't think consciousness is possible without a self-preservation drive. So Virii probably wouldn't count, but worms would. Plants and bacteria would be a grey area.

Modern computers that shut themselves down to avoid thermal run-away would be minimally conscious as well.

I agree, incidentally.


I agree with almost all of this, Xanthir. I even agree that it makes sense not to call virii alive, because they lack a lot of the properties we associate with living things. But I don't think it makes sense not to call virii at least minimally conscious. And no, we won't agree that my definition of consciousness is functionally useless until you offer some justification on that point.

The reason I think my definition is correct is that I think fitness-aimed computational ability is what we've always been talking about when we talk about consciousness - we've just never really made that association because in all the cases where we can measure processing power directly, there isn't nearly enough to be obviously conscious in the way that humans and dolphins are.

Remember my analogy with heat from before? Of course, when there are only a very few molecules it makes sense to talk about molecular kinetic energy instead of heat. In the same way, when there is only a tiny bit of computational power it makes sense to talk about fitness-aimed processing ability instead of consciousness. But that doesn't mean they aren't the same thing. It's practically useless to say that a virus is minimally conscious (because it's not intelligent), but it's theoretically very useful (because it explains how consciousness works). And since it's a theory of the mind we're talking about, that's pretty important.

Rocks, though, they're not conscious or alive. Anything will change due to the right stimuli, electrons included. You might as well say that the radioactive elements are alive because they reproduce (uranium -> thorium -> protactinium, etc). The criteria for life/consciousness are pretty straightforward and rocks don't fulfil them.
Gonna be a blank slate, gonna wear a white cape.

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Wed Jun 23, 2010 12:13 am UTC

Argency wrote:I disagree about 70%, and I hope you're still kicking around in this thread because it's something I'd really like to discuss.

I think you may have mentioned at some later point that higher mammals have a degree of consciousness? If I'm wrong I apologise.

Yes, I think consciousness exists on a spectrum.
Either way, I think I'm in a fairly strong position in asserting that they do. To argue otherwise you'd have to come up with a pretty trivial definition of consciousness. Now that we've got a spectrum of consciousness established, I think it's fair to assert that the spectrum continues into the lower mammals. It's also fairly absurd to suggest that something like a sheep could be on there and not an octopus or a raven, since the latter two are clearly more conscious than the former.

I agree.
So now we should include all of Aves and Cephalopoda. Why cut the rest of their phyla? In fact, we should cut in all of Animalia, which means we should be fair to the Eukaryotes... You can see where I'm going. Eventually you're going to have all life on a spectrum of consciousness. I'm not suggesting that all life is as conscious as humans, just that they're all at least nominally conscious.

I said something similar in one of my previous posts but with a different conclusion. I don't really think that all life is necessarily on this consciousness spectrum. The alternative is that we reduce the definition of consciousness to perception + reaction. And I don't think that perception + reaction is really sufficient for consciousness. I'd say a dog is conscious, an amoeba is not.
I'm not doing that without any reason, either. I think the computational model of the mind is the only one that makes any kind of sense (Jerry Fodor is a berk), and I'm firmly convinced that whatever the GUT of the mind turns out to be it'll include computationalism. On a computational assessment, it's easy to see why there should be a spectrum of consciousness.

I agree.
So where do we draw the line? I agree that rocks and sticks and atoms aren't conscious. And the reason I agree is that all of life (and I'm taking the broad definition which includes virii) performs computation which is optimised to maximise fitness, whereas natural non-life doesn't. Data gets input, and the process is contrived in such a way as to make the output fitness-enhancing.

That may be a property common to all things that are conscious, that alone doesn't make it equivalent to, or sufficient for consciousness. I agree that it's very difficult to draw a sharp line and any sharp line is guaranteed to be (in at least some respect) arbitrary. This is essentially the sorites paradox. How many grains of sand is a heap? When is a system conscious? Everyone picks their own personal dividing line. However, the absence of a universally definable dividing line does not imply the total absence of division.

Since you also seem to subscribe to computationalism, or at least functionalism, I think you'd agree that a sufficiently intelligent robot would also be conscious (in which case, computers and household appliances (even light switches) are also minimally conscious) even though their fitness is not geared for natural selection but for artificial selection.

I would consider conscious any machine that can demonstrate abilities on par with other things that I consider conscious.
I don't agree concerning household computers, appliances, and light switches. It seems that by following the slippery slope to the low end of the consciousness spectrum you've weakened your definition of consciousness to the point that it is now somewhere between "can respond to external stimulus" and "can perform computation". I'm not sure how light switches got in there since most of them can't do either, they're just two bits of metal on a pivot.

The main point is accepting that consciousness is on a sliding scale. It seems I just have more stringent requirements for consciousness than you do. I do feel that your definition is overly lax though.

User avatar
WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: Can a machine be conscious?

Postby WarDaft » Fri Jun 25, 2010 12:08 am UTC

All right, then we'll just have to agree that your definition of conscious is functionally useless, because it admits far too much.
That does not necessarily mean it is wrong. It could easily be true, in which case we are only really interested in memory-forming consciousnesses. Of course, we're never going to get an answer from a grain of sand as to whether or not it is conscious, so we will probably never know.

We can however, experimentally determine if a virtual information system can possess consciousness. (Or rather, a conscious individual can experimentally confirm it for themselves)

Create a bridge between the physical neurons in your head, and simulated neurons in a computer, such that said bridge interacts with the simulated neurons and your real neurons exactly as other neurons would. The idea is that your brain shouldn't be able to distinguish between more neurons being attached to your brain, and more simulated neurons being attached to your brain (this is possible, it is just a matter of price). Then, slowly prune neurons from your brain, duplicating them in the computer, pushing your thoughts more and more into an unbounded pool of simulated neurons. If you remain awake through the process, you can wait to see if there is any distinguishable decrease in your apparent consciousness (which clearly interacts with the neurons somehow, else we wouldn't be arguing about it, now would we?)



Here's a fun thought (complete non sequitur by the way)

Consider 1-state 2-symbol NDTM:

0: {R0,R1,L0,L1,BH}
1: "

With the note that BH means the branch halts, all others continue.
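To make the branching concrete, here's a rough Python sketch (the helper names are mine, purely illustrative) of this machine's branch tree: at every step each live branch picks one of R0, R1, L0, L1 (write a symbol, then move right or left) or BH, which halts that branch, so the number of live branches grows as 4^n.

```python
# Toy sketch of the 1-state, 2-symbol NDTM above. Each step, every live
# branch splits five ways; the BH choice halts its branch, the other four
# write a bit and move, so live branches multiply by four per step.

from itertools import product

MOVES = ["R0", "R1", "L0", "L1", "BH"]

def run_branch(choices):
    """Run one branch through a sequence of choices from MOVES.
    Returns the tape (dict: position -> symbol) and whether it halted."""
    tape, pos = {}, 0
    for c in choices:
        if c == "BH":
            return tape, True
        tape[pos] = int(c[1])            # write 0 or 1
        pos += 1 if c[0] == "R" else -1  # move right or left
    return tape, False

def count_branches(depth):
    """Count live (never-halted) branches after `depth` steps: 4**depth."""
    return sum(1 for ch in product(MOVES, repeat=depth)
               if not run_branch(ch)[1])
```

After three steps there are 4^3 = 64 live branches (and a growing pile of halted ones), which is the exponential blow-up the rest of the argument leans on.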

The Bekenstein bound says that any given finite space has a finite bound on information based on the matter and energy in that space. The densest is for a black hole, but it is still finite for any finite energy/space combination. That means that any finite space with finite energy therein has a finite number of different states it can be in. It can only transition to a new unique state so many times before it necessarily loops back to some earlier state (this would happen when it reaches its personal heat death and is just transitioning between probabilistically equivalent extreme entropy states, and in such a case it would not loop back very far... relatively speaking). This means that after a finite amount of time every state it is going to reach will be reached, bounded absolutely by a double exponent of the area enclosing the space. This means its entire cycle can be represented with a finite amount of information. This means there is a Turing machine which A) outputs that information and B) calculates through that information on the way to some other output. Let us call this concept your "Turing Prison".
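The "must eventually loop back" step is just the pigeonhole principle, and it's easy to illustrate with a toy in Python (the update rule below is an arbitrary made-up example, nothing physical):

```python
# Pigeonhole illustration: iterate any deterministic update over a finite
# state space and some state must eventually be revisited, after which the
# trajectory cycles forever.

def find_cycle(step, start):
    """Return (time_of_first_repeat, cycle_length) for iterating `step`."""
    seen = {}          # state -> first time it was visited
    state, t = start, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return t, t - seen[state]

# A toy update on 10 states must repeat within 10 steps: 0 -> 1 -> 4 -> 3 -> 0.
first_repeat, cycle_len = find_cycle(lambda s: (3 * s + 1) % 10, 0)
```

With N possible states the repeat is guaranteed within N steps; the universe-sized version of N is just (double-exponentially) bigger.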

Our Turing machine will contain a branch that mimics every possible Turing Machine obtainable after any countable number of Turing jumps. Your Turing Prison does not even need one jump.

This is like the many-worlds hypothesis... heck, it is it, on overkill.
There is a branch (a set of them actually) of the execution of such a machine which is your Turing Prison with you doing everything you physically can in this universe to prove you are not being emulated by said machine.

Note that the lack of all sorts of totally random things happening is not evidence of you not being in such a machine, you could think that in any branch it doesn't happen in and the only yous continuing to think that are the yous in branches continuing to "behave" (or the branches where you are insane).

So, if we presume information can be conscious, can a branch of the proposed NDTM be conscious as well?
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

User avatar
Xanthir
My HERO!!!
Posts: 5423
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Fri Jun 25, 2010 12:35 am UTC

You're jumping around with language too much for me, but if I'm deciphering what you're saying correctly, you're talking about simulating all possible configurations of a quantity of energy in a region, which (assuming the region and energy are both large enough) includes every possible state of my body in every moment of every history I could possibly experience? (It also contains every other possible human, along with every other entity that could exist in that volume and is composed of that much mass-energy or less.)

Yes, the simulation of me is conscious in all the appropriate branches.

Did you think I would disagree?
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: Can a machine be conscious?

Postby WarDaft » Fri Jun 25, 2010 1:43 am UTC

Actually no, I'm talking about simulating every possible computation of a finite number of steps with a countable alphabet.

I used the Bekenstein bound to show that our universe is almost surely represented in a subset of that (or we really don't know our universe very well at all)

The reason the question is ambiguous is because the machine is actually just the set of real numbers, and makes no actual decisions. (To clarify for all involved, in this case decision means the machine has a certain set of abilities, and selects one based on some criteria)

To see this, consider base four: R0 is 0, R1 is 1, L0 is 2, L1 is 3, and we don't even need BH (or we could use it to specify the rationals, take your pick.) For every step, the machine splits into 5 other machines, one that halts, and one for each of the possible numerals for the next digit... so there is no step where there are no further branches. Each real can be paired with not just the result of the calculation but actually is one of the calculation branches.
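A quick sketch of that pairing (function name is mine, purely illustrative): map each non-halting move to a base-4 digit and read a branch's choice sequence as the base-4 expansion of a number in [0, 1).

```python
# WarDaft's base-four pairing: R0 -> 0, R1 -> 1, L0 -> 2, L1 -> 3, so each
# branch's choice sequence is the digit string of a base-4 fraction. A
# halting (BH) branch corresponds to a terminating, i.e. rational, expansion.

DIGIT = {"R0": 0, "R1": 1, "L0": 2, "L1": 3}

def branch_to_real(moves):
    """Map a finite branch prefix to the base-4 value 0.d1 d2 d3 ..."""
    return sum(DIGIT[m] * 4 ** -(i + 1) for i, m in enumerate(moves))
```

For example the single move L0 is the digit 2, i.e. 2/4 = 0.5, and R1 then L1 is 1/4 + 3/16 = 0.4375.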
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

User avatar
Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Can a machine be conscious?

Postby Yakk » Fri Jun 25, 2010 7:21 pm UTC

Why are you messing up your model with R0, R1, L0, L1? It seems to me you needlessly multiplied states. So either you are confused (and are talking about a needlessly complex TM), or I'm confused (and those states actually do something interesting).
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

User avatar
Xanthir
My HERO!!!
Posts: 5423
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex
Contact:

Re: Can a machine be conscious?

Postby Xanthir » Fri Jun 25, 2010 10:13 pm UTC

WarDaft wrote:Actually no, I'm talking about simulating every possible computation of a finite number of steps with a countable alphabet.

I used the Bekenstein bound to show that our universe is almost surely represented in a subset of that (or we really don't know our universe very well at all)

The reason the question is ambiguous is because the machine is actually just the set of real numbers, and makes no actual decisions. (To clarify for all involved, in this case decision means the machine has a certain set of abilities, and selects one based on some criteria)

To see this, consider base four: R0 is 0, R1 is 1, L0 is 2, L1 is 3, and we don't even need BH (or we could use it to specify the rationals, take your pick.) For every step, the machine splits into 5 other machines, one that halts, and one for each of the possible numerals for the next digit... so there is no step where there are no further branches. Each real can be paired with not just the result of the calculation but actually is one of the calculation branches.

Actually computing one of those numbers would indeed be equivalent to simulating the universe. Is it necessary to even use the real numbers, though? If the universe simulations eventually loop, we can cut them off there. Then we have a finite-depth tree of Turing-machine histories, which map to the integers, not the reals.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

User avatar
Argency
Posts: 203
Joined: Wed May 19, 2010 12:43 am UTC
Location: Brisbane, Australia

Re: Can a machine be conscious?

Postby Argency » Sat Jun 26, 2010 6:52 am UTC

0xBADFEED wrote:
Spoiler:
Argency wrote:I disagree about 70%, and I hope you're still kicking around in this thread because it's something I'd really like to discuss.

I think you may have mentioned at some later point that higher mammals have a degree of consciousness? If I'm wrong I apologise.

Yes, I think consciousness exists on a spectrum.
Either way, I think I'm in a fairly strong position in asserting that they do. To argue otherwise you'd have to come up with a pretty trivial definition of consciousness. Now that we've got a spectrum of consciousness established, I think it's fair to assert that the spectrum continues into the lower mammals. It's also fairly absurd to suggest that something like a sheep could be on there and not an octopus or a raven, since the latter two are clearly more conscious than the former.

I agree.
So now we should include all of Aves and Cephalopoda. Why cut the rest of their phyla? In fact, we should cut in all of Animalia, which means we should be fair to the Eukaryotes... You can see where I'm going. Eventually you're going to have all life on a spectrum of consciousness. I'm not suggesting that all life is as conscious as humans, just that they're all at least nominally conscious.

I said something similar in one of my previous posts but with a different conclusion. I don't really think that all life is necessarily on this consciousness spectrum. The alternative is that we reduce the definition of consciousness to perception + reaction. And I don't think that perception + reaction is really sufficient for consciousness. I'd say a dog is conscious, an amoeba is not.
I'm not doing that without any reason, either. I think the computational model of the mind is the only one that makes any kind of sense (Jerry Fodor is a berk), and I'm firmly convinced that whatever the GUT of the mind turns out to be it'll include computationalism. On a computational assessment, it's easy to see why there should be a spectrum of consciousness.

I agree.
So where do we draw the line? I agree that rocks and sticks and atoms aren't conscious. And the reason I agree is that all of life (and I'm taking the broad definition which includes virii) performs computation which is optimised to maximise fitness, whereas natural non-life doesn't. Data gets input, and the process is contrived in such a way as to make the output fitness-enhancing.

That may be a property common to all things that are conscious, that alone doesn't make it equivalent to, or sufficient for consciousness. I agree that it's very difficult to draw a sharp line and any sharp line is guaranteed to be (in at least some respect) arbitrary. This is essentially the sorites paradox. How many grains of sand is a heap? When is a system conscious? Everyone picks their own personal dividing line. However, the absence of a universally definable dividing line does not imply the total absence of division.

Since you also seem to subscribe to computationalism, or at least functionalism, I think you'd agree that a sufficiently intelligent robot would also be conscious (in which case, computers and household appliances (even light switches) are also minimally conscious) even though their fitness is not geared for natural selection but for artificial selection.

I would consider conscious any machine that can demonstrate abilities on par with other things that I consider conscious.
I don't agree concerning household computers, appliances, and light switches. It seems that by following the slippery slope to the low end of the consciousness spectrum you've weakened your definition of consciousness to the point that it is now somewhere between "can respond to external stimulus" and "can perform computation". I'm not sure how light switches got in there since most of them can't do either, they're just two bits of metal on a pivot.

The main point is accepting that consciousness is on a sliding scale. It seems I just have more stringent requirements for consciousness than you do. I do feel that your definition is overly lax though.


Your definition of consciousness is fine for day-to-day conversation but it won't work for science or philosophy because it's circular. "Things are conscious if they act like things that are conscious". I think my conclusion only seems silly because you're still holding onto an incorrect definition of consciousness. To say that every fitness-aimed computational system is conscious IS silly if you take conscious to mean "capable of complex interactions like a human or higher mammal". But that's not what I'm saying. All I'm saying is that humans and higher mammals are capable of those complex interactions because they're very, very conscious; astronomically more conscious than minimally conscious things like computers, virii or even light switches.

So your example with the pile of sand is true, but it argues in my favour: a pile of sand is a grouping of sand grains, and any number of sand grains (even one) can constitute a pile. But, as with my water temperature example, you wouldn't call one grain of sand a pile because it's not useful in any way to do so.

Of course, I agree that the terminology gets a bit confusing at this point. You're probably right, we should invent another term for fitness-aimed computational ability so that we can set an arbitrary cut-off point below which fitness-aimed computational systems are no longer termed to be conscious. But on the topic of this thread, it is important to remember that consciousness is just extreme computational ability aimed at enhancing fitness, because if you want to build a conscious computer you need to do it from the ground up, which means you need to talk about consciousness in terms of computational ability.

So - even though three grains of sand is technically a pile, you wouldn't talk about it like that because it isn't useful. Even though three molecules of water technically have a temperature, you wouldn't talk about them that way because it isn't useful. Same applies to computational ability vs. consciousness.

Really I think we agree on everything except our definitions. I'm arguing for my definition because it makes a useful point about machine consciousness - in day to day life I'd use the same definition as you, but I'd keep in the back of my mind that that definition is only good down to a certain level of analysis.

EDIT: Haha, slippery slope. I see what you did there. I'm not sure it's the same though, because a slippery slope fallacy argues that since the difference between t and (t+1) is negligible, then the difference between t and (t+x) must be negligible. I'm arguing that the difference between t and (t+1) isn't negligible, but that they're on the same spectrum, and that therefore t and (t+x) are on the same spectrum.
Gonna be a blank slate, gonna wear a white cape.

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Sat Jun 26, 2010 4:16 pm UTC

Argency wrote:Your definition of consciousness is fine for day-to-day conversation but it won't work for science or philosophy because it's circular. "Things are conscious if they act like things that are conscious". I think my conclusion only seems silly because you're still holding onto an incorrect definition of consciousness. To say that every fitness-aimed computational system is conscious IS silly if you take conscious to mean "capable of complex interactions like a human or higher mammal". But that's not what I'm saying.

I'm sorry, I should have been more clear. You misunderstood me. That wasn't my requirement for consciousness. I was just saying I have no problems calling a machine conscious if it meets my standards. I've stated previously several times in this thread what I require to consider a system conscious but I'll repost them. A conscious system must be able to:
  • Perceive its environment
  • Make decisions based on those perceptions
  • Remember cause and effect relationships based on the perception->decision
  • Abstract these relationships to create general propositions about its environment
  • Maintain an internal model to predict future cause->effect relationships
Basically, for me, consciousness boils down to being able to interact with some external environment while maintaining an internal conceptual model of that environment. So even something like cleverbot might be said to be minimally conscious.
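Just to make those five requirements concrete, here's a toy Python sketch of an agent loop shaped like them. Every class and method name here is hypothetical; this isn't a claim about any real system, just the bullet points as code.

```python
# Illustrative toy agent matching the five criteria above: perceive, decide,
# remember cause/effect, abstract general rules, and keep a predictive model.

class ToyAgent:
    def __init__(self):
        self.memory = []   # remembered (percept, action, outcome) triples
        self.model = {}    # abstracted rules: percept -> preferred action

    def perceive(self, environment):
        return environment["signal"]          # 1. perceive the environment

    def decide(self, percept):
        # 2. decide based on perception, using the internal model when it
        #    covers this percept, falling back to a default action otherwise
        return self.model.get(percept, "explore")

    def remember(self, percept, action, outcome):
        # 3. remember perception -> decision -> outcome relationships
        self.memory.append((percept, action, outcome))

    def abstract(self):
        # 4. generalise memories into rules about the environment;
        # 5. those rules double as the internal model used for prediction
        for percept, action, outcome in self.memory:
            if outcome == "good":
                self.model[percept] = action

    def predict(self, percept):
        return self.model.get(percept)        # 5. predict future cause/effect
```

On this reading, the interesting question is how much machinery each bullet really demands, which is exactly where cleverbot-style edge cases live.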
All I'm saying is that humans and higher mammals are capable of those complex interactions because they're very, very conscious; astronomically more conscious than minimally conscious things like computers, virii or even light switches.

At the point that your definition of consciousness starts to include two bits of metal on a pivot you need to either re-examine your definition, or come up with a new word. It seems more like you've latched on to a common property of systems that are traditionally considered conscious and are asserting that the presence of that property in any system constitutes consciousness.
I'm just not convinced that's true since you've offered no evidence apart from hand-waving. I think your definition admits far too much. The edge-cases for consciousness using your definition (according to you) are light switches and toasters, the edge-cases for my definition are things like cleverbot or lower-order organisms. I think the usefulness of a "consciousness" definition is best measured by the edge cases it creates, rather than what things it unifies. Because your definition includes nearly everything I don't find it all that useful. That is, the boundary cases of your definition are not very interesting.

So your example with the pile of sand is true, but it argues in my favour. Since a pile of sand is a grouping of sand grains, and any number of sand grains (even 1) can constitute a pile. But, like my water temperature example, you wouldn't call one grain of sand a pile because it's not useful in any way to do so.

I don't see how it's in any way an argument in your favor.
One item is never a heap/pile. That's the whole point of the paradox. If we were comfortable calling 1 item a heap it wouldn't be a paradox. It isn't that it's "not useful" to call one grain of sand a heap, it's incorrect by the definition of "heap". The point of the paradox is to show difficulties in classifying vague predicates like "is a heap" (or "is conscious" in our case) using true/false sharp boundaries, and how easily we can get ourselves into nonsensical situations if we blindly follow an implication without taking a more holistic view. In the case of the sand, 1 grain of sand is definitely not a heap but a million grains of sand are a heap. Somewhere there is a division between the two heap-ness cases but it's impossible to say on a per-grain basis at what point the sand goes from being a heap to not a heap.
For me consciousness is much the same way. Mammals are conscious. Amoebas are not conscious. There is a division somewhere on the spectrum between mammals and amoebas (according to my definition). I can't draw a sharp division on the spectrum because the predicate is much too vague, but a division exists somewhere.

Of course, I agree that the terminology gets a bit confusing at this point. You're probably right, we should invent another term for fitness-aimed computational ability so that we can set an arbitrary cut-off point below which fitness-aimed computational systems are no longer termed to be conscious. But on the topic of this thread, it is important to remember that consciousness is just extreme computational ability aimed at enhancing fitness because if you want to build a conscious computer you need to do it from the ground up, which means you need to talk about consciousness in terms of computational ability.

I don't really agree with this premise.

"Enhancing fitness" doesn't even mean anything unless you first define the context of fitness. If you say it's "enhancing fitness" it has to be enhancing fitness with respect to something. Fitness for what? Is there a universal fitness context that you're talking about? Is the context of fitness defined on a per-entity basis? If that's the case who gets to decide on the pairings of entities to fitness contexts?

Also you haven't really given any example or evidence to show that all consciousness is aimed at "enhancing fitness". Or is any system not aimed at "enhancing fitness" not conscious by definition? I'm conscious (I think), what is my fitness context?

Argency
Posts: 203
Joined: Wed May 19, 2010 12:43 am UTC
Location: Brisbane, Australia

Re: Can a machine be conscious?

Postby Argency » Mon Jun 28, 2010 10:56 am UTC

0xBADFEED wrote:
Argency wrote:Your definition of consciousness is fine for day to day conversation but it won't work for science or philosophy because it's circular. "Things are conscious if they act like things that are conscious". I think my conclusion only seems silly because you're still holding onto an incorrect definition of consciousness. To say that every fitness-aimed computational system is consciousness IS silly if you take conscious to mean "capable of complex interactions like a human or higher mammal". But that's not what I'm saying.

I'm sorry, I should have been more clear. You misunderstood me. That wasn't my requirement for consciousness. I was just saying I have no problem calling a machine conscious if it meets my standards. I've stated my requirements several times previously in this thread, but I'll repost them. A conscious system must be able to:
  • Perceive its environment
  • Make decisions based on those perceptions
  • Remember cause-and-effect relationships based on those perception->decision pairs
  • Abstract these relationships to create general propositions about its environment
  • Maintain an internal model to predict future cause->effect relationships
Basically, for me, consciousness boils down to being able to interact with some external environment while maintaining an internal conceptual model of that environment. So even something like cleverbot might be said to be minimally conscious.
All I'm saying is that humans and higher mammals are capable of those complex interactions because they're very, very conscious; astronomically more conscious than minimally conscious things like computers, viruses or even light switches.

At the point that your definition of consciousness starts to include two bits of metal on a pivot you need to either re-examine your definition, or come up with a new word. It seems more like you've latched on to a common property of systems that are traditionally considered conscious and are asserting that the presence of that property in any system constitutes consciousness.
I'm just not convinced that's true since you've offered no evidence apart from hand-waving. I think your definition admits far too much. The edge-cases for consciousness using your definition (according to you) are light switches and toasters, the edge-cases for my definition are things like cleverbot or lower-order organisms. I think the usefulness of a "consciousness" definition is best measured by the edge cases it creates, rather than what things it unifies. Because your definition includes nearly everything I don't find it all that useful. That is, the boundary cases of your definition are not very interesting.

So your example with the pile of sand is true, but it argues in my favour, since a pile of sand is a grouping of sand grains and any number of sand grains (even one) can constitute a pile. But, like my water temperature example, you wouldn't call one grain of sand a pile, because it's not useful in any way to do so.

I don't see how it's in any way an argument in your favor.
One item is never a heap/pile. That's the whole point of the paradox. If we were comfortable calling 1 item a heap it wouldn't be a paradox. It isn't that it's "not useful" to call one grain of sand a heap, it's incorrect by the definition of "heap". The point of the paradox is to show difficulties in classifying vague predicates like "is a heap" (or "is conscious" in our case) using true/false sharp boundaries, and how easily we can get ourselves into nonsensical situations if we blindly follow an implication without taking a more holistic view. In the case of the sand, 1 grain of sand is definitely not a heap but a million grains of sand are a heap. Somewhere there is a division between the two heap-ness cases but it's impossible to say on a per-grain basis at what point the sand goes from being a heap to not a heap.
For me consciousness is much the same way. Mammals are conscious. Amoebas are not conscious. There is a division somewhere on the spectrum between mammals and amoebas (according to my definition). I can't draw a sharp division on the spectrum because the predicate is much too vague, but a division exists somewhere.

Of course, I agree that the terminology gets a bit confusing at this point. You're probably right, we should invent another term for fitness-aimed computational ability so that we can set an arbitrary cut-off point below which fitness-aimed computational systems are no longer termed to be conscious. But on the topic of this thread, it is important to remember that consciousness is just extreme computational ability aimed at enhancing fitness because if you want to build a conscious computer you need to do it from the ground up, which means you need to talk about consciousness in terms of computational ability.

I don't really agree with this premise.

"Enhancing fitness" doesn't even mean anything unless you first define the context of fitness. If you say it's "enhancing fitness" it has to be enhancing fitness with respect to something. Fitness for what? Is there a universal fitness context that you're talking about? Is the context of fitness defined on a per-entity basis? If that's the case who gets to decide on the pairings of entities to fitness contexts?

Also you haven't really given any example or evidence to show that all consciousness is aimed at "enhancing fitness". Or is any system not aimed at "enhancing fitness" not conscious by definition? I'm conscious (I think), what is my fitness context?


Ok, well, if you want to define consciousness that way then I'll agree to change my terminology to suit. But the most important point (the one which directly relates to the thread) still stands - consciousness is no more than a very complex computational process aimed at enhancing fitness. In this case, fitness is the ability to survive on planet Earth. So to answer your question, yes, any system not aimed at enhancing fitness is by definition not conscious. Similarly, any system which is not computational cannot, by definition, be conscious.

So, sure, if you think that all conscious things must have the particular attributes that you listed, that's fine. As long as you remember that those criteria are arbitrary, and that a barely conscious organism and a barely unconscious organism (say, cleverbot and a walkman) are much more similar to one another than a barely conscious organism is to a massively conscious one (say, cleverbot and a human). In other words, as long as you remember that your definition is a fuzzy-edged label designed to make conversation easier, I'm happy to use it as such.

But the reason I was using my definition (and yes, it is a much less discerning one, allowing for some pretty trivial cases to be technically termed consciousness) is because my definition is based on the causal mechanism of consciousness. It explains how consciousness arises. If you agree that a machine could be conscious, then you surely must agree on how consciousness comes about. And if you agree with that, you must also agree with me that what you call consciousness is just a very complex example of the sort of process that takes place in any computational device, and that there's nothing extra added in to create consciousness. That's the only real reason I was making the point, and if you were operating on a different terminology then I guess I'm happy to adopt yours for the purposes of the argument. Remember that my original post was replying to your argument against somebody who was presumably going by the same definition as me - I guess I assumed that you would too once the case had been made for it.

You pointed out that I haven't offered any evidence for my definition - that's because definitions aren't proven, they're justified. Your definition is conversationally justified, mine's analytically justified. Both are useful in different ways. Surely you can see why it's useful to base your definition on the causes of the phenomenon you're investigating, from a scientific/engineering standpoint?

On the point of heaps of sand - I think the reason we disagree on that one is the same reason we disagree on this topic as a whole. My solution to the paradox is just to say that any grouping of sand grains is nominally a heap, even if very small groupings can more usefully be referred to by their individual grains. Yours seems to be to define some arbitrary point at which a heap ceases to be a heap. You could presumably define a number of water molecules below which a droplet was no longer said to have a temperature. In all three cases, your definition makes conversation easier, and mine makes causal analysis easier.

Finally - light switches. Just to be clear, the reason I'm talking about them is because they're an example of a logic gate, the most basic processing module in all artificial systems. So all this talk of them being just a couple of pieces of metal on a pivot is beside the point. They could be made of bits of pocket lint for all I care, as long as they perform a computational function designed to increase their survivability (because non-functional light switches get replaced).
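The light-switch point can be sketched in code as a hypothetical illustration (all names here are invented, not from the thread): viewed abstractly, a switch is a one-bit stateful element, and flipping it is a NOT applied to its own state, which is the minimal computation there is.

```python
# Hypothetical sketch: a light switch modeled as a one-bit computational
# element. Names and structure are illustrative only.

class LightSwitch:
    """Two pieces of metal on a pivot, viewed as a 1-bit gate with memory."""

    def __init__(self):
        self.on = False  # the switch's single bit of state

    def flip(self):
        # NOT applied to its own state: the minimal computation.
        self.on = not self.on
        return self.on

switch = LightSwitch()
assert switch.flip() is True   # off -> on
assert switch.flip() is False  # on -> off
```

Whether a one-bit element like this counts as "minimally conscious" is exactly the definitional question the two posters are arguing over.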
Gonna be a blank slate, gonna wear a white cape.

0xBADFEED
Posts: 687
Joined: Mon May 05, 2008 2:14 am UTC

Re: Can a machine be conscious?

Postby 0xBADFEED » Mon Jun 28, 2010 2:57 pm UTC

Argency wrote:Ok, well, if you want to define consciousness that way then I'll agree to change my terminology to suit. But the most important point (the one which directly relates to the thread) still stands - consciousness is no more than a very complex computational process aimed at enhancing fitness. In this case, fitness is the ability to survive on planet Earth. So to answer your question, yes, any system not aimed at enhancing fitness is by definition not conscious. Similarly, any system which is not computational cannot, by definition, be conscious.

Imagine a machine that possesses to a great degree all of the abilities I list in my consciousness requirements. To the point that you could converse with it in a very natural and convincingly human way. Yet it does not possess a survival drive because it was never given one, it has no predators, teams of technicians take care of its maintenance, and it cannot be shut off. It has no impetus to enhance its ability to survive because its survival is assured.

Is this system conscious? I would say so, as it has all the abilities I would associate with high-level consciousness. But if it is not "enhancing fitness for survival" then it is not conscious by your definition.
What would you say of this machine?

So, sure, if you think that all conscious things must have the particular attributes that you listed, that's fine. As long as you remember that those criteria are arbitrary, and that a barely conscious organism and a barely unconscious organism (say, cleverbot and a walkman) are much more similar to one another than a barely conscious organism is to a massively conscious one (say, cleverbot and a human). In other words, as long as you remember that your definition is a fuzzy-edged label designed to make conversation easier, I'm happy to use it as such.

Sure, the qualities I listed are arbitrary. But no more arbitrary than your requirement of "fitness-aimed computation". There's a word for "fitness-aimed computation": optimization. But like "fitness-aimed computation", "optimization" doesn't mean anything without an optimization context. Since your fitness computation is aimed at survival, your definition might be better termed "survival optimization". I'm going to use "survival optimization" from now on to refer to your definition to make things clearer. I hope that's OK with you.
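The claim that optimization means nothing without a context can be made concrete with a hypothetical sketch (names invented for illustration): a generic hill climber is inert until you hand it a fitness function, and two different fitness contexts drive the same optimizer to completely different answers.

```python
import random

# Illustrative sketch: optimization is always *with respect to* some
# objective. The optimizer is generic; only the supplied fitness
# function gives "better" any meaning.

random.seed(0)  # deterministic for the example

def hill_climb(fitness, start, step=0.1, iterations=1000):
    """Greedy 1-D hill climbing over the supplied fitness context."""
    x = start
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) > fitness(x):  # "better" is defined by fitness
            x = candidate
    return x

# Same optimizer, two different "fitness contexts", two different answers.
peak_at_3 = hill_climb(lambda x: -(x - 3) ** 2, start=0.0)
peak_at_7 = hill_climb(lambda x: -(x - 7) ** 2, start=0.0)
assert abs(peak_at_3 - 3) < 0.5
assert abs(peak_at_7 - 7) < 0.5
```

The point of the sketch is just that "aimed at enhancing fitness" presupposes an objective function; survival is one choice of objective among many.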

But the reason I was using my definition (and yes, it is a much less discerning one, allowing for some pretty trivial cases to be technically termed consciousness) is because my definition is based on the causal mechanism of consciousness. It explains how consciousness arises. If you agree that a machine could be conscious, then you surely must agree on how consciousness comes about. And if you agree with that, you must also agree with me that what you call consciousness is just a very complex example of the sort of process that takes place in any computational device, and that there's nothing extra added in to create consciousness.

I think herein lies a major problem I have with your definition: it attempts to muddle mechanism with result. Surely when we talk about consciousness we care about systems that appear "conscious" regardless of the consciousness mechanism. I'll agree that survival optimization has probably been a major causal mechanism for consciousness as we know it, but I think it's absurdly premature to say that survival optimization is the only mechanism of consciousness or that survival optimization equals consciousness. I also agree that computation alone is sufficient to carry out all the processes of consciousness. I can't agree with you about how consciousness comes about, because I have no idea how it comes about. I agree that survival optimization seems to be a potential avenue for consciousness creation, I'm just not convinced it's the only one, as you seem to be.

That's the only real reason I was making the point, and if you were operating on a different terminology then I guess I'm happy to adopt yours for the purposes of the argument. Remember that my original post was replying to your argument against somebody who was presumably going by the same definition as me - I guess I assumed that you would too once the case had been made for it.

Sorry if I was unclear. When I say a "conscious" system I mean a system that is conscious according to my definition. I'm not sure what definition he was going by, as he never offered a detailed explanation of his views.

You pointed out that I haven't offered any evidence for my definition - that's because definitions aren't proven, they're justified. Your definition is conversationally justified, mine's analytically justified.

I'm not asking you to prove your definition. That would be absurd. I'm asking for exactly what you said, justification for why you believe what you believe. What leads you to believe that consciousness can only exist in systems aimed at survival optimization?
Both are useful in different ways. Surely you can see why it's useful to base your definition on the causes of the phenomenon you're investigating, from a scientific/engineering standpoint?

No, and I think that's a major weakness of this definition. Conflating mechanism and result makes your definition weaker, not stronger. Consciousness (by my definition and probably most interpretations) is an observed property of systems. Keeping the observed property separate from the mechanism is vital. This is especially true in science. Lots of similar observations can have very different mechanisms and vice-versa. If you load the definition of "consciousness" with a particular mechanism then you've needlessly restricted the observed phenomenon you're talking about.
On the point of heaps of sand - I think the reason we disagree on that one is the same reason we disagree on this topic as a whole. My solution to the paradox is just to say that any grouping of sand grains is nominally a heap, even if very small groupings can more usefully be referred to by their individual grains.

The problem is that this isn't a solution, because it leaves you in an incorrect state at the end. All groupings of sand are not "nominally heaps", because they do not satisfy the definition of a heap. There is an alternative formulation of the paradox which might make this more apparent:
Is 1 grain of sand a heap of sand? (No, by definition of a heap)
If we add one, are 2 grains of sand a heap? No.
If we add one, are 3 grains of sand a heap? No.
.... (ad infinitum)
Therefore no matter how many grains of sand we add, we will never have a heap. Therefore, heaps don't exist.

Your "solution" is functionally equivalent to agreeing that "heaps don't exist". But of course heaps exist, so it is a non-solution. I understand what you're saying, but you're basically sacrificing correctness for convenience.
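A hypothetical sketch makes the structure of the induction concrete (the threshold value is an arbitrary choice for illustration): any sharp-cutoff definition of "heap" escapes the paradox by making the induction step "n grains is not a heap, therefore n+1 grains is not a heap" false at exactly one value of n.

```python
# Illustrative sketch of the sorites induction. The threshold (100) is an
# arbitrary choice; the point is that *any* sharp cutoff breaks the
# induction step at exactly one value of n.

HEAP_THRESHOLD = 100  # hypothetical sharp boundary

def is_heap(grains):
    return grains >= HEAP_THRESHOLD

# Find every n where "not a heap at n" fails to imply "not a heap at n+1".
failures = [n for n in range(1, 1000)
            if not is_heap(n) and is_heap(n + 1)]
assert failures == [HEAP_THRESHOLD - 1]  # exactly the boundary
```

The disagreement in the thread maps onto this directly: one side accepts an (admittedly arbitrary) sharp cutoff; the other refuses any cutoff, which is what leads to calling a single grain a "nominal heap".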
Yours seems to be to define some arbitrary point at which a heap ceases to be a heap. You could presumably define a number of water molecules below which a droplet was no longer said to have a temperature. In all three cases, your definition makes conversation easier, and mine makes causal analysis easier.

The analogy to water molecules is completely bogus because the predicates "is a heap" and "temperature" are completely different. Temperature is an exact predicate with a definite answer for any number of water molecules even if that answer isn't "useful". "Is a heap?" on the other hand has an element of subjectivity which makes its placement on a continuum with any authority impossible. It's apples and oranges.

I think you're right in that survival optimization has played a big part in consciousness as we know it. I just don't see it as necessary or sufficient to describe consciousness in the large.

