The first rule of consciousness?


Postby TrlstanC » Fri Sep 14, 2012 2:52 pm UTC

As I've read more about attempts to replicate human consciousness, or at least intelligence, as well as attempts to set a standard for consciousness, whether artificial or in other animals, one area seems overlooked to me: the fact that our understanding of the world is based on our conscious experience. We should always remember that our consciousness is limited, so not only could the world be different than we think, but our consciousness itself could be different than we realize. So whenever I think about consciousness, I try to remember this rule:

Ri >> Ru >> Rc

or

the Rate at which useful information becomes available >> the Rate at which a brain can make use of it >> the Rate at which we can consciously understand it.

Basically, we can only be rational about a very small amount of our thinking and decisions. Most of our understanding of the world will happen subconsciously, and most of that will be based on habit, bias and shortcuts. When I look at most theories of consciousness or intelligence (or what we could call conscious intelligence, intelligence we're aware of), there's very little acknowledgement of this. And attempts at artificial intelligence mostly seem to be based on capturing huge amounts of information digitally, and then processing all of it. Which might be a good strategy to complement human intelligence, but it doesn't seem like the right strategy to duplicate or study human intelligence.
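To put rough numbers on the rule, here's a quick sketch using often-cited ballpark figures (the specific values are my assumptions, not measurements; only the ordering matters):

```python
# Ballpark rates for the rule Ri >> Ru >> Rc.
# These numbers are illustrative estimates only.
R_i = 1e9    # bits/s of useful information available in the environment
R_u = 1e7    # bits/s a brain's senses can take in (roughly optic-nerve scale)
R_c = 50.0   # bits/s of conscious processing, per commonly cited estimates

print(R_i > R_u > R_c)   # True: each step drops by orders of magnitude
print(R_u / R_c)         # 200000.0: the conscious bottleneck alone
```

Even if each individual estimate is off by a factor of ten, the inequality survives.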

Although I have seen some projects that seem to be going in the right direction, like the million-ARM-processor model of the human brain: using a million low-power chips to model neurons, without relying on an internal clock, and giving up the idea of repeatability.
Last edited by TrlstanC on Fri Sep 14, 2012 5:28 pm UTC, edited 1 time in total.
Defining Consciousness - discussion on a theory of consciousness. Are conscious thoughts a prediction of what you would say to yourself?

The Bias Against Creativity: Why People Desire But Reject Creative Ideas
TrlstanC
Flexo
 
Posts: 358
Joined: Thu Oct 08, 2009 5:16 pm UTC
Location: Boston, MA

Re: The first rule of consciousness?

Postby Felstaff » Fri Sep 14, 2012 3:45 pm UTC

You do not talk about consciousness.

Also - fixed your URL:

http://www.kurzweilai.net/low-power-chi ... on-neurons
Habent sua fata libelli et balli
Felstaff
Occam's Taser
 
Posts: 4920
Joined: Fri Feb 01, 2008 7:10 pm UTC
Location: ¢ ₪ ¿ ¶ § ∴ ® © ™ ؟ ¡ ‽ æ Þ ° ₰ ₤ ಡಢ

Re: The first rule of consciousness?

Postby TrlstanC » Fri Sep 14, 2012 5:32 pm UTC

If anyone missed the point: Rc, the rate at which we can consciously understand the information available to us, is much, much less than the rate at which that information becomes available.

People often assume that we're rational, or mostly rational; that we consciously consider our choices, or are at least aware enough to know when we have important choices so that we can devote some conscious attention to them. I think this couldn't be further from the truth. I'd go so far as to say that when I write this, and when anyone reads and/or responds to it, the percentage of that action that is controlled consciously is very small.

Re: The first rule of consciousness?

Postby Tyndmyr » Fri Sep 14, 2012 5:54 pm UTC

Just because an action is unconscious does not mean it is irrational.

Most of our automatic, unconscious actions are actually the rational choice most of the time. Pace of heartbeat, breathing, vision tracking of movement, etc...we do indeed have a great many unconscious actions that happen basically constantly. Thing is, the stuff that's unconscious is mostly stuff that's extremely repetitive, and that would almost invariably be a waste of brainpower to focus on. From the standpoint of making correct decisions, it is only logical to focus our problem-solving ability on the decisions that are most uncertain, and to automate the rest as much as possible.

We are, overall, mostly pretty rational creatures even if unusual circumstances do occasionally muck things up.
Tyndmyr
 
Posts: 4309
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: The first rule of consciousness?

Postby TrlstanC » Fri Sep 14, 2012 6:06 pm UTC

I'm not even talking about heartbeat, breathing, etc., although calling those the "rational" choice seems a bit weird. I mean things like choosing which route to drive to work, actually driving the car, obeying traffic laws. Even with things like conversation, there is research supporting the theory that we say or do things first and then justify them consciously when questioned.

I think the idea that we review all of the information available to us, decide what needs conscious consideration, and make rational choices about it is deeply flawed. It could much more easily be explained as our brains filtering out most of the information available, reacting to most of the rest on habit, and then creating a useful fiction that we're making conscious choices. Things like the idea of self, of free will, and of rational behavior could be mostly useful fictions that don't actually exist, but that allow us to be rational occasionally without having to put in the effort of being rational all the time.

We've been trying to replicate a perfectly rational human intelligence on some level for decades, and have been failing pretty consistently. It seems like a much better course to try to model intelligence (and possibly consciousness) in a way that can be studied, and see what we can learn about how it actually works, instead of how we think it works. It's probably also useful to remind ourselves, "I'm probably not actually thinking about what I'm doing most of the time, maybe even now."

Re: The first rule of consciousness?

Postby aoeu » Fri Sep 14, 2012 6:14 pm UTC

What most people understand by rational is something along the lines of "optimal given the information available". If we did not justify something consciously it does not imply we did not act rationally.

What you are saying is more like "reported motivation is different than actual motivation".
aoeu
 
Posts: 280
Joined: Fri Dec 31, 2010 4:58 pm UTC

Re: The first rule of consciousness?

Postby TrlstanC » Fri Sep 14, 2012 6:36 pm UTC

aoeu wrote:What most people understand by rational is something along the lines of "optimal given the information available". If we did not justify something consciously it does not imply we did not act rationally.


I think most people would say rational is "optimal given the information available to date." And it seems pretty clear that the amount of information available (the number of bits of info arriving per second) doesn't just exceed what we can store (the number of bits our brain could potentially hold), it greatly overwhelms our capacity. We can only remember a tiny fraction of the information available, and that tiny fraction has to be compressed considerably. I could stare at a city map for an hour and consciously try to remember as much important information as possible. Within minutes I'd have forgotten most of it, within a day I'd have only a few important details, and within a week I could be remembering more incorrect information (that I filled in somehow) than true information. And that's probably the best-case scenario. Even if I subconsciously remember some small part of it, most of it is lost forever. But through habit and various other shortcuts, I can probably make pretty good guesses after some experience walking or driving around the city. At best, these will sometimes approach the traditional definition of a rational decision, but it's unlikely that any of them could be described as truly rational.

aoeu wrote:What you are saying is more like "reported motivation is different than actual motivation".
What I'm guessing at is that "motivation" is probably a useful fiction that's part of our conscious experience because it makes it easier for our brains to keep track of all the things we're doing, not to mention all the things everyone else is doing.

Re: The first rule of consciousness?

Postby TheGrammarBolshevik » Fri Sep 14, 2012 7:34 pm UTC

Christine Korsgaard wrote:“Reason” and related terms like “rational” can be used in either a normative or a descriptive way. When we use these terms normatively, “reason” is synonymous with “good reason.” When we use the term descriptively, a reason is some consideration on the basis of which we decide to do something. In that sense, there are things you do for good reasons, things you do for bad reasons, and things you do, but not for reasons at all – like, say, scream when you see a monster at the window. (It’s not like you think, “oh, a monster. I guess I had better scream now.”) It’s because we use the terms both ways that we can say, “that’s a bad reason” (descriptive use) and “that’s no reason at all” (normative use) and mean essentially the same thing.

That might clear up the different senses of "rational" that are being used here.

TrlstanC wrote:Basically, we can only be rational about a very small amount of our thinking and decisions. Most of our understanding of the world will happen subconsciously, and most of that will be based on habit, bias and shortcuts. When I look at most theories of consciousness or intelligence (or what we could call conscious intelligence, intelligence we're aware of), there's very little acknowledgement of this.

Perhaps this is because they're theories of conscious intelligence, and not theories of subconscious perception? I mean, if you have a specific example of how facts about subconscious perception should affect a theory of consciousness, that's one thing, but there's no presumption that a theory of X needs to talk about Y just because Y goes on in roughly the same area as X.

TrlstanC wrote:And attempts at artificial intelligence mostly seem to be based on capturing huge amounts of information digitally, and then processing all of it. Which might be a good strategy to complement human intelligence, but it doesn't seem like the right strategy to duplicate or study human intelligence.

Why not? We do process huge amounts of information, even if a lot of that processing is saying "This doesn't matter; throw it out."
Nothing rhymes with orange,
Not even sporange.
TheGrammarBolshevik
 
Posts: 4562
Joined: Mon Jun 30, 2008 2:12 am UTC
Location: Where.

Re: The first rule of consciousness?

Postby Tyndmyr » Fri Sep 14, 2012 8:11 pm UTC

aoeu wrote:What most people understand by rational is something along the lines of "optimal given the information available". If we did not justify something consciously it does not imply we did not act rationally.

What you are saying is more like "reported motivation is different than actual motivation".


Precisely. An act can be entirely rational, even if the reason reported for it is not...or if the actor wasn't aware that he made it.

TrlstanC...EVERYTHING, computerized, human, whatever, works off models. All models, and all perception of information, are inaccurate to some degree. None of us are perfectly rational...but we can be pretty damned close sometimes. It's all about how big the error margins are.

And of course, TheGrammarBolshevik is correct. The human brain both stores and processes ludicrous amounts of material. IIRC, storage capacity for a single human brain is estimated at 2.5 petabytes. This is...pretty ludicrously large compared to even very ambitious AI projects. I would not assume that over-supply of information is a concern for AI as of yet...relating information to itself in different ways seems to be the issue, really. Even the best of chat bots often lose context.

Re: The first rule of consciousness?

Postby morriswalters » Fri Sep 14, 2012 9:57 pm UTC

Think of consciousness as a resultant. Consciousness is the story that your mind creates to put the world in perspective. It happens after the fact. By the time your conscious mind is aware of anything, it's already history. Or so the current thinking goes.
morriswalters
 
Posts: 3413
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The first rule of consciousness?

Postby EMTP » Sat Sep 15, 2012 12:28 am UTC

TrlstanC wrote:And attempts at artificial intelligence mostly seem to be based on capturing huge amounts of information digitally, and then processing all of it. Which might be a good strategy to complement human intelligence, but it doesn't seem like the right strategy to duplicate or study human intelligence.


It certainly isn't. Rational thought is an outgrowth of goal-directed behavior. Goal-directed behavior is a product of desire. Software doesn't want anything. A prerequisite to developing consciousness is developing a computer that wants things. That desires things. It's a tricky problem, and I don't see AI researchers focusing on it, which is a shame.
"The practice of violence, like all action, changes the world, but the most probable change is to a more violent world."
-- Hannah Arendt, "Reflections on Violence"
EMTP
 
Posts: 1062
Joined: Wed Jul 22, 2009 7:39 pm UTC
Location: Elbow deep in (mostly) other people's blood.

Re: The first rule of consciousness?

Postby TheAmazingRando » Sat Sep 15, 2012 12:48 am UTC

EMTP wrote:It certainly isn't. Rational thought is an outgrowth of goal-directed behavior. Goal-directed behavior is a product of desire. Software doesn't want anything. A prerequisite to developing consciousness is developing a computer that wants things. That desires things. It's a tricky problem, and I don't see AI researchers focusing on it, which is a shame.
I think you'd need to generate some sort of general-purpose AI before the concept of a machine desiring something even means anything. Most AI research right now is about developing AI for specific applications.

However, if you look at modern machine learning techniques, much of the field is based around a rudimentary punishment/reward system and learning behavior built on it. It's absolutely goal-directed and, in the absence of general-purpose AI, seems like the closest you can get to "desire."
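That punishment/reward loop can be sketched as a tiny bandit-style learner. Everything here (the two actions, their payoff odds, the exploration rate) is invented for illustration:

```python
import random

random.seed(0)

values = [0.0, 0.0]    # the agent's estimated value of each action
counts = [0, 0]        # how often each action has been tried
payoff = [0.1, 0.9]    # true reward probabilities, hidden from the agent

for step in range(2000):
    # explore occasionally; otherwise exploit the best-known action
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = 0 if values[0] >= values[1] else 1
    reward = 1.0 if random.random() < payoff[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    values[action] += (reward - values[action]) / counts[action]

print(values)  # the agent ends up strongly preferring action 1
```

Whether that reward-chasing counts as "desire" is exactly the question at issue here.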
TheAmazingRando
 
Posts: 2304
Joined: Thu Jan 03, 2008 9:58 am UTC
Location: San Diego, CA

Re: The first rule of consciousness?

Postby Spambot5546 » Sat Sep 15, 2012 1:34 am UTC

TrlstanC wrote:Ri >> Ru >> Rc


Anyone else reading this as a bitwise right shift? WTH are you trying to say here?
"It is bitter – bitter", he answered,
"But I like it
Because it is bitter,
And because it is my heart."
Spambot5546
 
Posts: 1348
Joined: Thu Apr 29, 2010 7:34 pm UTC

Re: The first rule of consciousness?

Postby TrlstanC » Sat Sep 15, 2012 1:46 am UTC

TheGrammarBolshevik wrote:Why not? We do process huge amounts of information, even if a lot of that processing is saying "This doesn't matter; throw it out."


I know that's a form of processing, but it's certainly not making use of it, especially if the throwing out happens way before we get a chance to "think" about it, at least consciously. This is in contrast to most AI projects, which specifically aim at capturing lots of information and analyzing it all.

And it's a good thing that we do throw out most of it, because 2.5 petabytes (a wildly speculative number, since we don't really understand how information is stored in the brain; it basically comes from counting the connections between neurons) would only be enough for about half a lifetime's worth of HD video. That leaves no room for intelligent thoughts, feelings, or any other connections or sensory data. A small fraction of the data generated by just one of our senses, with no processing or intelligence behind it, would completely fill up a brain in about 30 years.
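A back-of-envelope version of that estimate, assuming HD video at roughly Blu-ray bitrate (the 20 Mbit/s figure is my assumption; the 2.5 PB figure is the one quoted above):

```python
capacity_bytes = 2.5e15            # the oft-quoted 2.5 petabyte estimate
bitrate_bps = 20e6                 # 20 Mbit/s, an assumed HD video bitrate
bytes_per_second = bitrate_bps / 8

seconds_to_fill = capacity_bytes / bytes_per_second
years_to_fill = seconds_to_fill / (365.25 * 24 * 3600)
print(round(years_to_fill, 1))     # 31.7 -- about 30 years, as claimed
```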

Not to say that's not an impressive amount of storage (if that's actually an accurate number), but clearly the actual amount of data we deal with in a lifetime is many orders of magnitude greater.

Re: The first rule of consciousness?

Postby The Great Hippo » Sat Sep 15, 2012 2:13 am UTC

TrlstanC wrote:I know that's a form of processing, but it's certainly not making use of it, especially if the throwing out happens way before we get a chance to "think" about it, at least consciously. This is in contrast to most AI projects, which specifically aim at capturing lots of information and analyzing it all.
I don't know a lot about AI projects, but I know that part of any rigorous analysis involves 'culling' the irrelevant data. And 'culling' is a type of analysis--ideally, a cheap and fast analysis, but an analysis nevertheless. Humans do this (we make some quick, dirty, often unconscious decisions about what is and isn't relevant) and programs do this (they apply some quick, dirty algorithms to reduce the amount of data they have to use more expensive algorithms on). Effective culling techniques let you analyze enormous quantities of data very quickly and very cheaply.
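A toy version of that cull-then-analyze pattern (the threshold and both functions are invented for illustration):

```python
def cheap_cull(reading):
    # quick, dirty relevance check -- runs on everything
    return reading > 10

def expensive_analysis(reading):
    # stand-in for costly processing -- runs only on survivors
    return reading ** 0.5

readings = [0, 3, 7, 11, 42, 2, 100, 5]
relevant = [r for r in readings if cheap_cull(r)]
results = [expensive_analysis(r) for r in relevant]
print(relevant)  # [11, 42, 100] -- most data never reaches the expensive step
```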

I don't know why you think AIs are analyzing 'all' the data when humans aren't--we're both analyzing 'all' the data--the only real difference is that the type of analysis that computers do is more transparent, and therefore can be much more effectively structured to accomplish certain tasks.
TrlstanC wrote:And it's a good thing that we do throw out most of it, because 2.5 petabytes (a wildly speculative number, since we don't really understand how information is stored in the brain; it basically comes from counting the connections between neurons) would only be enough for about half a lifetime's worth of HD video. That leaves no room for intelligent thoughts, feelings, or any other connections or sensory data. A small fraction of the data generated by just one of our senses, with no processing or intelligence behind it, would completely fill up a brain in about 30 years.

Not to say that's not an impressive amount of storage (if that's actually an accurate number), but clearly the actual amount of data we deal with in a lifetime is many orders of magnitude greater.
We don't store everything in our brains. We employ clever compression techniques (like storing data we're likely to forget in a book, for example, or mnemonic devices). Computers also have their own clever compression techniques (like reducing five hundred zeros to a single zero associated with the number '500').
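The five-hundred-zeros trick is run-length encoding; a minimal sketch:

```python
from itertools import groupby

def rle(data):
    # collapse each run of repeated values into a (value, run_length) pair
    return [(value, len(list(run))) for value, run in groupby(data)]

encoded = rle([0] * 500 + [1, 1, 0])
print(encoded)  # [(0, 500), (1, 2), (0, 1)]
```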

I think you're making a lot of unnecessary distinctions here.
The Great Hippo
 
Posts: 5964
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: The first rule of consciousness?

Postby morriswalters » Sat Sep 15, 2012 3:00 am UTC

We don't throw out data so much as we never acquire it, or we acquire it, use it, and then forget it. And a lot of data never enters our conscious experience at all. Try driving your route to work in your mind. How much of what you remember is actually there? When I play with this I find that a lot of what I visualize is fabricated. For instance, I find that I almost always visualize it with trees in the summer. The things I remember best are the things I focus on the most. The thing is that most of what you see or experience is not needed other than at the moment it happens. I expect someone above is right that, for AI, they may need to figure out why any life struggles to live in the first place. That's an opinion.

Re: The first rule of consciousness?

Postby The Great Hippo » Sat Sep 15, 2012 3:06 am UTC

morriswalters wrote:We don't throw out data so much as we never acquire it, or we acquire it, use it, and then forget it. And a lot of data never enters our conscious experience at all. Try driving your route to work in your mind. How much of what you remember is actually there? When I play with this I find that a lot of what I visualize is fabricated. For instance, I find that I almost always visualize it with trees in the summer. The things I remember best are the things I focus on the most. The thing is that most of what you see or experience is not needed other than at the moment it happens. I expect someone above is right that, for AI, they may need to figure out why any life struggles to live in the first place. That's an opinion.
Culling isn't merely a function of the conscious brain. The design of our eyes--allowing us to perceive a limited range in front of us (but not in back of us)--is a sophisticated (albeit flawed!) data-culling method produced through the process of evolution. That design can be described as a means through which we process relevant data and 'cull' irrelevant data.

That being said, we get fed plenty of data that's culled with or without us knowing.

Re: The first rule of consciousness?

Postby gmalivuk » Sat Sep 15, 2012 3:33 am UTC

EMTP wrote:Goal-directed behavior is a product of desire.
I'd be interested in seeing a definition of "goal-directed behavior" from you that didn't presuppose this claim, followed by a demonstration of this claim, because it is not at all self-evident to me.

TrlstanC wrote:but clearly the actual amount of data we deal with in a lifetime is many orders of magnitude greater.
Sure, but you're not honestly claiming we have HD-quality memories of *everything*, are you?
Treatid basically wrote:widdout elephants deh be no starting points. deh be no ZFC.


(If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome)
gmalivuk
A debonaire peeing style
 
Posts: 20983
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There

Re: The first rule of consciousness?

Postby morriswalters » Sat Sep 15, 2012 9:04 am UTC

I don't see the conscious brain culling data at all. My information is limited, but what I have read leads me to believe that consciousness happens after the fact. It's simply a narrative which places us and gives us a point of view, after the disparate parts of the brain have already done their work.

Re: The first rule of consciousness?

Postby TrlstanC » Sat Sep 15, 2012 1:23 pm UTC

gmalivuk wrote:Sure, but you're not honestly claiming we have HD-quality memories of *everything*, are you?


Oh no, certainly not. In fact the visual data we do remember is probably mostly relationships and patterns, and I'd guess that very little of it is actual visual 'data'. But I'd estimate that the visual information our eyes give us is "HD quality" (at least at a certain distance), so I was just pointing out that actually storing all that data is impossible.

Re: The first rule of consciousness?

Postby gmalivuk » Sat Sep 15, 2012 1:25 pm UTC

It may be HD near the center of the visual field, but it drops quite rapidly as you move away from that.

Re: The first rule of consciousness?

Postby EMTP » Sat Sep 15, 2012 5:30 pm UTC

TheAmazingRando wrote:
EMTP wrote:It certainly isn't. Rational thought is an outgrowth of goal-directed behavior. Goal-directed behavior is a product of desire. Software doesn't want anything. A prerequisite to developing consciousness is developing a computer that wants things. That desires things. It's a tricky problem, and I don't see AI researchers focusing on it, which is a shame.
I think you'd need to generate some sort of general-purpose AI before the concept of a machine desiring something even means anything. Most AI research right now is about developing AI for specific applications.

However, if you look at modern machine learning techniques, it is based around a rudimentary punishment/reward system and learning behavior based around that. It's absolutely goal-directed and, in the absence of general purpose AI, seems like the closest you can get to "desire."


How can you "punish" software? How can you "reward" software? You can't -- you can only reinforce certain patterns of behavior. That's not goal directed behavior. Those aren't real rewards and punishments.

Imagine a cat or a dog. They have a rudimentary consciousness. They have real desires and can be rewarded and punished. They have real goal directed behavior. If you made them smarter -- more processing power, more knowledge and memory -- they'd have something we would recognize as intelligent, conscious behavior. But the machine with just the processing power and information would not.
TheAmazingRando wrote:Most AI research right now is about developing AI for specific applications.


Practical AI research is rightly not concerned with consciousness. I suspect that by the time we can mimic human intelligence with a machine, machines will have their own non-human intelligence that is far easier to construct and far more useful. That's what happened with chess. Computers play chess nothing like a human does, and apart from their ability to consider millions of positions, their strategic skills are appalling. But they are the best chess players in the world, because they play like machines, not like people.

I think AI will develop like mechanized transportation. Do we have a machine that can walk, jump, climb, and run like a human can? Not yet. Maybe in the next hundred years. But meanwhile we have bicycles, cars, trucks, planes and spacecraft that can do all sorts of things we can't. Practical AI research will give us cars and planes.

Impractical, consciousness-focused AI research has a long way to go, and I think they are going to have to find a substitute for the human limbic system. Consciousness is more than the frontal lobes. Respect the reptile brain!

Re: The first rule of consciousness?

Postby The Great Hippo » Sat Sep 15, 2012 5:45 pm UTC

EMTP wrote:I think AI will develop like mechanized transportation. Do we have a machine that can walk, jump, climb, and run like a human can? Not yet. Maybe in the next hundred years. But meanwhile we have bicycles, cars, trucks, planes and spacecraft that can do all sorts of things we can't. Practical AI research will give us cars and planes.
It's fascinating to me how much technology fails to simulate biology--finding alternative solutions to problems like movement that are far more practical and far more effective than the solutions evolution has given us. There are a few exceptions, but things like airplanes only started working once we stopped trying to rebuild the flapping motion of a bird's wings.

In this sense, I think it's arrogant of us to project our own model of thought and desire onto computer AI--AIs will probably end up thinking in ways we can't even comprehend. 'Emotion' is cheap reasoning; it's a way of making us do things that serve our biological best interests without explaining to us why it's in our best biological interests. Why would we saddle AI with cheap reasoning when they're perfectly capable of expensive reasoning?

I have no idea what effective AI will look like. I have a sneaking suspicion that, a million years from now, the human race will disappear, replaced by something we've created--we'll become an irrelevant 'link' in this 'manufactured' entity's evolutionary chain. Rather than us, it'll probably be this AI that will fly off and explore the rest of the universe.

I also suspect that it won't happen in some sort of absurd apocalyptic 'SkyNet Destroys Humans' scenario, but rather simply through means of us becoming redundant. Humans are biological; short-lived, short-thinking things controlled by systems they do not have the capacity to understand. Computers are mechanical; long-lived, long-thinking things controlled by systems that are transparent and can be modified when it suits their programming. Problems that take natural selection millions of years to solve can be solved by a computer within minutes! Which makes 'competition' with something like an honest-to-goodness AI laughable: Whatever we make won't bother to compete with or destroy us. They'll just leave us behind.

Of course, that's all a bunch of romanticized hogwash; I just like the sound of it. I like the notion that humanity's legacy will be creating AI descendants far more advanced than us, and we'll just fade quietly into the night--the distant ancestors of something far greater.

Re: The first rule of consciousness?

Postby EMTP » Sat Sep 15, 2012 5:57 pm UTC

gmalivuk wrote:
EMTP wrote:Goal-directed behavior is a product of desire.
I'd be interested in seeing a definition of "goal-directed behavior" from you that didn't presuppose this claim, followed by a demonstration of this claim, because it is not at all self-evident to me.


Synonyms
1. target; purpose, object, objective, intent, intention.

If you program a "goal" into an AI, whose intent does it express? Yours or theirs?

If you have a goal, where did it come from? Where did the intent come from?

The Great Hippo wrote:It's fascinating to me how much technology fails to simulate biology--finding alternative solutions to problems like movement that are far more practical and far more effective than the solutions evolution has given us.


More practical and effective when you're building a machine. Not better objectively. Our engineering is still crude and sloppy compared to 4 billion years of evolution. Yep, that jet's pretty fast. Call me when it is self-repairing, self-replicating, and runs on unprocessed organic garbage that it collects itself.
Last edited by EMTP on Sat Sep 15, 2012 6:04 pm UTC, edited 1 time in total.
"The practice of violence, like all action, changes the world, but the most probable change is to a more violent world."
-- Hannah Arendt, "Reflections on Violence"
User avatar
EMTP
 
Posts: 1062
Joined: Wed Jul 22, 2009 7:39 pm UTC
Location: Elbow deep in (mostly) other people's blood.

Re: The first rule of consciousness?

Postby TheGrammarBolshevik » Sat Sep 15, 2012 6:01 pm UTC

EMTP wrote:If you program a "goal" into an AI, whose intent does it express? Yours or theirs?

Couldn't it be both? Clearly humans have goals, but this is not contingent on whether those goals were given to us by an intentional agent (if we were to discover that God predestines us toward certain ends, we would not say that humans do not have goals).
Nothing rhymes with orange,
Not even sporange.
TheGrammarBolshevik
 
Posts: 4562
Joined: Mon Jun 30, 2008 2:12 am UTC
Location: Where.

Re: The first rule of consciousness?

Postby The Great Hippo » Sat Sep 15, 2012 6:17 pm UTC

EMTP wrote:More practical and effective when you're building a machine. Not better objectively. Our engineering is still crude and sloppy compared to 4 billion years of evolution. Yep, that jet's pretty fast. Call me when it is self-repairing, self-replicating and runs on unpreprocessed organic garbage that it collects itself.
If you leave it alone in a relatively sterile, stable space without food or water, it won't 'die' after several weeks. Also, if you take it apart, you can put it back together again and it still works.

Machines aren't 'objectively' better than humans, but I wouldn't call them 'crude' and 'sloppy' in comparison--4 billion years of evolution and we still can't manage to not die from cancer. Biological evolution is an amazing process, but it is slow and clumsy compared to what we do with our hands and our brains within our own lifetimes. We're already modifying ourselves with machines--replacing organs with devices that accomplish similar tasks. The bottleneck isn't that our machines aren't complex enough; it's that we have to pander to the slow-moving, sloppily designed product that natural selection has given us. But engineers and doctors can do so much better than the limits of biology. And they are, and they will.

Imagine what will happen when the crude product of evolutionary biology is taken out of the equation entirely. When our medicine, our science, our engineering no longer has to pander to the needs of a system we inherited from millions of years of incredibly slow, incredibly sloppy 'trial-and-error'. Holy shit, things will move fast.
User avatar
The Great Hippo
 
Posts: 5964
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: The first rule of consciousness?

Postby morriswalters » Sat Sep 15, 2012 6:43 pm UTC

The primary goal for all life appears to be "gasp" to stay alive. Everything else derives from that. Desire is simply an emotion that advances other, abstracted goals.
The Great Hippo wrote:If you leave it alone in a relatively sterile, stable space without food or water, it won't 'die' after several weeks. Also, if you take it apart, you can put it back together again and it still works.

And it will do absolutely nothing it wasn't designed to do. Machines derive their motivation from us. They break under use, fail in unpredictable ways, and require way more energy to function. Life does it better. I believe the brain consumes about 20 watts to do its magic. That's from the OP's link.
The Great Hippo wrote:We're already modifying ourselves with machines--replacing organs with devices that accomplish similar tasks, only with extraordinarily greater efficiency.
Can you name one? The fact that life eventually fails may just be an indicator that there are limits to the process, just as there are limits to machines.
morriswalters
 
Posts: 3413
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The first rule of consciousness?

Postby The Great Hippo » Sat Sep 15, 2012 6:59 pm UTC

morriswalters wrote:And it will do absolutely nothing it wasn't designed to do. Machines derive their motivation from us. They break under use, fail in unpredictable ways, and require way more energy to function.
Dunno about the energy thing. How much energy is required to maintain a hard-drive? What about a flash-drive? Either way, I don't see the relevance; I'm just opposed to describing machines as sloppy compared to biology--because machines strike me as a much more precise expression of design. We're badly coded--because the programmers who wrote us shotgunned their way through it over millions of years until they cobbled together something that works 'just enough' to get the job done.

We can do so much better than that.
morriswalters wrote:
The Great Hippo wrote:We're already modifying ourselves with machines--replacing organs with devices that accomplish similar tasks, only with extraordinarily greater efficiency.
Can you name one? The fact that life eventually fails may just be an indicator that there are limits to the process, just as there are limits to machines.
I edited out the 'extraordinarily greater efficiency' after I realized it was hyperbole (and to be honest, I don't know enough about the intersection between machines and biology to be making assertions like that). That being said: When biology fails, we use machines. For example: Dialysis.
User avatar
The Great Hippo
 
Posts: 5964
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: The first rule of consciousness?

Postby morriswalters » Sat Sep 15, 2012 7:42 pm UTC

But the preciseness you see as a virtue of machines is the preciseness of simplicity.
morriswalters
 
Posts: 3413
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The first rule of consciousness?

Postby The Great Hippo » Sat Sep 15, 2012 7:56 pm UTC

Yes, and biology isn't simple, because biology is a machine designed by a progressive, slow process that responds to pressure. Biology is a blind watchmaker; but we aren't blind--and can therefore design better, simpler, more functional watches.

The premise I'm putting forward here is that it is reasonable to expect that we will build a better intelligence than our own, and it is unreasonable to assume this intelligence will, by necessity, think like us.
User avatar
The Great Hippo
 
Posts: 5964
Joined: Fri Dec 14, 2007 4:43 am UTC
Location: behind you

Re: The first rule of consciousness?

Postby morriswalters » Sat Sep 15, 2012 8:38 pm UTC

The Great Hippo wrote:Yes, and biology isn't simple, because biology is a machine designed by a progressive, slow process that responds to pressure. Biology is a blind watchmaker; but we aren't blind--and can therefore design better, simpler, more functional watches.

The premise I'm putting forward here is that it is reasonable to expect that we will build a better intelligence than our own, and it is unreasonable to assume this intelligence will, by necessity, think like us.
Your analogy implies that we can see where we are going and what we are doing. But put that aside for the moment. Say for a moment that we are able to do precisely that: design a new intelligence. Why should we? What is our response to our evolutionary ancestors who evolved differently? We either hunt them or put them in zoos. I don't find that prospect particularly appealing. We would be the inferior branch, if you believe that AI would be more intelligent than us. The presumption that is always implied is that however it might come into being, its goals would be ours, no matter its process. I have no reason to believe that this is true.

In addition I am unconvinced it is possible. I believe that fundamentally we don't understand the limits of intelligence. I believe that the basic limit to intelligence is time. The question becomes which is more powerful over a fixed period of time, a race of moderately intelligent thinking nodes arranged massively in parallel, or a few very powerful intelligent nodes in parallel?
morriswalters
 
Posts: 3413
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The first rule of consciousness?

Postby TrlstanC » Sun Sep 16, 2012 2:14 pm UTC

There's certainly a discussion to be had about why we evolved the kind of consciousness we did, or why we (and other animals) evolved to be conscious at all. I don't think this rule (that the brain receives many more bits of information than it can process intelligently, and that we can only consciously understand a small portion of that information) will give us an explanation for these questions, but I think it's certainly worth remembering when we're thinking about them. A lot of the theories or discussions I read about consciousness seem to include the assumption that our brain power is essentially limitless, or is at least more than up to the task of 'intelligently' handling all the data that's thrown at it. If pressed, I think most people would admit that it isn't unlimited, and that part of that processing is just culling data that's not important--but also that if the first step is the culling, you could be losing a lot of important data before you have time to really judge whether it's important or not. The point I'm trying to make isn't that this is just another restriction on our conscious experience of the world (a little more data than we can usefully process), but that it's a crucial factor: the amount of data available is overwhelmingly greater than our ability to process it intelligently. Most of the processing of the available data will be quick and dirty rules for culling vast amounts of it--or, more accurately, quick and dirty rules to pick out one area to pay attention to, while mostly ignoring everything else.

Specifically in regards to our conscious experience, there are convincing arguments that free will doesn't exist, and that our brains don't create a single unified 'self'. These are both cornerstones of our conscious experience, and naturally the question is: why experience 'free will' and a 'self' if they don't actually exist? If it were evolutionarily advantageous to have more accurate information about ourselves, it would seem to be a fairly easy evolutionary jump to be able to have conscious access to more information about what's actually happening in our brains.

Except that getting more information isn't the problem. When I think about these kinds of questions, the first thing I've started considering is that our brain is already overwhelmed with information. Would a model of the world that included free will and individual selves help with this? It seems like they could both be useful shortcuts: overly simplified models of how our brains work that let us process more information (or cull more of it, more confidently) more quickly. From a 'rational' standpoint (the desire to make the best possible choice given all the information available), this might not be optimal, but we may not have any other choice.

When studying consciousness and intelligence, it may be useful to shift our perspective from 'how does our brain create consciousness given these limitations?' to treating this limitation as a defining characteristic of our conscious experience. The big advantage of consciousness may be that it is a very useful tool for dealing with this imbalance: a quick and dirty way to get to a good result, most of the time.
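To make the imbalance concrete, here's a toy sketch in Python (all the numbers and the 'salience' heuristic are made up purely for illustration--the point is only the shape of the pipeline, not the actual rates):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

RAW_PER_TICK = 10_000   # Ri: events arriving each tick (illustrative)
BRAIN_BUDGET = 100      # Ru: events the 'subconscious' can even score
CONSCIOUS_BUDGET = 1    # Rc: events examined 'consciously'

def salience(event):
    """Cheap heuristic score--a quick and dirty rule, not careful analysis."""
    return event["intensity"]

def tick():
    # Far more data arrives than can ever be examined (Ri >> Ru).
    raw = [{"id": i, "intensity": random.random()} for i in range(RAW_PER_TICK)]
    # Subconscious culling: look at only what we can afford, score it cheaply.
    sampled = random.sample(raw, BRAIN_BUDGET)
    # Conscious attention: only the top-scoring handful survives (Ru >> Rc).
    attended = sorted(sampled, key=salience, reverse=True)[:CONSCIOUS_BUDGET]
    return len(raw), len(sampled), len(attended)

ri, ru, rc = tick()
print(ri, ru, rc)  # prints: 10000 100 1
```

Whatever survives the cheap filter is all the 'conscious' layer ever sees, so anything the heuristic throws away is lost before it can ever be judged important.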
Defining Consciousness - discussion on a theory of consciousness. Are conscious thoughts a prediction of what you would say to yourself?

The Bias Against Creativity: Why People Desire But Reject Creative Ideas
User avatar
TrlstanC
Flexo
 
Posts: 358
Joined: Thu Oct 08, 2009 5:16 pm UTC
Location: Boston, MA

Re: The first rule of consciousness?

Postby TheGrammarBolshevik » Sun Sep 16, 2012 2:43 pm UTC

TrlstanC wrote:A lot of the theories or discussions I read about consciousness seem to include the assumption that our brain power is essentially limitless, or is at least more than up to the task of 'intelligently' handling all the data that's thrown at it.

Examples?
Nothing rhymes with orange,
Not even sporange.
TheGrammarBolshevik
 
Posts: 4562
Joined: Mon Jun 30, 2008 2:12 am UTC
Location: Where.

Re: The first rule of consciousness?

Postby morriswalters » Sun Sep 16, 2012 3:16 pm UTC

The self as we experience it exists to serve a purpose, doesn't it? It separates us from the world. The rest appears to be philosophy. We evolved to reproduce, unless someone has shown otherwise, and evolution produced a powerful general purpose survival machine. Computers demonstrate the idea that if you build powerful general purpose machines, they will be used in ways that are unpredictable. Evolution didn't need to produce a unified self; it just needed to produce the illusion. And perhaps I am mistaken, but we process as much information as we need. Think parsimony.
morriswalters
 
Posts: 3413
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The first rule of consciousness?

Postby TrlstanC » Sun Sep 16, 2012 11:36 pm UTC

morriswalters wrote:And perhaps I am mistaken, but we process as much information as we need. Think parsimony.
This could easily be taken to be saying that all of our decisions are perfect, given all available (and historically available) information. Or that there's no way we could've been more evolutionarily successful by making better decisions.

I'd agree that we probably utilize more information from our environment than most (possibly all) other species, and that we certainly utilize enough for it to be a significant advantage. But I don't think I'd say we utilize (or process) as much information as we need - without adding what we need it for.
Defining Consciousness - discussion on a theory of consciousness. Are conscious thoughts a prediction of what you would say to yourself?

The Bias Against Creativity: Why People Desire But Reject Creative Ideas
User avatar
TrlstanC
Flexo
 
Posts: 358
Joined: Thu Oct 08, 2009 5:16 pm UTC
Location: Boston, MA

Re: The first rule of consciousness?

Postby morriswalters » Mon Sep 17, 2012 12:32 am UTC

TrlstanC wrote:This could easily be taken to be saying that all of our decisions are perfect, given all available (and historically available) information. Or that there's no way we could've been more evolutionarily successful by making better decisions.

I'd agree that we probably utilize more information from our environment than most (possibly all) other species, and that we certainly utilize enough for it to be a significant advantage. But I don't think I'd say we utilize (or process) as much information as we need - without adding what we need it for.
Evolution doesn't make decisions. It's trial and error, or something akin to that. And the amount of information one person could take advantage of is limited by the time he or she has to do it. Information is not as important, from my point of view, as the ability to do something with it. Think of it this way. Put a person on an island and give him a computer with everything we know to date, and the tools to do all the science and craft we do today. How is he limited? Even if he had the capacity to learn everything and know everything, it still wouldn't mean anything because, in and of himself, he is limited by his ability to do. There aren't enough days for one man to utilize all the information that he has. The question, could we as a species make better decisions given more information, seems to have the idea in the background that we could have perfect information. Do you believe that we could ever have that?
morriswalters
 
Posts: 3413
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The first rule of consciousness?

Postby gmalivuk » Mon Sep 17, 2012 2:37 am UTC

TheGrammarBolshevik wrote:
TrlstanC wrote:A lot of the theories or discussions I read about consciousness seem to include the assumption that our brain power is essentially limitless, or is at least more than up to the task of 'intelligently' handling all the data that's thrown at it.
Examples?
Yeah, I'm pretty sure that no discussion or theory put forth by anyone who actually knows something about how the brain works would suggest that we have essentially limitless brain power.

morriswalters wrote:And perhaps I am mistaken, but we process as much information as we need.
No, we process as much information as our ancestors needed to have a reasonable chance at reproducing. The environment they lived in was, for the vast majority of human existence, *far* less information-rich than the one we live in now, even just in terms of the number of people we're expected to "know" or the number of tasks we're expected to be able to do simultaneously or in short succession.

morriswalters wrote:The question, could we as a species, make better decisions given more information, seems to have the idea in the background that we could have perfect information.
No, it definitely doesn't have that idea in the background. What in the world gives you the idea that it does? The claim that we'd do better with more information just means that we'd do better with more information, and nothing else. Why are you making the jump to having *all* the information?
Treatid basically wrote:widdout elephants deh be no starting points. deh be no ZFC.


(If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome)
User avatar
gmalivuk
A debonaire peeing style
 
Posts: 20983
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There

Re: The first rule of consciousness?

Postby morriswalters » Mon Sep 17, 2012 10:01 am UTC

gmalivuk wrote:No, we process as much information as our ancestors needed to have a reasonable chance at reproducing. The environment they lived in was, for the vast majority of human existence, *far* less information-rich than the one we live in now, even just in terms of the number of people we're expected to "know" or the number of tasks we're expected to be able to do simultaneously or in short succession.

And we've created the tools to do that. But the brain is limited by its blood supply: the bigger it is, or the more connections we have, the more resources we would need to drive that increase, and you reach a point of diminishing returns.
gmalivuk wrote:No, it definitely doesn't have that idea in the background. What in the world gives you the idea that it does? The claim that we'd do better with more information just means that we'd do better with more information, and nothing else. Why are you making the jump to having *all* the information?

The claim appears to me to be that the brain could do better given more information or the ability to process it better, or perhaps I'm incorrect? And the phrase perfect information implies having all the information we need to make a rational choice for a given instant in time.
morriswalters
 
Posts: 3413
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: The first rule of consciousness?

Postby EMTP » Mon Sep 17, 2012 11:10 am UTC

The environment they lived in was, for the vast majority of human existence, *far* less information-rich than the one we live in now, even just in terms of the number of people we're expected to "know" or the number of tasks we're expected to be able to do simultaneously or in short succession.


Citation needed. Anthropologists studying modern hunter-gatherers have found they maintain detailed knowledge of hundreds to thousands of species of plants and animals, not to mention local geography, a variety of crafts, and complex social relationships.

Jared Diamond argues that the hunter-gatherer lifestyle is more intellectually stimulating and challenging than the typical "glowing rectangle" lifestyle in the first world. I'm inclined to agree.

The Great Hippo wrote:Yes, and biology isn't simple, because biology is a machine designed by a progressive, slow process that responds to pressure. Biology is a blind watchmaker; but we aren't blind--and can therefore design better, simpler, more functional watches.


Someday, probably, if we don't destroy ourselves first. But realize where we are. You, yourself, are composed of an association of between seven and eleven trillion self-replicating nanomachines. You run on organic garbage. Barring a few hours for a CABG, you can expect approximately 700,000 hours of continuous operation.

The blind watchmaker makes a pretty fucking incredible watch. Can you make a watch like that?

The premise I'm putting forward here is that it is reasonable to expect that we will build a better intelligence than our own, and it is unreasonable to assume this intelligence will, by necessity, think like us.


I agree that early AI will not, probably, think like us. Ultimately we may be able to make one that thinks like us, just as we are learning, at great expense and over many years, how to make a machine walk like us.
"The practice of violence, like all action, changes the world, but the most probable change is to a more violent world."
-- Hannah Arendt, "Reflections on Violence"
User avatar
EMTP
 
Posts: 1062
Joined: Wed Jul 22, 2009 7:39 pm UTC
Location: Elbow deep in (mostly) other people's blood.

Re: The first rule of consciousness?

Postby Meteoric » Mon Sep 17, 2012 1:18 pm UTC

EMTP wrote:
The environment they lived in was, for the vast majority of human existence, *far* less information-rich than the one we live in now, even just in terms of the number of people we're expected to "know" or the number of tasks we're expected to be able to do simultaneously or in short succession.


Citation needed. Anthropologists studying modern hunter-gatherers have found they maintain detailed knowledge of hundreds to thousands of species of plants and animals, not to mention local geography, a variety of crafts, and complex social relationships.

Jared Diamond argues that the hunter-gatherer lifestyle is more intellectually stimulating and challenging than the typical "glowing rectangle" lifestyle in the first world. I'm inclined to agree.

Information-rich isn't necessarily the same as intellectually stimulating. We're still surrounded by plants, animals, geography, crafts, and social relationships, and huge amounts of information about them can be readily obtained, but we're also juggling additional things like participating in very non-local discussions on the nature of consciousness, which itself implies a nontrivial amount of prerequisite skills and knowledge. The time and energy I devote to reading and posting on the xkcd forums mean less time and energy I can devote to learning about local species. YouTube alone far exceeds the human ability to process information; an hour of video is uploaded every second. (Admittedly, the source on that figure is iffy.)

A decent argument could be (and, apparently, has been) made that the additional deluge of information is ultimately not an improvement, and that processing fewer things in greater detail is more stimulating. I don't know if that's true, but it's a separate question from the amount of information a hunter-gatherer had to sift through versus a typical modern American.
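Taking the figure at face value for a moment (one hour of video uploaded per second, which, again, is iffy), a quick back-of-the-envelope calculation shows how hopeless 'processing all of it' would be:

```python
uploaded_per_second = 3600            # seconds of video arriving each second
seconds_per_year = 365 * 24 * 3600
one_year_of_uploads = uploaded_per_second * seconds_per_year

# Watching nonstop for an entire 80-year lifetime...
lifetime_viewing = 80 * seconds_per_year
fraction = lifetime_viewing / one_year_of_uploads
print(f"{fraction:.2%} of a single year's uploads")  # prints: 2.22% of a single year's uploads
```

In other words, a whole lifetime of uninterrupted viewing covers about two percent of one year of one website. Culling isn't optional at that scale; it's the only possible strategy.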
No, even in theory, you cannot build a rocket more massive than the visible universe.
Meteoric
 
Posts: 213
Joined: Wed Nov 23, 2011 4:43 am UTC
