AIs and Animals

For the discussion of the sciences. Physics problems, chemistry equations, biology weirdness, it all goes here.

Moderators: gmalivuk, Moderators General, Prelates

User avatar
tomandlu
Posts: 1111
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK
Contact:

AIs and Animals

Postby tomandlu » Fri Aug 02, 2013 10:02 am UTC

I have a horrible feeling this is going to be a bit vague...

The animal kingdom covers a wide range of cognitive abilities, even if you limit yourself to animals with a central nervous system (no shit Sherlock), and, largely in relation to AI, I was wondering:

  • Can we say, with any confidence, that we have recreated the cognitive ability, via AI, of any animal and, if so, to what level? An ant? A mouse? (note that I'm saying 'recreate', not 'emulate')
  • If so, what does this say about our inability, so far, to recreate the cognitive abilities of the more cognitively-complex animals? Does it imply that there is some clearly defined discontinuity in the animal kingdom?
  • If the answer to the first question is, essentially, 'no', does this imply that AI research is fundamentally on the wrong-track in terms of creating true AI?
  • The notion of consciousness feels significant, but, at the same time, I find it hard to believe that 'consciousness' and 'non-consciousness' lie on two sides of a clear demarcation - 'consciousness', IMHO, is a consequence of intelligence, rather than vice versa, but, if so, then that would imply that a conscious AI would arise purely as a consequence of creating intelligence... which is all getting to be very chicken and egg...

For the record, which, if true, makes the above a bit moot, I am not convinced that AI research has progressed beyond, essentially, creating more and more complex programs - they might emulate intelligence, but they do not recreate it. Also, apols - I realise that AI stuff comes up every now and then, but I'm curious about animal/AI comparisons...
How can I think my way out of the problem when the problem is the way I think?

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: AIs and Animals

Postby idobox » Fri Aug 02, 2013 11:44 am UTC

To give you an idea of the answer to the first question: I was offered a PhD position to simulate the olfactory bulb (an important part of their 'brain') of bees, and people around the world are trying to understand how flies control their flight.
People have also simulated a full cortical column from a rat. We estimate the number of cortical columns in humans to be roughly 2 million.

My current project is to understand the role of a tiny part of the brain, found in pretty much all vertebrates, that modulates dopamine and serotonin release. This is important, because it will help us better understand how reinforcement learning works.
Reinforcement learning is the most basic type of learning: give a reward when a light shines, punish when the buzzer rings, and the animal learns to enjoy or fear the light and the buzzer. It is also central to learning simple tasks like pushing a button to get a reward. And our understanding of the process is very incomplete.
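
To make that concrete, here is a toy sketch of that kind of learning in Python, a Rescorla-Wagner-style value update (the cue names, outcomes and learning rate are made up for illustration, not taken from any real experiment):

    def train_cue_values(trials, alpha=0.1):
        # trials: list of (cue, outcome) pairs, outcome +1 for a reward, -1 for a punishment
        value = {}                                  # the learned value of each cue
        for cue, outcome in trials:
            v = value.get(cue, 0.0)
            value[cue] = v + alpha * (outcome - v)  # nudge the cue's value toward the outcome
        return value

    # After enough pairings the light ends up with a positive value ("liked")
    # and the buzzer with a negative one ("feared"):
    # train_cue_values([("light", +1), ("buzzer", -1)] * 50)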

That being said, we can program AIs that are quite capable of solving complex tasks. Ants exploring and bringing food back home are a famous example of distributed intelligence we know how to replicate; the same goes for rat-level maze solving and such.
The functional AIs we have usually don't try too hard to copy the inner workings of animals, simply because we often don't understand them well enough, and also because we can usually replicate the result with more efficient methods.
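
To give an idea of what I mean by 'more efficient methods': rat-level maze solving doesn't need anything rat-like, a plain breadth-first search over the maze grid does it. A toy sketch (assuming 0 marks an open cell and 1 a wall; nothing here comes from an actual rat model):

    from collections import deque

    def solve_maze(grid, start, goal):
        # grid[r][c] == 0 for an open cell, 1 for a wall; start and goal are (row, col) tuples
        queue, came_from = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:                        # walk back along the recorded parents
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                        and grid[nr][nc] == 0 and nxt not in came_from:
                    came_from[nxt] = cell
                    queue.append(nxt)
        return None                                 # no route to the food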

The examples of animal intelligence we have simulated are pretty impressive, but they also require massive computing power. We study them to understand ourselves, and the principles of intelligence. We also often apply concepts of artificial intelligence to biological data to make sense of it, the real goal being to understand intelligence, whether it is natural or artificial.
Biological systems are often quite far from optimal, and many people believe brains have a lot of redundancy, superfluous stuff and inefficiency, and that we could get equivalent results with significantly less computing power.
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

User avatar
tomandlu
Posts: 1111
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK
Contact:

Re: AIs and Animals

Postby tomandlu » Fri Aug 02, 2013 11:54 am UTC

idobox wrote:Reinforcement learning is the most basic type of learning: give a reward when a light shines, punish when the buzzer rings, and the animal learns to enjoy or fear the light and the buzzer.


Thanks - with the above, can we, in your opinion, talk about an AI 'enjoying' or 'fearing'? (or, to flip it, do you think all life that responds to stimulus in this way is 'enjoying' and 'fearing'? - my intuitive answer would be that reacting came before the ability to have a positive/negative internal awareness).
How can I think my way out of the problem when the problem is the way I think?

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: AIs and Animals

Postby idobox » Fri Aug 02, 2013 2:54 pm UTC

It is difficult to be sure. What does fear mean? We can teach a robot to run away from some stimuli.
To be able to teach a machine that way, it must have an internal representation of good and bad, even if it is just a single variable (0 means very bad, 1 very good) that it tries to optimize.
In an animal, being hungry or hurt would have a negative value, and the animal would try to avoid it, and things like being at the right temperature or mating would be positive, and the animal would try to repeat them.
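
As a toy illustration of that single good/bad variable (the states, values and actions below are invented, just to show the idea):

    # each state gets a value on the 0 (very bad) to 1 (very good) scale
    STATE_VALUE = {"hurt": 0.0, "hungry": 0.2, "neutral": 0.5, "warm": 0.8, "mating": 1.0}

    def choose_action(actions):
        # actions maps each available action to the state it is expected to produce;
        # the agent just picks whichever action leads to the highest-valued state
        return max(actions, key=lambda a: STATE_VALUE.get(actions[a], 0.5))

    # choose_action({"eat": "neutral", "stay put": "hungry"})  ->  "eat"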
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

User avatar
tomandlu
Posts: 1111
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK
Contact:

Re: AIs and Animals

Postby tomandlu » Fri Aug 02, 2013 3:39 pm UTC

You've got me clarifying my thoughts a bit... As a hypothesis, it strikes me that consciousness* must evolve after the ability to react positively or negatively to stimulus, since its initial purpose would have been to refine that reaction. It also strikes me that a distinct disadvantage for AI research is that, inevitably, the approach is the other way around. Animal life also starts with a huge advantage - the biological machinery that evolves towards consciousness is part of the same machinery that processes stimuli. Anything horrendous in that idea? (or, alternatively, so bleedin' obvious it hardly needed saying).

* consciousness, not self-awareness
How can I think my way out of the problem when the problem is the way I think?

User avatar
Copper Bezel
Posts: 2426
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Web exclusive!

Re: AIs and Animals

Postby Copper Bezel » Fri Aug 02, 2013 5:05 pm UTC

Advantage in what sense? In terms of expediting development, in terms of getting to a particular end on a particular "budget" of complexity, in the organism that's ultimately produced, or something else? I don't want to start that topic again, but I don't think that the evolutionary process and the development process you're talking about have enough in common to refer to one or the other as having "advantages" over the opposite. But I think idobox's point was that starting at a high degree of abstraction is the only "advantage" we have in the running against a system that's simply far more complex than anything we could produce.

On stimuli and consciousness, I was about to say something like, "We could do worse than to observe at this point that animals with nerves seem to have developed into forms that would have required a central nervous system of some kind in rather short order," and then I realized that that's not true. The first fossils of things that must have had nervous systems and the first fossils of things that must have had some kind of centralization are, like, 70 Ma apart.
So much depends upon a red wheel barrow (>= XXII) but it is not going to be installed.

she / her / her

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: AIs and Animals

Postby idobox » Fri Aug 02, 2013 8:46 pm UTC

tomandlu wrote:You've got me clarifying my thoughts a bit... As a hypothesis, it strikes me that consciousness* must evolve after the ability to react positively or negatively to stimulus, since its initial purpose would have been to refine that reaction. It also strikes me that a distinct disadvantage for AI research is that, inevitably, the approach is the other way around. Animal life also starts with a huge advantage - the biological machinery that evolves towards consciousness is part of the same machinery that processes stimuli. Anything horrendous in that idea? (or, alternatively, so bleedin' obvious it hardly needed saying).

* consciousness, not self-awareness

First of all, consciousness is a pretty difficult thing to define; it would help the discussion to know exactly what you mean by it.

Most animals don't have consciousness, whatever the definition. And human level consciousness is extremely rare, so the odds are not very good for animals evolving it by themselves.

You have to realize that evolution works by small improvements, which means big jumps are impossible, and sometimes you can't go toward the best solution because the intermediate steps are really crappy. Almost any function a body can do (except reproduction), machines can do better.
Neurons are really far from being the perfect computing element. They're noisy and unreliable. To control a simple reflex, like the knee-jerk, you need a few hundred neurons, just so that their responses will be averaged and the random noise will have little impact. Once you study the reflex and determine the function of the neurons involved (in this case, servo control of muscle stiffness), you can emulate it with only a few transistors.
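
For what it's worth, the few-transistor version boils down to something like this in code: a simple proportional controller (the gain and set point are arbitrary numbers, not measured values):

    def stretch_reflex(muscle_length, set_point, gain=5.0):
        # the further the muscle is stretched past its set point,
        # the harder it is told to contract; never a negative command
        error = muscle_length - set_point
        return max(0.0, gain * error)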

And there's no reason to believe the same kind of improvement couldn't be achieved for more complex systems. Planes are better than birds because machines can do things that animals can't.
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: AIs and Animals

Postby Izawwlgood » Fri Aug 02, 2013 8:48 pm UTC

Anyone read Snow Crash? The Rat Thing was a really awesome sci-fi thing to me. I've also read a killer biopunk story about a group of lobsters that achieve a kind of hive mind intelligence after being used to pilot deep ocean crawlers.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

User avatar
Copper Bezel
Posts: 2426
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Web exclusive!

Re: AIs and Animals

Postby Copper Bezel » Fri Aug 02, 2013 9:33 pm UTC

Most animals don't have consciousness, whatever the definition. And human level consciousness is extremely rare, so the odds are not very good for animals evolving it by themselves.

In context, he means something more like sentience than sapience, or even just the ability to judge and make decisions based on a set of sensory information as opposed to an automatic response. I was taking it as synonymous with "has some centralization in the nervous system."
So much depends upon a red wheel barrow (>= XXII) but it is not going to be installed.

she / her / her

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: AIs and Animals

Postby elasto » Fri Aug 02, 2013 9:52 pm UTC

tomandlu wrote:You've got me clarifying my thoughts a bit... As a hypothesis, it strikes me that consciousness* must evolve after the ability to react positively or negatively to stimulus, since its initial purpose would have been to refine that reaction.

Well. Yeah. Even single-celled organisms can do that, so it would seem a reasonable hypothesis :)

zenten
Posts: 3799
Joined: Fri Jun 22, 2007 7:42 am UTC
Location: Ottawa, Canada

Re: AIs and Animals

Postby zenten » Sat Aug 03, 2013 4:24 am UTC

Intelligence isn't one thing though. It gets a bit messy because living things love to recombine distinct parts in different ways for different functions, but, say, being good at language is a different thing from recognizing objects, which is a different thing from recognizing you are being hunted. So saying that artificial intelligence research has been a failure because it hasn't managed to completely simulate all the behaviours of any given animal is like saying biology research is a failure because it hasn't managed to completely simulate all the functions of a given animal.

User avatar
tomandlu
Posts: 1111
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK
Contact:

Re: AIs and Animals

Postby tomandlu » Sat Aug 03, 2013 6:37 am UTC

Clarifying here...

By "advantage", I mean "advantage in creating consciousness" - i.e. what (I assume) is the eventual aim in A.I. development (and I'm not implying that evolution had such an aim, but clearly selection took place). By 'consciousness', as Copper Bezel said, I mean sentience rather than sapience - basically the ability to go "ouch" and mean it. Such an ability clearly has some evolutionary advantage for motile animals, presumably by providing more options in terms of reaction, rather than just a passive response to stimuli, such as you see in plants. As a final clarification, "intelligence" is generally linked with sapience, but in this context I mean "observed intelligence", and don't mean to imply self-awareness.

I think my underlying assumption (which may be wrong as well as contradicting something I said in the OP) is that intelligence requires sapience, and sapience requires sentience. So...

Can you remove sapience and sentience and still have intelligence?
Can you create intelligence without first creating sentience and sapience?
Can we theoretically build something that will, for instance, behave identically to an ant, or even some simpler animal, using the current approach to AI, or does the lack of sentience make that impossible?

If the answers to the above questions are all 'No', then that would seem to imply that AI will need to model sentience as a prerequisite to intelligence, rather than the other way around.
How can I think my way out of the problem when the problem is the way I think?

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: AIs and Animals

Postby elasto » Sat Aug 03, 2013 9:54 am UTC

tomandlu wrote:Clarifying here...

By "advantage", I mean "advantage in creating consciousness" - i.e. what (I assume) is the eventual aim in A.I. development (and I'm not implying that evolution had such an aim, but clearly selection took place). By 'consciousness', as Copper Bezel said, I mean sentience rather than sapience - basically the ability to go "ouch" and mean it.


I *seriously* doubt that's a primary aim in AI development. I mean it'd raise a ridiculous number of ethical considerations. A spellcheck/grammarcheck plugin to your browser that feels discomfort or even pain at reading a badly-written sentence might be better than other plugins but would you really be comfortable using it..?

(There will be some researchers directly investigating consciousness itself of course, but that's a slightly different matter)

Yes, you're right that it might turn out that consciousness is naturally and unavoidably emergent from any true general AI - because consciousness is probably emergent from systems modelling themselves in some fashion - but for me it's something to be avoided rather than sought out. A serious complication rather than a help. Unfortunately, though, the first time we create artificial consciousness we probably won't even know we've done it, and we'll be as likely to have created a being in pure hell as in pure heaven :/

Meteoric
Posts: 333
Joined: Wed Nov 23, 2011 4:43 am UTC

Re: AIs and Animals

Postby Meteoric » Sun Aug 04, 2013 11:32 pm UTC

Creating consciousness may be one eventual aim of AI research, but I'm not sure it's sensible to talk about any single goal for the entire field.

My impression of the current state of AI has been that we can make a lot of separate parts and improve on them individually, but don't yet have the ability to make anything like a complete animal mind, no matter how dumb (because our parts aren't good enough, and because we don't have all the parts or really understand how to fit them together yet). If this is an accurate characterization (and anybody more informed is welcome to correct me on it), I wouldn't expect progress in AI to even be describable by comparisons to various animals, at least not yet.
No, even in theory, you cannot build a rocket more massive than the visible universe.

Thrasymachus
Posts: 141
Joined: Sat Sep 22, 2012 8:40 pm UTC

Re: AIs and Animals

Postby Thrasymachus » Mon Aug 05, 2013 4:15 am UTC

By 'consciousness', as Copper Bezel said, I mean sentience rather than sapience - basically the ability to go "ouch" and mean it.


Meaning it is sapience, though. Sentience is just the ability to go "ouch." Sapience is the ability to go "ouch" and judge/explain that "ouch" is bad. Sentience is just the ability to sense the world and act differently based on what is sensed. A roomba, or any other robot with a sensor that can avoid obstacles and navigate back to its charging station through a variable environment is sentient. It's not very sentient because it doesn't sense much, and its ability to behave differently when presented with different scenarios it can sense is limited. But there's a robot that goes around autonomously, avoiding obstacles like people moving through the hallways, and collects empty soda cans off of unattended desks. It's sophisticated enough that it doesn't get tripped up if you move the can out of reach, or even if you pick up the can and put it in its little mechanical hand. That sucker's probably at least as sentient as your average bacterium. Sentience is something that can be had in greater or lesser degree, even within the same organism/entity. You're not sentient when you're asleep, and the roomba's not sentient when it's charging or off.

Sapience, on the other hand, is the ability to unify and normatively judge experiences. That's what gives you better/worse, good/bad and right/wrong. Without sapience, you have a set of reactions, whether learned or innate, to certain stimulations of the sensors. Unlike sentience, sapience is either something you can do, or something you can't. Either your experience is a unified field of meaning, or it's not. Sentience doesn't imply meaning, it implies behavior. There's no reason to believe that even such "advanced" organisms as dogs and cats are sapient. When your dog is begging for treats, whining and adopting an upright posture, it's not because he judges that it's the right thing to do right now, because he really wants a treat. It's because his sensors, his eyes and ears and nose and the nerves in his stomach, and some sensors for blood sugar levels in his brain and gut and probably a whole host of other sensors are firing in a certain configuration, and adopting a certain posture and making certain noises is just what he does when his sensors are firing in that way. His posture and the noises he makes don't have any meaning for him, at least, not of the sort that things are meaningful for us. They may change the configuration of sensors that are firing, and that may lead to further, different behavior, but that further behavior is not meaningfully linked to the prior behavior.

And that's the way most human beings act, most of the time, especially when doing routine things. When you take a step, you don't consciously judge the movement of your legs and the placement of your foot, though of course your body and part of your brain is monitoring and controlling those things. You could try to consciously step, determining the meaning of every twitch of your muscles, the slope of the grade, the placement and pressure you place on your toes, and so on. You'd probably mess it up, and you'll be lucky to not fall on your face. But the fact that you could try it is what makes you sapient, even if you never do try it. Sapience is built on sentience, and requires it, but represents a sort of disconnect between the state of the sensors and the resulting behavior. Think about when you learn to drive on a manual transmission car, and you develop a habit of picking up your left foot and depressing the clutch pedal, and grasping for the gear shifter on the right, when you perceive a certain speed and hear a certain noise from the engine, then switch to an automatic transmission. When you feel the impulse to pick up your foot and grasp the shifter, but stop yourself before you do more than twitch because you recognize that doing so would now be pointless, you've exercised your sapience. And if you go ahead and stomp on the floor and grasp air before you can stop yourself, then feel silly or ashamed for doing so even though you're all by yourself, that's your sapience too.

User avatar
tomandlu
Posts: 1111
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK
Contact:

Re: AIs and Animals

Postby tomandlu » Mon Aug 05, 2013 8:27 am UTC

Thrasymachus wrote:
By 'consciousness', as Copper Bezel said, I mean sentience rather than sapience - basically the ability to go "ouch" and mean it.


Meaning it is sapience, though. Sentience is just the ability to go "ouch." Sapience is the ability to go "ouch" and judge/explain that "ouch" is bad. Sentience is just the ability to sense the world and act differently based on what is sensed. A roomba, or any other robot with a sensor that can avoid obstacles and navigate back to its charging station through a variable environment is sentient. It's not very sentient because it doesn't sense much, and its ability to behave differently when presented with different scenarios it can sense is limited. But there's a robot that goes around autonomously, avoiding obstacles like people moving through the hallways, and collects empty soda cans off of unattended desks. It's sophisticated enough that it doesn't get tripped up if you move the can out of reach, or even if you pick up the can and put it in its little mechanical hand. That sucker's probably at least as sentient as your average bacterium. Sentience is something that can be had in greater or lesser degree, even within the same organism/entity. You're not sentient when you're asleep, and the roomba's not sentient when it's charging or off.

Sapience, on the other hand, is the ability to unify and normatively judge experiences. That's what gives you better/worse, good/bad and right/wrong. Without sapience, you have a set of reactions, whether learned or innate, to certain stimulations of the sensors. Unlike sentience, sapience is either something you can do, or something you can't. Either your experience is a unified field of meaning, or it's not. Sentience doesn't imply meaning, it implies behavior. There's no reason to believe that even such "advanced" organisms as dogs and cats are sapient.

...



This just sounds odd to me. If your definition of sentience cannot distinguish qualitatively and not just quantitatively between a roomba and a dog, then your definition is wrong IMHO.

Moving on - okay, if the aim of AI is not to create a sentient, sapient being, then what is being created? Can AI, without those attributes, ever be more than just a very complex program?

If we slightly modify the Turing test to "can we tell whether the AI is sentient and sapient or not?" (the original phrasing of the test is compromised IMHO by the requirement of the AI to lie), is it imaginable that an AI could exist that would pass the test that wasn't actually sentient and sapient?
How can I think my way out of the problem when the problem is the way I think?

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: AIs and Animals

Postby elasto » Mon Aug 05, 2013 11:40 am UTC

Thrasymachus wrote:Sapience, on the other hand, is the ability to unify and normatively judge experiences. That's what gives you better/worse, good/bad and right/wrong. Without sapience, you have a set of reactions, whether learned or innate, to certain stimulations of the sensors. Unlike sentience, sapience is either something you can do, or something you can't. Either your experience is a unified field of meaning, or it's not.


Eh...

I agree to an extent: sapience is a non-linear, emergent property - but I don't think it's as all-or-nothing as you make out. I mean, it's not like even with a baby there's one day when it's not sapient and the next day it's as sapient as any adult. For one thing I don't think sapience is a single thing, I think it's multifaceted, and different facets can come 'online' at different times. For another thing I think each facet can deepen in sophistication.

I think it's quite possible social mammals like dogs have some sapience in common with humans. Sure, most of their behaviours are pure unthinking instinct, but I look at my dog looking at me, and it behaves as if it is quite capable of perceiving whether I'm happy or angry. That kind of 'what does that person think of me?' is most easily explained by sapience, and, indeed, the drive to perceive others' intentions more accurately was probably the force behind sapience's evolution.

Humans have a more sophisticated form: 'What does that person think I think of them?' which dogs almost certainly don't have - evidenced by their lack of duplicity if nothing else. Whether the latter is a different 'facet' of sapience or whether it's the same facet just turned up a notch I'm not sure - and I'm not sure it makes much difference either way.

There will come a point when robots could make use of that kind of social sophistication, but I don't think we're anywhere near that yet. 'Pure instinct' or 'reflex behaviour' from our AI is good enough for the foreseeable future.

User avatar
Copper Bezel
Posts: 2426
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Web exclusive!

Re: AIs and Animals

Postby Copper Bezel » Mon Aug 05, 2013 12:23 pm UTC

What you're describing is theory of mind. I think it's a part of sapience, although I don't think it covers everything that we normally mean by that term that isn't included in "sentience." It definitely exists on a grade as the rest do, though, and isn't all-or-nothing.

I'm not sure I'd describe even human awareness as a "unified field of meaning" (at best, that's the parts of our minds that we're consciously aware of under some specific conditions) and certainly not in a way that we can objectively separate it from the experience of other intelligent mammals. We know objectively that humans can abstract in a way that no other animal can, and higher-level abstracted thinking seems to be a product of the wiring for language, but of course, there are other apes that can think symbolically, which means that there's a limit to just how peculiar our wiring really is and how much of that really is a result of the development of language.
So much depends upon a red wheel barrow (>= XXII) but it is not going to be installed.

she / her / her

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: AIs and Animals

Postby idobox » Mon Aug 05, 2013 1:24 pm UTC

Meteoric wrote:Creating consciousness may be one eventual aim of AI research, but I'm not sure it's sensible to talk about any single goal for the entire field.

Creating consciousness is the holy grail of computational neuroscientists (who try to understand how the brain works) like me, and of a branch of AI researchers.
Most research done on AI is not on replicating the human mind, but on making programs that do specific stuff intelligently. You don't want Siri to have existential dread, your spellchecker to have a sense of humour or your cleaning robot to dream of becoming an artist.

elasto wrote:I agree to an extent: sapience is a non-linear, emergent property

The word emergent is often used as an explanation when we have no idea how things work. Some people believe that a large enough neural network will become conscious or sapient; I don't.
I believe the architecture of our brain is the source of sapience, and you could make a much smaller and dumber mind that is sapient, or a much more complex and big mind that is not sapient.

tomandlu wrote:This just sounds odd to me. If your definition of sentience cannot distinguish qualitatively and not just quantitatively between a roomba and a dog, then your definition is wrong IMHO.

No, Roombas, as well as bacteria, are sentient. It's just that sentience is not a particularly remarkable property.

The few big things people usually think of when they use the words sapience, sentience, consciousness, etc are the following:
-being aware of its own existence as different from the rest of the world (that's self-awareness)
-having a model of the world you can use to do mental experiments or understand stuff (this one exists in AI)
-having emotions (we can make robots afraid, but do they feel afraid?)
-abstract thought (some AIs can do it, but are very specialized)
-moral values (the difference between it's bad for me, and it's wrong)

All these things are more or less possible to implement in an AI, but the result is still not comparable to a human, and we're not sure what is missing.
My personal favourite definition of self-awareness is the ability to observe your own mental process, judge it, and alter it if need be. If you've ever played with electronics, you are aware of all the wonderfulness that arises when you add feedback loops, and this is the mother of all feedback loops.
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: AIs and Animals

Postby Izawwlgood » Mon Aug 05, 2013 2:36 pm UTC

idobox wrote:The word emergent is often used as an explanation when we have no idea how things work. Some people believe that a large enough neural network will become conscious or sapient; I don't. I believe the architecture of our brain is the source of sapience, and you could make a much smaller and dumber mind that is sapient, or a much more complex and big mind that is not sapient.
Which is funny, because arguably our own sapience is an emergent property of the architecture of our brain, particularly as it develops.

Emergence is not, I would say, only used to indicate 'we don't understand it'. It's used to indicate that the product is greater than the sum of the parts, which is true of many many biological systems.

idobox wrote:No, Roombas, as well as bacteria, are sentient. It's just that sentience is not a particularly remarkable property.
I think you're using almost irresponsibly loose applications of the word 'sentient' here. Neither of those things are sentient. No one is interested in 'is this thing more complex than a rock? yes? it's sentient' as your definition of sentience.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

User avatar
tomandlu
Posts: 1111
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK
Contact:

Re: AIs and Animals

Postby tomandlu » Mon Aug 05, 2013 4:06 pm UTC

idobox wrote:Some people believe that a large enough neural network will become conscious or sapient; I don't.
I believe the architecture of our brain is the source of sapience, and you could make a much smaller and dumber mind that is sapient, or a much more complex and big mind that is not sapient.


This...

...

All these things are more or less possible to implement in an AI, but the result is still not comparable to a human, and we're not sure what is missing.
My personal favourite definition of self-awareness is the ability to observe your own mental process, judge it, and alter it if need be. If you've ever played with electronics, you are aware of all the wonderfulness that arises when you add feedback loops, and this is the mother of all feedback loops.


So, how comparable? Could we, for example, theoretically build an AI that could happily chat about being an AI, admit that it wasn't actually sentient* or sapient, but still have a conversation consistent with human intelligence and show that it understood the distinction?

* for some definitions of sentient
How can I think my way out of the problem when the problem is the way I think?

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: AIs and Animals

Postby idobox » Mon Aug 05, 2013 5:00 pm UTC

Izawwlgood wrote:Which is funny, because arguably our own sapience is an emergent property of the architecture of our brain, particularly as it develops.

Emergence is not, I would say, only used to indicate 'we don't understand it'. It's used to indicate that the product is greater than the sum of the parts, which is true of many many biological systems.

We don't say video games are an emergent property of transistors. Emergence implies self-organisation, something that is not proven to be the case for consciousness.
If you take ants, and tell them to go directly to the anthill and leave a pheromone trail when they find food, you have an emergent behaviour resulting in ants going directly to food sources.
If you take a self-organizing map (a type of neural network) and feed it data, it will classify the data, and that is an emergent behaviour.
When you wire transistors in a very precise and thought-out way, there is nothing emergent. Emergence would be if a box full of transistors in random positions had some interesting properties.
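
To show what I mean by the self-organizing map case, here is a minimal one in Python/numpy; the grid size, learning rate and decay schedule are arbitrary choices, and it is only a sketch of the principle, not a model of any brain area:

    import numpy as np

    def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
        rng = np.random.default_rng(seed)
        w = rng.random((grid[0], grid[1], data.shape[1]))   # one weight vector per unit
        gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
        for t in range(iters):
            x = data[rng.integers(len(data))]               # pick a random sample
            d = ((w - x) ** 2).sum(axis=2)
            by, bx = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
            lr = lr0 * np.exp(-t / iters)
            sigma = sigma0 * np.exp(-t / iters)
            nb = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            w += lr * nb[..., None] * (x - w)               # pull the neighbourhood toward the sample
        return w

Feed it, say, train_som(np.random.rand(500, 2)) and nearby units end up tuned to nearby points; nobody specifies the wiring, the map organizes itself, which is exactly the sense of 'emergent' I mean.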

Some people think that consciousness is an emergent behaviour: throw in enough neurons, connect them randomly, give them a certain set of rules, wait a little bit, and you get consciousness.
I believe you need to wire neurons a certain way to get consciousness, that the network doesn't converge there spontaneously but has to be constrained.

This is the realm of belief, the debate is still hot, and both sides have arguments. As always, the answer is probably a little bit of both, and I wouldn't be surprised if some important blocks are self organizing.

Izawwlgood wrote:I think you're using almost irresponsibly loose applications of the word 'sentient' here. Neither of those things are sentient. No one is interested in 'is this thing more complex than a rock? yes? it's sentient' as your definition of sentience.

from wikipedia
Sentience is the ability to feel, perceive, or to experience subjectivity. Eighteenth century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience). In modern western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as "qualia").

A system that can perceive things and assign them an abstract value (good/bad) is sentient. When you are hungry, you don't feel "low blood sugar" but "need food", and when a roomba has run for a long time, it doesn't feel "battery voltage is getting low" but "I need to recharge before I run out of battery".
Some people simplify the problem as "can it feel pain?" which is very difficult to answer. A robot with skin sensors that is programmed to avoid extreme values of pressure and temperature could be considered to feel pain and learn from it. Preventing a roomba from reaching its base station when it needs to recharge is pretty close to inflicting pain, as you're keeping it in a state it tries to avoid.
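
A trivial sketch of that abstraction step (the thresholds are invented numbers, not real Roomba parameters):

    def drive_state(battery_voltage, low=13.0, critical=12.2):
        # the planner never sees the raw voltage, only the abstract label
        if battery_voltage <= critical:
            return "must recharge now"
        if battery_voltage <= low:
            return "should recharge soon"
        return "fine"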

tomandlu wrote:So, how comparable? Could we, for example, theoretically build an AI that could happily chat about being an AI, admit that it wasn't actually sentient* or sapient, but still have a conversation consistent with human intelligence and show that it understood the distinction?

I'm not sure. Chatterbots are getting pretty good, and one might be able to fool you.
Given that we humans have a lot of trouble defining sapience, integrating that concept into a program that isn't itself sapient seems very difficult. I imagine a chatterbot could be aware that its rules and values are hard-coded and that it is unable to change them, while humans are different in that regard. Making it eager to discuss being an AI isn't very difficult, and many AIs have a system of reward/punishment learning. I'm not sure we can tell the difference between being happy and receiving a reward in humans, or if there even is one; at least dopamine levels react the same.
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: AIs and Animals

Postby Izawwlgood » Mon Aug 05, 2013 5:16 pm UTC

idobox wrote:We don't say video games are an emergent property of transistors. Emergence implies self-organisation, something that is not proven to be the case for consciousness.
If you take ants, and tell them to go directly to the anthill and leave a pheromone trail when they find food, you have an emergent behaviour resulting in ants going directly to food sources.
If you take a self-organizing map (a type of neural network) and feed it data, it will classify the data, and that is an emergent behaviour.
When you wire transistors in a very precise and thought-out way, there is nothing emergent. Emergence would be if a box full of transistors in random positions had some interesting properties.
You're confusing (deliberately, it seems) levels of organization though. No one would claim videogames are an emergent property of transistors, but you might say self-perpetuating organized movement is an emergent property of a particular arrangement in Conway's Game of Life. The biological analogy is 'You wouldn't say the Mona Lisa is an emergent property of DNA replication', to which anyone with half a brain would say 'Well, duh'.

When people say that they think consciousness may emerge from a complex enough neural network, they're presupposing that the transistors are already arrayed in a sufficiently complex manner to comprise blocks analogous to brain parts; i.e., this cluster of transistors is analogous to the amygdala, this cluster of transistors is analogous to Broca's Area, etc., etc. In this vein, no one would suggest that consciousness could emerge if you dumped a few trillion neurons and glia into a gelatinous matrix; everyone understands that when we talk about 'parts', we aren't talking about one of the most fundamental units of those parts, but rather groups.

idobox wrote:Some people think that consciousness is an emergent behaviour: throw in enough neurons, connect them randomly, give them a certain set of rules, wait a little bit, and you get consciousness.
I believe you need to wire neurons a certain way to get consciousness, that the network doesn't converge there spontaneously but has to be constrained.

So, no, no one thinks the top part. Many people think the second part. To elaborate; no neuroscientist would agree that neuronal organization is random, or that neuronal organization is irrelevant to brain function.

idobox wrote:A system that can perceive things and assign them an abstract value (good/bad) is sentient.
I think this is why your definition is useless; 'respond to stimulus' is not what I understand that definition is meant for. I think the keyword in the linked definition is 'subjectivity'. The entity needs to be making decisions based on stimulus. Which is still a poor definition, because bacteria do that, and I don't feel bacteria are sentient.

idobox wrote:Some people simplify the problem as "can it feel pain?" which is very difficult to answer. A robot with skin sensors that is programmed to avoid extreme values of pressure and temperature could be considered to feel pain and learn from it. Preventing a roomba from reaching its base station when it needs to recharge is pretty close to inflicting pain, as you're keeping it in a state it tries to avoid.
Yeah, again, these definitions are shitty. I've heard 'can it suffer' as a better way of putting it, as objectively, we can probably all agree a Roomba isn't suffering if its designated parameters aren't being met, and I feel comfortable assuming that bacteria don't suffer even when they are expressing stress pathways. 'Suffering' is a very anthropomorphic condition; I feel it is very difficult to define suffering.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

User avatar
davidstarlingm
Posts: 1255
Joined: Mon Jun 01, 2009 4:33 am UTC

Re: AIs and Animals

Postby davidstarlingm » Mon Aug 05, 2013 6:34 pm UTC

idobox wrote:My personal favourite definition of self-awareness is the ability to observe your own mental process, judge it, and alter it if need be. If you've ever played with electronics, you are aware of all the wonderfulness that arises when you add feedback loops, and this is the mother of all feedback loops.

Can dogs observe and modify their own behavior?

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: AIs and Animals

Postby idobox » Mon Aug 05, 2013 7:44 pm UTC

Izawwlgood wrote:You're confusing (deliberately, it seems) levels of organization though. No one would claim videogames are an emergent property of transistors, but you might say self-perpetuating organized movement is an emergent property of a particular arrangement in Conway's Game of Life. The biological analogy is 'You wouldn't say the Mona Lisa is an emergent property of DNA replication', to which anyone with half a brain would say 'Well, duh'.

When people say that they think consciousness may emerge from a complex enough neural network, they're presupposing that the transistors are already arrayed in a sufficiently complex manner to comprise blocks analogous to brain parts; i.e., this cluster of transistors is analogous to the amygdala, this cluster of transistors is analogous to Broca's Area, etc., etc. In this vein, no one would suggest that consciousness could emerge if you dumped a few trillion neurons and glia into a gelatinous matrix; everyone understands that when we talk about 'parts', we aren't talking about one of the most fundamental units of those parts, but rather groups.

So, no, no one thinks the top part. Many people think the second part. To elaborate; no neuroscientist would agree that neuronal organization is random, or that neuronal organization is irrelevant to brain function.

If your formal neurons are meticulously arranged in a complex network comprising many different subparts, possibly with different types of neurons, then whatever behaviour it has is not emergent by my definition.
I've worked with a model of basal ganglia where we carefully designed pathways for reward signals, cue signals, and such, and the thing was able to learn to associate cues with rewards. That behaviour is not emergent.
On the other hand, I've met a guy (AI expert, not neuroscientist) who thought layering enough self-organizing maps would result in self-awareness and was actually writing a book on the subject.

Izawwlgood wrote:I think this is why your definition is useless; 'respond to stimulus' is not what I understand that definition is meant for. I think the keyword in the linked definition is 'subjectivity'. The entity needs to be making decisions based on stimulus. Which is still a poor definition, because bacteria do that, and I don't feel bacteria are sentient.

That's because you're used to the word being used in science fiction with the meaning of 'conscious'. Nobody ever uses it in AI because it is a useless concept in AI.
It is useful in philosophy to distinguish perception from abstraction, thought and whatnot. An extreme form of locked-in syndrome could be understood as a functioning mind that is not sentient, but still sapient and conscious.

davidstarlingm wrote:Can dogs observe and modify their own behavior?

I think they can only learn by reinforcement (reward when they do well, punishment when they do badly); you would have to ask a dog trainer. I don't even think they can learn by observing other dogs.
If you teach a dog to use a flap door, and then lock the flap door, it will try quite a few times before it realizes it doesn't work. A human in the same situation will first make sure the door doesn't work, and then either try to find another way or try to work out what could be causing it.
I think apes, parrots and octopuses have demonstrated this kind of ability, but I really don't know much about animal intelligence.
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: AIs and Animals

Postby Izawwlgood » Mon Aug 05, 2013 8:08 pm UTC

idobox wrote:If your formal neurons are meticulously arranged in a complex network comprising many different subparts, possibly with different types of neurons, then whatever behaviour it has is not emergent by my definition.
No, the point was referring to the levels of organization that you're somewhat haphazardly mixing. The brain is not merely a trillion neurons that happen to be connected in the right way, it's also a trillion neurons arranged into a few thousand 'sections' which are in turn connected in very specific ways. Just like a computer isn't just a bunch of transistors with the software to make use of them, it's actually a bunch of transistors linked into blocks, which are in turn linked into blocks.
(I'm aware that I'm off by an order of magnitude for the number of neurons in a brain. As far as Izawwlgood estimates go, this was actually pretty close)

idobox wrote:I've worked with a model of basal ganglia where we carefully designed pathways for reward signals, cue signals, and such, and the thing was able to learn to associate cues with rewards. That behaviour is not emergent.
I'm not sure what your point is here. Is this model computational? Is it a neuronal slice? If you're saying you can train a neuronal tissue culture sample to respond to stimuli, I don't think you're saying anything that's particularly surprising to anyone who's worked with neurons. I can show you the same thing with individual Drosophila neurons, and just today attended a talk where someone switched this behavior off with an optogenetics-like mouse model.

That behavior may not be emergent because you're talking about what amounts to a switch. Switches can exist on the atomic, molecular, or organismal level. Neuronal circuits are particularly awesome at demonstrating this sort of behavior because when they are reduced to their component parts, they are indeed fairly simple and describable, and when you get a whole mess of them organized the right way, you do in fact get the Mona Lisa.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: AIs and Animals

Postby idobox » Tue Aug 06, 2013 3:19 pm UTC

Izawwlgood wrote:I'm not sure what your point is here. Is this model computational? Is it a neuronal slice? If you're saying you can train a neuronal tissue culture sample to respond to stimuli, I don't think you're saying anything that's particularly surprising to anyone who's worked with neurons. I can show you the same thing with individual Drosophila neurons, and just today attended a talk where someone switched this behavior off with an optogenetics-like mouse model.

It is a simple computational model.

We seem to disagree on what emergent means.
For me emergence is when you have agents with simple rules, and when you put a lot of them together without ordering them, they start to have more complex behaviour.
Transistors are not like that because you need to connect them in a specific way to get useful behaviour.
Some neural networks are like that; self-organizing maps are an example: they don't need to be organized, and are able to classify whatever data you throw at them.
Some neural networks are not, and get their properties because of the way they are wired, and if you screw with the wiring, it stops working. The basal ganglia are an example of this.

Now, some people think that consciousness is a property of the neocortex, and is largely emergent, i.e. the wiring at birth is mostly random and uniform, and because of the stimuli it gets, it organises itself in a way that makes consciousness emerge. Given that we don't understand much, it is a possibility.
Personally, I think the circuits needed for consciousness are anatomical, that the substrate is not initially uniform. Of course, some parts of the circuit might have some emergent or self-organizing properties, but not the circuit as a whole. It is a relatively popular opinion, but it is still just speculation at this point.
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: AIs and Animals

Postby Izawwlgood » Tue Aug 06, 2013 6:05 pm UTC

idobox wrote:We seem to disagree on what emergent means.
For me emergence is when you have agents with simple rules, and when you put a lot of them together without ordering them, they start to have more complex behaviour.
Transistors are not like that because you need to connect them in a specific way to get useful behaviour.
No, we seem to agree on what emergent means.
But yes, I know transistors aren't like that, because as I said, they aren't the equivalent component to the brain as a neuron is. You brought up transistors as an analogy, not me.

idobox wrote:Now, some people think that consciousness is a property of the neocortex, and is largely emergent, i.e. the wiring at birth is mostly random and uniform, and because of the stimuli it gets, it organises itself in a way that makes consciousness emerge. Given that we don't understand much, it is a possibility.
Personally, I think the circuits needed for consciousness are anatomical, that the substrate is not initially uniform. Of course, some parts of the circuit might have some emergent or self-organizing properties, but not the circuit as a whole. It is a relatively popular opinion, but it is still just speculation at this point.
I would be very surprised if anyone who knew anything about these things actually thought that brain organization at birth was random. Hell, that it was random at ANY stage after neural crest formation.

I think you're underestimating the very carefully managed process of neural organization that goes into our development, and I'd be curious to be pointed towards some literature that suggests it's just a random conglomeration from which we happen to get brains.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: AIs and Animals

Postby elasto » Tue Aug 06, 2013 6:06 pm UTC

For me, all I meant by emergent is that, when you have a chaotic system with extreme feedback loops, you get results that would have been very hard or even impossible to predict just knowing the initial rules - eg. CGoL.

If I had to guess, and it's only a guess, I think consciousness is intrinsically tied to such chaotic feedback loops that include a model of their own state as part of the data fed back.

But it's very hard to see where the 'magic' comes in. I mean, there's no reason you can't 'unroll' all the loops of a brain's processing into a super-massively long linear bit of code - nothing but a series of 'if-then' statements - and if you were to ask 'is that code conscious?', it's hard to see why it wouldn't be just as conscious as the more compact rolled-up soft squidgy version...

One thing seems to be clear though: We know that only a relatively small part of the brain actually contributes to consciousness. Other parts can be destroyed without the person reporting any awareness of loss of consciousness.

That holds out hope for me that it's not the case that any and every program we write is conscious - that it's not just that our code is conscious but has no means to communicate that fact to us - that it's actually hard to produce a conscious thing. And that's a good thing from my point of view: Consciousness is a curse and a burden as much as it is a joy and a miracle. I wouldn't want my spellchecker conscious, as I say. Not unless I knew that it experienced pleasure from spotting a spelling mistake and not just pain..!

Also, it seems to be clear our brain does not at all experience a uniform experience. The left and right sides both experience the world consciously, but utterly differently. But somehow 'I' combine this into a uniform experience. All very mysterious and interesting.

This is worth seeing if you haven't before: A brain scientist experienced a stroke which knocked out the left hemisphere of her brain, and she experienced the world through the right hemisphere only: http://www.ted.com/talks/jill_bolte_tay ... sight.html

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: AIs and Animals

Postby Izawwlgood » Tue Aug 06, 2013 6:23 pm UTC

elasto wrote:But it's very hard to see where the 'magic' comes in. I mean, there's no reason you can't 'unroll' all the loops of a brain's processing into a super-massively long linear bit of code - nothing but a series of 'if-then' statements - and if you were to ask 'is that code conscious?', it's hard to see why it wouldn't be just as conscious as the more compact rolled-up soft squidgy version...
This is where I start to handwave, but it's my understanding that thinking about the brain as 'really complex code' is incorrect, as the brain is running a host of parallel processes simultaneously. It's less about 'what is the output for this input' and more 'how do all these things interact and affect the output'. I suppose?
elasto wrote:One thing seems to be clear though: We know that only a relatively small part of the brain actually contributes to consciousness. Other parts can be destroyed without the person reporting any awareness of loss of consciousness.
I think it's all relative; people with brain injuries report pretty significant changes to their behavior. If you're thinking about, say, Phineas Gage, who was basically lobotomized by his accident, the man distinctly reported a change to awareness and change of personality. In some senses you're correct; the brain is fairly plastic, and if small regions are damaged it's possible over time other regions will compensate. But you can't just willy-nilly chop out random chunks of brain and expect someone to be alright.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: AIs and Animals

Postby idobox » Tue Aug 06, 2013 7:05 pm UTC

Izawwlgood wrote:
elasto wrote:But it's very hard to see where the 'magic' comes in. I mean, there's no reason you can't 'unroll' all the loops of a brain's processing into a super-massively long linear bit of code - nothing but a series of 'if-then' statements - and if you were to ask 'is that code conscious?', it's hard to see why it wouldn't be just as conscious as the more compact rolled-up soft squidgy version...
This is where I start to handwave, but it's my understanding that thinking about the brain as 'really complex code' is incorrect, as the brain is running a host of parallel processes simultaneously. It's less about 'what is the output for this input' and more 'how do all these things interact and affect the output'. I suppose?

A typical artificial neural network is represented as a weight matrix and a non-linear function. You represent all the input values (possibly including the outputs of your own neurons, for recurrence) as a vector, multiply by the weight matrix to get the weighted sum arriving at each neuron, and apply the non-linear function to that to get the outputs.
The non-linear function can be quite complex, and the weights change as the network learns, but that's basically how we do it. When the network is organized into layers or modules, rather than using a single big matrix we separate the neurons into groups and use smaller matrices to connect the groups.

This operation is traditionally carried out sequentially on modern computers; more and more frequently it is done on massively parallel devices, and I've heard of people using holograms to represent the matrix and lasers as the inputs.
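
To make that concrete, here is a minimal sketch of the matrix-and-non-linearity picture in plain NumPy. The sizes and the logistic non-linearity are arbitrary choices for illustration, not anyone's actual model:

import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_neurons = 8, 4                   # arbitrary sizes, just for illustration
W = rng.normal(size=(n_neurons, n_inputs))   # weight matrix (learned in practice)
x = rng.normal(size=n_inputs)                # input vector

def sigmoid(z):
    # logistic non-linearity, applied element-wise
    return 1.0 / (1.0 + np.exp(-z))

a = W @ x          # weighted sum arriving at each neuron
y = sigmoid(a)     # neuron outputs after the non-linearity
print(y)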

elasto wrote:Also, it seems clear that our brain does not produce a single uniform experience. The left and right sides both experience the world consciously, but utterly differently. But somehow 'I' combine this into a uniform experience. All very mysterious and interesting.

Many people think the concept of self is an illusion. The part of my brain that processes hunger and the part that deals with face recognition don't communicate, but somehow we feel that the same entity both perceives hunger and identifies people. Most people think there is a circuit somewhere that aggregates all this information and creates the self, but we're not sure.

elasto wrote:For me, all I meant by emergent is that, when you have a chaotic system with extreme feedback loops, you get results that would have been very hard or even impossible to predict just knowing the initial rules - eg. CGoL.

Be careful with words like chaotic in this kind of discussion.
A robot that moves around a room, grabs objects it finds, and releases them when it bumps into something will create piles, and eventually a single pile. The robot doesn't have to be chaotic or random to do it, and the feedback loop is not extreme, but the behaviour is still emergent: simple rules, no requirement for a pre-existing organisation, and a complex result.
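
If you want to see it happen, here is a rough, hypothetical simulation of that pile-building rule. The grid size, chip count and step count are made-up numbers, and I drop the carried object next to the one bumped into:

import random

SIZE, CHIPS, STEPS = 20, 80, 200_000
random.seed(1)

chips = set()
while len(chips) < CHIPS:                     # scatter objects at random
    chips.add((random.randrange(SIZE), random.randrange(SIZE)))

def neighbours(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

x, y = SIZE // 2, SIZE // 2
carrying = False

for _ in range(STEPS):
    x, y = random.choice(neighbours(x, y))    # random walk on a torus
    if not carrying and (x, y) in chips:
        chips.remove((x, y))                  # grab an object you find
        carrying = True
    elif carrying and (x, y) in chips:        # bumped into another object
        empty = [c for c in neighbours(x, y) if c not in chips]
        if empty:
            chips.add(random.choice(empty))   # drop yours next to it
            carrying = False

# crude measure of piling: how many objects now sit next to another object
clustered = sum(any(n in chips for n in neighbours(*c)) for c in chips)
print(f"{clustered}/{len(chips)} objects have at least one neighbour")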

Izawwlgood wrote:I would be very surprised if anyone who knew anything about these things actually thought that brain organization at birth was random. Hell, or that it was random at ANY stage after neural crest formation.
I think you're underestimating the very carefully managed process of neural organization that goes into our development, and I'd be curious to be pointed towards some literature that suggests it's just a random conglomeration from which we happen to get brains.

Cortical columns are quite homogeneous all over the cortex, and I don't think we have connectivity maps of newborns.
If we consider the columns to be identical all over the cortex, and the column-to-column connections to be random at birth (not the connections to and within the deeper structures), then whatever the cortex does will be emergent. It also explains the plasticity.

If you believe the cortex is structured by physiological processes rather than stimuli (both external and recursive connections), then you do not believe anything happening in it is emergent (by my definition of the word).
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: AIs and Animals

Postby Izawwlgood » Tue Aug 06, 2013 7:16 pm UTC

idobox wrote:Cortical columns are quite homogeneous all over the cortex, and I don't think we have connectivity maps of newborns.
If we consider the columns to be identical all over the cortex, and the column-to-column connections to be random at birth (not the connections to and within the deeper structures), then whatever the cortex does will be emergent. It also explains the plasticity.

If you believe the cortex is structured by physiological processes rather than stimuli (both external and recursive connections), then you do not believe anything happening in it is emergent (by my definition of the word).
Angua will have to chime in here, but everything I recall from developmental bio was that the organization of an organism is an incredibly NON-random process. Neural structures in particular are fascinatingly complex, often doing things that don't make much sense unless viewed in the context of evolutionary biology. Cortical columns being homogeneous (I'm not sure what you mean by that) does not mean that the process which created them was random, and we don't need connectivity maps to recognize the developmental cues that lead to organization.

Furthermore, I, and I believe everyone, believe that both physiological processes and stimuli are required for connections to form. But you're mixing up 'emergent' here; what I, and I believe most people, mean by consciousness being an emergent property of our brains is that there's no 'lobe of individuality' or 'area of sapience'. Rather, the whole of the brain, together, results in a gestalt. Sapience is an emergent property of the complex structures of our brain.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: AIs and Animals

Postby idobox » Wed Aug 07, 2013 2:15 pm UTC

Cortical columns are structures found in the cortex, thought to be elementary processing units. Their size and organisation are quite homogeneous across the cortex.
The deep structures of the brain are extremely non-random, and even small deviations from the norm lead to massive disruptions of function, but the cortex is not like that. You can remove half of it and still have a functional human. Also, even at a macro level, the folding of the neo-cortex is quite variable, and you could use the sulci (the folds of the brain) like fingerprints.
If you take a blind person, put a matrix of electrodes on his tongue or vibrators on his back, and connect that matrix to a camera, after some time he will be able to see with it. What is interesting is that this activates his visual cortex, even though the nerve input doesn't arrive there. Somehow, the part of the brain that receives acidity inputs from the tongue (electrical current tastes sour), or the somato-sensory cortex, neither of which usually has any reason to project toward visual areas, is able to identify the input as vision and reroute it.
Damage to most areas of the brain, Broca's area for example, can be overcome with time, as other areas start to do the job instead.

The cortex, unlike the midbrain or the pons, is largely a self-organizing structure, and its organization appears to be driven mostly by stimuli. The idea that a large enough cortex spontaneously creates complex feedback loops resulting in self-awareness is not absurd, but it is unproven.

It is commonly accepted that the connection strengths between different parts of the neocortex are an anatomical feature, present from development and modulated by stimuli. This too is unproven, and mapping studies show significant variability alongside some constant features. I wouldn't be surprised to learn that a significant part of these connections is not controlled by physiological processes, but starts as random connections, with stimuli and positive feedback reinforcing some and pruning others.
Take the example of ants and food sources. If you start the experiment with no pheromones or obstacles, the ants will create very direct routes. You can also lay down preliminary routes and put walls around them, forcing them to stay stable in shape, although the traffic they carry may vary. Or you could put down a few obstacles, draw some routes, and let the ants refine them or create new ones. The wiring of the cortex could be like the first experiment, starting blank with inputs shaping its development; like the second, constructed with care and allowed only small variations; or like the third, an intermediate between the two.
And honestly, right now, we don't know which is right.
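
A toy version of that pheromone feedback, in case it helps: two fixed candidate routes, made-up lengths, evaporation rate and ant count. The routes start equally attractive, and the shorter one ends up carrying almost all the traffic purely through reinforcement:

import random

random.seed(0)
lengths = {"short": 5.0, "long": 9.0}       # made-up route lengths
pheromone = {"short": 1.0, "long": 1.0}     # no initial preference
EVAPORATION = 0.1

for _ in range(200):                        # 200 rounds of ant traffic
    p_short = pheromone["short"] / (pheromone["short"] + pheromone["long"])
    deposits = {"short": 0.0, "long": 0.0}
    for _ant in range(20):
        # each ant picks a route with probability proportional to pheromone
        route = "short" if random.random() < p_short else "long"
        deposits[route] += 1.0 / lengths[route]   # shorter trips reinforce faster
    for route in pheromone:
        pheromone[route] = (1.0 - EVAPORATION) * pheromone[route] + deposits[route]

print({k: round(v, 2) for k, v in pheromone.items()})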

Izawwlgood wrote:Furthermore, I, and I believe everyone, believe that both physiological processes and stimuli are required for connections to form. But you're mixing up 'emergent' here; what I, and I believe most people, mean by consciousness being an emergent property of our brains is that there's no 'lobe of individuality' or 'area of sapience'. Rather, the whole of the brain, together, results in a gestalt. Sapience is an emergent property of the complex structures of our brain.

So we do use different meanings of the word. I mean something like self-organizing, and you mean something like delocalized.
Your view that it is a property of the whole is a bit extreme; many people think it's the result of the complex interaction between many, but not all, parts of the brain.
If I remove your visual cortex, you'll go blind, but that won't really affect your personality or perception of self. If I mess with your cingulate cortex, on the other hand, I will affect your personality much more strongly. And if I damage your temporoparietal junction, I can screw with your perception of what is you and what is not (as in your body vs. the rest of the world).
In my opinion, the temporoparietal junction is implicated in self-awareness and consciousness, but the visual cortex is not, and we can identify a circuit (although complex and delocalized) that is responsible for all that.
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

User avatar
Izawwlgood
WINNING
Posts: 18686
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: AIs and Animals

Postby Izawwlgood » Wed Aug 07, 2013 3:15 pm UTC

idobox wrote:In my opinion, the temporoparietal junction is implicated in self-awareness and consciousness, but the visual cortex is not, and we can identify a circuit (although complex and delocalized) that is responsible for all that.
No, the temporoparietal junction is implicated in the sensory sense of self and spatial relationships, not the 'theory of mind' sense of self. People with damage to this area recognize their minds as being distinctly theirs, but cannot, say, appropriately process an object's relationship to their body in space. It is a sensory region, not a... damn what's the word... uh, 'mind region'.

Honestly, I think you're not paying attention to what I'm trying to say, although you did correct my overly strong claim - it's not the *whole* brain, just parts of it. But then, my point wasn't that the occipital lobe is required for sapience, and arguing that you can remove it and still have a sapient human is a pretty weak straw man.

I think we're kind of talking past one another here; I'm saying that I, and most neuroscientists I know of, think of sapience (a social construct in and of itself!) as an emergent property of the brain. There isn't a 'region of sapience'. You seem to be arguing that there is. You also seem to be misconstruing what I mean by 'emergent'. I mean 'emergent', again, to mean that the property in question is not directly the result of any one region's activities, but rather the result of all/many/most/some/>2 of them acting together.

You also seem to be on some side tangent about neuroplasticity and how that proves your theory that neural organization isn't a carefully regulated process. It is; that doesn't mean the brain isn't plastic, because you'll notice, I never said anything about how the brain isn't plastic.
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

gamefreq
Posts: 2
Joined: Fri Nov 19, 2010 1:17 am UTC

Re: AIs and Animals

Postby gamefreq » Wed Aug 07, 2013 6:34 pm UTC

I recently discovered a very promising piece of work that, if it holds up, would have major implications for the direction we take in approaching AI. It's a book in progress about standing waves and harmonic resonance in the brain, and how they might sidestep the combinatorial explosion you see in a lot of "advanced" math. It's called "Harmonic Resonance in the Brain" by Steve Lehar; chapter 3 is where it really starts getting good. I think we should look into building a system that takes inputs and produces an output based on how well they match the logical equivalent of a standing wave - in essence a pattern matcher - with creativity possibly being random changes to internal signal propagation delays, which would be roughly equivalent to altering the frequencies and phase offsets of the various oscillators. That would give enormous complexity from a very simple low-level system. I definitely recommend checking it out; just Google the title. I'd post a link but it gets flagged as spam if I try.
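
To be clear, the following isn't taken from the book; it's just a loose toy of what I mean by an oscillator-bank pattern matcher. Correlation against a bank of sinusoids stands in for "resonance", and random phase jitter stands in for creativity; all the frequencies and sizes are made up:

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)

# A bank of reference 'oscillators' (standing-wave-like templates).
freqs = [3.0, 5.0, 8.0, 13.0]                 # arbitrary frequencies, in Hz
bank = np.array([np.sin(2 * np.pi * f * t) for f in freqs])

def best_match(signal, templates):
    # correlate the input with each template and pick the strongest 'resonance'
    scores = templates @ signal
    return int(np.argmax(np.abs(scores)))

# A noisy input near 8 Hz should resonate with the 8 Hz template.
signal = np.sin(2 * np.pi * 8.0 * t + 0.3) + 0.2 * rng.normal(size=t.size)
print("best match:", freqs[best_match(signal, bank)], "Hz")

# 'Creativity' as random phase jitter on the templates: the same input may
# now resonate with a different template.
jittered = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                     for f in freqs])
print("best match after jitter:", freqs[best_match(signal, jittered)], "Hz")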

User avatar
tomandlu
Posts: 1111
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK
Contact:

Re: AIs and Animals

Postby tomandlu » Thu Aug 08, 2013 7:10 am UTC

idobox wrote:
Sentience is the ability to feel, perceive, or to experience subjectivity. Eighteenth century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience). In modern western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as "qualia").

A system that can perceive things and assign them an abstract value (good/bad) is sentient. When you are hungry, you don't feel "low-blood sugar" but "need food", and when a roomba has run for a long time, it doesn't feel "battery voltage is getting low" but "I need to recharge before I run out of battery".
Some people simplify the problem as "can it feel pain?" which is very difficult to answer. A robot with skin sensors that is programmed to avoid extreme values of pressure and temperature could be considered to feel pain and learn from it. Preventing a roomba from reaching its base station when it needs to recharge is pretty close to inflicting pain, as you're keeping it in a state it tries to avoid.


But without some form of subjectivity, this pain is just another rule, isn't it? It doesn't really have any quality to distinguish it from a 'positive' condition. Sentience, to me, at least implies the ability to feel distress.

idobox wrote:
tomandlu wrote:So, how comparable? Could we, for example, theoretically build an AI that could happily chat about being an AI, admit that it wasn't actually sentient* or sapient, but still have a conversation consistent with human intelligence and show that it understood the distinction?

I'm not sure. Chatterbots are getting pretty good, and one might be able to fool you.


In a sense, I don't want to be fooled (which is why I'm always a bit suspicious of the Turing test). I'm wondering whether one could build a truly 'intelligent' AI (by all reasonable definitions of intelligent) that was not sapient or conscious? For example, an AI that could take a course in English literature and then meaningfully discuss symbolism in Shakespeare's tragedies or something...
How can I think my way out of the problem when the problem is the way I think?

elasto
Posts: 3778
Joined: Mon May 10, 2010 1:53 am UTC

Re: AIs and Animals

Postby elasto » Thu Aug 08, 2013 9:18 am UTC

tomandlu wrote:In a sense, I don't want to be fooled (which is why I'm always a bit suspicious of the Turing test). I'm wondering whether one could build a truly 'intelligent' AI (by all reasonable definitions of intelligent) that was not sapient or conscious? For example, an AI that could take a course in English literature and then meaningfully discuss symbolism in Shakespeare's tragedies or something...

Proponents of The Chinese Room would say yes, such a thing is definitely possible - in theory at least. However, as you said earlier on, it might be that the easiest way to create an AI that acts in a genuinely intelligent manner is to create a sapient, self-aware one. Watson demonstrates that you can go a long way just throwing processing power at the problem though.

User avatar
tomandlu
Posts: 1111
Joined: Fri Sep 21, 2007 10:22 am UTC
Location: London, UK
Contact:

Re: AIs and Animals

Postby tomandlu » Thu Aug 08, 2013 12:55 pm UTC

elasto wrote:Proponents of The Chinese Room would say yes, such a thing is definitely possible - in theory at least.


Well, I'm not sure the Chinese Room says it's possible - it just points out that, if it were possible, you could execute the program in such a way as to show that no consciousness is required.

However, as you said earlier on, it might be that the easiest way to create an AI that acts in a genuinely intelligent manner is to create a sapient, self-aware one. Watson demonstrates that you can go a long way just throwing processing power at the problem though.


Quite, but I'm still wondering if that's 'easiest' or 'only'. Watson's failings in the quiz (the airport question) are a pretty good example of the limits of current AI.
How can I think my way out of the problem when the problem is the way I think?

User avatar
idobox
Posts: 1591
Joined: Wed Apr 02, 2008 8:54 pm UTC
Location: Marseille, France

Re: AIs and Animals

Postby idobox » Thu Aug 08, 2013 4:33 pm UTC

tomandlu wrote:But without some form of subjectivity, this pain is just another rule, isn't it? It doesn't really have any quality to distinguish it from a 'positive' condition. Sentience, to me, at least implies the ability to feel distress.

How can you tell the difference between a robot that really feels pain when burnt, and a robot programmed to avoid extreme temperatures, raise an alert when they happen, seek repairs, and learn how to avoid them next time?

This is not a trivial question; philosophers and AI specialists are still trying to answer it. Take a look at this: https://en.wikipedia.org/wiki/Philosophical_zombie
My opinion is that there isn't any difference, and that we have the illusion it's not a rule when it actually is one. And that's why neuroscience is so important: it may one day tell us whether pain is just a signal we are programmed to try to reduce, or whether there is something more complex going on.
For now, the debate is mostly philosophical and religious, because the scientific data isn't enough to be sure.
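
To illustrate what I mean by pain being "just a signal we are programmed to try to reduce", here is a toy sketch with plain Q-learning. The two-state world, the rewards and the learning rates are all invented: the agent gets a negative signal in a "hot" state and ends up organising its behaviour around avoiding it.

import random

random.seed(0)
STATES = ["safe", "hot"]
ACTIONS = ["stay", "move"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
ALPHA, GAMMA, EPSILON = 0.2, 0.9, 0.1

def step(state, action):
    # 'move' toggles the state; being in 'hot' hurts (negative reward)
    next_state = state if action == "stay" else ("hot" if state == "safe" else "safe")
    reward = -1.0 if next_state == "hot" else 0.0
    return next_state, reward

state = "safe"
for _ in range(5000):
    if random.random() < EPSILON:                  # occasional exploration
        action = random.choice(ACTIONS)
    else:                                          # otherwise act greedily
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# 'stay' while safe and 'move' while hot end up with the highest values
print({k: round(v, 2) for k, v in Q.items()})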

Izawwlgood wrote:No, the temporoparietal junction is implicated in the sensory sense of self and spatial relationships, not the 'theory of mind' sense of self. People with damage to this area recognize their minds as being distinctly theirs, but cannot, say, appropriately process an object's relationship to their body in space. It is a sensory region, not a... damn what's the word... uh, 'mind region'.

It's relevant in defining what is "you", although not in defining what is your mind. I accept the critique.

Izawwlgood wrote:You also seem to be misconstruing what I mean by 'emergent'. I mean 'emergent', again, to mean that the property in question is not directly the result of any one region's activities, but rather the result of all/many/most/some/>2 of them acting together.
You also seem to be on some side tangent about neuroplasticity and how that proves your theory that neural organization isn't a carefully regulated process. It is; that doesn't mean the brain isn't plastic, because you'll notice, I never said anything about how the brain isn't plastic.

We both agree that consciousness is the result of the complex interaction of many parts of the brain, which I do not consider to fit my definition of emergence. And for some reason you insist we have the same definition of emergence.
I consider the appearance of structure from an originally unorganized system to be a central property of emergence, so plasticity isn't a tangent. I never claimed you thought the brain isn't plastic, but you believe the organization of the neo-cortex is a process carefully regulated by physiological factors, and that the specific state it starts in is important. That is a reasonable belief, one that I share, but it is not a consensus in the scientific community. So I end up defending a point of view I don't share, but that is also reasonable: that the organization of the neo-cortex is largely driven by stimuli, that the state it starts in is not really that important, but that because we all have similar inputs arriving in similar places and receive similar stimuli, we end up structuring it in similar ways.
If there is no answer, there is no question. If there is no solution, there is no problem.

Waffles to space = 100% pure WIN.

User avatar
Copper Bezel
Posts: 2426
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Web exclusive!

Re: AIs and Animals

Postby Copper Bezel » Thu Aug 08, 2013 5:15 pm UTC

If emergence in the broad sense just means that simple rules create unexpectedly complex patterns, then consciousness can be "emergent" without the mechanism being "self-organizing". Your earlier example of the robot that moves objects according to two simple rules has an emergent behaviour pattern, but it's not as if the robot itself is self-organizing (so in the analogy, the robot is one pre-structured chunk of cortex, the objects are another, and so on). I think Izawwlgood is right that it's the same definition, just applied at two different scales.
So much depends upon a red wheel barrow (>= XXII) but it is not going to be installed.

she / her / her

