Judea Pearl's Criticism of Machine Learning

jewish_scientist
Posts: 895
Joined: Fri Feb 07, 2014 3:15 pm UTC

Judea Pearl's Criticism of Machine Learning

Postby jewish_scientist » Tue May 22, 2018 3:00 am UTC

I read this really interesting article on how Judea Pearl sees current AI research as stuck. Advancements have all been in "curve fitting": interpreting data to find the most probable relationships between variables. He sees this as distinct from causal reasoning. The big difference, as he describes it, is that curve fitting can determine that a relationship exists between two events, but it cannot determine which is the cause and which is the effect. He says we need a new type of mathematics that is asymmetrical. I was wondering what you guys thought of this criticism.

User avatar
ucim
Posts: 6351
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Judea Pearl's Criticism of Machine Learning

Postby ucim » Tue May 22, 2018 4:21 am UTC

Is curve fitting really distinct from causal reasoning? I would argue that it is not; it's just several layers deeper. On one hand, "X causes Y" can be rephrased as "the idea that X causes Y is positively associated with good results in problems that involve X and Y". Now it's just a curve-fitting problem involving "problems that involve X and Y" that are not themselves X or Y.

And what do human neurons do anyway? Which one of your neurons is responsible for figuring out that pricking yourself with a pin causes you to bleed, and not the other way around? Your neurons aren't thinking, they are just curve fitting in their own way. And where is this idea that X causes Y (and not the other way around) located? Your brain is just a machine that turns sensory inputs into muscle movements. There are a lot of layers involved in having the muscle movements correspond to your saying that X causes Y, and that's not even the important part. The important part is that if you want Y, you learn to do X, but wanting X doesn't cause you to seek Y.

So, any machine that learns (and those are the only ones of interest here) will eventually learn the asymmetries involved. X now is associated with Y later, but Y now is not so associated with X later.

A causal relation is in this sense merely the difference between two opposing associations.
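
As a toy illustration (my own, in Python/numpy, nothing from the article): if Y simply lags X, a learner that compares the two lagged associations can pick up the asymmetry.

Code: Select all
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = np.roll(x, 1) + 0.1 * rng.normal(size=10_000)  # y[t] is driven by x[t-1]

# "X now" vs "Y later": strong association
print(np.corrcoef(x[:-1], y[1:])[0, 1])  # close to 1
# "Y now" vs "X later": essentially none
print(np.corrcoef(y[:-1], x[1:])[0, 1])  # close to 0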

Also, the article is wrong when it states "The language of algebra is symmetric: If x tells us about y, then y tells us about x." Yes, it sort-of does, but not fully. If y = x², then x tells you y, but y is fuzzier about x.
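
To spell out what "fuzzier" means, with numbers of my own rather than anything from the article:

\[
y = x^2:\qquad x = 3 \;\Rightarrow\; y = 9, \qquad\text{but}\qquad y = 9 \;\Rightarrow\; x \in \{-3,\ 3\}.
\]

Knowing x pins y down exactly; knowing y only narrows x to a pair, because squaring is not injective.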

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
orthogon
Posts: 2899
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

Re: Judea Pearl's Criticism of Machine Learning

Postby orthogon » Tue May 22, 2018 1:04 pm UTC

ucim wrote:Is curve fitting really distinct from causal reasoning? I would argue that it is not; it's just several layers deeper. On one hand, "X causes Y" can be rephrased as "the idea that X causes Y is positively associated with good results in problems that involve X and Y". Now it's just a curve-fitting problem involving "problems that involve X and Y" that are not themselves X or Y.

And what do human neurons do anyway? Which one of your neurons is responsible for figuring out that pricking yourself with a pin causes you to bleed, and not the other way around? Your neurons aren't thinking, they are just curve fitting in their own way. And where is this idea that X causes Y (and not the other way around) located? Your brain is just a machine that turns sensory inputs into muscle movements. There are a lot of layers involved in having the muscle movements correspond to your saying that X causes Y, and that's not even the important part. The important part is that if you want Y, you learn to do X, but wanting X doesn't cause you to seek Y.

So, any machine that learns (and those are the only ones of interest here) will eventually learn the asymmetries involved. X now is associated with Y later, but Y now is not so associated with X later.

A causal relation is in this sense merely the difference between two opposing associations.



Perhaps, but I have the feeling there's more to our understanding of causality than mere correlation with a time offset. We reason about causality, using the concept of a mechanism that underlies the causal relationship. In fact, this is so important that we are prone to make errors of both types: seeing causal relationships where there isn't even any correlation (alternative "medicine", magic, ...) and denying (or not looking for) correlation when we see no causal mechanism (for example, the physicians who dismissed Semmelweis's evidence that hand-washing prevented childbed fever, because they could see no mechanism for it).
xtifr wrote:... and orthogon merely sounds undecided.

User avatar
ucim
Posts: 6351
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Judea Pearl's Criticism of Machine Learning

Postby ucim » Tue May 22, 2018 2:26 pm UTC

orthogon wrote:We reason about causality...
Yes, we certainly think we do. But what is "reason" anyway? That's the underlying question. The story we tell ourselves is that we "figure something out", but how? Which neuron had the idea?

The neurons are most certainly not "reasoning", and all our reasoning comes from neurons. So, perhaps reason is a kind of chimera.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
eran_rathan
Mostly Wrong
Posts: 1784
Joined: Fri Apr 09, 2010 2:36 pm UTC
Location: pew! pew! pew!

Re: Judea Pearl's Criticism of Machine Learning

Postby eran_rathan » Tue May 22, 2018 3:21 pm UTC

I think perhaps the term you're looking for is 'gestalt' - as in, the whole is more than the sum of its parts.
"We have met the enemy, and we are they. Them? We is it. Whatever."
"Google tells me you are not unique. You are, however, wrong."
nɒʜƚɒɿ_nɒɿɘ

User avatar
orthogon
Posts: 2899
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

Re: Judea Pearl's Criticism of Machine Learning

Postby orthogon » Tue May 22, 2018 3:25 pm UTC

ucim wrote:
orthogon wrote:We reason about causality...
Yes, we certainly think we do. But what is "reason" anyway? That's the underlying question. The story we tell ourselves is that we "figure something out", but how? Which neuron had the idea?

The neurons are most certainly not "reasoning", and all our reasoning comes from neurons. So, perhaps reason is a kind of chimera.

Jose

Well, we're getting into the whole question of whether mathematics is real or not. But I don't accept that the neurons are 'most certainly not "reasoning"'. Would you say that the neurons are not "performing arithmetic" when they add up the price of groceries? The result of the biological process is (barring mistakes) a number that conforms to the rules of arithmetic: the same number that an abacus, computer or cash register would arrive at. I say that both reason and arithmetic (in fact they're both just branches of mathematics) exist in some abstract sense, and our brains have evolved to be able to carry out processes that correspond to valid steps within these fields.

You may be arguing that being able to perform those processes was selected for simply because they happened to provide the means to achieve desirable outcomes in the ancestral environment, and that they could be entirely contingent: a kind of evolutionary overfitting. I would respond that the way in which humankind has applied processes of reason, and thereby achieved unimaginable feats of ingenuity with no parallel in the ancestral or even modern environment, suggests that reason really does reflect some underlying truth.
xtifr wrote:... and orthogon merely sounds undecided.

User avatar
ucim
Posts: 6351
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Judea Pearl's Criticism of Machine Learning

Postby ucim » Tue May 22, 2018 4:33 pm UTC

orthogon wrote:Would you say that the neurons are not "performing arithmetic" when they add up the price of groceries?
Yes. No individual neuron is keeping the total. No individual neuron is performing the addition. No single neuron is carrying the one.

Rather, it's more like a seven segment display. Segments just light up. No segment knows what number it's displaying. In fact, there isn't even the idea of a number as far as the display is concerned. Yet, a number gets displayed.
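
To push the analogy, here's a toy sketch of my own (Python): each segment is just a boolean, and none of them stores the digit.

Code: Select all
# Segment names: a=top, b=upper right, c=lower right, d=bottom,
# e=lower left, f=upper left, g=middle.
SEGMENTS = {
    0: "abcdef", 1: "bc", 2: "abdeg", 3: "abcdg", 4: "bcfg",
    5: "acdfg", 6: "acdefg", 7: "abc", 8: "abcdefg", 9: "abcdfg",
}

def display(digit):
    # Each segment only knows whether it is lit.
    return {seg: seg in SEGMENTS[digit] for seg in "abcdefg"}

print(display(7))  # {'a': True, 'b': True, 'c': True, 'd': False, ...}

The 7 never lives inside any one segment; it's only the overall pattern that a reader interprets as a 7.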

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

idonno
Posts: 169
Joined: Fri Apr 03, 2015 3:34 am UTC

Re: Judea Pearl's Criticism of Machine Learning

Postby idonno » Tue May 22, 2018 5:24 pm UTC

It seems dubious to me that human brains are just running curve-fitting algorithms. We require far less input than that should take, and the massive biases in the data our minds decide to store make it unlikely that curve fitting would provide an accurate predictive model.

Tyndmyr
Posts: 11022
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Judea Pearl's Criticism of Machine Learning

Postby Tyndmyr » Tue May 22, 2018 5:31 pm UTC

I mean, I'm all for another structure for AI. I'm sure that more breakthroughs are possible. I'm not overly sure that this proposal is it... but hey, if he gives it a shot, good on him. I don't know that any specific programming language ought to be required to enact any new AI ideas, though.

And, in particular, order and causality are pretty easy to represent in code. Math may not usually focus on it, but x causing y is not hard to represent in existing languages. So, I'm not 100% sure how he thinks a new language is going to solve this.
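
For instance (a throwaway Python sketch of my own, not anything Pearl is asking for), the direction can live in which variable reads which, plus a crude do()-style override:

Code: Select all
def world(do=None):
    # Toy structural model: wet_grass listens to rain, never the other way round.
    do = do or {}
    rain = do.get("rain", True)
    wet_grass = do.get("wet_grass", rain)
    return {"rain": rain, "wet_grass": wet_grass}

print(world(do={"rain": False}))       # stopping the rain dries the grass
print(world(do={"wet_grass": False}))  # covering the grass leaves the rain alone

Whether that counts as the new mathematics he wants is another matter, but the asymmetry itself is easy enough to write down.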

User avatar
Link
Posts: 1350
Joined: Sat Mar 07, 2009 11:33 am UTC
Location: ᘝᓄᘈᖉᐣ
Contact:

Re: Judea Pearl's Criticism of Machine Learning

Postby Link » Wed May 23, 2018 6:12 am UTC

ucim wrote:
Also, the article is wrong when it states "The language of algebra is symmetric: If x tells us about y, then y tells us about x." Yes, it sort-of does, but not fully. If y = x², then x tells you y, but y is fuzzier about x.
That's what I was thinking. The equality statement is symmetric, sure -- but the idea of non-injective maps is quite well established. See also: lambda calculus, category theory, functional programming.

idonno wrote:
It seems dubious to me that human brains are just running curve-fitting algorithms. We require far less input than that should take, and the massive biases in the data our minds decide to store make it unlikely that curve fitting would provide an accurate predictive model.
I'm not so sure; I'm tempted to say it's just a matter of complexity. The human brain consists of tens of billions of neurons with over a hundred trillion synapses, *and* a whole slew of internal control elements such as hormones and glial cells and whatnot that today's average deep learning system has no hopes of ever matching. Not to mention the fact that a few hundred million years of evolution have baked in some structures that make a developing brain pre-optimise itself for certain patterns; the "missing" input is one that's already there from the start, but in a way we really don't understand very well at all AFAIK.

That all being said, I can imagine combining today's deep-learning with a higher-level idea of causal reasoning would provide a way to cut out a vast deal of the complexity required from a neural net to match real intelligence.
Last edited by Link on Wed May 23, 2018 11:56 am UTC, edited 1 time in total.

Trebla
Posts: 361
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Judea Pearl's Criticism of Machine Learning

Postby Trebla » Wed May 23, 2018 11:40 am UTC

Maybe I'm interpreting with my own biases, but when he says (emphasis mine)...

The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever. Once this kind of causal framework is in place, it becomes possible for machines to ask counterfactual questions

...it seems like he's saying that "it doesn't count unless the machines are sentient (sapient???)". He's underwhelmed that AI can "master ancient games" and learn to play at super-human levels (more or less) on their own... and this is just curve fitting? I don't know, the ability to predict strategies in adversarial competitions doesn't strike me as curve fitting in the standard sense. It seems like the programs have an "understanding" of causal relationships: "If I make this move, my opponent is likely to make that move."
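
That "if I make this move" style of reasoning is at least easy to caricature in code (a toy sketch of my own, in Python, nothing like how the real game engines work):

Code: Select all
MOVES = [1, 2, 3]

def my_payoff(my_move, their_move):
    # Throwaway scoring rule, just to have something concrete.
    return my_move - their_move

def predicted_reply(my_move):
    # Assume the opponent answers with whatever hurts me most.
    return min(MOVES, key=lambda reply: my_payoff(my_move, reply))

def best_move():
    # Pick the move whose predicted consequence is best for me.
    return max(MOVES, key=lambda move: my_payoff(move, predicted_reply(move)))

print(best_move())  # 3

Whether that kind of explicit consequence-chasing is really happening inside a trained network, or only emerges implicitly from the curve fitting, seems to be exactly the question Pearl is raising.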

elasto
Posts: 3477
Joined: Mon May 10, 2010 1:53 am UTC

Re: Judea Pearl's Criticism of Machine Learning

Postby elasto » Wed May 23, 2018 3:49 pm UTC

Will someone come up with some breakthrough algorithm for 'general intelligence'? Perhaps. But this last decade has seen enormous strides made in domain-specific intelligence - and in domains that are really pretty broad at that - like answering general knowledge questions, or driving a car.

It may well be that all the 'important' domains get conquered and we never really have a need to develop a general AI. And that might be a good thing, because a general AI will almost certainly leave us in the dust in intelligence terms in quite short order, and it'll only be a matter of time before we lose control of it.

Kill All Humans

