1425: "Tasks"

This forum is for the individual discussion thread that goes with each new comic.

Moderators: Moderators General, Prelates, Magistrates

m_ke
Posts: 1
Joined: Wed Sep 24, 2014 5:14 pm UTC

Re: 1425: "Tasks"

Postby m_ke » Wed Sep 24, 2014 5:17 pm UTC

Clarifai has this covered

Siankir
Posts: 2
Joined: Thu Apr 17, 2014 2:56 pm UTC

Re: 1425: "Tasks"

Postby Siankir » Wed Sep 24, 2014 5:18 pm UTC

jc wrote:That said, translation is a lossy process, and if you used capable human translators in French, Japanese, Greek, Croatian and English and did the same thing, you would lose shades of meaning, and if you continued, I think you could lose meaning altogether.

Would be a fun experiment to do, actually, if anyone knows any translators.


As a counterpoint, I point to the children's game "telephone," or, as Wikipedia claims it is called outside the United States, "Chinese whispers" (naturally, I've never even heard it called that). In brief, passing a message along a chain in a single language is a lossy process, even without children intentionally ruining it (it takes a few more steps when the people involved are seriously trying to preserve the message, but it will still get corrupted eventually).

Depending on how many languages you translated the message through, I think you would need a control group of the same size passing the message within a single language. That, or each person involved writes the message down, so that the translator can preserve the structure optimally. Even then, the principle still applies: the human element allows for unique kinds of error, such as impatience (human translators spend laborious amounts of time trying to preserve meaning properly, effort a sufficiently advanced machine might avoid) and, eventually, earnestly trying to correct previous mistranslations while actually making the drift worse.
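The compounding of per-hop errors is easy to simulate, for what it's worth. A toy sketch in Python (the per-hop corruption rate is an arbitrary assumption, not a claim about real translators or children):

```python
import random

def relay(message, hops, p_corrupt, rng):
    """Pass a message through a chain; each hop independently
    garbles each character with probability p_corrupt."""
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    msg = message
    for _ in range(hops):
        msg = "".join(c if rng.random() > p_corrupt else rng.choice(alphabet)
                      for c in msg)
    return msg

def fidelity(a, b):
    """Fraction of characters that survived unchanged."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

rng = random.Random(0)
original = "the quick brown fox jumps over the lazy dog"
for hops in (1, 5, 20):
    garbled = relay(original, hops, p_corrupt=0.05, rng=rng)
    print(hops, round(fidelity(original, garbled), 2))
```

Fidelity decays roughly like (1 - p)^hops, which is the point: even careful relays corrupt the message eventually, with or without translation in the loop.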

The Synologist
Posts: 41
Joined: Tue Sep 25, 2012 5:50 pm UTC

Re: 1425: "Tasks"

Postby The Synologist » Wed Sep 24, 2014 7:06 pm UTC

azule wrote:
The Synologist wrote:Also, kind of off-topic, but isn't it weird that explainxkcd was updated with this comic, but there wasn't even a forum thread for it yet?
Off-topic-man to the rescue! They have a bot that posts immediately. If the message was gone (about the bot posting), then they're just fast with the human portion of the updating. Speed it up, boys, you have no excuse! ;)

I figured that was partly the case, but the explanation had been filled in as well.

I guess it's easier to modify a created wiki page than fill out all the stuff to make a forum thread haha.

keithl
Posts: 662
Joined: Mon Aug 01, 2011 3:46 pm UTC

Re: 1425: "Tasks"

Postby keithl » Wed Sep 24, 2014 7:22 pm UTC

Nick Bostrom, author of "Superintelligence", was the "warmup band" for GLR's "What If" presentation in Seattle on 2014/09/09. Bostrom worries about the AI Apocalypse, and believes AI will be symbol manipulation. He says the brain is exposed to about 20 megabits of sensory data per second but stores very little of it, and that computers will do better Real Soon Now. This is what happens when philosophers think about brains. Sad.

Intelligence is about making sensible decisions with the data available. "That is a bird" is a sensible decision - and it can be made with good precision by another bird. My system for recognizing birds would involve lots of cages. However, I am better than a bird at recognizing a good engineering design at least 60% of the time.

Enhanced intelligence may involve "broad data" - like big data (extracting patterns from terabytes), but available to anyone. Our extended digital senses will winnow trillions of environmental bytes into the 20 Mbps streams our brains can handle, and help us make executive decisions. The last reduction from 20 Mbps to action may someday be automated, but given that we don't know how to make machines pattern-match and data-reduce as well as a bird brain, it will be a while before such machinery is cheaper than the animal brains sold by the pound at the meat market.

I would be more worried about animals with intelligence prosthetics. Augmenting a bird to human intelligence may be a lot easier than making a machine as smart as an unaugmented bird. So "smart phones for birds" may indeed happen first, and have frightening consequences.

"Hrm. There was a bird watching me use the ATM last week. Now there is a debit for 500 kilograms of bird seed on my credit card."

schapel
Posts: 244
Joined: Fri Jun 13, 2014 1:33 am UTC

Re: 1425: "Tasks"

Postby schapel » Wed Sep 24, 2014 10:07 pm UTC

keithl wrote:Enhanced intelligence may involve "broad data" - like big data (extracting patterns from terabytes), but available to anyone. Our extended digital senses will winnow trillions of environmental bytes into the 20 Mbps streams our brains can handle, and help us make executive decisions.

I would think "enhanced intelligence" (at least in the near future) would involve some sort of augmented reality. For example, you look at a menu in French, and English translations appear next to the French. Or maybe you're in a grocery store and lines appear on the floor directing you to where on the shelf each item on your grocery list is located. I get the impression that this is what Google Glass is aiming for.

I already use my cell phone to help direct me through traffic and construction zones while driving, so we're inching towards "enhanced intelligence" using data from "the cloud" already.

rhhardin
Posts: 81
Joined: Fri Apr 09, 2010 2:11 pm UTC

Re: 1425: "Tasks"

Postby rhhardin » Wed Sep 24, 2014 11:40 pm UTC

Artificial intelligence is the longest-running just-around-the-corner promise in history, made continuously since the '50s.

It does do the service of making philosophers of the mind shut up.

Pfhorrest
Posts: 5478
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: 1425: "Tasks"

Postby Pfhorrest » Thu Sep 25, 2014 12:09 am UTC

How exactly does the difficulty in engineering AI make philosophers of mind shut up?

Seems like one side of the debate in that field is saying "artificial intelligence is impossible, you need a magic soul to have consciousness" and the other side is saying "consciousness is just a very complex physical phenomenon, and once we understand it thoroughly we will be able to replicate it from non-conscious components", and neither of those would be surprised, much less stopped in their tracks, by the difficulty of building AI just yet. They would place different bets on whether it will ever work out or not, but if anything that bet would make them talk more, not shut up.

Building an actual working AI might shut half of them up, but that doesn't seem to be what you're saying.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

Mikeski
Posts: 1112
Joined: Sun Jan 13, 2008 7:24 am UTC
Location: Minnesota, USA

Re: 1425: "Tasks"

Postby Mikeski » Thu Sep 25, 2014 12:24 am UTC

*click*

Siri: "Would you like to place this photo in the category 'Mountain Lion in Glacier National Park'?"

User: "Of course not, it's just a photo of a waterfall. There's no mountain lion in it."

*mauling noises*

schapel
Posts: 244
Joined: Fri Jun 13, 2014 1:33 am UTC

Re: 1425: "Tasks"

Postby schapel » Thu Sep 25, 2014 1:59 am UTC

Pfhorrest wrote:Building an actual working AI might shut half of them up, but that doesn't seem to be what you're saying.

I think what he's saying is that someone is arguing that when computers can "process" input at the same rate as humans, they will automatically become as intelligent as humans. This is, of course, ridiculous.

I think very few people argue that strong AI is just around the corner, and very few argue that it's impossible, so I don't think the "debate" is polarized. I think most people realize that AI will progress incrementally in the near future. Computers will continue gaining special-purpose skills (doing calculus, playing chess, driving cars, playing Jeopardy) that only humans have demonstrated previously, but they're not going to be able to do general-purpose human reasoning any time soon.

On second reading, was your post about consciousness? That's a different subject altogether, because intelligence can be demonstrated, but consciousness cannot. I can prove to you I'm smart, but I can never prove to you I'm conscious.

tuxedobob
Posts: 15
Joined: Mon Apr 04, 2011 9:27 am UTC

Re: 1425: "Tasks"

Postby tuxedobob » Thu Sep 25, 2014 2:46 am UTC

This sounds like a job for Mechanical Turk.

Pfhorrest
Posts: 5478
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: 1425: "Tasks"

Postby Pfhorrest » Thu Sep 25, 2014 3:08 am UTC

schapel wrote:On second reading, was your post about consciousness? That's a different subject altogether, because intelligence can be demonstrated, but consciousness cannot. I can prove to you I'm smart, but I can never prove to you I'm conscious.

Consciousness is the big topic of debate in philosophy of mind, which is what rhhardin spoke of. There's probably some people somewhere in the history of that debate who at one point believed building a true mind comparable to a human mind was just a matter of sufficient processing power or some such, some quantitative threshold where we just need "enough computer" and bam you've got a mind, but to my knowledge that's not a position held by any contemporary philosophers of mind, so disproving it (by the existence of our amazingly powerful and yet still in many ways dumb computers today) wouldn't shut anybody in that field up. The debate there is just over whether or not it is possible in principle, and if it is possible, what is it exactly that a machine must be capable of doing before we will unambiguously say "yeah, that's a genuinely thinking machine on par with you and me".

Also, whether consciousness can be demonstrated depends on what you mean by "consciousness". The two broad senses in use today are "access consciousness" and "phenomenal consciousness" (and the problems surrounding them respectively the "easy" and "hard" problems of consciousness). Access consciousness is pretty uncontroversial: if you can tell me how you're feeling, what you're thinking, what you think caused you to think or feel that way, and especially what you think or feel about what you think or feel ("I'd rather not feel like this", "I know I shouldn't think that", etc), then you have access consciousness, and you just demonstrated it by telling me those things. You have access to information about your own internal mental states. To some philosophers of mind that's enough, and they dismiss the coherence of any other sense of the word "consciousness". (Those ones will usually say that it's clearly possible in principle to build a conscious machine, the rest is details).

Others want to answer still the harder problem of phenomenal consciousness: if you build a machine that responds to inputs and outputs exactly like a human and can report on its own internal states just like a human can, does a rose still smell just as sweet to it? Does it experience the same redness we do when looking at it? Can it properly experience smell or sight at all, or is it merely responding to chemical and electromagnetic stimuli with the same internal state-changes and consequent behavior as a human would? That's something which, if there is any answer to it at all, if the question even makes sense, it may not be possible to know. But then it's just as impossible to know about other humans as it is machines, so that's kind of irrelevant to questions about AI.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

Wee Red Bird
Posts: 192
Joined: Wed Apr 24, 2013 11:50 am UTC
Location: In a tree

Re: 1425: "Tasks"

Postby Wee Red Bird » Thu Sep 25, 2014 8:17 am UTC

Perhaps the image detection software could include this function:

Code: Select all

private string CheckImage(Bitmap inputImage)
{
    if (CheckImageBird(inputImage))
    {
        return "bird";
    }
    if (CheckImagePlane(inputImage))
    {
        return "plane";
    }
    return "superman";
}

orthogon
Posts: 3102
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

Re: 1425: "Tasks"

Postby orthogon » Thu Sep 25, 2014 8:23 am UTC

Wee Red Bird wrote:Perhaps the image detection software could include this function:

I say again, if only the forum had a "like" button...
xtifr wrote:... and orthogon merely sounds undecided.

Duck
Posts: 29
Joined: Wed Apr 11, 2007 1:53 pm UTC
Location: Somerset, UK

Re: 1425: "Tasks"

Postby Duck » Thu Sep 25, 2014 9:29 am UTC

I remember hearing about an experimental image recognition system in the '80s which supposedly "learned" to tell whether or not a tank was present in an image. The program was trained on a collection of input images, and when tested with further images it performed extremely well, seeming to correctly spot whether or not new images contained a tank.

When tried on live data however, it performed very poorly. It turned out that all the test images with tanks had been taken on cloudy days, so what the program had actually learned was to tell the difference between cloudy days and sunny days.
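True or not, the failure mode the story describes is real: a learner handed biased training data will latch onto whatever feature separates the classes most easily. A toy sketch in Python (the single "brightness" feature standing in for cloudy vs. sunny is my invention, not anything from the original anecdote):

```python
import random

def train_threshold(examples):
    """'Train' a one-feature classifier: pick the brightness threshold
    that best separates tank from no-tank in the training set.
    Prediction rule: tank if brightness < threshold (dark = cloudy)."""
    best_t, best_acc = None, -1.0
    for t in [x / 100 for x in range(101)]:
        acc = sum((bright < t) == has_tank
                  for bright, has_tank in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

rng = random.Random(42)
# Biased training set: every tank photo was taken on a cloudy (dark) day.
train = ([(rng.uniform(0.0, 0.4), True) for _ in range(50)]
         + [(rng.uniform(0.6, 1.0), False) for _ in range(50)])
t, acc = train_threshold(train)
print("train accuracy:", acc)  # perfect separation on the biased set

# Live data: tanks appear in all weather; the brightness rule collapses.
live = [(rng.uniform(0.0, 1.0), rng.random() < 0.5) for _ in range(1000)]
live_acc = sum((b < t) == y for b, y in live) / len(live)
print("live accuracy:", round(live_acc, 2))  # ~0.5, i.e. chance
```

Perfect on the training set, at chance on live data: exactly the tank story's plot.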

Philbert
Posts: 32
Joined: Mon Jan 05, 2009 12:32 pm UTC

Re: 1425: "Tasks"

Postby Philbert » Thu Sep 25, 2014 11:16 am UTC

hujackus wrote:Since when does it take 5 years to make an image captcha server? Or easier still, send the images to a click farm. Brings to mind the earliest meaning of the word computer.


Typical minute of work in the click farm:
*click* bird
*click* bird
*click* cat
*click* bird
*click* penis
*click* bird
*click* penis
*click* bird

Eoink
Posts: 88
Joined: Fri Nov 30, 2012 12:33 pm UTC

Re: 1425: "Tasks"

Postby Eoink » Thu Sep 25, 2014 11:47 am UTC

Duck wrote:I remember hearing that there was an experimental image recognition technology in the 80s which supposedly "learned" to tell whether or not a tank was present in the image. The program was trained with a collection of input images, and then when tested with further images it performed extremely well, seeming to correctly spot whether or not new images contained a tank or not.

When tried on live data however, it performed very poorly. It turned out that all the test images with tanks had been taken on cloudy days, so what the program had actually learned was to tell the difference between cloudy days and sunny days.


The variant of this I remember was of an early neural network training program. As I heard it, they were looking to train it to tell the difference between NATO and Warsaw Pact tanks. The difference it learnt to tell was night/day: the allied tanks had all been photographed in daylight, whilst the "enemy" tanks had been photographed at night.
I have to admit I always assumed it was an apocryphal tale which was just used to help visualise the pitfalls of the task of identification, but I didn't ever check. (Those wonderful long lost days before a quick Internet search gave you the background on most things.)

Sadly it seems my memory was faulty: it was a neural net, but the story is indeed about tank/no-tank and cloudy/sunny. There is no reference other than comments such as "much-told story from the 80s", so I fear it is unlikely to have a real basis.

PinkShinyRose
Posts: 835
Joined: Mon Nov 05, 2012 6:54 pm UTC
Location: the Netherlands

Re: 1425: "Tasks"

Postby PinkShinyRose » Thu Sep 25, 2014 12:41 pm UTC

Pfhorrest wrote:Others want to answer still the harder problem of phenomenal consciousness: if you build a machine that responds to inputs and outputs exactly like a human and can report on its own internal states just like a human can, does a rose still smell just as sweet to it? Does it experience the same redness we do when looking at it? Can it properly experience smell or sight at all, or is it merely responding to chemical and electromagnetic stimuli with the same internal state-changes and consequent behavior as a human would? That's something which, if there is any answer to it at all, if the question even makes sense, it may not be possible to know. But then it's just as impossible to know about other humans as it is machines, so that's kind of irrelevant to questions about AI.

Does phenomenal consciousness actually require that others experience red in the same way I do, as opposed to them experiencing red as I would experience green? How would an AI with internal state-changes and behaviour identical to humans be any different from a human mind?

Kit.
Posts: 1117
Joined: Thu Jun 16, 2011 5:14 pm UTC

Re: 1425: "Tasks"

Postby Kit. » Thu Sep 25, 2014 4:28 pm UTC

orthogon wrote:I guess the subject pronouns got lost when it went into Japanese and then had to be reinvented again somewhere along the way.

I've just fed Google Translate the classic: "Time flies like an arrow; fruit flies like a banana". I was actually surprised that Google failed it.
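The sentence is hard because of lexical ambiguity: "flies" and "like" each admit two parts of speech, and only world knowledge picks the right combination. A toy illustration (the lexicon and the two "grammatical" templates here are mine, not any real tagger's):

```python
from itertools import product

# Hypothetical toy lexicon: each word's possible parts of speech.
LEXICON = {
    "fruit": {"NOUN"},
    "flies": {"NOUN", "VERB"},
    "like": {"VERB", "PREP"},
    "a": {"DET"},
    "banana": {"NOUN"},
}

# The two readings of the joke, as POS templates:
#   [fruit flies] [like] [a banana]  -- insects enjoying a banana
#   [fruit] [flies] [like a banana]  -- fruit in ballistic flight
GRAMMATICAL = {
    ("NOUN", "NOUN", "VERB", "DET", "NOUN"),
    ("NOUN", "VERB", "PREP", "DET", "NOUN"),
}

def taggings(sentence):
    """Enumerate every possible part-of-speech assignment."""
    words = sentence.lower().split()
    return list(product(*(sorted(LEXICON[w]) for w in words)))

all_tags = taggings("fruit flies like a banana")
good = [t for t in all_tags if t in GRAMMATICAL]
print(len(all_tags), "candidate taggings,", len(good), "grammatical")
# → 4 candidate taggings, 2 grammatical
```

Nothing inside the grammar prefers one grammatical reading over the other; that disambiguation is exactly the part machine translation keeps fumbling.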

GodShapedBullet
Posts: 686
Joined: Mon Nov 26, 2007 7:59 pm UTC
Location: Delaware

Re: 1425: "Tasks"

Postby GodShapedBullet » Thu Sep 25, 2014 5:27 pm UTC

With regard to philosophy and artificial intelligence, I am so friggin' glad that artificial intelligence and computers came along, so we don't have to talk about "trickster demons" when we are discussing doubts about the reality of our sensations or the problem of knowing things a priori.

"What if we are in a computer simulation like The Matrix?"

just is so much more resonant than

"What if there is a trickster demon who is using mind control to make you think that reality is real but really you are in a hell dimension?"

Whizbang
The Best Reporter
Posts: 2238
Joined: Fri Apr 06, 2012 7:50 pm UTC
Location: New Hampshire, USA

Re: 1425: "Tasks"

Postby Whizbang » Thu Sep 25, 2014 5:28 pm UTC

What if we are in The Matrix, made by a trickster demon?

jc
Posts: 356
Joined: Fri May 04, 2007 5:48 pm UTC
Location: Waltham, Massachusetts, USA, Earth, Solar System, Milky Way Galaxy

Re: 1425: "Tasks"

Postby jc » Thu Sep 25, 2014 5:55 pm UTC

Kit. wrote:
orthogon wrote:I guess the subject pronouns got lost when it went into Japanese and then had to be reinvented again somewhere along the way.

I've just fed Google Translate with the classic: "Time flies like an arrow; fruit flies like a banana". I was actually surprised that Google failed it.


... and you should've added "; tie flies like a fisherman" for even more fun with translation. Of course, that's a bit more obscure, and some native speakers of English might not even understand it.

schapel
Posts: 244
Joined: Fri Jun 13, 2014 1:33 am UTC

Re: 1425: "Tasks"

Postby schapel » Thu Sep 25, 2014 5:58 pm UTC

GodShapedBullet wrote:"What if there is a trickster demon who is using mind control to make you think that reality is real but really you are in a hell dimension?"

Clearly the trickster demons invented computers to throw off suspicion!

Pfhorrest
Posts: 5478
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: 1425: "Tasks"

Postby Pfhorrest » Thu Sep 25, 2014 6:37 pm UTC

PinkShinyRose wrote:Does phenomenal conciousness actually require that others experience red in the same way I do, as opposed to them experiencing red as I would experience green?

No, but if you and I experience the same frequency of electromagnetic radiation differently despite being physically identical in all relevant ways (big obvious loophole factory here: what ways are relevant?), that would suggest there's something nonphysical to the experiencing, which is what the people interested in phenomenal consciousness usually seem to be on about — that just knowing all the physical facts about light and eyes and optic nerves and brains and so on doesn't tell you everything there is to know about sight; it doesn't tell you what it's like to see.

PinkShinyRose wrote:How would an AI with internal state-changes and behaviour identical to humans be any different from a human mind?

Well, that's the question: would it be any different? Some people say no, some people say yes, and there are respected arguments in both directions. I fall on the "no" side there. (Although I do agree that there is a what-it's-like, first-person experiential knowledge which is not contained within the total sum of third-person physical knowledge, I'd say that that what-it's-like experience depends directly on your physical functionality in a way precisely analogous to how your third-person observable behavior does; and that every physical thing has some at least trivial what-it's-like experience, so it's still really just an ordinary aspect of all physical matter; but it's not until you get things which are complex and interesting in their behavior, like humans, that you get things which have similarly complex and interesting experiences. What it's like to be a rock isn't any more interesting than what a rock does, which for the most part is nothing of note. Experience is your functionality as seen from the inside, in the first person; behavior is your functionality as seen from the outside, in the third person. Experience is the input to the function which constitutes you, and behavior is the output from it. You are, and every thing is, a function mapping experiences to behaviors; and the nature of that function determines the nature of that thing, both what its behavior will be as seen from the outside and what its experience will be as seen from the inside.)
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

da Doctah
Posts: 996
Joined: Fri Feb 03, 2012 6:27 am UTC

Re: 1425: "Tasks"

Postby da Doctah » Thu Sep 25, 2014 8:10 pm UTC

GodShapedBullet wrote:With regard to philosophy and artificial intelligence, I am so friggin' glad that artificial intelligence and computers came along, so we don't have to talk about "trickster demons" when we are discussing doubts about the reality of our sensations or the problem of knowing things a priori.

"What if we are in a computer simulation like The Matrix?"

just is so much more resonant than

"What if there is a trickster demon who is using mind control to make you think that reality is real but really you are in a hell dimension?"


I still prefer "What if I'm just a brain in a jar being stimulated to have what I think are experiences?"

(Or even "What if I'm a butterfly, dreaming that I am Chuang Tzu?")

schapel
Posts: 244
Joined: Fri Jun 13, 2014 1:33 am UTC

Re: 1425: "Tasks"

Postby schapel » Thu Sep 25, 2014 9:57 pm UTC


Bounty
Posts: 41
Joined: Mon Apr 23, 2012 10:38 pm UTC

Re: 1425: "Tasks"

Postby Bounty » Thu Sep 25, 2014 10:21 pm UTC

da Doctah wrote:I still prefer "What if I'm just a brain in a jar being stimulated to have what I think are experiences?"

(Or even "What if I'm a butterfly, dreaming that I am Chuang Tzu?")

[image]

rmsgrey
Posts: 3655
Joined: Wed Nov 16, 2011 6:35 pm UTC

Re: 1425: "Tasks"

Postby rmsgrey » Thu Sep 25, 2014 11:28 pm UTC

I always feel that the Chinese Room argument misses the point somewhere - for those who don't know it and can't be bothered to check Wikipedia, it's a famous thought experiment which runs something like:

Imagine someone sat in a room with a booklet of instructions. Periodically, a slip of paper is pushed through a slot in the door with squiggles on it. Using the booklet, the person writes the corresponding pattern of squiggles on the back of the slip of paper and pushes it back through the slot. To an outside observer who is fluent in Chinese, it appears that whoever is in the room is also fluent, yet the person inside the room isn't fluent. It's generally used as an argument against the Turing Test as potential evidence of "strong AI" - genuinely conscious computer programs.

The trouble is the argument's emotional appeal rests on the idea that the instructions to mimic fluency would be in the form of something small and comprehensible rather than being too long and complex to fit into a large warehouse when written out - once you start thinking about the actual magnitude of the "instruction book", it becomes much harder to dismiss the possibility of understanding being hidden in there somewhere. More seriously, the argument supposes that because no individual component of the system understands Chinese, the entire system cannot, despite people generally assuming that other humans understand languages but very few people being prepared to claim that any individual neuron understands, well, much of anything...
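To make the scale point concrete: a pure lookup-table "room" is trivial to sketch, and all the apparent intelligence has to live in the size of the table rather than in its structure. A minimal Python sketch (the two-entry rule table is obviously a placeholder for Searle's instruction book):

```python
# A pure-syntax "room": maps input strings to output strings with no
# model of meaning anywhere. The tiny rule table is a placeholder for
# the instruction book.
RULES = {
    "你好": "你好！",                # "Hello" -> "Hello!"
    "你会说中文吗？": "会一点。",      # "Do you speak Chinese?" -> "A little."
}

def room(slip):
    """Return the scripted reply, or a stock deflection for unknown input."""
    return RULES.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好"))
```

The catch is exactly the scale problem above: to pass for a fluent speaker, RULES would need an entry for every possible conversational history, not every sentence, which is how the "booklet" ends up warehouse-sized before the argument even gets going.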

speising
Posts: 2365
Joined: Mon Sep 03, 2012 4:54 pm UTC
Location: wien

Re: 1425: "Tasks"

Postby speising » Thu Sep 25, 2014 11:58 pm UTC

worse, the argument begs the question. if we say a brain understands chinese, we can also say the chinese room does. if that sounds silly, then only for the same reason that a cat in a quantum superposition of alive and dead does. it's just a thought experiment which is not realistically possible.

keithl
Posts: 662
Joined: Mon Aug 01, 2011 3:46 pm UTC

Re: 1425: "Tasks"

Postby keithl » Fri Sep 26, 2014 1:18 am UTC

Duck wrote:I remember hearing that there was an experimental image recognition technology in the 80s which supposedly "learned" to tell whether or not a tank was present in the image.
During World War II, the Russians trained dogs to run under tanks while carrying bombs. In battle, they learned that it had not been a good idea to train the dogs with Russian tanks.

Iamthep
Posts: 3
Joined: Wed Jun 08, 2011 1:18 am UTC

Re: 1425: "Tasks"

Postby Iamthep » Fri Sep 26, 2014 3:15 am UTC

See if a bird is in an image? Easy. Just use Clarifai.

Pfhorrest
Posts: 5478
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: 1425: "Tasks"

Postby Pfhorrest » Fri Sep 26, 2014 6:39 am UTC

I'm not sure how exactly we got on the subject of the Chinese Room, but I do think it's actually rather relevant to this comic. At least, my own personal refutation of it is.

Searle's actual conclusion he draws directly from the Chinese Room thought experiment is that "syntax does not equal semantics" — by which he means, being able to process and manipulate the symbols of a language doesn't mean you understand what those symbols mean — and from that he jumps to the implication that strong AI is therefore impossible.

I think that first conclusion is actually correct, but that the stronger conclusion that's supposed to follow automatically without question from it does not in fact follow.

The instruction book that the man in the room is supposed to have is described as a table of symbol inputs and the appropriate symbols to output in response to those symbols. It's a complex computer program, basically, to be executed by a human, using the book as non-volatile memory in which to store the program.

We can completely eliminate any controversy about whether the room (the-man-plus-the-instruction-book) understands Chinese even though the man himself doesn't, by just imagining that the man has memorized the instructions. Suppose he has also learned how to encode those written symbols in sounds — the same sounds those symbols correspond to when fluent Chinese speakers read them out loud. That shouldn't really make a difference; we're just encoding the signals in a different medium.

Now let that man out of the room and ask him to converse with fluent Chinese speakers. And imagine one of those fluent Chinese speakers points out the window and asks, "What kind of bird is that in the tree?" The man will be unable to respond to that question in the same way that a fluent Chinese speaker would, because he has no way of knowing what kind of experiential phenomena correlate with the symbols "bird", "tree", and so forth. That is the sense in which he doesn't know what the words mean. He may know that "sparrows" are a subset of "birds" (which are a subset of "animals" and so forth), but he could never identify a sparrow, or a bird, or an animal. He has no kind of sensory images that come to mind associated with those words. They're just empty names. Meaningless.

But now imagine the original room again, and along with all the tables of what symbols to output in response to different symbol inputs, it also includes huge tables of images associated with those symbols. Maybe this instruction book is digital, so it can also play sounds associated with those symbols; maybe it's even fancier and can somehow have scratch-and-sniff or lickable patches, textured patches... whatever it takes to associate symbols with experiential phenomena: sights, sounds, etc.

Now the room (the-man-plus-the-instruction-book) can speak fluent Chinese. You could slip a photo of a bird in a tree in with the question (in Chinese) "What kind of bird is in the tree?", and using the instruction book, the man would be able to respond just like a native Chinese speaker would. And if the man inside were to memorize the instruction book, he would simply be learning Chinese.

So Searle is right that syntax isn't enough. But there's no reason why we can't program a computer to do more than just manipulate syntax. In my modified Chinese Room we're cheating a bit, taking advantage of the human in the room already having really impressive image-recognition and pattern-matching software running on his brain, the kind of thing it took evolution millions of years to hard-wire into animals like us. It shouldn't be surprising that it's taking us at least a couple of decades to recreate such a program, and there's certainly no reason to think it's impossible. And once we've got that, we don't need the man in the room executing the instruction book; we can have a computer do it for him, and then the computer can memorize that instruction book and speak fluent Chinese just like the man could have.
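The difference between the two rooms can be sketched in a few lines (everything here is illustrative: the rule tables are placeholders, and perceive() stands in for the human's, or eventually a machine's, pattern recognition):

```python
# A syntax-only room versus a room whose symbols are grounded in a
# perceptual channel. Both rule tables are illustrative placeholders.

SYNTAX_ONLY = {
    # "What kind of bird is in the tree?" -> "Sorry, I can't see."
    "树上是什么鸟？": "对不起，我看不见。",
}

def perceive(image):
    """Stand-in for pattern recognition: map raw sensory data to a
    symbol. Here, a fake classifier over a toy 'image' value."""
    return "麻雀" if image == "sparrow_pixels" else "未知"  # sparrow / unknown

def grounded_room(question, image=None):
    """The modified room: symbol manipulation plus a perceptual hookup."""
    if question == "树上是什么鸟？" and image is not None:
        return "是一只" + perceive(image) + "。"  # "It's a sparrow."
    return SYNTAX_ONLY.get(question, "对不起，我不明白。")

print(grounded_room("树上是什么鸟？", "sparrow_pixels"))
```

The syntax-only table can never answer the bird question, no matter how big it grows; the grounded room answers it by routing through perception, which is the whole point of the modification.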

But the instruction book Searle himself describes certainly wouldn't suffice to teach either man or machine how to speak Chinese.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

orthogon
Posts: 3102
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

Re: 1425: "Tasks"

Postby orthogon » Fri Sep 26, 2014 8:04 am UTC

speising wrote:worse, the argument begs the question. if we say a brain understands chinese, we can also say the chinese room does. if that sounds silly, then only for the same reason that a cat in a quantum superposition of alive and dead does. it's just a thought experiment which is not realistically possible.

Sure it is. I saw Jim Al-Khalili do it on TV the other day. Fortunately for the BBC, the cat's wave function collapsed the right way. No animals were harmed1 in the making of this documentary.

1 in this universe.

Pfhorrest wrote:[...] he would simply be learning Chinese. [...]


Did anybody else hear Tom Lehrer's voice singing "und I'm learning Chinese, says Wernher von Braun" when they read this? He definitely sings that part in italics.
xtifr wrote:... and orthogon merely sounds undecided.

Kit.
Posts: 1117
Joined: Thu Jun 16, 2011 5:14 pm UTC

Re: 1425: "Tasks"

Postby Kit. » Fri Sep 26, 2014 9:34 am UTC

Pfhorrest wrote:I'm not sure how exactly we got on the subject of the Chinese Room, but I do think it's actually rather relevant to this comic. At least, my own personal refutation of it is.

Mine too.

The problem of the Chinese Room is that Searle didn't clearly define what "understanding" is, relying on human intuitive understanding of "understanding" instead. And as the comic shows, human intuition alone is quite weak at understanding the powers and weaknesses of AI.

This should come as no surprise to those who subscribe to the idea that human intuition is the result of the Darwinian selection on irrational beliefs. AI's understanding (as opposed to understanding shown by the surrounding predators, game and human partners and enemies) didn't affect humans' reproductive success (at least before the Internet), so it was not a factor in shaping our intuitive views about understanding.

rmsgrey
Posts: 3655
Joined: Wed Nov 16, 2011 6:35 pm UTC

Re: 1425: "Tasks"

Postby rmsgrey » Fri Sep 26, 2014 12:35 pm UTC

Pfhorrest wrote:But now imagine the original room again, and along with all the tables of what symbols to output in response to different symbol inputs, it also includes huge tables of images associated with those symbols, and maybe this instruction book is digital so it can also play sounds associated with those symbols, maybe it's even fancier and can somehow have scratch-and-sniff or lickable patches, textured patches... whatever it takes to associate symbols with experiential phenomena, sights, sounds, etc.

Now the room (the-man-plus-the-instruction-book) can speak fluent Chinese. You could slip a photo of a bird in a tree in with the question (in Chinese) "What kind of bird is in the tree?", and using the instruction book, the man would be able to respond just like a native Chinese speaker would. And if the man inside were to memorize the instruction book, he would simply be learning Chinese.


Which raises the question of whether qualia are semantic or syntactic - is our volunteer still just manipulating symbols, but on a much larger scale, or is there actual understanding there?

Does he know whether he's saying "sparrow" or "brown" or "larch" or "no tank"?

Yes, he's now able to respond to a wide range of situations as a native Chinese speaker would, but that was also true when he was sealed in a room and not interacting in and with a broader shared context - it's not clear that broadening the scope of supported interaction is a qualitative change.

peregrine_crow
Posts: 180
Joined: Mon Apr 07, 2014 7:20 am UTC

Re: 1425: "Tasks"

Postby peregrine_crow » Fri Sep 26, 2014 1:03 pm UTC

Kit. wrote:
orthogon wrote:I guess the subject pronouns got lost when it went into Japanese and then had to be reinvented again somewhere along the way.

I've just fed Google Translate with the classic: "Time flies like an arrow; fruit flies like a banana". I was actually surprised that Google failed it.


I didn't expect it to be able to disentangle the ambiguity in "fruit flies like a banana", as both interpretations are grammatically correct. I tried a few permutations, though, and apparently Google just always translates "<noun A> like a <noun B>" as "A similar B" (at least for English-to-Dutch translations).

It does translate "fruit flies" as the insect rather than as the food category, which means the translator actually makes a dumber mistake than the one the sentence is trying to trick human readers into: it renders the clause roughly as "food-spoiling insects similar a banana" (grammar errors included) instead of "pseudo-vegetables have aerial manoeuvring capabilities similar to those of a banana".
Ignorance killed the cat, curiosity was framed.

speising
Posts: 2365
Joined: Mon Sep 03, 2012 4:54 pm UTC
Location: wien

Re: 1425: "Tasks"

Postby speising » Fri Sep 26, 2014 2:38 pm UTC

very interesting:

google translate wrote:Die Zeit fliegt wie ein Pfeil; Fruchtfliegen wie eine Banane.


grammatically, if not semantically, correct. by chance, i suspect.

keithl
Posts: 662
Joined: Mon Aug 01, 2011 3:46 pm UTC

Re: 1425: "Tasks"

Postby keithl » Fri Sep 26, 2014 5:02 pm UTC

The Chinese Room - imagine a meta-room that tests Chinese Rooms. How do we know it is doing so correctly?

The real-life version of this is using Google Translate, along with a fat Swedish-English-Swedish dictionary, to communicate with my Swedish-speaking fourth cousin in Sweden. I translate an English paragraph to Swedish and back with Google Translate, and modify the English until it comes back through the process with approximately the right meaning (harder than you might think, because there are cultural assumptions involved). Then I send both the English and the Swedish to my cousin, and with his dictionary and Google Translate he checks my work, and replies similarly. Much of the conversation is about the slipperiness of words, and the carving up of idea space into different maps, using different words and phrases.
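The loop I'm running is basically the following (translate() here is a stand-in stub so the sketch is self-contained, not any real API; a real run would call Google Translate at that point):

```python
def translate(text: str, src: str, dst: str) -> str:
    """Stand-in for a real translation service. This stub only knows
    one hypothetical phrase pair, purely to make the sketch runnable."""
    fake = {
        ("hello, cousin", "en", "sv"): "hej, kusin",
        ("hej, kusin", "sv", "en"): "hi, cousin",
    }
    return fake.get((text, src, dst), text)

def round_trip_ok(english: str, similar_enough) -> bool:
    """One iteration of the check: English -> Swedish -> back to English,
    then ask whether the meaning (approximately) survived."""
    swedish = translate(english, "en", "sv")
    back = translate(swedish, "sv", "en")
    return similar_enough(english, back)

# Crude stand-in for the human judgment step: enough shared words survive.
same_words = lambda a, b: len(set(a.split()) & set(b.split())) >= len(a.split()) // 2

print(round_trip_ok("hello, cousin", same_words))  # True
```

In practice the similar_enough step is me, rereading the round-tripped English; if it fails, I rewrite the original and try again.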

All human languages are designed to convey meaning in environments, not merely symbols and relationships between them. Meaning is experienced, not merely parsed. Conveying a precise meaning can require one word in Swedish and a paragraph in English, or vice versa, depending on the frequency of that meaning in that culture. Explain "crocodile" to our 10th-century common ancestors.

Fortunately for me, the English language steals words from others. Here are 10 words ripe for exploitation, with "Luftmensch" and "Tsundoku" immediately applicable.

Wednesday, I had the upsetting experience of trying to convey "tinnitus" to my motor-mouth general physician. Me: "slow down, let me look at your face while you talk, write notes and let me write notes, this HURTS". Him: "bzz bzz yada yada Hearing Aid bzz bzz Neural Accommodation bzz bzz". Two humans, two poorly correlated cultures.

Sometimes I hope for machine AI so I can have an intelligent conversation. But I suspect a machine AI will have a billion words for "transistor" and three words for "lifeform", i.e. "irritation", "conquerable threat", and "threat which must be temporarily endured".

Lenoxus
Posts: 120
Joined: Thu Jan 06, 2011 11:14 pm UTC

Re: 1425: "Tasks"

Postby Lenoxus » Sat Sep 27, 2014 3:32 am UTC

keithl wrote:Here are 10 words ripe for exploitation, with "Luftmensch" and "Tsundoku" immediately applicable.


I wonder what English words are considered similarly idiosyncratic, in the sense that foreign speakers would think "Huh, they have a word for that." I know that "okay" has been exported to much of the world, so perhaps it once belonged in that category.

orthogon
Posts: 3102
Joined: Thu May 17, 2012 7:52 am UTC
Location: The Airy 1830 ellipsoid

Re: 1425: "Tasks"

Postby orthogon » Sat Sep 27, 2014 12:30 pm UTC

Lenoxus wrote:
keithl wrote:Here are 10 words ripe for exploitation, with "Luftmensch" and "Tsundoku" immediately applicable.


I wonder what English words are considered similarly idiosyncratic, in the sense that foreign speakers would think "Huh, they have a word for that." I know that "okay" has been exported to much of the world, so perhaps it once belonged in that category.


Yesterday, we were trying to explain to a native Spanish speaker what camp (the adjective) means. It's quite hard to do in a gerbil-swallowing way, and it's developed into quite a complex concept, applicable for example to inanimate objects. Do other languages have a single word for this?

German is famous for having more of these words, but that's more to do with the way it combines words into longer words (I thought this was known as agglomeration, but Google and Wikipedia fail to confirm). In response to Luftmensch, I offer Space Cadet, which would probably also be a single word in German. Japanese has a similar tendency, but komorebi is a single word for the sunlight that filters through the leaves of the trees, creating dappled shade. "Dappled shade" is two words, but they're found together often enough (one eighth of the occurrences of dappled are followed by shade) to be a word-like entity, whatever that's called. I won't speculate on why Japanese concentrates on the light, and English on the shade!
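That "one eighth" is the kind of figure you can estimate with a simple bigram count over a corpus; a sketch (the mini-corpus here is made up, so the numbers are only illustrative):

```python
from collections import Counter

def following_fraction(words, first, second):
    """Fraction of occurrences of `first` immediately followed by `second`."""
    bigrams = Counter(zip(words, words[1:]))  # count adjacent word pairs
    total_first = words.count(first)          # all occurrences of `first`
    return bigrams[(first, second)] / total_first if total_first else 0.0

# Toy corpus; a real estimate would use something like the BNC or Google Ngrams.
corpus = ("the dappled shade under the trees and the dappled light "
          "on the dappled pony in the dappled shade").split()

print(following_fraction(corpus, "dappled", "shade"))  # 2 of 4 -> 0.5
```

A high value of this fraction is one rough operational test for a "word-like entity": the pair behaves more like a single lexical unit than a free combination.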

Come to think of it, whether something is one word or two is a feature of the writing system, not the spoken language, isn't it? So of course Japanese and Chinese, which don't use spaces between words in writing, would have more of them.
xtifr wrote:... and orthogon merely sounds undecided.

Mikeski
Posts: 1112
Joined: Sun Jan 13, 2008 7:24 am UTC
Location: Minnesota, USA

Re: 1425: "Tasks"

Postby Mikeski » Sat Sep 27, 2014 4:26 pm UTC

orthogon wrote:
Lenoxus wrote:
keithl wrote:Here are 10 words ripe for exploitation, with "Luftmensch" and "Tsundoku" immediately applicable.


I wonder what English words are considered similarly idiosyncratic, in the sense that foreign speakers would think "Huh, they have a word for that." I know that "okay" has been exported to much of the world, so perhaps it once belonged in that category.

German is famous for having more of these words, but that's more to do with the way in which it combines words into longer words (I thought this was known as agglomeration but Google and Wikipedia fail to confirm). [...] Japanese has a similar tendency

You're looking for agglutination. (Apparently, German is "fusional" and not "agglutinative". The term that covers both is "synthetic". The term that covers people who knew this before checking Wikipedia is "weird".)

Come to think of it, whether something is one word or two is a feature of the writing system, not the spoken language, isn't it? So of course Japanese and Chinese, which don't use spaces between words in writing, would have more of them.

Nah, Japanese might run all their kanji and kana together, but they're still individual words you could look up in a dictionary. Any combination word that would be considered its own new word (through agglutination) would work the same way in a roman-lettered language.

One of my favorite Japanese "huh, they have a word for that" is "asatte": "the day after tomorrow".

