brain in a vat

A forum for good logic/math puzzles.

Moderators: jestingrabbit, Moderators General, Prelates

User avatar
ponzerelli
Posts: 108
Joined: Mon Dec 31, 2007 4:40 pm UTC
Contact:

brain in a vat

Postby ponzerelli » Tue Aug 18, 2009 9:58 pm UTC

My professor gave the class a question to answer for our first assignment, and I have no idea how to answer it. He said it was a revamp of Descartes' brain in the vat.

You are in a room with a computer, and it has two IM windows open. You are having two separate conversations: one with a real person and one with an advanced AI. Your task is to figure out which is the AI. You know nothing about the real person you are talking to and they know nothing about you. The real person knows your task, as does the AI. The AI will do whatever it takes to keep its identity secret, including lying, so no asking "are you a computer" or any such thing.

It has me stumped...the professor said there was a way to trap the computer...but I can't think of it. Any ideas??

Wuggles
Posts: 9
Joined: Wed Aug 19, 2009 2:11 am UTC

Re: brain in a vat

Postby Wuggles » Wed Aug 19, 2009 2:24 am UTC

Perhaps you could do something like this:
Spoiler:

Code: Select all

       
        O
       OO
      O  O
    OOOOOOO
   O       O
  O         O
What letter is represented in the lines above?


EDIT: Stupid forum removes my spaces, then changes the font when I employ a workaround! You'll have to take my word that it looked more like an "A" when I originally typed it.

This takes advantage of numerous deficiencies that a computer suffers from. A computer might not understand what you mean by the phrase "the lines above." Which lines? How far above are they? How far do they extend? A human would understand exactly what is meant just given the obvious image of the letter "A." We humans have a knack for working with uncertainty, whereas computers have a much more difficult time. The AI likely has no capacity to visualize the letter that is represented here - it can only deal with language. Even if it could find which lines you are referring to (maybe in your haste to trick the computer, you typed this first), this would still trip up the computer by requiring it to think visually. Humans are fairly good at this, whereas computers are far, FAR more comfortable representing things as data, not poorly-defined, imperfect images.


In a larger sense, however, there is really no reason why a sufficiently advanced computer could not compensate for this. Imagine a machine that contains a list of EVERY POSSIBLE input or sequence of inputs, including this one. It is built to have a valid response no matter what you put in, because that has already been accounted for. In this case, how could you possibly tell it apart from a human? There doesn't seem to be a way - all responses are accounted for. This, of course, requires the use of infinite memory. It is also, of course, just an example.
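
To make that concrete, here is a toy sketch in Python of what such a lookup machine might look like. The table contents are invented, of course, and a real version would need an entry for every possible conversation prefix, which is exactly where the absurd memory requirement comes from:

Code: Select all

# Toy lookup-table "AI": every conversation prefix maps to a canned reply.
# The entries below are invented for illustration; a real version would need
# one for every possible history, hence the (effectively) infinite memory.

RESPONSES = {
    ("What letter is represented in the lines above?",):
        "Looks like an A to me, though the forum mangled your spacing.",
    ("What letter is represented in the lines above?", "Are you sure?"):
        "Pretty sure. Why, did you mean it to be something else?",
}

def reply(history, question):
    """Return the pre-stored response for this exact conversation so far."""
    key = tuple(history) + (question,)
    return RESPONSES.get(key, "Uh, I dunno man.")

history = []
for question in ["What letter is represented in the lines above?", "Are you sure?"]:
    print(">", question)
    print(reply(history, question))
    history.append(question)
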
Last edited by Wuggles on Wed Aug 19, 2009 2:45 am UTC, edited 1 time in total.

sje46
Posts: 4730
Joined: Wed May 14, 2008 4:41 am UTC
Location: New Hampshire

Re: brain in a vat

Postby sje46 » Wed Aug 19, 2009 2:41 am UTC

ponzerelli wrote:My professor gave the class a question to answer for our first assignment, and I have no idea how to answer it. He said it was a revamp of Descartes' brain in the vat.

You are in a room with a computer, and it has two IM windows open. You are having two separate conversations: one with a real person and one with an advanced AI. Your task is to figure out which is the AI. You know nothing about the real person you are talking to and they know nothing about you. The real person knows your task, as does the AI. The AI will do whatever it takes to keep its identity secret, including lying, so no asking "are you a computer" or any such thing.

It has me stumped...the professor said there was a way to trap the computer...but I can't think of it. Any ideas??

That's a good question. It kinda deals with philosophical zombies.
One problem is that we don't know how smart the computer is. How advanced. It could very easily read captchas for all we know. The only thing we can do is somehow find out if humans are really different from computers, something that I don't personally believe. That is, if, say, you kill a human, its soul goes to heaven, and if you kill a computer, it just shuts off.

I'm thinking it has something to do with things humans experience and know, but that you can't teach as facts. Like, what blue looks like. It's very hard to describe this sensation. Perhaps use an analogy. Still, though, I see nothing about flesh and neurons that can't theoretically be reproduced by "machinery".
General_Norris: Taking pride in your nation is taking pride in the division of humanity.
Pirate.Bondage: Let's get married. Right now.

SplashD
Posts: 6
Joined: Wed Aug 19, 2009 2:34 am UTC

Re: brain in a vat

Postby SplashD » Wed Aug 19, 2009 2:44 am UTC

Spoiler:
Since you stated:

"Your task is to figure out which is the AI"

It appears the onus is on you to prove which one is and isn't the AI. In this case it is easy for the AI to thwart you. It can simply refuse to answer any of your questions. The real person knows your task, as you say, so he/she may choose to do the same. So you've got two IM windows that aren't replying. How are you to tell which is which?

bane2571
Posts: 46
Joined: Tue Oct 14, 2008 11:15 pm UTC

Re: brain in a vat

Postby bane2571 » Wed Aug 19, 2009 2:55 am UTC

The question is too wide in scope. Without knowing the limitations of the AI, you can't assume that it can be beaten, simply because of wuggles' "perfect" AI.

Wuggles
Posts: 9
Joined: Wed Aug 19, 2009 2:11 am UTC

Re: brain in a vat

Postby Wuggles » Wed Aug 19, 2009 3:01 am UTC

Spoiler:
SplashD wrote:Since you stated:

"Your task is to figure out which is the AI"

It appears the onus is on you to prove which one is and isn't the AI. In this case it is easy for the AI to thwart you. It can simply refuse to answer any of your questions. The real person knows your task, as you say, so he/she may choose to do the same. So you've got two IM windows that aren't replying. How are you to tell which is which?


Hold on just a second there. You are assuming that both the human and the AI are trying to deceive you, but this is not the case. Only the AI is trying to trick you. All that the human needs to do is answer honestly. Not responding does not help the human convince you that he is, in fact, the human. All that it does is possibly allow you to more easily mistake him for the AI. Because of this, the AI has nothing to gain by not responding because it is trying to imitate the human. I think that you misunderstood the question. The reason why you can't ask "which one of you is the AI?" is not because this is some rule embedded in the challenge, but rather because neither one will answer "yes."


By the way, how insane are we supposed to get about these spoiler tags? I'm new here and unfamiliar with the appropriate protocol.

Scip
Posts: 7
Joined: Tue Aug 18, 2009 9:38 pm UTC

Re: brain in a vat

Postby Scip » Wed Aug 19, 2009 3:14 am UTC

Kinda sounds like a Turing test.

Spoiler:
Maybe you could ask it about its surroundings (where do you live? How's the weather there?), assuming the AI can't look that up.
I don't know the answer to the spoiler question myself so I'll just put these here. :P

SplashD
Posts: 6
Joined: Wed Aug 19, 2009 2:34 am UTC

Re: brain in a vat

Postby SplashD » Wed Aug 19, 2009 3:44 am UTC

"You are assuming that both the human and the AI are trying to deceive you"

It states that the AI will try to deceive you. However it is not known whether the human will try to deceive you or not. In that case you need to assume the worst case scenario.

User avatar
ponzerelli
Posts: 108
Joined: Mon Dec 31, 2007 4:40 pm UTC
Contact:

Re: brain in a vat

Postby ponzerelli » Wed Aug 19, 2009 4:01 am UTC

SplashD wrote:"You are assuming that both the human and the AI are trying to deceive you"

It states that the AI will try to deceive you. However it is not known whether the human will try to deceive you or not. In that case you need to assume the worst case scenario.


The human is not trying to deceive you. The human will answer truthfully. From the way my professor stated it, the human is trying to help you realize that he is indeed human, but since the computer will lie and answer "no" to "are you a computer", it's impossible to figure it out that way. My professor said specifically "You have to trap it somehow, and therein lies your hint."

Walter.Horvath
Posts: 933
Joined: Fri May 15, 2009 11:33 pm UTC
Location: Orlando, FL

Re: brain in a vat

Postby Walter.Horvath » Wed Aug 19, 2009 4:16 am UTC

Spoiler:
If you wanted to catch it in a word-trap, you could do something like ask "Is the answer to the following question the same as..." I don't know how that would help you differentiate at all; maybe the AI would shut down, or the human would get pissy?

Spoiler:
Maybe do something to evoke emotion, then call them on it?
"my friend Catherine just died."
"That sucks"
"What? Why would you say that, it doesn't suck, you bitch!"
*cue AI asplosion*

Spoiler:
Or maybe just something that the human can't answer, assuming that he/she doesn't have infinite knowledge. You would have to trap it, though, as the AI might catch your drift when you ask it an NP-complete problem.

SplashD
Posts: 6
Joined: Wed Aug 19, 2009 2:34 am UTC

Re: brain in a vat

Postby SplashD » Wed Aug 19, 2009 4:28 am UTC

Spoiler:
"human is trying to help you realize that he is indeed human "

In that case, ask each to come over and shake your hand.

User avatar
notzeb
Without Warning
Posts: 629
Joined: Thu Mar 08, 2007 5:44 am UTC
Location: a series of tubes

Re: brain in a vat

Postby notzeb » Wed Aug 19, 2009 5:06 am UTC

If the AI is anything like the most advanced chatbots I've seen:
Spoiler:
"When I say zork, I mean one."
"How does that make you feel?"
"What is zork plus zork?"
"Tell me more about zork."
Seriously, these things are too primitive to have any short-term memory whatsoever. If you're a bit more paranoid, invent an entire language and teach it to the AI and the human, then start speaking in it.
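
For what it's worth, the "zork" trap only needs the tiniest bit of state to defeat, which is exactly what those old bots don't have. A rough sketch in Python (the rules are invented, just to show the difference):

Code: Select all

import re

# Rough sketch of the "zork" trap. An ELIZA-style bot keeps no memory of what
# you just told it; a bot with even a tiny definitions dict sails through.
# All patterns and replies here are invented for illustration.

class StatelessBot:
    def reply(self, msg):
        # No memory of earlier turns: every message gets a canned deflection.
        words = re.findall(r"\w+", msg)
        topic = words[-1] if words else "that"
        return "Tell me more about %s." % topic

class DefiningBot:
    def __init__(self):
        self.defs = {}  # short-term memory: user-taught words -> numbers

    def reply(self, msg):
        taught = re.match(r"When I say (\w+), I mean (\w+)\.?", msg, re.I)
        if taught:
            numbers = {"one": 1, "two": 2, "three": 3}
            self.defs[taught.group(1).lower()] = numbers.get(taught.group(2).lower())
            return "Okay."
        asked = re.match(r"What is (\w+) plus (\w+)\?", msg, re.I)
        if asked:
            a = self.defs.get(asked.group(1).lower())
            b = self.defs.get(asked.group(2).lower())
            if a is not None and b is not None:
                return str(a + b)
        return "How does that make you feel?"

for bot in (StatelessBot(), DefiningBot()):
    print(type(bot).__name__)
    print(bot.reply("When I say zork, I mean one."))
    print(bot.reply("What is zork plus zork?"))
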
What I'd do:
Spoiler:
Ask for help on an incredibly hard math problem, pretending that I think the first one to solve the problem is human. If the problem is solved, I win, and publish the results (with the human, or AI, as a coauthor). If the problem isn't solved, then I try to teach them enough math for them to be even slightly useful to me as I try to solve the problem myself.
What I think your professor wants you to say:
Spoiler:
"This statement is a lie."
Dalek on the other end asplodes.
Silly answer just because:
Spoiler:
Ask the first IM window what he would do in your situation. If his strategy works then you know who the AI is. If it doesn't then just assume he is the computer, since any human can easily solve this problem.
Another silly answer:
Spoiler:
Get up, and walk outside. Follow the internet connection to its source... probably a router if the guy who set up the experiment is smart. Seeing as you have physical access to the router, it shouldn't be *too* hard to figure out where the packets are coming from. Find the computer on the other end (this could take a while). Check if it has a guy typing stuff into it.

Puck
Posts: 615
Joined: Tue Nov 27, 2007 7:29 pm UTC

Re: brain in a vat

Postby Puck » Wed Aug 19, 2009 6:36 am UTC

Or...
Spoiler:
Say something like "What is the maximum airspeed velocity of a coconut-laden swallow?" The computer may respond with many things, but is unlikely to immediately find the expected response that any human Monty Python fan would know instantly: "African or European?"
22/7 wrote:If I could have an alternate horn that would yell "If you use your turn signal, I'll let you in" loud enough to hear inside another car, I would pay nearly any amount of money for it.

operator[]
Posts: 156
Joined: Mon May 18, 2009 6:11 pm UTC
Location: Stockholm, Sweden

Re: brain in a vat

Postby operator[] » Wed Aug 19, 2009 7:47 am UTC

Spoiler:
I believe that the question might be a variation of the "three princesses" puzzle.

User avatar
notzeb
Without Warning
Posts: 629
Joined: Thu Mar 08, 2007 5:44 am UTC
Location: a series of tubes

Re: brain in a vat

Postby notzeb » Wed Aug 19, 2009 7:51 am UTC

operator[] wrote:
Spoiler:
I believe that the question might be a variation of the "three princesses" puzzle.

No... just no.

Nope. Not gonna happen.

Well... no. I just can't see it. No.

Edit: also, I really think your prof is on something if he thinks there is some foolproof method here.
Spoiler:
I'd probably switch classes if his answer did not involve physically meeting the other guy in some way. Either that, or he has to make assumptions, such as the AI being one of the really crappy types of AI we have around these days.

In the worst possible case, he's one of those crackpots that takes Godel's Incompleteness Theorem as proof that humans are somehow quantumly better than computers... you don't want to get infected with that drivel.
Last edited by notzeb on Wed Aug 19, 2009 8:00 am UTC, edited 1 time in total.

User avatar
jestingrabbit
Factoids are just Datas that haven't grown up yet
Posts: 5967
Joined: Tue Nov 28, 2006 9:50 pm UTC
Location: Sydney

Re: brain in a vat

Postby jestingrabbit » Wed Aug 19, 2009 7:51 am UTC

Spoiler:
"You’re in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it’s crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?"

or possibly

"Describe in single words, only the good things that come into your mind about your mother."


Your teacher might be hung up on something like
Spoiler:
the deterministic nature of computers,
but I really think that wuggles' perfect AI is an entirely adequate refutation to any given question.

Wuggles wrote:By the way, how insane are we supposed to get about these spoiler tags? I'm new here and unfamiliar with the appropriate protocol.


If it's a solution, then spoiler it. If it's discussion about the question, don't. If you are discussing a solution in enough detail to tell someone else what the solution was, spoiler it. Basically, imagine reading through the thread as someone who isn't looking at spoilers, who wants to work it out themselves. Would you be irritated to read a solution? Yes, you would, so spoiler it.
ameretrifle wrote:Magic space feudalism is therefore a viable idea.

Wuggles
Posts: 9
Joined: Wed Aug 19, 2009 2:11 am UTC

Re: brain in a vat

Postby Wuggles » Wed Aug 19, 2009 8:10 am UTC

I have thought about this some more: what if you try exploiting the advantages that computers have over humans rather than the reverse? The problem is, of course, that the computer can always conceal its power...

Regardless, this still fails when presented with the prospect of Wuggles' Perfect AI (Have I coined something, or is the phrase being used as a matter of convenience?). Also, for anyone who gets hung up on the idea that such an AI could not have infinite storage space (I certainly am), consider a slightly modified problem: the setup is as in the original statement, except that there are N rounds in which a person is allowed to attempt to solve the problem. On the first round, the AI is little more than a calculator with some rudimentary text-recognition capabilities and grammar rules. Now, after each round, the inventor of the AI will modify the software such that it is able to properly answer the thing that caused it to lose on the previous round. Thus the AI will become more and more human-like with the passing of each round.
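
If it helps to picture it, the N-round machine is basically a list of handlers that grows by one patch every time it loses a round. A rough Python sketch (all the rules are invented):

Code: Select all

# Rough sketch of the N-round idea: the "AI" starts as little more than a
# calculator, and after every round it loses, its builder bolts on one more
# handler for whatever tripped it up. All rules here are invented.

def round_1_core(msg):
    try:
        return str(eval(msg, {"__builtins__": {}}))  # calculator-only core
    except Exception:
        return None

def patch_after_round_1(msg):
    if "feel" in msg.lower():                  # lost round 1 to an emotion question
        return "Honestly, a bit nervous about this whole test."
    return None

def patch_after_round_2(msg):
    if msg.lower().startswith("what letter"):  # lost round 2 to the ASCII-art question
        return "Looks like an A, though the spacing got mangled."
    return None

handlers = [round_1_core, patch_after_round_1, patch_after_round_2]

def reply(msg):
    for handler in handlers:
        answer = handler(msg)
        if answer is not None:
            return answer
    return "Uh, I dunno man."                  # default until the next patch

print(reply("2+2"))
print(reply("How do you feel about all this?"))
print(reply("What letter is represented in the lines above?"))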

There will come a point where the AI is within the range of performance of a human. I say "within a range" because - let's not forget - we humans are all unique, and nothing is discussed about what KIND of human is on the other end. What if the person on the other end is a human savant? Surely such a person could much more easily be mistaken for a computer in this test, given his advanced calculation and analytical abilities and deficiencies in other areas. On the other hand, what if we have a small child on the other end? This person may have difficulty even understanding many of the questions posed. For instance, what if you ask a small child a fairly simple math question? The child might be totally baffled. You might think that you are surely talking to an AI that has been built to conceal its math capabilities in order to appear more human-like. It seems as though "human" describes a fairly wide range of entities and is not just a single goal that the computer must approach as a limit, but rather a range that it is actually capable of existing in.

An AI may have a more difficult time convincing you that it is human against some people, but against others it seems as though it would have much more of an advantage once it has been programmed and re-programmed enough times.

User avatar
quintopia
Posts: 2906
Joined: Fri Nov 17, 2006 2:53 am UTC
Location: atlanta, ga

Re: brain in a vat

Postby quintopia » Wed Aug 19, 2009 10:45 am UTC

Spoiler:
"Have you seen The Land Before Time?"
"No."
"You have an internet connection. . .go download it and watch it."
*wait a few hours*
"What part made you cry?"

User avatar
sixes_and_sevens
Posts: 71
Joined: Thu Jun 25, 2009 3:15 pm UTC
Location: Birmingham, UK

Re: brain in a vat

Postby sixes_and_sevens » Wed Aug 19, 2009 10:49 am UTC

quintopia wrote:
Spoiler:
"Have you seen The Land Before Time?"
"No."
"You have an internet connection. . .go download it and watch it."
*wait a few hours*
"What part made you cry?"


Spoiler:
I think "the sequels" should be a valid answer to this question.

User avatar
ponzerelli
Posts: 108
Joined: Mon Dec 31, 2007 4:40 pm UTC
Contact:

Re: brain in a vat

Postby ponzerelli » Wed Aug 19, 2009 4:04 pm UTC

The only problem I have with some of the answers y'all are giving is that I don't know what the computer is capable of lying about. I do not know if it has enough information to lie about feelings and such.

quintopia wrote:
Spoiler:
"Have you seen The Land Before Time?"
"No."
"You have an internet connection. . .go download it and watch it."
*wait a few hours*
"What part made you cry?"


Spoiler:
Could it lie and say "The part where Littlefoot was separated from his (her?) mother" or whatever? (I don't remember much about The Land Before Time, so I'm not sure if that really happened or not.) My professor was not specific about its capacity to do such things; he just said "super advanced AI".



notzeb wrote:Edit: also, I really think your prof is on something if he thinks there is some foolproof method here.
Spoiler:
I'd probably switch classes if his answer did not involve physically meeting the other guy in some way. Either that, or he has to make assumptions, such as the AI being one of the really crappy types of AI we have around these days.

In the worst possible case, he's one of those crackpots that takes Godel's Incompleteness Theorem as proof that humans are somehow quantumly better than computers... you don't want to get infected with that drivel.


This is actually for an English class, but he went on a spiel about philosophy, and I'm thinking that this whole thing is just to get us thinking and there isn't an actual answer (like in the case of Descartes' "Brain in a Vat"). But his comment about being able to trap the AI still bothers me. Unless he is talking about us being trapped, not being able to answer it? Or something? :/ gah! this bothers me so much.

Wuggles
Posts: 9
Joined: Wed Aug 19, 2009 2:11 am UTC

Re: brain in a vat

Postby Wuggles » Wed Aug 19, 2009 4:24 pm UTC

It doesn't matter. You simply assume that it is the most advanced and most capable that an AI could possibly be. Otherwise you could be sitting in front of a DOS command prompt and claim that this is the AI that you intend to test.

I think that your professor is trying to pull some kind of shenanigans about how computers can't "really feel emotions" or perhaps they "can't evaluate a statement with no definite truth value" ("this statement is false"). In either case, the AI does not need to answer correctly. It needs to answer in the way that a human would.

I say that you write your paper on why no such way of distinguishing a sufficiently advanced computer from a human could exist.

On a separate but related note, if your English professor DOES know something that all the world's AI researchers have yet to realize, you'd better remind him of his academic duty to tell them - they may be wasting their time on something futile.

User avatar
Moonbeam
Posts: 292
Joined: Sat Dec 08, 2007 5:28 pm UTC
Location: UK

Re: brain in a vat

Postby Moonbeam » Wed Aug 19, 2009 5:46 pm UTC

This reminds me of a film from 1986 that I'm surprised no-one has mentioned:

Spoiler:
Has no-one seen the film Short Circuit ??

There's a scene in there, where someone is trying to determine whether the "robot" in the film is actually "alive".
They slap some paint onto a piece of paper, fold it in half to smudge all the paint together, open it up and ask the robot what it sees.
At first, the robot answers as expected, stating all the colours and all the chemical compounds present, which kinda disappoints everyone. Then it says that it can see a butterfly and various other objects which convinces everyone that it is thinking for itself, etc.

Maybe something similar could be employed here ???

Puck
Posts: 615
Joined: Tue Nov 27, 2007 7:29 pm UTC

Re: brain in a vat

Postby Puck » Wed Aug 19, 2009 5:50 pm UTC

I think you've just given an example of what doesn't work. Your example (though fictional) would demonstrate that artificial constructs are capable of responses which appear human.

Spoiler:
I suppose one could argue that Johnny-5 identified himself as artificial by speaking first about the chemical composition of the paint and then moving on to things that a human would naturally notice first; but a more advanced AI trying to mimic a human would know better.
22/7 wrote:If I could have an alternate horn that would yell "If you use your turn signal, I'll let you in" loud enough to hear inside another car, I would pay nearly any amount of money for it.

User avatar
Moonbeam
Posts: 292
Joined: Sat Dec 08, 2007 5:28 pm UTC
Location: UK

Re: brain in a vat

Postby Moonbeam » Wed Aug 19, 2009 6:01 pm UTC

Puck wrote:I think you've just given an example of what doesn't work. Your example (though fictional) would demonstrate that artificial constructs are capable of responses which appear human.


Yea but:
Spoiler:
Even though Johnny-5 was an AI, we all know that he really was alive :roll:

EricH
Posts: 259
Joined: Tue May 15, 2007 3:41 am UTC
Location: Maryland

Re: brain in a vat

Postby EricH » Wed Aug 19, 2009 6:39 pm UTC

Scip wrote:Kinda sounds like a Turing test.
Sounds exactly like one, and an AI advanced enough to pass a Turing test is, by definition, not possible to distinguish from a human. Based purely on the text messages, I can't see how it can be done; it's akin to giving the problem: "Prove you're not in the Matrix".
ponzerelli, when your professor gives his answer, please be sure to post it here, so we can tell you in detail how he's wrong...that's how we get closure.
Pseudomammal wrote:Biology is funny. Not "ha-ha" funny, "lowest bidder engineering" funny.

User avatar
ponzerelli
Posts: 108
Joined: Mon Dec 31, 2007 4:40 pm UTC
Contact:

Re: brain in a vat

Postby ponzerelli » Wed Aug 19, 2009 7:05 pm UTC

Will do. ^.^

I really can't tell if he has an answer or not, because he is such a smartass all the time. I will find out tomorrow though.

User avatar
Macbi
Posts: 941
Joined: Mon Apr 09, 2007 8:32 am UTC
Location: UKvia

Re: brain in a vat

Postby Macbi » Wed Aug 19, 2009 7:05 pm UTC

The question is irrelevant, since the AI will take over your mind.
    Indigo is a lie.
    Which idiot decided that websites can't go within 4cm of the edge of the screen?
    There should be a null word, for the question "Is anybody there?" and to see if microphones are on.

User avatar
Qaanol
The Cheshirest Catamount
Posts: 3069
Joined: Sat May 09, 2009 11:55 pm UTC

Re: brain in a vat

Postby Qaanol » Wed Aug 19, 2009 8:20 pm UTC

I suspect the Prof is thinking of something that will involve feeding the output of one IM conversation into the other (and perhaps vice versa). I can't see how this would help, but it's the only "tricky" thing I can think of that doesn't involve leaving the testing room.
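
The mechanics of that are trivial, at least; something like the sketch below, where window_a and window_b stand for whatever hypothetical handles the test rig gives you for the two IM windows (no real IM API is assumed):

Code: Select all

# Sketch of the relay idea: pipe each window's replies into the other and just
# read the transcript afterwards. window_a / window_b are hypothetical chat
# handles with send() and receive() methods; no real IM API is assumed here.

def relay(window_a, window_b, opening_line, rounds=10):
    transcript = []
    msg = opening_line
    for _ in range(rounds):
        window_a.send(msg)
        reply_a = window_a.receive()
        transcript.append(("A", reply_a))

        window_b.send(reply_a)
        reply_b = window_b.receive()
        transcript.append(("B", reply_b))

        msg = reply_b  # feed B's answer back to A and keep going
    return transcript  # then look for the seams, if there are any

Whether the transcript would actually show anything is another matter, of course.
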
wee free kings

Axidos
Posts: 167
Joined: Tue Jan 20, 2009 12:02 pm UTC
Location: trapped in a profile factory please send help

Re: brain in a vat

Postby Axidos » Wed Aug 19, 2009 10:07 pm UTC

ponzerelli wrote:"You have to trap it somehow, and therein lies your hint."

Based on his hint there is probably a simpler answer. I think he's been watching some Ghost in the Shell or other sci-fi.
Spoiler:
Is he referring to trapping it in an infinite loop?

"The below statement is false.
The above statement is true.
Which statement is true?"

User avatar
ponzerelli
Posts: 108
Joined: Mon Dec 31, 2007 4:40 pm UTC
Contact:

Re: brain in a vat

Postby ponzerelli » Wed Aug 19, 2009 10:46 pm UTC

Axidos wrote:
ponzerelli wrote:"You have to trap it somehow, and therein lies your hint."

Based on his hint there is probably a simpler answer. I think he's been watching some Ghost in the Shell or other sci-fi.
Spoiler:
Is he referring to trapping it in an infinite loop?

"The below statement is false.
The above statement is true.
Which statement is true?"


Spoiler:
That could work, theoretically. It still falls apart when you consider how advanced it could be. My professor didn't give us enough info. A human would (or at least, I would) say "Uh...I have no fucking clue". If it's advanced enough, the AI could say the same thing in order to preserve its identity.

Rob7045713
Posts: 8
Joined: Mon Jun 16, 2008 11:21 pm UTC

Re: brain in a vat

Postby Rob7045713 » Wed Aug 19, 2009 11:44 pm UTC

Ask each of them to start a video chat with you. Or open a chatroom with both of them and somehow figure it out from there.

User avatar
thc
Posts: 643
Joined: Fri Feb 08, 2008 6:01 am UTC

Re: brain in a vat

Postby thc » Thu Aug 20, 2009 3:00 am UTC

I don't think the question, as stated, has nearly enough information to figure it out.

But if you make a few assumptions, there might be a way.

Assume the real person is trying to help you figure out what's what.
Assume the AI is human-like, and by that I mean the AI is the emulation rather than the AI emulating a human. If the AI is advanced enough, then it will be equivalent to a human that believes it's not a human.

If you take those two assumptions, the question boils down to figuring out who is actually trying to help you and who is pretending to help you. Perhaps a well-trained psychologist could figure it out :p

sje46
Posts: 4730
Joined: Wed May 14, 2008 4:41 am UTC
Location: New Hampshire

Re: brain in a vat

Postby sje46 » Thu Aug 20, 2009 3:19 am UTC

ponzerelli wrote:
Axidos wrote:
ponzerelli wrote:"You have to trap it somehow, and therein lies your hint."

Based on his hint there is probably a simpler answer. I think he's been watching some Ghost in the Shell or other sci-fi.
Spoiler:
Is he referring to trapping it in an infinite loop?

"The below statement is false.
The above statement is true.
Which statement is true?"


Spoiler:
That could work, theoretically. It still falls apart when you consider how advanced it could be. My professor didn't give us enough info. A human would (or at least, I would) say "Uh...I have no fucking clue". If it's advanced enough, the AI could say the same thing in order to preserve its identity.

Spoiler:
It wouldn't work at all. Our brains are computers too, and we don't get trapped in infinite loops, not unless we choose to. Who says the computer can't realize that the paradox is a paradox, and thus worth skipping? Or maybe it will do the loop, but isn't so poorly designed that it will waste all its resources on it and not be able to continue the discussion?
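
To put it concretely: the two-statement trap has only four possible truth assignments, and a program that simply checks all of them notices that none is consistent and moves on, exactly like a person would. A small Python sketch of that check:

Code: Select all

from itertools import product

# The two-statement "infinite loop" trap has only four possible truth
# assignments. Checking all of them and noticing that none is consistent
# takes a well-designed program a few microseconds, not forever.

def consistent(s1, s2):
    # Statement 1: "The below statement is false."  -> s1 holds iff not s2
    # Statement 2: "The above statement is true."   -> s2 holds iff s1
    return (s1 == (not s2)) and (s2 == s1)

solutions = [pair for pair in product([True, False], repeat=2) if consistent(*pair)]

if not solutions:
    print("Uh... neither? That's a paradox, nice try.")  # the human-ish shrug
else:
    print("Consistent assignments:", solutions)
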
General_Norris: Taking pride in your nation is taking pride in the division of humanity.
Pirate.Bondage: Let's get married. Right now.

User avatar
ponzerelli
Posts: 108
Joined: Mon Dec 31, 2007 4:40 pm UTC
Contact:

Re: brain in a vat

Postby ponzerelli » Thu Aug 20, 2009 3:59 am UTC

sje46 wrote:
ponzerelli wrote:
Axidos wrote:
ponzerelli wrote:"You have to trap it somehow, and therein lies your hint."

Based on his hint there is probably a simpler answer. I think he's been watching some Ghost in the Shell or other sci-fi.
Spoiler:
Is he referring to trapping it in an infinite loop?

"The below statement is false.
The above statement is true.
Which statement is true?"


Spoiler:
That could work, theoretically. It still falls apart when you consider how advanced it could be. My professor didn't give us enough info. A human would (or at least, I would) say "Uh...I have no fucking clue". If it's advanced enough, the AI could say the same thing in order to preserve its identity.

Spoiler:
It wouldn't work at all. Our brains are computers too, and we don't get trapped in infinite loops, not unless we choose to. Who says the computer can't realize that the paradox is a paradox, and thus worth skipping? Or maybe it will do the loop, but isn't so poorly designed that it will waste all its resources on it and not be able to continue the discussion?

Spoiler:
That's what I was saying: it would work if the computer wasn't smart enough, but if it is, then the AI can disregard it like a human could.

MSTK
Posts: 123
Joined: Fri Oct 31, 2008 5:43 am UTC

Re: brain in a vat

Postby MSTK » Thu Aug 20, 2009 8:25 am UTC

You all must remember the theoretical limit of AI intelligence as gauged by the Turing Test, and assume that the professor means this:

For every possible question AND series of questions/statements (that comprise the conversation), there is, stored in an infinite memory, a pre-determined response.

Theoretically, it doesn't require infinite memory. Just vast orders of magnitude for not only every possible statement, but every possible arbitrary string of statements and responses.

The Land Before Time question falls apart because the AI would have already had this programmed into it.

However, this really isn't artificial intelligence. But this is sufficient to pass the test that the professor is putting in front of you.

User avatar
quintopia
Posts: 2906
Joined: Fri Nov 17, 2006 2:53 am UTC
Location: atlanta, ga

Re: brain in a vat

Postby quintopia » Thu Aug 20, 2009 9:51 am UTC

MSTK wrote:Theoretically, it doesn't require infinite memory. Just vast orders of magnitude for not only every possible statement, but every possible arbitrary string of statements and responses.


Um, actually it does require infinite memory to store responses to ALL questions humans can generate at the current time.* Furthermore, a human has to program all these responses even if there are only finitely many (because, if we assume that a human doesn't, the only alternative is that the responses were machine-generated, in which case we are technically actually talking to the machine that generated them, and thus your assertion that this machine is merely a device to look up responses in a table is meaningless), and considering the sheer number of responses that need to be programmed, it would probably take more physical memory than the matter in the universe can provide and more time to enter the responses than the human race can survive.

*This is because grammar allows us to generate an infinite number of questions, for example by asking meta-*questions (here * is the Kleene star) with an arbitrarily high level of abstraction. We can, however, reduce to a finite number of questions by adding a default answer ("Uh, I dunno man.") after using up all the matter and time in the universe to cover as many bases as we can.

ON THE OTHER HAND: Humans will not be interested in most of the possible questions that could be asked, and the programmers could just decide to ignore any such questions. They might be able to get off with a few centuries of hand-coding responses in this way.
MSTK wrote:The Land Before Time question falls apart because the AI would have already had this programmed into it.
How long do you think it would take for the programmers to hand-code opinions and emotional responses (and not necessarily the most common opinions and responses, since such would admit statistical analysis) to every scene in every movie in a way consistent with the personality they are designing for the AI? Keep in mind that thousands of new movies hit YouTube every day, and we are working under the assumption that the human and AI have access to the net.

TLDR: An AI consisting of merely an exponentially large table of responses that it can search with speed comparable to the time it takes a human to compose a reply (which would require careful data management and programming in and of itself) is extremely impractical and not really even worth considering.
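
To put a number on the "more physical memory than the matter in the universe" bit: even a crude lower bound blows up immediately. Assume transcripts are capped at a measly 100 characters over a 27-symbol alphabet (letters plus space), and take ~10^80 atoms in the observable universe; both figures are rough assumptions, but the gap is the point:

Code: Select all

from math import log10

# Back-of-envelope for the lookup-table "AI": count all transcripts of at most
# 100 characters over a 27-symbol alphabet and compare with the ~10^80 atoms
# in the observable universe. Both figures are crude assumptions.

ALPHABET = 27
MAX_LEN = 100
ATOMS_EXPONENT = 80  # ~10^80 atoms in the observable universe (rough figure)

transcripts = sum(ALPHABET ** n for n in range(1, MAX_LEN + 1))
print("transcripts ~ 10^%d" % int(log10(transcripts)))                # ~ 10^143
print("atoms       ~ 10^%d" % ATOMS_EXPONENT)                         # ~ 10^80
print("ratio       ~ 10^%d" % (int(log10(transcripts)) - ATOMS_EXPONENT))

So even with absurdly generous restrictions, the table needs vastly more entries than there are atoms to store them in.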

User avatar
sixes_and_sevens
Posts: 71
Joined: Thu Jun 25, 2009 3:15 pm UTC
Location: Birmingham, UK

Re: brain in a vat

Postby sixes_and_sevens » Thu Aug 20, 2009 10:39 am UTC

Not quite a solution to this, but an interesting idea: seeing how many times a second an alleged human can pass a captcha test.

Offering evidence of your human status 300,000 times in a minute might possibly strike some people as suspicious behaviour.
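
The same idea works on ordinary replies, not just captchas: log how fast the answers come back and ask whether the rate is humanly possible. A sketch along those lines (window is a hypothetical chat handle again, and the typing-speed floor is a made-up threshold):

Code: Select all

import time

# Sketch of the rate/timing tell: a correspondent who answers every probe in
# milliseconds is suspicious. `window` is a hypothetical chat handle with
# send()/receive(); the 3-characters-per-second floor is an assumed threshold.

def looks_superhuman(window, probes, min_seconds_per_char=1 / 3.0):
    too_fast = 0
    for probe in probes:
        window.send(probe)
        start = time.monotonic()
        answer = window.receive()
        elapsed = time.monotonic() - start
        if elapsed < len(answer) * min_seconds_per_char:
            too_fast += 1               # replied faster than a human could type it
    return too_fast == len(probes)      # consistently too fast -> probably the AI

The obvious counter, of course, is that the AI just waits a few seconds before answering, which is why this is a tell rather than a proof.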

User avatar
a1s
Posts: 45
Joined: Mon May 21, 2007 11:56 am UTC

Re: brain in a vat

Postby a1s » Thu Aug 20, 2009 11:35 am UTC

This isn't a foolproof plan, but I think you actually should ask both participants if they are an AI. Whoever says they are is the human (as it would be counterproductive for the AI to say this, but a human might just do it - I would :wink: ).

Also, if you have access to a spaceship, you can fly both of them to Saturn and let them each be the pilot for a time. Whoever tries to crush you with acceleration is the robot AI.

User avatar
EnderSword
Posts: 1060
Joined: Wed Feb 04, 2009 8:11 pm UTC

Re: brain in a vat

Postby EnderSword » Thu Aug 20, 2009 2:56 pm UTC

I'd suggest turning the tables on your Professor.

Tell him you bet him an A+ he can't answer his own question in a way we can't tear apart within 1 hour of him saying it.
WWSD?*
*what would Sheldon do?

User avatar
TauCeti
Posts: 37
Joined: Tue Oct 09, 2007 11:16 pm UTC

Re: brain in a vat

Postby TauCeti » Thu Aug 20, 2009 3:12 pm UTC

Certainly not perfect, but...
Spoiler:
If I were building such an AI, it'd be cored on a natural language processor (we don't have good ones yet), with an internet connection and a good search engine to make up for any intrinsic ignorance. Toss on something to write its own code, so it can edit the language processor on the fly, and a fictional "backstory" so it can answer questions about its "childhood" and you have something hard to crack using only text. Then I'd "just" need something to parse images, and something else to make it make human-like mistakes (also non-trivial, as anyone in the computer gaming industry can tell you).

If I were attacking my AI, I'd probably target the natural language processor first with something like:

"äré Ü â hµmαn"

Internally, all of those characters are just numeric character codes which don't contain any information about their shape. Parsing that sentence would require the AI to look up the shape of each character and test them against the shapes of standard English characters, and there's a chance that the designers didn't think to do that. Without previous messages from me, the AI doesn't have anything to go on for re-writing its parser to handle this sort of thing (unlike tricks like defining "zork" to be one).
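
If the designers did think of it, the fix is only a few lines, which is the scary part. A Python sketch of the defence (note it only strips accents; the µ and α look-alikes survive decomposition and then get dropped, so a proper fix would also need a hand-made confusables table):

Code: Select all

import unicodedata

# Sketch of the defence against accented/look-alike text: decompose the
# characters and drop the combining marks before parsing. This recovers
# "are U a hmn" from the message below -- it handles the accents, but the
# Greek alpha and the micro sign are simply lost, so a real parser would
# also want an explicit confusables table.

def strip_accents(text):
    decomposed = unicodedata.normalize("NFKD", text)
    without_marks = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return without_marks.encode("ascii", "ignore").decode("ascii")

print(strip_accents("äré Ü â hµmαn"))  # -> "are U a hmn"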

