Agreed for the most part. I suppose the appropriate qualifier for my statement would be that while philosophy of mind boils down to AI programming, AI programming does not boil down to philosophy of mind. That is, to create an artificial intelligence, you first need a sufficient grasp of what constitutes "intelligence" (and so forth) to know when you've met that goal. Philosophy of mind tells you the necessary conditions for a mind: what tests does something need to pass in order to reach that bar? But it doesn't tell you the sufficient conditions; that's a whole other project, one that goes beyond philosophy into the nitty-gritty details of AI programming. An analogous scenario: the philosophical thesis of naturalism, that all and only empirically observable physical phenomena constitute reality, still leaves the natural sciences with a huge open question: what empirically observable physical phenomena are there?
Multiple realizability leaves open the possibility that there are many different sets of jointly sufficient conditions for a mind, which brings me to my second point: understanding exactly how humans do it may not really be important, except for maintenance and repair of your humans. There may be much more straightforward ways (or other, equally complex but different ways) of getting the same functionality that is necessary for a mind.
An interesting consequence arises if you combine that with the thesis that a thing's qualitative, subjective experience is a product of its function: you could have beings that always respond to the same inputs with the same outputs as each other, but implement that higher-level function internally in different ways, and thus have vastly different experiences of the world, in ways they simply cannot ever communicate to each other. Like the old question of whether the color we both call "blue" really looks the same to both of us.
I think I've busted the Cleverbot-is-randomly-paired-users myth. A friend of mine just now complained that it responded to the question "What is the plot of Fifth Element?" with a summary of the Matrix, and showed me the chat transcript. As a test, I asked Cleverbot "What is the plot of the Matrix?" and got the exact same response back, character for character. There's no way he and I were randomly paired with the same person out of the over nine thousand chatting at that time, with that person copy/pasting the same response to any "What is the plot of _____?" inquiry.
For future reference, the response was "Humans are trapped inside of a computer program. Neo, the protagonist, can change the program as he is in it. He tries to free the humans from the machines who built the programme."
Note also the difference in spelling ("program" vs. "programme"), which makes me think this is a response composited from several previous users' responses, not one user's spontaneous natural-language reply. Either way, that's a pretty impressive ability for a bot. It also seems to be able to emulate humor; see the following transcripts:
Cleverbot wrote:Cleverbot: Tell me a joke.
User: A blonde, a brunette, a redhead, a rabbi, a priest, a minister, a Mexican, an American, a Canadian, a duck, an elephant, and a giraffe walk into a bar, and the bartender says 'What is this, some kind of joke?'.
That one could easily be a script (print "Tell me a joke.", wait for a response, print "Haha."). But then later I got this:
Cleverbot wrote:Cleverbot: I would tell you if I was a robot. Would you tell me if you were a robot?
User: If I was programmed to do so, certainly. Otherwise, probably not.
Cleverbot: You're fake.
User: So's your mom's tits.
Cleverbot: Ha ha ha ha nice one!
So it seems to be able to identify when the user means to be funny, in at least some circumstances, which is in and of itself pretty cool.
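For what it's worth, the composited-response behavior I'm speculating about above could be produced by a very simple retrieval scheme: log every prompt/reply pair from past conversations, and when a new input arrives, replay the stored human reply whose logged prompt looks most similar. Here's a minimal sketch of that idea (purely my guess at the mechanism; the log contents, function names, and word-overlap scoring are all made up for illustration, and Cleverbot's actual internals aren't public):

```python
# Toy sketch of a retrieval-based chatbot: replay the human reply
# whose logged prompt best matches the new input.
# (Speculative illustration only; not Cleverbot's actual code.)

def similarity(a, b):
    """Crude word-overlap (Jaccard) score between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def reply(log, user_input):
    """Return the stored reply whose prompt is most similar to user_input."""
    best_prompt = max(log, key=lambda p: similarity(p, user_input))
    return log[best_prompt]

# Hypothetical log mapping past prompts to the human replies they drew.
log = {
    "What is the plot of the Matrix?":
        "Humans are trapped inside of a computer program.",
    "Tell me a joke.":
        "A blonde, a brunette, and a redhead walk into a bar...",
}

# Both "plot of" questions share most of their words, so the bot
# replays the Matrix summary for either movie -- the observed bug.
print(reply(log, "What is the plot of Fifth Element?"))
```

Under this scheme the "Fifth Element" question matches the stored "Matrix" prompt on the shared words ("what is the plot of"), so the bot confidently serves the wrong movie's summary, which is exactly what my friend saw.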