schapel wrote: On second reading, was your post about consciousness? That's a different subject altogether, because intelligence can be demonstrated, but consciousness cannot. I can prove to you I'm smart, but I can never prove to you I'm conscious.
Consciousness is the big topic of debate in philosophy of mind, which is what rhhardin spoke of. There are probably some people somewhere in the history of that debate who at one point believed building a true mind comparable to a human mind was just a matter of sufficient processing power or some such, some quantitative threshold where we just need "enough computer" and bam, you've got a mind. But to my knowledge that's not a position held by any contemporary philosophers of mind, so disproving it (by the existence of our amazingly powerful and yet still in many ways dumb computers today) wouldn't shut anybody in that field up. The debate there is just over whether it is possible in principle, and if it is, what exactly a machine must be capable of doing before we will unambiguously say "yeah, that's a genuinely thinking machine on par with you and me".
Also, whether consciousness can be demonstrated depends on what you mean by "consciousness". The two broad senses in use today are "access consciousness" and "phenomenal consciousness" (the problems surrounding them being, respectively, the "easy" and "hard" problems of consciousness). Access consciousness is pretty uncontroversial: if you can tell me how you're feeling, what you're thinking, what you think caused you to think or feel that way, and especially what you think or feel about what you think or feel ("I'd rather not feel like this", "I know I shouldn't think that", etc.), then you have access consciousness, and you just demonstrated it by telling me those things. You have access to information about your own internal mental states. To some philosophers of mind that's enough, and they dismiss any other sense of the word "consciousness" as incoherent. (Those ones will usually say that it's clearly possible in principle to build a conscious machine, and the rest is details.)
Others still want to answer the harder problem of phenomenal consciousness: if you build a machine that responds to inputs and outputs exactly like a human and can report on its own internal states just like a human can, does a rose still smell just as sweet to it? Does it experience the same redness we do when looking at it? Can it properly experience smell or sight at all, or is it merely responding to chemical and electromagnetic stimuli with the same internal state-changes and consequent behavior as a human would? That's something that, if there is any answer to it at all, if the question even makes sense, may be impossible to know. But then it's just as impossible to know about other humans as it is about machines, so that's kind of irrelevant to questions about AI.