TheGrammarBolshevik wrote:If I refer to a science czar, it is clear that I am not referring to a ruler of Russia. But if I say that Santa Claus exists, nobody would take me to mean that there is a tradition of giving gifts during the Christmas season in Western countries.
But I'm not referring to a tradition: I'm referring to an entity.
Suppose it were discovered that there was, in fact, a jolly fat man living at the North Pole, with elves, who gave presents to children. His name is Fred Bloggs. Would you say that I was incorrect to state "Santa Claus is real"?
Or to take it one step further: suppose there were an alien in orbit around the Earth with a mind-control field, one capable only of leading parents to buy gifts for their children and tell them the gifts came from someone else. When we discover it, would you object to characterising it as "the Santa Claus alien" (or even just Santa Claus)?
setzer777 wrote:Consciousness relies on centralized processing. All of the information being put into the system is brought together to form a single model of the world we act in. A vague concept shared by several minds cannot act in such a tightly unified manner.
I'm going to assume that you have no objection to the idea that, given enough people (~100 billion) and the right set of instructions given to each person, you could create a new consciousness (or duplicate an existing one) by simulating neural activity. (And if you do have an objection to that idea, then the obvious question is why you object to this but not to the idea that a brain can do the same.)
Since we are running a little short on people, it should be equivalent to have each person simulate 100 neurons, thus requiring only a billion people. We should still have a consciousness in our system.
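The partitioning step can be made concrete with a minimal sketch. Everything here is an illustrative assumption: the neurons are simple threshold units (nothing like real neurons), and each "person" just owns a block of 100 of them. The point is only that updating the network block-by-block gives exactly the same result as updating it all at once.

```python
import random

def step(state, weights, threshold=1.0):
    """Advance a network of simple threshold neurons by one tick.
    A neuron fires next tick if its weighted input from currently
    firing neurons meets the threshold."""
    n = len(state)
    return [
        1 if sum(weights[j][i] * state[j] for j in range(n)) >= threshold else 0
        for i in range(n)
    ]

def step_partitioned(state, weights, group_size, threshold=1.0):
    """The same update, but each 'person' owns a contiguous block of
    group_size neurons and updates only those. The union of their
    work is identical to the monolithic update."""
    n = len(state)
    new_state = [0] * n
    for start in range(0, n, group_size):  # one block per person
        for i in range(start, min(start + group_size, n)):
            total = sum(weights[j][i] * state[j] for j in range(n))
            new_state[i] = 1 if total >= threshold else 0
    return new_state

random.seed(0)
n = 300  # a toy network; each of 3 'people' simulates 100 neurons
weights = [[random.choice([0.0, 0.5, 1.0]) for _ in range(n)] for _ in range(n)]
state = [random.randint(0, 1) for _ in range(n)]

assert step(state, weights) == step_partitioned(state, weights, group_size=100)
```

Since the two updates are provably the same computation, whatever the monolithic system does, the partitioned one does too.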
Additionally, since it's a bit much to expect people to spend their whole lives (longer, in fact, given the relative speeds of neural activity and of our system) doing nothing but creating our pet consciousness, let's institute a rotation policy: every so often, we substitute a participant. The outgoing participant explains the system to a new person, gives them the current state of all their neurons, and then leaves. We should still have a consciousness.
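The handover step is just a state transfer. A toy sketch (the dictionary fields are my own hypothetical representation of "the instructions" and "the current state of their neurons"):

```python
def hand_over(outgoing):
    """Return a replacement participant carrying the same role and
    the same neuron state as the person leaving."""
    return {
        "instructions": outgoing["instructions"],         # how to play the role
        "neuron_state": list(outgoing["neuron_state"]),   # copy of current state
    }

veteran = {"instructions": "fire if weighted input >= threshold",
           "neuron_state": [0, 1, 1, 0]}
rookie = hand_over(veteran)

# From outside, the rookie is indistinguishable from the veteran:
assert rookie["neuron_state"] == veteran["neuron_state"]
assert rookie["instructions"] == veteran["instructions"]
```

Nothing the rest of the system can observe changes when the swap happens, which is why the rotation policy shouldn't matter.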
To speed up our system, we also decide to have each participant internalise their neurons: now they "simply" have a model in their head that is equivalent to their neurons, and they know which connections to other people's neurons they "should have". (That is, their own neurons become a higher-level model, and all that they strictly simulate now are the external connections.) This step is probably more controversial, but as far as I can see, we should still have a consciousness. After all, from outside each person, the system looks the same, and it still behaves the same.
And finally, since there are a lot of connections between people, we create a code for their interactions. (Or, like many people performing repetitive tasks, they invent it themselves.) When Alice and Bob need to communicate, Alice can tell Bob "17", and he knows that his 17th neuron was just stimulated. Additionally, Bob can say "I was about to talk to Claire; do you have anything to tell her?", and Alice can say "Yes, please tell her '23'". Perhaps there are unique codes to indicate that multiple neurons are firing at the same time, and so on. The same interactions are still taking place; they are just in a more compact representation. We should still have a consciousness.
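The code the participants invent is just a compression scheme for cross-person connections. A minimal sketch (the comma-separated format is my own hypothetical choice; any reversible encoding would do):

```python
def encode_firings(indices):
    """Pack a set of simultaneous firings into one short message,
    e.g. {17, 23} -> '17,23'."""
    return ",".join(str(i) for i in sorted(indices))

def decode_firings(message):
    """Recover the stimulated-neuron indices from a message."""
    return {int(tok) for tok in message.split(",") if tok}

# Alice tells Bob that his 17th and 23rd neurons were just stimulated:
msg = encode_firings({17, 23})
assert decode_firings(msg) == {17, 23}

# Relaying: Bob carries Alice's message to Claire unchanged.
for_claire = {"from": "Alice", "payload": encode_firings({23})}
assert decode_firings(for_claire["payload"]) == {23}
```

Because encoding and decoding are exact inverses, no information about the underlying neural interactions is lost; only the representation gets more compact.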
Now, functionally, this final system is a good model for Santa today: we have a bunch of people who have ideas about gifts from Santa (their internalised models), who discuss the concept (children with parents, parents with other parents, children with other children, advertisers with everybody...) (their "encoded neural interactions"). Shouldn't we expect this, too, to create a consciousness?