hakadosh wrote: Yes, by efficiency I mean the ability to come up with the best possible solution for a problem.
'Best' in what sense? In life forms, the success measure is ultimately the ability to survive and reproduce. This manifests itself more immediately in a number of unpredictable, seemingly unrelated ways. Hunger and such, yes, but ultimately everything we do and feel -- our emotions, our morality, our bloodthirstiness, our fondness for knock-knock jokes -- is either a manifestation or a by-product of that success measure. Who's to say that an AI's success measure won't result in an equally diverse and unpredictable set of motivations?
hakadosh wrote: Why would we need an all-knowing general AI if we can create brutally efficient specific AIs and let them communicate with each other? Won't that make an extremely efficient yet general network of AIs?
Unfortunately, no. That would just be a collection of computer programs that can solve a specific, pre-determined set of problems. We already have that. Generality comes from being able to adapt to previously unseen problems with little or no human input.
hakadosh wrote: It depends on the definition of "machine". To me, a machine is anything (intelligent or not) created by an intelligent life form to perform specific tasks.
And since I do not believe that there is an intelligent being behind our creation (which, incidentally, was a freak chemical accident), we are not machines!
By 'machine', I just meant 'physical object that can interact with its environment'; I was making no claim about how we came into existence.
In any case, we are fairly general-purpose intelligences that emerged through an evolutionary search. As a result, we are not perfectly efficient; many problems can be solved much faster and more accurately by simpler, more specific machines. We are full of little hacks that work to our advantage in certain situations, but are counterproductive in others.
Most AI algorithms are full of approximations and heuristics, and are becoming even more so as we scale to larger problems and more general agents. Why would you expect them to be flawless? They will very probably have quirks of their own -- though those quirks are unlikely to be anything like ours.
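To make the 'approximations and heuristics' point concrete, here's a toy sketch (Python, entirely my own illustration -- not taken from any real AI system): a greedy heuristic for the knapsack problem. It's fast and usually decent, but the shortcut that makes it fast also gives it a predictable blind spot, which is exactly the kind of 'quirk' I mean.

```python
# Toy illustration: a greedy heuristic for the 0/1 knapsack problem.
# Fast and usually decent, but its approximation has predictable blind spots.

def greedy_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns total value packed."""
    # Heuristic: grab items in order of value density (value per unit weight).
    total = 0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if weight <= capacity:
            capacity -= weight
            total += value
    return total

# A case where the quirk shows: density-first packing is suboptimal here.
items = [(60, 10), (100, 20), (120, 30)]  # optimal for capacity 50 is 220
print(greedy_knapsack(items, 50))          # prints 160 -- fast, but wrong
```

Nothing deep here; the point is just that the approximation determines not only how fast the algorithm is, but exactly where and how it fails.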
What will they want? What will they do? What will their quirks be? What safeguards do we need to take? It all depends on how they work, and what utility function (success measure) we give them. Until we know that, this conversation is ridiculously premature. Right now, we don't even have an inkling. Present-day AI is still way too primitive to generalize from meaningfully. We know as much about strong AI as we do about alien intelligence -- which is to say, not very much at all.
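For a picture of what 'it all depends on the utility function' means in the simplest possible terms, here's a hypothetical sketch (all the names and numbers are made up): the exact same decision rule produces completely different behavior depending on which success measure you hand it.

```python
# Minimal sketch: one decision rule, three different utility functions.
# Everything here is hypothetical; real agents are vastly more complicated.

def best_action(actions, utility):
    """Pick the action that maximizes the given utility function."""
    return max(actions, key=utility)

# Each action is described by a couple of made-up features.
actions = {
    "explore": {"knowledge": 5, "safety": 5},
    "hide":    {"knowledge": 0, "safety": 9},
    "gamble":  {"knowledge": 8, "safety": 1},
}

curious  = lambda a: actions[a]["knowledge"]                    # values knowledge only
cautious = lambda a: actions[a]["safety"]                       # values safety only
balanced = lambda a: actions[a]["knowledge"] + actions[a]["safety"]

print(best_action(actions, curious))   # -> 'gamble'
print(best_action(actions, cautious))  # -> 'hide'
print(best_action(actions, balanced))  # -> 'explore'
```

Same machinery, three different 'personalities' -- which is why the choice of success measure, not the optimization itself, is where all the interesting questions live.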