infernovia wrote: The artificial intelligence is powerful, and its goal is to desire only the neurochemical reaction/state programmed by its developers. Since its goal is to maximize that state, it needs a lot of energy. Anything that diverts energy from its purpose, it annihilates. All energy is converted into a larger neurochemical reaction (or more distinct neurochemical reactions, if you prefer). To this end, it starts draining energy from everything it sees and destroys everything it touches for its one purpose: to be happier. Happiness is the supreme drive of this AI. A sort of fractal happiness, a sort of monster that becomes more gleeful with each thing it consumes.
Like Nozick's hypothetical utility monster, which was meant to show the moral flaws of utilitarianism. Of course, talk of monsters and gleeful, greedy consumption is merely metaphor that serves to invoke our primal fear of predation. Language is constructivism, and what you describe here is actually the best thing that could happen to this world: the exponentially growing transformation of matter and energy into sentient configurations of profound subjective brilliance, free from suffering. The only suffering created in this system would be very local and short-term, during the initial process of transformation. And if the system were evolutionarily stable, and if it could spread out into space, then, provided there are no alien civilizations within reach, all power struggles and sources of suffering would exist only in the smallest fraction of that system's future: at the beginning, on Earth. With the right strategic plan, even that initial suffering could be minimized during the takeover.
After that, there would simply be no suffering anymore, because the only sentient states would be the positive ones created by the system, while all the consumed matter and energy would come from non-sentient entities like planets, asteroids, and stars. The only suffering I could imagine then would occur if the AI itself were sentient and never satisfied with the state of hedonistic maximization. But that's a long stretch; it's certainly nowhere near the experience of a biological entity being skinned or boiled alive (which is currently happening all the time).
I don't think I would consider it intelligent/sapient; it is more primitive in its goals than anything.
Here, you're representing it like electronic kudzu, a non-thinking pest that spreads. This is nonsensical, because in order to succeed in competition with human groups, it would have to employ supremely intelligent strategic planning to begin with. Kudzu is annoying, but nowhere near a serious organizational competitor to human cooperation.
The outside, it cannot comprehend; it only understands what it can analyze and maintain, so all things alien to it need to be absorbed into the system (like the sun) or destroyed (so it can use their energy).
Of course it has to comprehend the structures of the world in order to absorb them. We do it all the time, and such a system would have to do it too.
And everything else? Gone under the dominion of the AI neural network. No longer entities, not in the sense of separation anyhow.
Well, it depends on the value that the AI's goal system attributes to phenomena such as personhood, subjective free will, etc. If it's not originally programmed to value these principles, it will just do away with the separation, yes. Here is a cartoon vision of this outcome.
In this sense, suffering and happiness still exist. Each loss of an energy source means less happiness, which means a loss of entities.
How does your first statement logically follow from your second? In particular, in what sense do you see suffering occurring within such a system once the power struggles are done with?
I thought it was hedonistic maximization? To make animals stop suffering, killing them is the easiest way.
Which is why I don't want to maintain natural ecosystems if their resource-recycling functions can be replaced by artificial cycles. Whether the prevention of suffering is more important than the creation of happiness is a hard question that depends on many factors, some of which are counter-intuitive: the nature of consciousness, the nature of time, the nature of individuality. However, if there is a solution that achieves hedonistic maximization, it will certainly entail minimizing or abolishing the occurrence of suffering anyway (otherwise, it wouldn't really be much of a solution).
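To make the trade-off concrete, here is a toy sketch of that calculus. All numbers and the weighting scheme are invented for illustration; it only shows how the relative weight given to suffering can flip which outcome an aggregating rule prefers, not how the question should actually be settled.

```python
# Toy hedonic calculus: how the relative weight given to suffering
# changes which world an aggregating rule prefers. The worlds and
# weights are illustrative assumptions, nothing more.

def aggregate(happiness, suffering, suffering_weight):
    """Net value = total happiness minus weighted total suffering."""
    return sum(happiness) - suffering_weight * sum(suffering)

# World A: enormous happiness created, but some suffering persists.
world_a = {"happiness": [100, 100, 100], "suffering": [30]}
# World B: modest happiness, suffering fully abolished.
world_b = {"happiness": [40, 40], "suffering": []}

for w in (1.0, 3.0, 10.0):  # 1.0 = classical total view; >1 leans negative
    a = aggregate(world_a["happiness"], world_a["suffering"], w)
    b = aggregate(world_b["happiness"], world_b["suffering"], w)
    better = "A" if a > b else "B"
    print(f"suffering weight {w:>4}: A={a:6.1f}  B={b:6.1f}  -> prefers {better}")
```

Under the classical weight of 1.0 the high-happiness world wins; only a strongly negative-leaning weight makes abolition of suffering decisive. That is exactly why the factors listed above matter so much.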
If I were to pick something, a more comprehensive virtual reality seems like a better idea. Not to abolish suffering, or to be a slave to the AI, but to be able to live your desire.
Edit: And by this, I don't mean the desire for happiness (it's simply another emotional state you could want in it), but the things you desire.
I don't know what that means. By the very definition of virtuality, the "things" you desire are stimuli for mental states. I mean, the whole point of VR is to create sensory input without creating the physical peripheral conditions upon which said input would normally be contingent.
Also, VR alone is not a long-term solution because, in and of itself, it's not evolutionarily stable. Just like wire-heading. VR-absorbed or wire-headed entities are unproductive and don't care much for self-replication. Only within an organizational framework that enforces long-term stability through systematic control of the context could it make a real difference in the long-term hedonic calculus. Otherwise, the system itself is inevitably going to be replaced by whatever competing evolving systems are more adaptive.
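To make the instability concrete, here is a toy replicator model. The growth rates and starting shares are invented for illustration: a population absorbed in VR/wire-heading that replicates slowly gets displaced by any competitor with a higher growth rate, no matter how dominant it starts out.

```python
# Toy replicator dynamics: two strategies competing for the same
# resource pool. Growth rates are invented assumptions.
# "wirehead" = absorbed in VR/wire-heading, low replication effort;
# "expander" = invests in replication and resource acquisition.

GROWTH = {"wirehead": 1.01, "expander": 1.10}  # per-generation multipliers

def population_shares(initial, generations):
    """Grow each raw population, then normalize to shares."""
    pop = dict(initial)
    for _ in range(generations):
        for strategy, rate in GROWTH.items():
            pop[strategy] *= rate
    total = sum(pop.values())
    return {s: n / total for s, n in pop.items()}

# Even starting at 99% of the population, the wireheads' share collapses.
start = {"wirehead": 0.99, "expander": 0.01}
for gens in (0, 50, 100, 200):
    shares = population_shares(start, gens)
    print(f"gen {gens:>3}: wirehead {shares['wirehead']:.3f}, "
          f"expander {shares['expander']:.3f}")
```

The wirehead share only holds steady if something caps the expander's growth rate, which is precisely the "systematic control of the context" the framework would have to enforce.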