Nihilism, The Last Man

For the serious discussion of weighty matters and worldly issues. No off-topic posts allowed.

Moderators: Azrael, Moderators General, Prelates

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Mon May 17, 2010 7:04 am UTC

SnakesNDMartyrs wrote:Why do you want to eliminate suffering?

[+ cheap misrepresentations from infernovia]

I suggest we postpone all further discussion of this topic until you guys know what suffering actually is, by making the experience yourselves. Considering you're probably sitting comfortably and well-fed behind your computer screens, without any serious personal risk, this knowledge may never reach you.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: Nihilism, The Last Man

Postby infernovia » Mon May 17, 2010 8:39 am UTC

Considering you have no idea what you are talking about, it doesn't really matter. You seem to be implying that you know what suffering really is, and I highly doubt it. I am not one to milk the suffering thing; I know I have nothing to compare to the tragedies of the Holocaust, war veterans, etc. But do you? Maybe second-hand experience at best.

Your solution to the suffering problem is "volition" - that is, whether a person wants to do it (since you understand that the same situation can be different depending on the person). But how can you want to do something that you haven't experienced before? A baby could never learn to walk, because its first two steps would confront it with failure. Every single negativity would have to be eliminated from your world, and with it every single expectation of failure. And if the AI, aka God, just keeps you milked with happy chemicals so that no one can ever feel bad or neutral, then why would you even bother moving?

Regardless, it seems clear to me that my point was made. The world you have created is filled with hyper-paranoia, a rejection of the cruelty of life (and, along with it, of its affirmation), and significant damage to the psyche. And if you are fine with this, then that's ok, as long as you understand what you have sacrificed for this type of happiness. That you have sacrificed, and that by your own definition you have committed one of the greatest atrocities known in history with this plan, is perfectly clear to me.

Edit: Not to mention, if the goal is that we not suffer, it seems that America, for the significant majority, is what you were looking for. That it doesn't seem to be what you want shows me that the problem is not just "suffering," but banality also. But perhaps you will never see that; if you had, I could actually have gotten behind some of this stuff.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Mon May 17, 2010 4:42 pm UTC

infernovia wrote:...happy chemicals...

I don't care which implementation mode is chosen. It's an engineering problem, and there are multiple solutions to it. I personally like the brains-in-a-vat solution: modular, clean, scalable, easily controllable, resource-efficient, and allowing the full range of potential experiences that human-like brains can undergo. And hey, you're right - they wouldn't even have to move anymore (unless they wanted to, in virtual worlds).

but banality also

Stimulate the right regions of the brain in the right way, and everything feels deeply profound and interesting. All the time, if that's the goal.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: Nihilism, The Last Man

Postby infernovia » Mon May 17, 2010 5:12 pm UTC

How would you get the energy to the brains? Magic teleportation with no chance of bacterial infection? How matrix-like lol.

The virtual is a solution for many things, I won't disagree with that. Virtual war to remove violence, virtual interaction to remove actual interaction etc.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Mon May 17, 2010 7:41 pm UTC

infernovia wrote:How would you get the energy to the brains? Magic teleportation with no chance of bacterial infection? How matrix-like lol.

How do you do it today? You have a circulatory system with nutrients. In theory, this could be a closed artificial system that is kept sterile from bacteria.

The virtual is a solution for many things, I won't disagree with that. Virtual war to remove violence, virtual interaction to remove actual interaction etc.

Right. It's not certain that computing power will live up to the need, though; fully immersive VR will be computationally expensive. But even if it's impossible, you can still implement this: pre-render an experience, including the decisions of the person who experiences it, like a movie. Then stimulate the brain in such a way that it thinks it acts freely, but all the outcomes are pre-defined. That would work even if 'true' VR is too expensive, and subjectively it would make no difference.
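To make the pre-rendering idea concrete, here is a toy sketch in Python (the scenes, the three-step "script", and all the names are purely my own illustration, not any real neuro-API): the experience is a fixed list of scenes, each already paired with the choice the subject will be stimulated into believing they made.

```python
# Toy model of a pre-rendered experience. Every "decision" is fixed in
# advance; playback never branches, it only records what the subject
# is made to feel they chose. Purely illustrative.

script = [
    ("You see a fork in the road.", "go left"),
    ("A door stands before you.", "open it"),
    ("Inside, a feast awaits.", "sit down"),
]

def play(script):
    """Run the 'movie', pairing each scene with its scripted choice."""
    log = []
    for scene, scripted_choice in script:
        # In 'true' VR the world would branch on the subject's input here;
        # in the pre-rendered version, the brain is stimulated so that
        # scripted_choice simply feels freely chosen.
        log.append((scene, scripted_choice))
    return log

for scene, choice in play(script):
    print(f"{scene} -> you 'decide' to {choice}")
```

Subjectively the playback would be indistinguishable from choosing; structurally it is just iteration over a fixed list.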

SnakesNDMartyrs
Posts: 143
Joined: Tue Mar 16, 2010 11:45 pm UTC

Re: Nihilism, The Last Man

Postby SnakesNDMartyrs » Mon May 17, 2010 10:00 pm UTC

Hedonic Treader wrote:But even if it's impossible, you can still implement this: pre-render an experience, including the decisions of the person who experiences it, like a movie. Then stimulate the brain in such a way that it thinks it acts freely, but all the outcomes are pre-defined. That would work even if 'true' VR is too expensive, and subjectively it would make no difference.


And if the subject changes his mind half way through the movie? How can you possibly know what decisions the person will make without their active input?

What if in the virtual reality world I try to inflict some 'suffering' on myself? Would the AI prevent me from doing so - in turn causing suffering through the loss of freedom?

I tried to point out the irony in your plan but I think it was missed. You've chosen to eliminate 'suffering' because your brain has been conditioned via evolution to judge suffering as inherently 'bad'. Yet there is nothing objectively bad about suffering and your goal to eliminate suffering has no transcendent value. This is just one of the primitive, Darwinian judgements that you've been so against. You're actually committing the naturalistic fallacy:

- My mind, conditioned by the natural process of evolution, tells me that suffering is bad.
- Therefore, ending suffering must be good.
- Therefore, my ethics will be based on the elimination of suffering.
An attempt to make a living at online poker: PokerAdept

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Mon May 17, 2010 11:02 pm UTC

SnakesNDMartyrs wrote:And if the subject changes his mind half way through the movie? How can you possibly know what decisions the person will make without their active input?

The machine controls the brain's decision by stimulating it in certain ways. There are already some hints from neuroscience that this is possible without the brain realizing that it didn't make the decision freely.

What if in the virtual reality world I try to inflict some 'suffering' on myself? Would the AI prevent me from doing so - in turn causing suffering through the loss of freedom?

I still don't think we're using the same concept of suffering. Some minor unpleasantness/frustration doesn't really show on my moral radar.

I tried to point out the irony in your plan but I think it was missed. You've chosen to eliminate 'suffering' because your brain has been conditioned via evolution to judge suffering as inherently 'bad'. Yet there is nothing objectively bad about suffering and your goal to eliminate suffering has no transcendent value. This is just one of the primitive, Darwinian judgements that you've been so against. You're actually committing the naturalistic fallacy:

- My mind, conditioned by the natural process of evolution, tells me that suffering is bad.
- Therefore, ending suffering must be good.
- Therefore, my ethics will be based on the elimination of suffering.

This is an interesting point. But in this general sense, you can use it against any possible kind of judgment, losing all meaningful values for decision-making. Also, the naturalistic fallacy lies in declaring something as good/bad because it is natural/unnatural. This is not what I do with suffering. I declare it as bad because it feels bad - that this feeling originally evolved in a natural way is not the ethically relevant part of it. You can hardly undergo an experience of severe suffering and still say it's not bad - sure, you can say it's only subjectively bad, not objectively, but then again, everything else is subjective too, there are no objective values.

SnakesNDMartyrs
Posts: 143
Joined: Tue Mar 16, 2010 11:45 pm UTC

Re: Nihilism, The Last Man

Postby SnakesNDMartyrs » Tue May 18, 2010 12:42 am UTC

Hedonic Treader wrote:The machine controls the brain's decision by stimulating it in certain ways. There are already some hints from neuroscience that this is possible without the brain realizing that it didn't make the decision freely.


I would have thought that part of 'making a decision' would involve some kind of conscious input; are you suggesting that your machine will simply tell your mind that it is freely choosing when it really isn't? This is worse than any kind of suffering that I can imagine - being enslaved without ever knowing it.

I still don't think we're using the same concept of suffering. Some minor unpleasantness/frustration doesn't really show on my moral radar.


Ok, more to the point - will your machine employ the greater good argument? Would it allow immense suffering for 1000 people in order to prevent immense suffering for 1001 people?

This is an interesting point. But in this general sense, you can use it against any possible kind of judgment, losing all meaningful values for decision-making. Also, the naturalistic fallacy lies in declaring something as good/bad because it is natural/unnatural. This is not what I do with suffering. I declare it as bad because it feels bad - that this feeling originally evolved in a natural way is not the ethically relevant part of it. You can hardly undergo an experience of severe suffering and still say it's not bad - sure, you can say it's only subjectively bad, not objectively, but then again, everything else is subjective too, there are no objective values.


You don't lose all meaningful values for decision-making; saying something feels bad is meaningful and can certainly be used in decision-making. The problem is when you try to base an ethical framework on a person's feelings of what is good and bad. You feeling bad because there is suffering in the world is part of nature - a totally natural process created and invoked by nature itself; to base your ethical argument on this feeling is to commit the naturalistic fallacy. I don't see any way around this, as you freely admit that there are no objective values.

Vaniver
Posts: 9422
Joined: Fri Oct 13, 2006 2:12 am UTC

Re: Nihilism, The Last Man

Postby Vaniver » Tue May 18, 2010 6:11 am UTC

Hedonic Treader wrote:You can hardly undergo an experience of severe suffering and still say it's not bad - sure, you can say it's only subjectively bad, not objectively, but then again, everything else is subjective too, there are no objective values.
Yes, I can. I've gone through a number of horrible experiences and emerged a better person; I do not regret them in the slightest, and while they were unpleasant in the short term that doesn't make them bad.
I mostly post over at LessWrong now.

Avatar from My Little Pony: Friendship is Magic, owned by Hasbro.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Tue May 18, 2010 7:17 am UTC

SnakesNDMartyrs wrote:I would have thought that part of 'making a decision' would involve some kind of conscious input; are you suggesting that your machine will simply tell your mind that it is freely choosing when it really isn't? This is worse than any kind of suffering that I can imagine - being enslaved without ever knowing it.

You sure that this is worse than being tortured? Anyway, the decision-manipulation was just a hack I described in case VR turns out to be computationally too expensive. The ideal would be volition, of course.

Ok, more to the point - will your machine employ the greater good argument?

That depends on whether it is programmed to employ the greater good argument. In practice, I find it hard to believe that any large-scale (political or otherwise) decision-making doesn't involve the greater good argument in some way. But compare it to the status quo - how many sentient entities are undergoing non-consensual frustration and suffering in the darwinian game every day?

the problem is when you try to base an ethical framework on a person's feelings of what is good and bad.

Yeah, well, what else would you base your ethical framework on? And without one, how could you think decision-making could be preserved?

Vaniver wrote:Yes, I can. I've gone through a number of horrible experiences and emerged a better person; I do not regret them in the slightest, and while they were unpleasant in the short term that doesn't make them bad.

Yeah, maybe. I don't know your life's situations, but there have been and continue to be kinds of torture that I don't think are worth it, and the entities that are forced to experience them usually don't agree to them in order to emerge as a "better person". And quite often they don't emerge as anything, and sometimes they emerge mentally destroyed. The question is whether it is worth it in general (as opposed to in a particular biography), and considering that the world's existence and nature are basically chance accidents, I don't think the answer is as straightforwardly yes as people would like to believe.

Vaniver
Posts: 9422
Joined: Fri Oct 13, 2006 2:12 am UTC

Re: Nihilism, The Last Man

Postby Vaniver » Tue May 18, 2010 6:33 pm UTC

Hedonic Treader wrote:Yeah, maybe. I don't know your life's situations, but there have been and continue to be kinds of torture that I don't think are worth it, and the entities that are forced to experience them usually don't agree to them in order to emerge as a "better person".
The point is not that suffering is always good; the point is that suffering is not always bad. What distinguishes the adult and the child is the adult's willingness to suffer when it counts.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Tue May 18, 2010 8:16 pm UTC

Vaniver wrote:The point is not that suffering is always good; the point is that suffering is not always bad. What distinguishes the adult and the child is the adult's willingness to suffer when it counts.

Right. Pain, fear etc. obviously have a signalling function that make them valuable in the struggle for existence. Of course, this implies that there is value in existence itself.

In fact, this is right on track with hedonism, as long as the positive aspects of life are worth the negative ones. And of course, it's consistent with thinking about a system that would drastically change the balance by artificial means. Whether it would be worth the risk, and what sentient life could lose in the process, that's a different question. "Murder of the psyche" would not necessarily be a part of it, just as listening to mp3s instead of real-world orchestras doesn't murder the value of music.

Vaniver
Posts: 9422
Joined: Fri Oct 13, 2006 2:12 am UTC

Re: Nihilism, The Last Man

Postby Vaniver » Wed May 19, 2010 12:46 am UTC

Hedonic Treader wrote:Right. Pain, fear etc. obviously have a signalling function that make them valuable in the struggle for existence. Of course, this implies that there is value in existence itself.
The point is that there are more valuable goals than existence, and that many of those goals require suffering for more than signalling. Relegating suffering to the feedback of a hedonic servomechanism seeks to reduce men to beasts, not to make them more than what they are.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Wed May 19, 2010 12:25 pm UTC

Vaniver wrote:The point is that there are more valuable goals than existence, and that many of those goals require suffering for more than signalling.

Can you give an example of a (non-arbitrary) goal that is more valuable than existence, and that does require suffering, but for other reasons than its signalling function?

Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7594
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: Nihilism, The Last Man

Postby Zamfir » Wed May 19, 2010 5:42 pm UTC

Hedonic Treader wrote:
Vaniver wrote:The point is that there are more valuable goals than existence, and that many of those goals require suffering for more than signalling.

Can you give an example of a (non-arbitrary) goal that is more valuable than existence, and that does require suffering, but for other reasons than its signalling function?


"Non-arbitrary" is doing the heavy lifting there. It's unclear to me why existence has much non-arbitrary value in itself, without some other (possibly arbitrary) values thrown in.

To illustrate my point, I wrote a little program. I wanted to call it "Paradise", but it is now called "What time is love?" *

Code:

# America.py

def Life():
    happiness = 0
    while Liberty():
        happiness = Pursuit_Of_(happiness)

def Liberty():
    return True

def Pursuit_Of_(target):
    return target + 1

Life()

In what sense is running this program morally inferior to your brains in a vat? Or hooking up a thousand rats to a glucose-heroin mixture?



* Did the KLF ever make it to the US? I think at some point there will be naked women in the video.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Wed May 19, 2010 9:07 pm UTC

Zamfir wrote:In what sense is running this program morally inferior to your brains in a vat? Or hooking up a thousand rats to a glucose-heroin mixture?

I have no idea why everyone associates me with heroin now. :mrgreen:

Seriously, I already pointed out that an addictive drug with side-effects is actually a very poor approach to hedonism.

Why is your algorithm inferior? Because we don't value it for the reason we pretend to value it. We don't run these treadmills for the sake of the treadmills. They don't have deep meaning for us. That's just something we tell ourselves to get through the day.

We run the treadmills for the droplets of hedonic value they tend to excrete when things go right. And often enough we run them simply because we happen to find ourselves in them. I have to pay my rent, therefore I find a job; I have to keep my job, therefore I do project X when my boss tells me to; I have to succeed at project X, therefore I have to solve problem Y. Is there intrinsic value in solving problem Y, succeeding at project X, keeping your job, or paying your rent? No, there isn't.

When things go well, the treadmill can be fun. Then you have hedonic value. When things go badly, the treadmill can be very unpleasant. Otherwise, we simply run them because we haven't died yet.

So the hedonic value is the only real benefit your algorithm grants us. And I think that hedonic output could be vastly boosted by an artificial system that is specifically designed for hedonism. And the unpleasant parts (suffering) could be left out; I don't see why anyone would want to keep them. Under the condition that the system actually works, of course.
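The treadmill argument can be caricatured in a few lines of Python (the numbers are completely arbitrary, chosen by me only to make the comparison concrete): the treadmill emits occasional droplets of hedonic value and occasional unpleasant stretches, while a system designed for hedonism has steady positive output and no negative terms.

```python
def treadmill(cycles):
    # Mostly neutral grind: a small droplet when things go right,
    # a larger cost when they go badly. Toy numbers, nothing more.
    total = 0
    for i in range(cycles):
        if i % 5 == 0:       # now and then, things go right
            total += 1       # a droplet of hedonic value
        elif i % 7 == 0:     # now and then, things go badly
            total -= 2       # an unpleasant stretch
    return total

def designed_system(cycles):
    # A system built for hedonism: steady positive output,
    # no negative terms at all.
    return 3 * cycles

print(treadmill(35), designed_system(35))
```

Under these made-up rates the treadmill can even net out negative over a run, which is exactly the asymmetry the argument points at.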

SnakesNDMartyrs
Posts: 143
Joined: Tue Mar 16, 2010 11:45 pm UTC

Re: Nihilism, The Last Man

Postby SnakesNDMartyrs » Wed May 19, 2010 11:21 pm UTC

Hedonic Treader wrote:Seriously, I already pointed out that an addictive drug with side-effects is actually a very poor approach to hedonism.


Happiness is, in a way, an addictive drug with side-effects. Also, what about 'pleasure saturation'? Our brains aren't supposed to be in a constantly 'happy' state and just like a heroin addict you require more and more 'hits' to get the same amount of pleasure. Taken to its logical conclusion I imagine you could actually cause a person to no longer be able to feel happiness.

Pleasure is a treat to reinforce what our brain sees as good behavioral patterns, it is a means to an end and not the end itself.

Vaniver
Posts: 9422
Joined: Fri Oct 13, 2006 2:12 am UTC

Re: Nihilism, The Last Man

Postby Vaniver » Thu May 20, 2010 4:06 am UTC

Hedonic Treader wrote:Can you give an example of a (non-arbitrary) goal that is more valuable than existence, and that does require suffering, but for other reasons than its signalling function?
Give? No. I can tell you the things without which I would not feel fit to live, and the things that I pursue, but I cannot make you value them, or make them not seem arbitrary. If you do not feel a hunger to thrive in your bones and possess a willingness to act on it, I cannot give those to you, only hope you discover them.

I should repeat and expand what I said to infernovia: it is possible to do hedonic calculus correctly. The terms used in it, however, are often misleading and the usefulness of the math is limited; an uncertain life demands an uncertain approach. A life spent attempting to avoid suffering is shameful, cowardly, and unsatisfying.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Thu May 20, 2010 6:12 pm UTC

SnakesNDMartyrs wrote:Happiness is, in a way, an addictive drug with side-effects. Also, what about 'pleasure saturation'? Our brains aren't supposed to be in a constantly 'happy' state and just like a heroin addict you require more and more 'hits' to get the same amount of pleasure. Taken to its logical conclusion I imagine you could actually cause a person to no longer be able to feel happiness.

That sounds deep and zen-like but is in fact a testable scientific hypothesis, and it may well be false. There are some interesting perspectives on this topic in the section about wireheading in this text.

Pleasure is a treat to reinforce what our brain sees as good behavioral patterns, it is a means to an end and not the end itself.

From whose perspective? From our selfish genes' perspective? Or from ours? To me, it absolutely is an end in itself.

Vaniver wrote:A life spent attempting to avoid suffering is shameful, cowardly, and unsatisfying.

When I talk about the abolishment of suffering, I don't mean avoiding all risks in a contemporary human life; I mean a technological large-scale solution in the long run. The point here is to distinguish between the goal itself and the practicalities of pursuing it. And "shameful" and "cowardly" are just statements of personal judgment - if I wanted to adopt a risk-averse strategy in my life to prevent as much suffering to myself as possible, I wouldn't care about such judgments from other people.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: Nihilism, The Last Man

Postby infernovia » Sun May 23, 2010 8:19 am UTC

Yeah, you wouldn't care, but it's still evaluated as cowardice. Like, a dog doesn't care that it's a dog doing doggy things, but you can still say that it is dog-like behavior.

I was reading a few of your links... and I gotta say, not bad. It's a lot more psychologically complete than what you have been explaining; they do understand a few of the issues. There are still a few dumb things though: at one moment they are talking about being filled with wonder at every moment (it's called "sublime"), yet in the next sentence they talk about intelligence. Boredom is one of the key facets of the intelligent individual (or the powerful individual); otherwise they might get stuck playing minesweeper all day. It's why they move up to bigger and better things. There are a lot more things like that, especially when they turn to the general idea rather than the specifics.

Yeah man, genetic engineering, intelligence-increasing pills, simulations, the virtual etc. don't sound bad. I just think the future they are imagining is a bit different from the future they are describing. I also think that the psychology is incomplete in most areas, and for the most part they should just stick to one goal rather than going for everything and the kitchen sink, because I already see contradictions. In any case, the algorithm for hedonistic maximization seems to work even better with rats and monkeys; I am even more sure of this than before. I hope you understand why this will be anti-intelligence (murder of the psyche).

Now, the role of the intelligent people seeking pleasure in their capacity and still maintaining intelligence, that is a more complicated subject than what "hedonistic maximization" entails.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Sun May 23, 2010 9:23 am UTC

I mostly agree with your criticisms. The "gradient solution" David Pearce suggests aims at a win-it-all-and-lose-nothing strategy. I think the reason is that he believes in the potency of future technology, maybe to an unrealistic degree. It also makes the strong assumption that you can take aversive experience away from a sentient motivational system and still have it function perfectly adaptively. I have my doubts that this will work out, and even if it did, the question of long-term stability and organization remains open. This is why I end up talking about brains in vats and high-tech systems of control while he talks about posthuman superbeings living in paradise. His vision is nicer, of course; it doesn't really entail hard decisions about trade-offs.

infernovia wrote:Yeah, you wouldn't care, but it's still evaluated as cowardice. Like, a dog doesn't care that it's a dog doing doggy things, but you can still say that it is dog-like behavior.

As many times before, you've got your categories mixed up. Dog-like behavior is a scientific category; "cowardice" is the social status evaluation of tribal hunter-gatherer/warrior males in relation to other males. The purpose is twofold: 1) to signify superior hierarchy status (which was highly relevant to our ancestors' fitness function), and 2) to enforce the vitality of the tribe, since in hunter-gatherer/warrior situations, pain-avoiding individuals are dead weight for the group.

This is why it's a part of our psychology, and this is the reason people today still engage in these categories even if they don't share ancestral tribal interests - you find it in internet debates between people living on different continents, often distorting the content of the topics.

And as I pointed out earlier, these categories are Darwinian categories, and in the long run, Darwinian logic will be incompatible with a hedonic/utilitarian calculus ethic. This is very likely true from the POV of game theory, and it will probably become more salient in the future, because the question "What should we do with all this powerful technology?" will make it an inevitable discussion. I think this discussion will also reveal very unpleasant trade-offs between objective individual freedom and general happiness, protection from suffering, and existential-risk mitigation.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: Nihilism, The Last Man

Postby infernovia » Mon May 24, 2010 4:32 pm UTC

As many times before, you've got your categories mixed up. Dog-like behavior is a scientific category; "cowardice" is the social status evaluation of tribal hunter-gatherer/warrior males in relation to other males. The purpose is twofold: 1) to signify superior hierarchy status (which was highly relevant to our ancestors' fitness function), and 2) to enforce the vitality of the tribe, since in hunter-gatherer/warrior situations, pain-avoiding individuals are dead weight for the group.

So the statement "he looks like an ape, and has the brain of one too" isn't establishing any superiority hierarchy?

But hey man, as long as you are fine with your calculus using rats (or w/e superanimal can deal with bacteria) and not humans, I am cool. That you established a goal, I am ok with; but what your goal actually does to the human psyche, that is still a valid place for criticism. Anyway, the model you are trying to establish is not actually "hedonistic maximization" anymore but "hedonistic maximization with no animals but humans." That the second clause makes the first more complex to establish is clear; that it also contradicts the first is also clear to me. I am also asking why the second clause is even brought up in a brain-in-a-vat type model, which I think is a valid question to ask.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Tue May 25, 2010 3:38 pm UTC

infernovia wrote:But hey man, as long as you are fine with your calculus using rats (or w/e superanimal can deal with bacteria) and not humans, I am cool. That you established a goal, I am ok with; but what your goal actually does to the human psyche, that is still a valid place for criticism. Anyway, the model you are trying to establish is not actually "hedonistic maximization" anymore but "hedonistic maximization with no animals but humans."

I didn't write that it has to be humans. The species is not the most important factor. In fact, it is unlikely that contemporary human nature would be unchanged in such a future; it would be a post-human future almost by definition.

What your criticism aims at is the destruction of sophistication, of intelligence, self-awareness, depth of personal narrative. And I agree, these are valuable. If they can be maintained - or even enhanced - without undermining the supergoal of hedonism, that is to be preferred. One could imagine a matrix-like system that is highly social, interactive, and contains modes of experience and narrative depth that humans currently are unable to experience. While at the same time, the system's organizing principle remains stable, and experiences such as torture, agony, prolonged involuntary distress etc. are systematically prevented. This is certainly not impossible.

However, if there is a trade-off between sophistication and hedonism, I'd put the priority on hedonism. I'd rather be an unintelligent being living a profoundly positive life than a superintelligence undergoing systematic torture. The abolishment of severe agony from this world takes precedence over other ethical goals. This is something I would argue for, yes.

dandelionburner
Posts: 9
Joined: Wed May 26, 2010 12:09 am UTC

Re: Nihilism, The Last Man

Postby dandelionburner » Wed May 26, 2010 12:32 am UTC

You should really check that quote. I'm not so sure Zarathustra wrote that. Also, I'm almost positive that the second part of that piece is not written by the same author.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: Nihilism, The Last Man

Postby infernovia » Wed May 26, 2010 2:26 am UTC

http://praxeology.net/zara.htm

http://nietzsche.thefreelibrary.com/Thu ... hustra/2-1

http://www.literaturepage.com/read/thus ... ra-20.html
Probably not the preferred translation, but I don't think it's off. If you have a better translation, I will be happy to put it up.

However, if there is a trade-off between sophistication and hedonism, I'd put the priority on hedonism. I'd rather be an unintelligent being living a profoundly positive life than a superintelligence undergoing systematic torture. The abolishment of severe agony from this world takes precedence over other ethical goals. This is something I would argue for, yes.

False dichotomy. The superintelligence's biggest concern would be boredom, as torture is simply not guaranteed (so the appropriate portrayal would be a superintelligence at risk of torture).

I am not sure what you are talking about when you say "narrative depth." Maybe complex social interaction, beyond the scope of what we have currently? Or do you mean living through a story that is more dramatic than the past events of your lifetime? And how do you propose to construct this without the threat of evil?

dandelionburner
Posts: 9
Joined: Wed May 26, 2010 12:09 am UTC

Re: Nihilism, The Last Man

Postby dandelionburner » Wed May 26, 2010 2:35 am UTC

I don't trust your sources. The first part of that verse does not match the second. And don't direct me to a corrupted wiki page either.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: Nihilism, The Last Man

Postby infernovia » Wed May 26, 2010 7:40 am UTC

Ok, so put up another translation?

I put up three sources with no significant differences in meaning between the three sites (but with different translations). I got the first link from an avid Nietzsche reader and I stand by his opinion since he has read a shit ton of Nietzsche's work (he even read all the letters), plus I know he at least has a solid understanding of the basics. I deleted most of the double spaces because they would make the quote too long. What else is there? Oh, maybe there is something wrong with the prose...

Well, tell me what is wrong with it or make a counterpoint to it. And unless the translation alters the meaning, I don't really care about it, and it isn't in the scope of the thread.

http://books.google.com/books?id=xD-gSp ... &q&f=false

Page: 10

http://books.google.com/books?id=ooURAA ... &q&f=false

Page: 12

etc.

As I said, I don't have Kaufmann's book on me currently. So good luck. Btw, here it is in German: http://www.gutenberg.org/dirs/etext05/7zara10.txt

And this time, I won't even take out the spacing!

So will ich ihnen vom Veraechtlichsten sprechen: das aber ist _der_letzte_Mensch_."

Und also sprach Zarathustra zum Volke:

Es ist an der Zeit, dass der Mensch sich sein Ziel stecke. Es ist an der Zeit, dass der Mensch den Keim seiner hoechsten Hoffnung pflanze.

Noch ist sein Boden dazu reich genug. Aber dieser Boden wird einst arm und zahm sein, und kein hoher Baum wird mehr aus ihm wachsen koennen.

Wehe! Es kommt die Zeit, wo der Mensch nicht mehr den Pfeil seiner Sehnsucht ueber den Menschen hinaus wirft, und die Sehne seines Bogens verlernt hat, zu schwirren!

Ich sage euch: man muss noch Chaos in sich haben, um einen tanzenden Stern gebaeren zu koennen. Ich sage euch: ihr habt noch Chaos in euch.

Wehe! Es kommt die Zeit, wo der Mensch keinen Stern mehr gebaeren wird. Wehe! Es kommt die Zeit des veraechtlichsten Menschen, der sich selber nicht mehr verachten kann.

Seht! Ich zeige euch _den_letzten_Menschen_.

"Was ist Liebe? Was ist Schoepfung? Was ist Sehnsucht? Was ist Stern" - so fragt der letzte Mensch und blinzelt.

Die Erde ist dann klein geworden, und auf ihr huepft der letzte Mensch, der Alles klein macht. Sein Geschlecht ist unaustilgbar, wie der Erdfloh; der letzte Mensch lebt am laengsten.

"Wir haben das Glueck erfunden" - sagen die letzten Menschen und blinzeln.

Sie haben den Gegenden verlassen, wo es hart war zu leben: denn man braucht Waerme. Man liebt noch den Nachbar und reibt sich an ihm: denn man braucht Waerme.

Krankwerden und Misstrauen-haben gilt ihnen suendhaft: man geht achtsam einher. Ein Thor, der noch ueber Steine oder Menschen stolpert!

Ein wenig Gift ab und zu: das macht angenehme Traeume. Und viel Gift zuletzt, zu einem angenehmen Sterben.

Man arbeitet noch, denn Arbeit ist eine Unterhaltung. Aber man sorgt dass die Unterhaltung nicht angreife.

Man wird nicht mehr arm und reich: Beides ist zu beschwerlich. Wer will noch regieren? Wer noch gehorchen? Beides ist zu beschwerlich.

Kein Hirt und Eine Heerde! Jeder will das Gleiche, Jeder ist gleich: wer anders fuehlt, geht freiwillig in's Irrenhaus.

"Ehemals war alle Welt irre" - sagen die Feinsten und blinzeln.

Man ist klug und weiss Alles, was geschehn ist: so hat man kein Ende zu spotten. Man zankt sich noch, aber man versoehnt sich bald - sonst verdirbt es den Magen.

Man hat sein Luestchen fuer den Tag und sein Luestchen fuer die Nacht: aber man ehrt die Gesundheit.

"Wir haben das Glueck erfunden" - sagen die letzten Menschen und blinzeln -

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Nihilism, The Last Man

Postby morriswalters » Mon Jun 07, 2010 5:18 am UTC

I like to believe that I am reasonably intelligent, but this thread has my head spinning. I gather, and I apologize if I am wrong, that Hedonic Treader believes that in pursuit of pleasure man can find meaning. And that the most effective way to do that is to let an Uber Intelligence filter reality to remove pain.

The idea of the Singularity relies on the existence of a technology that doesn't now and may never exist. It also assumes that we understand intelligence and sentience well enough to be able to recognize them if they occurred. And finally, the only example of a sapient machine that we have available varies wildly in capabilities and motivations: from saints to serial killers, from morons to geniuses. What mechanism do you suggest to make sure your machine is a saint and a genius?

And if you assume that it does in fact occur, why would a self-aware sapient machine choose to play? If given freedom of action, what would keep a machine as described from concluding that the easiest way to keep men from suffering is to kill all men? Dead men do not suffer. If given a "goal set" that is immutable, how do you deal with the possibility that the "goal set" itself may be flawed in some fashion you can't predict?

The unfortunate part of intelligence is the ability to wonder why. Which leads us to try to create systems which give some meaning to an apparently meaningless existence. Wouldn't it be just as logical to reduce intelligence below the threshold that allows the question to arise in the first place? This is effectively what the kind of existence you advocate would do. In a world of unreality where you no longer control the internal conversation you would have no metric to measure or compare against. The idea of intelligence becomes meaningless.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Tue Jun 08, 2010 9:47 pm UTC

morriswalters wrote:I gather, and I apologize if I am wrong, that Hedonic Treader believes that in pursuit of pleasure man can find meaning.

The unfortunate part of intelligence is the ability to wonder why. Which leads us to try to create systems which give some meaning to an apparently meaningless existence. Wouldn't it be just as logical to reduce intelligence below the threshold that allows the question to arise in the first place?

That makes it sound like the primary value of hedonism is the meaning we project into the pursuit of it, and the pleasure is just an arbitrary goal to get to the meaning. That would be a misunderstanding. I think that the true value lies in the valuation of the experience modes themselves (like the goodness of pleasure, or the badness of suffering). Meaning, or value, cannot be deduced logically from neutral assumptions; they are rooted in the valuation of good and bad as affective mental states, like those that are hard-wired in the human brain. Reducing intelligence would not make your agony less agonizing. Or at least I don't think it would; non-intelligent animals show clear behavioral indicators of very strong aversion to noxious stimuli.

The idea of the Singularity relies on the existence of a technology that doesn't now and may never exist. It also assumes that we understand intelligence and sentience well enough to be able to recognize them if they occurred. And finally, the only example of a sapient machine that we have available varies wildly in capabilities and motivations: from saints to serial killers, from morons to geniuses. What mechanism do you suggest to make sure your machine is a saint and a genius?

If given a "goal set" that is immutable, how do you deal with the possibility that the "goal set" itself may be flawed in some fashion you can't predict?

You're right that the technology may never exist. Or it may exist this century. You're also right about the risks. We can know quite a bit about them in advance because we are the very designers of the minds we create, but I don't think there can be a "100% safety proof" that a superintelligence is and always will be Friendly. Errors and failure modes can certainly occur. But let's consider the alternative: a world that is filled with torture and endless struggles for limited resources, bad organization, and certainly no moral code that would meaningfully shape the big picture in the long run. Then look at the hundreds of millions of years of suffering and darwinian struggle we've already had, and the billions of years that potentially lie ahead of us. The stakes are enormous.

And if you assume that it does in fact occur, why would a self aware Sapient machine choose to play? If given freedom of action, what would keep a machine as described from coming to the conclusion that the easiest way to keep men from suffering was to kill all men. Dead men do not suffer.

Right. If the minimization of suffering were programmed as the supreme goal without any other values or constraints, the quick and complete destruction of all sentient life would be a rational strategy for such an intelligence, provided it could reach sufficient power. However, if other goals - such as the improvement of positive hedonistic value, the existence of sophisticated sentient structures and narratives (like personhood, relationships etc.), or the concept of rights - were defined as intrinsically valuable as well, then it would try to find balances that allow all these values to be realized, less efficiently, without eliminating each other completely. But yeah, it may go "wrong" - whatever that means; a negative utilitarian may not consider extinction a worse outcome than continued suffering.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Nihilism, The Last Man

Postby morriswalters » Wed Jun 09, 2010 9:31 pm UTC

Hedonic Treader

The point about removing intelligence that I was making is not about removing pain or pleasure. The example would be not to change that skunks stink, rather it is to remove the ability to wonder why the skunk stinks. However I need to look at Hedonism a bit closer before I walk off the cliff of ignorance in this discussion.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: Nihilism, The Last Man

Postby infernovia » Wed Jun 09, 2010 11:37 pm UTC

Again, I point out that HT's world primarily relies on offloading cruelty onto the AI. Cruelty, dominance, and such evils still exist (destruction of everything but that which relies on the chemicals and mental state of hedonic pleasure), but its servants are free from its actions and implications. Conceiving of a "friendly AI" is silly; what you really want is an AI that you refuse to call a master while at the same time being its slave.

Forget about rights and the survival of humankind, they are just abstractions anyway. Intelligence and wisdom can only come from the fate that befalls us; you cannot desire "free will" and "intelligence" and such in the world you are creating. I mean, you are trying to create a pre-lived existence, man; what is the need of intelligence?

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Nihilism, The Last Man

Postby morriswalters » Thu Jun 10, 2010 12:29 am UTC

infernovia wrote:Again, I point out that HT's world primarily relies on offloading cruelty onto the AI. Cruelty, dominance, and such evils still exist (destruction of everything but that which relies on the chemicals and mental state of hedonic pleasure), but its servants are free from its actions and implications.


There is an interesting interpretation implied by the idea as you state it. Create a God and give him all sin.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Thu Jun 10, 2010 10:23 am UTC

morriswalters wrote:The point about removing intelligence that I was making is not about removing pain or pleasure. The example would be not to change that skunks stink, rather it is to remove the ability to wonder why the skunk stinks.

Yes, I see that, and I think it's a terrible priority setting for decision-making. It keeps the badness of negative experiences and disposes of the tools to adequately address them. Exactly the opposite of what we need. And we've had it before, by the way: before human-level intelligence evolved on this planet, there were hundreds of millions of years during which trillions of sentient entities had to endure strongly aversive experiences without understanding the underlying causes or consenting to the suffering. Even acknowledging all the good experiences that also happened, I consider this an unmitigated tragedy.

infernovia wrote:Cruelty, dominance, and such evils still exist (destruction of everything but that which relies on the chemicals and mental state of hedonic pleasure)

In other words, your definition of "evil" is the destruction of everything except that which is good. Because ultimately all good things in life boil down to neurochemical dynamics and mental states of positive affect. There is no "good" other than in the good mental states. After all, where would it come from?

Equally muddled, you call a non-malicious non-ulterior goal set that aims at abolishing suffering "cruelty". With such definitions of evil and cruelty, every ethical motivation is always evil and cruel; this is just a muddling of concepts.

you cannot desire "free will" and "intelligence" and such in the world you are creating. I mean you are trying to create a pre-lived existence man, what is the need of intelligence?

That is a valid question. If we had super-human-level intelligence organizing the world, we would have no need for human-level intelligence as a tool to shape the world. It can be replaced with a concept of narrative depth, the sophistication and complexity of interrelated experience modes: the illusion of personhood, individuality, relationships, richness of mental structure. In other words, a decoupling of the function of intelligence as a tool from the experience value of intelligence as sophistication of subjective consciousness.

Same for "free will": the experience of freedom is a positive mental state, while the objective ability to make decisions freely is merely a tool to shape physical causality. If we had a better tool to control the world's physical causality, the only true value of "free will" would be its subjective value as a mental state - and that is independent of the actual objective ability to change anything in the real world. Especially in virtual reality, you can feel subjectively free without ever leaving the house.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: Nihilism, The Last Man

Postby infernovia » Thu Jun 10, 2010 11:09 am UTC

In other words, your definition of "evil" is the destruction of everything except that which is good.

Let's not get ahead of ourselves. By "evil" I meant all the concepts that you wanted to eliminate.

Because ultimately all good things in life boil down to neurochemical dynamics and mental states of positive affect. There is no "good" other than in the good mental states. After all, where would it come from?

Equally muddled, you call a non-malicious non-ulterior goal set that aims at abolishing suffering "cruelty". With such definitions of evil and cruelty, every ethical motivation is always evil and cruel; this is just a muddling of concepts.

Ahaha, instead of being muddled, this is exactly the way it is.

As I said, what you have basically done is take the idea of dominance, warfare, etc., which you have eliminated from the subjects of the AI, and move it to the AI. The AI is the one who dominates and wants to "take over the whole galaxy," as you pointed out in your dream. The AI is cruel to all external life-forms; I don't know why you would consider it not so, as you already called death from a predator "suffering." Are all things that die at the hands of the AI not considered acts of cruelty? Does the AI break the model of killing = bad?

In other words, a decoupling of the function of intelligence as a tool and the experience value of intelligence as sophistication of subjective consciousness.

Not sure the hedonistic value of intelligence is all that great.

Same for "free will", the experience of freedom is a positive mental state, the objective ability to make decisions freely is merely a tool to shape physical causality. If we had a better tool to control the world's physical causality, the only true value of "free will" would be its subjective value as a mental state - and that is independant from the actualy objective ability to change anything in the real world. Especially in virtual reality, you can feel subjectively free without ever leaving the house.

This was primarily in response to you when you talked about "rights" and "Friendly AI." Forget such concepts; you desire a complete master in this world, so let it be so.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Thu Jun 10, 2010 4:21 pm UTC

infernovia wrote:By "evil" I meant all the concepts that you wanted to eliminate.

There's actually just one: suffering. The rest may follow logically from the limited nature of all possible solutions that are at least consistent with the laws of physics, albeit improbable.

Given the premise that we want to abolish suffering, what's the solution space?

1) Complete annihilation of all sentient life.
2) Breaking the darwinian paradigm by setting up a system that controls both evolution and the well-being of all sentients by applying a long-term high-tech approach of complete control. (Friendly AI?)
3) Change the nature of all sentient species by re-designing them so that they can't suffer but still function in evolutionary adaptive terms (gradient solution, enormously ambitious, no long-term control?)
4) No solution. Let's be tortured in all extremes in the context of an open evolution based on moral nihilism. (Sounds better than it feels, which you realize only when you experience the torture yourself.)

So, what's your solution?

Does the AI break the model of killing = bad?

I don't share that view as an absolute moral rule.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: Nihilism, The Last Man

Postby infernovia » Thu Jun 10, 2010 10:16 pm UTC

Ok, but this is how I view your AI, as I consider it a life form and all the happiness simply an extension of our neurochemical reactions into a cohesive being.

The artificial intelligence is powerful, its goal to desire only the neurochemical reaction/state programmed by the developers. Its goal is to maximize it, so it needs a lot of energy. Anything that steals energy from its purpose, it annihilates. Each unit of energy is converted into a larger neurochemical reaction (or more distinct neurochemical reactions, if you prefer). To this end, it starts draining energy from everything it sees and destroys everything it touches for its one purpose: to be more happy. Happiness is the supreme drive for this AI. A sort of fractal happiness, a sort of monster that becomes more gleeful with each thing it consumes. I don't think I would consider it intelligent/sapient; it is more primitive in its goals than anything.

The outside it cannot comprehend; it only understands what it can analyze and maintain, so all things alien to it need to be absorbed into the system (like the sun) or destroyed (so it can use their energy). So then, the AI is the warrior, the master, the supreme predator; IT handles all the suffering. It handles all situations that are hostile to it. And everything else? Gone under the dominion of the AI neural network. No longer entities, not in the sense of separation anyhow. In this sense, suffering and happiness still exist. Each loss of an energy source means less happiness, means a loss of entities. So then, a being that can handle suffering, but with a bigger maximal capacity for it.

Given the premise that we want to abolish suffering, what's the solution space?

I thought it was hedonistic maximization? To make animals stop suffering, their death is the easiest. If I were to pick something, a more comprehensive virtual reality seems like a better idea. Not to abolish suffering, or to be a slave to the AI, but to be able to live your desire.

Edit: And by this, I don't mean the desire of happiness (it's simply another emotional state you could want in it), but the things you desire.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Sat Jun 12, 2010 11:38 am UTC

infernovia wrote:The artificial intelligence is powerful, its goal to desire only the neurochemical reaction/state programmed by the developers. Its goal is to maximize it, so it needs a lot of energy. Anything that steals energy from its purpose, it annihilates. Each unit of energy is converted into a larger neurochemical reaction (or more distinct neurochemical reactions, if you prefer). To this end, it starts draining energy from everything it sees and destroys everything it touches for its one purpose: to be more happy. Happiness is the supreme drive for this AI. A sort of fractal happiness, a sort of monster that becomes more gleeful with each thing it consumes.

Like Nozick's hypothetical utility monster, which was meant to show the moral flaws of utilitarianism. Of course, talk of monsters and gleeful, greedy consumption is merely metaphor that serves to invoke our primal fears of predation. Language is constructivism, and what you describe here is actually the best thing that could happen to this world: the exponentially growing transformation of matter and energy into sentient configurations of profound subjective brilliance, free from suffering. The only suffering that might be created in this system would be very local and short-term, during the initial process of transformation. And if it were evolutionarily stable, and if it could spread out into space, provided there are no alien civilizations within reach, all power struggles and sources of suffering would exist only in the smallest fraction of that system's future: in the beginning, on earth. With the right strategic plan, even that initial suffering could be minimized during takeover.

After that, there would simply be no suffering anymore, because the only sentient states would be the positive ones created by the system, while all the consumed matter and energy would come from non-sentient entities like planets, asteroids and stars. The only suffering I could imagine then would occur if the AI itself were sentient and never satisfied with the state of hedonistic maximization. But that's a long stretch; it's certainly nowhere near akin to the experience of a biological entity being skinned or boiled alive (which is currently happening all the time).

I don't think I would consider it intelligent/sapient, it is more primitive in its goals than anything.

Here, you're representing it as electronic kudzu, a non-thinking pest that spreads. This is nonsensical, because in order to be successful in competition with human groups, it would have to use supremely intelligent strategic planning to begin with. Kudzu is annoying, but nowhere near a serious organizational competitor to human cooperation.

The outside, it cannot comprehend it, it only understands what it can analyze and maintain so all things alien to it needs to be absorbed into the system (like the sun) or destroyed (so it can use its energy)

Of course it has to comprehend the structures of the world in order to absorb them. We do it all the time, and of course, such a system would have to do it too.

And everything else? Gone under the dominion of the AI neural network. No longer entities, not in the sense of separation anyhow.

Well, it depends on the value that the AI's goal system attributes to such phenomena as personhood, or subjective free will etc. If it's not originally programmed to value these principles, it will just do away with the separation, yes. Here is a cartoon vision of this outcome.

In this sense, suffering and happiness still exist. Each loss of an energy source means less happiness, means a loss of entities.

How does your first statement logically follow from your second one? In particular, in what sense do you see suffering occur within such a system, once the power struggles are done with?

I thought it was hedonistic maximization? To make animals stop suffering, their death is the easiest.

Which is why I don't want to maintain natural ecosystems if their resource-recycling functions can be replaced by artificial cycles. The question of whether the prevention of suffering is more important than the creation of happiness is a hard one that depends on many factors, some of which are counter-intuitive: the nature of consciousness, the nature of time, the nature of individuality. However, if there is a solution that can achieve hedonistic maximization, it will certainly entail minimizing or abolishing the occurrence of suffering anyway (otherwise, it wouldn't really be much of a solution).

If I were to pick something, a more comprehensive virtual reality seems like a better idea. Not to abolish suffering, or to be a slave to the AI, but to be able to live your desire.

Edit: And by this, I don't mean the desire of happiness (its simply another emotional state you could want in it), but the things you desire.

I don't know what that means. By the very definition of virtuality, the "things" you desire are stimuli for mental states. I mean, the whole point of VR is to create sensory input without creating the physical peripheral conditions upon which said input would normally be contingent.

Also, VR alone is not a long-term solution because, in and by itself, it's not evolutionarily stable. Just like wire-heading. VR-absorbed or wire-headed entities are unproductive and don't care much for self-replication. Only within an organizational framework that enforces long-term stability through systematic control of the context could it make a real difference in the long-term hedonic calculus. Otherwise, the system itself is inevitably going to be replaced by whatever competing evolving systems might be more adaptive.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Nihilism, The Last Man

Postby morriswalters » Sat Jun 12, 2010 2:21 pm UTC

I have a conceptual problem. If you propose that an AI could be developed and programmed in a way so as to benefit man to the exclusion of itself, is it alive? My reasoning is thus. Life exists. For life to continue to exist, continued existence must be the primary purpose of life. This is inferred from evolution: any organism for which continued existence was not a primary motivation would at some point in time select itself out of existence. Any other motivation not derived from this is by implication contrary to life, and would thus be selected out.

Here is my conceptual problem. If you define it as alive, then its primary motivation must be self-interest; any other purpose would result in its eventual death. Given my assumptions, the example that comes to mind is a point in time where there is sufficient energy to maintain the AI or to maintain humans, but not both. What would the AI do? If you state that it is alive, then I argue its only option is to destroy man. If you state that it is not alive, then I argue that effectively what you are arguing is that intellect can exist absent biology (or a mechanism that acts like it) and thus has no purpose other than its programming. The ability for you to select the motivation for this supreme intellect implies that anyone may do so. Thus self-interest is still the primary motivation for the intellect. The danger in the second position is that the AI can produce ideas and/or technology that the lesser intellect may not be able to understand or control.

infernovia
Posts: 931
Joined: Thu Jul 17, 2008 4:27 am UTC

Re: Nihilism, The Last Man

Postby infernovia » Sat Jun 12, 2010 4:55 pm UTC

The only suffering I could imagine then would occur if the AI itself would be sentient and never be satisfied with the state of hedonistic maximization.

In particular, in what sense do you see suffering occur within such a system, once the power struggles are done with?

Just pointing out that to eliminate all predators, you have to make the supreme predator. And of course the power struggle still exists, between the AI and the galaxy or the universe or w/e. There is no guarantee of infinite energy; to expand and to dominate, it will require a time period where it is in a form of stasis. The time period of expansion is kinda like a state's glory days; the ones afterwards are when it starts degenerating.

I mean, to argue that, you basically have to argue that a person who was born with no legs doesn't suffer, or that a person who lost something in a freak accident (like the floor falling away or something weird like that) didn't suffer. Or that a malfunction causing a plane to crash doesn't cause suffering, etc.

Here is where I think an intelligent AI would put us:

http://www.exitmundi.nl/suicide.htm
... reaching singularity, oneness, enormous capability to think and process...

Ok, that sounds pretty vague -- but hey, it’s no reason to kill yourself! Well: there is this one crucial detail. Once we’re immortal, we’ll be booooooored!

To us simple, mortal humans this may be a bit hard to grasp. But once we’ve reached Perfection and Singularity and Oneness and all that blah blah, we will have no goal in life. Yes, we will go out and discover the Universe. But after a while, we will have seen it all. With our mortal bodies gone, we will have all the time of the Universe. We will know every corner of the Universe and say: so what? Why search any further? To an intelligent cloud of smoke particles, searching every corner of the Universe will make as much sense as examining every blade of grass on Earth makes to us.

So there you have it. You’ve become a God-like purple space cloud, but now you find that it is soooooo depressing.


I mean, yeah, you can just argue that they can form a happiness chemical, but that is pretty banal! It sees no point in happiness, which it can reproduce with a simple:

Code:

long pleasure = 0;  /* pleasure is a variable used to mark enjoyable actions */
/* ... */
for (;;)  /* loop from now until the end of time */
    pleasure = pleasure + 1;


Also, VR alone is not a long-term solution because in and by itself, it's not evolutionary stable. Just like wire-heading. VR-absorbed or wire-headed entities are unproductive and don't care much for self-replication. Only within an organizational framework which enforces long-term stability by systematic control of the context, it could make a real difference in the long-term hedonic calculus. Otherwise, the system itself is inevitably going to be replaced by whatever competing evolving systems might be more adaptive.

You are probably right, but I am dealing with more contemporary situations rather than the future. But yeah, you use VR as a way to satisfy all the drives that society doesn't allow you to act on. For example, war.

edit: A virtual war, so of course, one lacking real violence.

Hedonic Treader
Posts: 187
Joined: Sat Jan 23, 2010 9:16 pm UTC

Re: Nihilism, The Last Man

Postby Hedonic Treader » Sat Jun 12, 2010 10:18 pm UTC

I'll try to quickly address the recent arguments of both of you.

[We will be bored forever.]
This "problem" is very easy to address. First of all, there are experiences that just never get boring if you balance their frequency right. Sex is one of them; good food and music are other examples. Yes, they may not be as thrilling as the first time you experience them (or new subsets of them, like a new taste or perversion). But in the right balance, they constitute a good hedonic "baseline" for a good life. Then of course, you can stimulate your brain in ways that just don't get boring - they always feel good, or at least there is a frequency within which their sensitivity retains its goodness.

And then there's selective memory erasure. You can watch "The Sixth Sense", be surprised by its ending, then erase your memory of the plot, then watch it again. Billions of times, if you're biologically immortal. Now combine all these approaches to create the supreme good life. I'm astonished that anyone would really think we can transform ourselves into cosmic clouds, but not solve the problem of boredom. That shows a very poor understanding of what a mind actually is, and an utter lack of imagination.


[The system might run low on energy - that constitutes suffering/stasis/the end of hedonism.]
The key here is to understand that the system - unlike a natural ecosystem - doesn't exhibit uncontrolled growth. It exhibits controlled growth when it can and none when it can't. It would be easy for a singleton to stabilize population growth at whatever limit is sustainable in a given local environment. Of course, if it could use space colonization, the resources would be much more abundant, but ultimately limited as well. Any truly immortal system will have to face entropy eventually. But that doesn't have to cause suffering; it merely puts an upper limit on the total amount of hedonistic value the system can create. If such a limit is inevitable (no physical possibility of sentience without upper temporal bounds), establishing a system that can max out the potential and then painlessly shut down sentient life is the rational conclusion. Why this should be an argument against such a system in the first place, I have no idea.
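The "controlled growth when it can, none when it can't" policy can be sketched as a toy model (a hypothetical illustration; the numbers, function name, and doubling rate are all invented here, not from the thread):

```python
def controlled_growth(pop, capacity, rate=2, steps=12):
    """Toy singleton policy: grow while resources allow, hold at the cap otherwise."""
    history = [pop]
    for _ in range(steps):
        pop = min(pop * rate, capacity)  # never overshoot the sustainable limit
        history.append(pop)
    return history

history = controlled_growth(10, capacity=1000)
print(history)  # grows geometrically, then flatlines at the cap instead of crashing past it
```

The point of the `min` is the whole argument: unlike a natural ecosystem, which overshoots and collapses, the modeled system clamps itself to whatever the local environment sustains.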


[The system is the ultimate predator, it preys on entire stars!]
Consumption of matter and energy isn't in itself bad - it's an inevitable part of life itself. The only bad thing about predation in natural ecosystems, or in contemporary human factory farming, is the systematic causation of suffering. However, neither artificial resource cycles nor the consumption of non-sentient entities entail such systematic suffering. It just doesn't follow logically. The only suffering I could see being caused is during the initial power struggles in the short time in which the AI establishes its role as a singleton, or the local short-term suffering caused by the destruction of natural ecosystems. However, those are types of suffering we accept every single day in the status quo, and the future after this initial phase can be very long.


[The system will either evolve away from its hedonistic purpose, or be selected out by the competition.]
Evolution only happens when there is replication, variation and selection. By checking variation and selection in a systematic, strategic way, evolution can be controlled or even halted (I used to call this "locking into positive stasis", but the word stasis sounds horrible, and people were put off by it). The basic idea is that a sufficiently intelligent and powerful system could manage to keep evolution in check even while exhibiting exponential growth. But that's a hypothetical; if such a system were inherently unstable, or there just isn't any way to control long-term evolution while maintaining a complex system, then the hedonic calculus approach is fundamentally flawed, yes. But I think it's quite conceivable that there are realistic solutions, and that a sufficiently sophisticated intelligence could identify and implement them. The probability is non-zero, the alternatives are significantly worse, and the stakes are extremely high.
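The replication/variation/selection claim is easy to check in a minimal simulation (a toy sketch; the numeric "traits", the fitness rule, and the mutation numbers are all invented for illustration): with variation enabled the population drifts, and with variation checked the population is frozen in place.

```python
import random

def generation(pop, mutation_rate, rng):
    """One round of replication, variation, and truncation selection.
    Traits are bare numbers and fitness is the trait itself (a toy assumption)."""
    offspring = []
    for trait in pop:
        child = trait
        if rng.random() < mutation_rate:
            child += rng.gauss(0, 0.1)       # variation
        offspring.extend([trait, child])     # replication
    offspring.sort(reverse=True)             # selection: fittest half survives
    return offspring[:len(pop)]

rng = random.Random(0)
drifting = [1.0] * 20   # variation allowed: evolution proceeds
frozen = [1.0] * 20     # variation checked: evolution halts
for _ in range(100):
    drifting = generation(drifting, mutation_rate=0.5, rng=rng)
    frozen = generation(frozen, mutation_rate=0.0, rng=rng)

print(max(drifting))            # drifts above the starting trait of 1.0
print(frozen == [1.0] * 20)     # unchanged: no variation, nothing to select
```

Removing any one of the three ingredients (here, variation via `mutation_rate=0.0`) stops the process cold, which is the "keeping evolution in check" idea in miniature.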


[predator! master! slave! (and other primal darwinian and tribal hierarchy terms)]
Once again: We are darwinian primates. We think in tribal hierarchies, and we think in relationships of predator and prey. The very point of an artificial hedonistic system would be to break this entire paradigm.

