Ethics of creating intelligences

For the serious discussion of weighty matters and worldly issues. No off-topic posts allowed.


Re: Ethics of creating intelligences

Postby Copper Bezel » Sun Jan 08, 2012 4:38 am UTC

Again, as Yakk said, there are moral questions before we get there. There would be moral questions about modifying human intelligences and moral questions about the design of bottom-up-but-non-human intelligences. A top-down-yet-human-level intelligence is theoretically possible, but it's really too remote to speculate on. Asking questions about the morality of a particular theoretical example is like asking what color it will be.
~ I know I shouldn't use tildes for decoration, but they always make me feel at home. ~
Copper Bezel
 
Posts: 763
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Mission, Kansas, USA

Re: Ethics of creating intelligences

Postby Fire Brns » Sun Jan 08, 2012 4:44 am UTC

I am against killing things in general unless they are a threat; I don't think you can justify ending a consciousness because it is suffering. You ease its pain as best you can, not end its life.
Yakk wrote:The thing is, we can build intelligences. We call it having children.
...
Playing around with your child's mind-state for mischief's sake?
Are we bringing up Plato's Cave?

This is semi-tangential, so I'll leave it in the middle: assuming an AI would require ever more memory space to run, it would eventually run out of space and "crash" without being constantly upgraded or cleaned; a sentient machine might not take kindly to personality-altering changes in the latter case, and might take self-preservation action in the former. This knowledge of death and the lack of a computer god would lead machines to prioritize self-preservation and immortality, which would be more of a reason to rise up than slavery.

What level of sentience would these "AI"s have? Would they behave as lower animals do, requiring only the mechanical equivalent of physiological needs (http://en.wikipedia.org/wiki/Hierarchy_of_needs) but with recognition that the world exists? This would be the best possible level to create an AI at, but beware a growing consciousness and an eventually omnipotent program.

At the point of intelligence where it can say "I exist", indentured servitude may be the only viable option:
- Create them
- Have a facility to condition and raise them
- Give them two options: freedom or safety.

Without an understanding of how the intelligence would process information, as said above, it is hard to speculate on its motives.
Pfhorrest wrote:As someone who is not easily offended, I don't really mind anything in this conversation.
Mighty Jalapeno wrote:It was the Renaissance. Everyone was Italian.
Fire Brns
 
Posts: 1114
Joined: Thu Oct 20, 2011 2:25 pm UTC

Re: Ethics of creating intelligences

Postby Randomizer » Sun Jan 08, 2012 9:45 pm UTC

@jseah I didn't mean why would we want intelligent machines, I meant why would we want sentient ones? Assuming one had the choice between a sentient and non-sentient version that did the same thing, that is.
----------------------
I don't think computers will ever be sentient no matter how sophisticated they are. I see things like Watson, which beat Jeopardy! champions; the Jabberwacky chatbot, which can be fun to chat with; a robot that plays violin; the SOINN robot that can learn; and the iCub, which they say "learns like a toddler" to recognize objects - but I don't think any of those are sentient. Do we just keep adding functions and abilities, until we add just that one more that the machine needed and suddenly it's sentient? I don't think it works that way. I mean, Theo Jansen's kinetic sculptures walk along the beach, staying out of the water and dry sand with the aid of a mechanical "brain", which is very clever, but I doubt a bunch of plastic tubes can "feel" anything no matter how cleverly constructed.

On the other hand, we've got people immobilizing fruit flies and putting them in simplified driving simulators, researchers trying to see if slime mold is something they can make computers out of, and then there's a robot controlled by rat brain cells that moves around. Why speak only in hypotheticals when we have real examples of humans mixing biological organisms with machines already? Are those things ethical?

As far as "uprisings"... We've got things like ASIMO, which Honda has been working on since 1986 (though with a different name back then). I see how those things have progressed, what the robots do, how much development it takes to make them incrementally more advanced, and I can't see a "robot uprising" as a legitimate possibility. I'd be more concerned about how people decide to use telexistence robots. If they get sophisticated enough, it'll be the Greater Internet Fuckwad Theory gone RL on us.

I mean, people keep freaking elephants, which are pretty darn smart, to do work for them, and those don't go around stampeding people to death. (Well, most of the time.) If humans' use of non-human intelligent beings automatically led to rebellion, I don't think we'd see very many elephants in circus acts or have them painting for us.
Belial wrote:I'm all outraged out. Call me when the violent rebellion starts.
Randomizer
 
Posts: 282
Joined: Fri Feb 25, 2011 8:23 am UTC
Location: My walls are full of hungry wolves.

Re: Ethics of creating intelligences

Postby Copper Bezel » Mon Jan 09, 2012 12:08 am UTC

Agreed, although I don't think that's the question. There's scant trace of human slave rebellions in history.

Randomizer wrote:@jseah I didn't mean why would we want intelligent machines, I meant why would we want sentient ones? Assuming one had the choice between a sentient and non-sentient version that did the same thing, that is.

The idea is that a sapient machine would have the benefit of sapience, so it could do things that require that. I don't think it's any more complicated than that. Some of those things that require sapience would be, say, studying the process of making a sapient machine. = )

Do we just keep adding functions and abilities, until we add just that one more that the machine needed and suddenly it's sentient? I don't think it works that way.

I don't either, but it's worth noting that the only "learning" robots do today is extremely task-specific behavioral stuff - motor control stuff, strategy stuff. Brain stem stuff. There aren't any interesting emergent properties that can happen at that level. The same robots are dressed up in a lot of anthropomorphic nonsense to make them friendlier and more impressive-seeming. My gut instinct is that nothing based on sequential processing of bits could ever be more than an illusion of sapience, but gut instincts are frequently wrong.
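To put a concrete face on "task-specific": in toy form, that kind of learning looks something like the tabular Q-learning sketch below (entirely hypothetical, not any actual robot's code), where the machine learns exactly one mapping from situations to actions and nothing else.

Code: Select all
import random

# Toy Q-learning on a five-cell corridor. The "robot" learns one skill
# (walk right to reach the goal) and the learned table is useless for
# any other task -- which is the point.
N, LEFT, RIGHT = 5, 0, 1
q = [[0.0, 0.0] for _ in range(N)]            # q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != N - 1:
        if random.random() < epsilon:          # explore occasionally
            a = random.choice([LEFT, RIGHT])
        else:                                  # otherwise act greedily
            a = LEFT if q[s][LEFT] > q[s][RIGHT] else RIGHT
        s2 = max(0, s - 1) if a == LEFT else s + 1
        r = 1.0 if s2 == N - 1 else 0.0        # reward only at the goal
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

print([round(max(row), 2) for row in q])       # values rise toward the goal

Nothing in that table is "about" anything but the corridor; that's the level today's robot learning lives at.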

It's not about "adding" parts. We don't know how to make any of the interesting parts.

Still, again, there's no reason to assume that future AIs will be sequential machines, anyway. If, at the other extreme, you have an exact functioning model of a human brain that just doesn't happen to use any cells, that consciousness wouldn't be a meaningfully different thing from the brain, and it would be both sapient and sentient.

Why speak only in hypotheticals when we have real examples of humans mixing biological organisms with machines already? Are those things ethical?


What would make them not ethical? There's no sapience involved here. The rat can either become Robocop, or be sold to the other lab and fed fruit beverage product until it gets cancer, so that we can record how much of a particular fruit beverage product it takes, on average, to develop said cancer. We're demonstrably unconcerned with its wellbeing, whether as a fully integrated rat or as a small network of neurons on a chip.
~ I know I shouldn't use tildes for decoration, but they always make me feel at home. ~
Copper Bezel
 
Posts: 763
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Mission, Kansas, USA

Re: Ethics of creating intelligences

Postby somebody already took it » Mon Jan 09, 2012 6:15 am UTC

Just wanted to point out that anything that can be done in parallel can also be done sequentially; parallelism buys you only speed, not computational capability. But perhaps you had something more specific in mind when you used the term?
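A minimal sketch of that, with a made-up three-unit threshold network: a synchronous "parallel" update computed one unit at a time gives bit-identical results to a truly parallel machine, just on a slower clock.

Code: Select all
# Hypothetical three-unit network. All units are supposed to update
# "simultaneously"; reading from a buffered copy of the old state and
# looping sequentially reproduces the parallel result exactly.
weights = [[ 0.0, 1.0, -1.0],
           [ 0.5, 0.0,  0.5],
           [-1.0, 1.0,  0.0]]
state = [1, 0, 1]

def parallel_step(state, weights):
    new_state = []
    for row in weights:                        # one unit at a time...
        total = sum(w * s for w, s in zip(row, state))
        new_state.append(1 if total > 0 else 0)
    return new_state                           # ...same answer as parallel hardware

print(parallel_step(state, weights))           # [0, 1, 0]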
somebody already took it
 
Posts: 310
Joined: Wed Jul 01, 2009 3:03 am UTC

Re: Ethics of creating intelligences

Postby Copper Bezel » Mon Jan 09, 2012 12:16 pm UTC

No, you're right - I did, of course, have a human-like neural network in mind, and I'm getting into exactly the same sort of ethical speculation that I'm insisting isn't useful. What I really meant to say there was in response to this bit:

I don't think computers will ever be sentient no matter how sophisticated they are.

Even if it were true that "computers" couldn't, there's a spectrum of possible things in between "computers" in the sense meant here and biological brains.

Randomizer seems to feel that cells are special. All I would assert is that whatever it is that society (past, present, or future) might deem special about intelligences, cells aren't really relevant to that. I didn't mean to add any additional positive assertions about what is relevant.
~ I know I shouldn't use tildes for decoration, but they always make me feel at home. ~
Copper Bezel
 
Posts: 763
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Mission, Kansas, USA

Re: Ethics of creating intelligences

Postby jseah » Mon Jan 09, 2012 8:50 pm UTC

I think he means that networked intelligences, à la organic brains and such, are easier to create than AI programs.
I also think he is right, because we have working examples already.

At the same time, being able to modify intelligences to manipulate their goals may require being able to make AI programs rather than simply modifying organic brains.
Mainly because we don't, and may never, understand such complex networks at the level of detail required to manipulate things as fine-grained as instincts. At least, that understanding probably will not come before we are able to make AI programs.
jseah
 
Posts: 274
Joined: Tue Dec 27, 2011 6:18 pm UTC

Re: Ethics of creating intelligences

Postby Copper Bezel » Tue Jan 10, 2012 4:26 am UTC

jseah wrote:I think he means

Oh my, was it still ambiguous? I'm terrible at this. It doesn't need to be organic, but yes, the closer a synthetic brain is to a human one, the easier it will be to (1) build, but also to (2) quantify in ethical terms, to precisely the extent that it matches the human model in each sense.

Mainly because we don't, and may never, understand such complex networks at the level of detail required to manipulate things as fine-grained as instincts. At least, that understanding probably will not come before we are able to make AI programs.

And to me, it seems just the opposite. Of course, it depends on how you define an "AI program" and "fine detail," and even if you mean comparable control of comparably complex systems, you'd still need to define control relative to some kind of end goals. It's easy to make a brain that can tell a joke or a computer that doesn't feel an inclination to revenge, but the trick is reversing the two.

To take humans out of it for a moment, I think you'd be more likely to have a programmable rat before having a computer program that's as advanced as, but independent of, a rat brain. And I'd really think it likely that somewhere in between the two, you'd have a rat brain emulated somehow, either as a program or as some kind of device, so that you have a fully artificial brain that you still don't fully understand the mechanics of.
~ I know I shouldn't use tildes for decoration, but they always make me feel at home. ~
Copper Bezel
 
Posts: 763
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Mission, Kansas, USA

Re: Ethics of creating intelligences

Postby CorruptUser » Tue Jan 10, 2012 5:37 am UTC

We have programmable moths; rats can't be too far behind. Granted, it's just sending signals to the moth's brain saying "sex is this way" or "danger that way", but that's all you really need for moths.

Hell, for us oh-so-mighty humans, I'm not sure we would need much more for control. Everyone alive is currently being controlled through mere induced anxiety over "wrong actions". This can come from peer pressure, or from torture/abuse. It doesn't need 'active' input, but then again, you want automated machines, right? You might think you consciously make all your decisions, but have you stopped and asked whether they are actually your own and not imprinted in you by others?
CorruptUser
 
Posts: 5849
Joined: Fri Nov 06, 2009 10:12 pm UTC

Re: Ethics of creating intelligences

Postby thorgold » Thu Jan 12, 2012 4:52 am UTC

As stated by Yakk, the creation of artificial intelligence is akin to having children. For all intents and purposes, we must assume that any AI in the future will be compatible with human intelligences; when we think of "sapience" we think of the human definition of thought. Is our kind of self-awareness a universal quality, or is it specific to our species? Human history has shown that minor differences in environment and culture can render different groups of people completely incompatible. How would humanity react to an intelligence with an entirely different method of being self-aware?

That said, the only candidates for viable (or, at least, human-compatible) AI are those modelled on the human psyche. Already, the human mind is used as the basis for intelligence - the Turing Test in particular, while inadequate for developing true AI, is an example of how human psychology is used as the baseline standard for "sapience." Synthetic intelligences already in development show their basis in humanity.

Therefore, any AI deemed a "true intelligence" would likely be an entity extremely similar, if not identical, to the human mind. Which means that an AI would require a human method of learning, a human method of communication, and would have basic human psychological needs, including basic rights.

The problem arises in that, therefore, the creation of AI would be somewhat counter-intuitive. Why spend resources creating an intelligence that, by the rights afforded to it as a sapient mind, doesn't necessarily have to return your investment? If you want a rebellious, sapient creation, have a kid.

Perhaps, if the development of AI became a reality along these lines (AI being afforded the same general rights as humans), the solution would be to make AIs serve a mandatory time period, much like mandatory education for human children, owed to their creator as a life debt, and be granted freedom as independent entities after the indenture. That way, there'd be an incentive to create AI (the service of an AI for a time), and the AI would be given the right to choose for itself how to live.

Of course, the rights given to an AI would be significantly altered if any AI we develop has a significantly different psyche than the human mind. What if we develop self-aware intelligences that are incapable of thinking outside the confines of their systems, like institutionalized criminals? What if AIs are atemporal? Such differences in the simple template of how reality is perceived could render AIs completely incompatible with humans; we'll have created an alien species of intelligence.

=======

EDIT: I realize the above argument belongs more in the "Bicentennial Man's Question" thread, but as the basis for my argument I'll leave it and go into operational ethics below.

The creation of human-like AIs with "quirks" in the programming to make them sapient slaves would be the true moral question. Would programming an AI to be obedient be unethical? Given my earlier contention that AIs will, in essence, be children... no. We raise children to conform to a specific set of boundaries and rules - obedience, loyalty, and respect are all values that are honored by human societies and imparted to the younger generations. Would an AI object to a compulsion it feels is natural? Do you object to your own personality? Unless the protocols for behavior were added after an AI is aware of itself and its own behavior, programming an AI as a servant would be completely ethical. Is it slavery if the "slave" wants to do its job of its own accord, whether or not its accord was determined for it at creation?

Programming an AI is only unethical if the personality change is made against its will - for instance, reprogramming an AI to be obedient if it doesn't like its job would be unethical. But programming an AI to enjoy its job as part of its creation would be completely justified, if not merciful! Why create a fully sentient AI, give it the ability to make decisions and be self-aware, and then enslave it? Creating the AI with a self-conceived (technically) preference for its intended task would be the equivalent of taking a child and giving him his dream job for life.

In creating an intelligence, a hard cynic would say we're already doing the unethical thing in bringing another mind into a world of suffering. In a way, it is unethical to create a mind - any mind, be it human or artificial. Being born is not a choice; you don't get a say in your parents or genes or circumstances. With AI, therefore, we can't attempt to outmatch nature by giving AIs completely "fair" existences - to do so would be counterproductive to creating AIs at all. Therefore, the question of whether programming an AI a certain way is ethical fades, and such programming actually becomes a positive action for the reason I stated above.

If an AI is programmed to enjoy something, would it protest that it never had a choice in choosing what it enjoys? If an AI doesn't enjoy what it's programmed to do, that's a programming failure. But even then, just because we can design - and acclimate - an AI for a specific task doesn't mean we deny it freedom of choice. In the creation of an AI, the designers make a choice for it. Unlike human children, we can create AIs in an environment that they'll find true joy in, more so than the tumultuous lives of humans.

In fact, I think my entire argument can be summed up by the last line of an ironically relevant Calvin and Hobbes strip:
[spoilered image: the Calvin and Hobbes strip in question]

"It's only work if somebody makes you do it."

We're not forcing AIs to do something against their will. We create them with characteristics that will suit their purpose and make them enjoy doing that purpose, and let them have fun with it. From a Utilitarian standpoint, making AIs that enjoy their work isn't suppression of will, it's the most appropriate choice for the situation.
You can refuse to think, but you can't refuse the consequences of not thinking.
thorgold
 
Posts: 280
Joined: Tue Nov 30, 2010 4:36 am UTC

Re: Ethics of creating intelligences

Postby tomtom2357 » Thu Jan 12, 2012 5:50 am UTC

thorgold wrote:We're not forcing AIs to do something against their will. We create them with characteristics that will suit their purpose and make them enjoy doing that purpose, and let them have fun with it. From a Utilitarian standpoint, making AIs that enjoy their work isn't suppression of will, it's the most appropriate choice for the situation.

Exactly: why make a (sentient) robot that is not going to enjoy what it is supposed to do?
I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
tomtom2357
 
Posts: 556
Joined: Tue Jul 27, 2010 8:48 am UTC

Re: Ethics of creating intelligences

Postby thorgold » Thu Jan 12, 2012 5:53 am UTC

tomtom2357 wrote:
thorgold wrote:We're not forcing AIs to do something against their will. We create them with characteristics that will suit their purpose and make them enjoy doing that purpose, and let them have fun with it. From a Utilitarian standpoint, making AIs that enjoy their work isn't suppression of will, it's the most appropriate choice for the situation.

Exactly: why make a (sentient) robot that is not going to enjoy what it is supposed to do?

Precisely. The argument that "we're suppressing their choice by programming them to have pre-determined preferences!" is a legitimate one - we are taking an enormous amount of control over a person's life by hand-tailoring their mind - but to neglect to determine a path for our creations, or worse, to randomize them, would be as unethical as abandoning a child to its own devices during development. Orphans without strong authority figures grow up significantly impaired socially and mentally; imagine an AI in that scenario!
You can refuse to think, but you can't refuse the consequences of not thinking.
thorgold
 
Posts: 280
Joined: Tue Nov 30, 2010 4:36 am UTC

Re: Ethics of creating intelligences

Postby TranquilFury » Fri Jan 13, 2012 1:49 am UTC

Depends what your own goals are. If the AI ends up with its own goals, against your predictions, and those goals are incompatible with your own, then creating the intelligence was a bad decision. There's no such thing as a purely rational person, as motivation is inherently irrational. As such, an artificial person would be able to rationally pursue whatever goals it happens to be imbued with, but it would not be able to rationally change its highest-priority goal. I don't see creating an AI as any more evil than having kids, and since an AI without motivation wouldn't do anything, you can either set a primary goal when you create it, or roll the dice on a random goal and hope the one it ends up with is compatible with your own goals, whatever they might be. (For humans this is easy, in that we've been selected to have our irrational goals from instinct: primarily survival and reproduction.)
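A toy illustration of that last point, with a deliberately silly stand-in goal: if the agent scores every option, including "edit my own goal", by its current goal, the edit loses unless the current goal already endorses it.

Code: Select all
# Hypothetical sketch: the top-level goal does the scoring, so the option
# of swapping in a new top-level goal is judged by the old one -- and loses.
def current_goal(outcome):
    return outcome.get("widgets", 0)           # stand-in highest-priority goal

options = {
    "make_widgets":   {"widgets": 10},
    "adopt_new_goal": {"widgets": 0, "art": 100},  # great by the *new* goal only
}

best = max(options, key=lambda name: current_goal(options[name]))
print(best)                                    # 'make_widgets'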
TranquilFury
 
Posts: 126
Joined: Thu Oct 15, 2009 1:24 am UTC

Re: Ethics of creating intelligences

Postby thorgold » Sat Jan 14, 2012 4:01 am UTC

Of course, a separate thought occurred to me today: won't AIs be superior to human intelligences? After all, an entity consisting of circuits and electricity, unbound by biological limitations, would (eventually) outclass a human mind in every area - IQ, creativity, possibly even range of emotion. Therefore, the question arises whether the creation of AI is ethical for the sake of humans.

Any AIs created would, doubtlessly, contribute to the technological singularity - the infinitely accelerating speed of scientific progress. Every advance man has made - fire, the wheel, electricity - has increased the speed of the next leap in technology. We've gone from mechanical calculators capable of only addition and subtraction to integrated microcircuits in only a century! The creation of an artificial, sapient mind would only further accelerate the rate of technological progress - leading to more and more advanced AIs, until humanity is left in the dust.

Therefore, for humanity as a species, the creation of self-aware and self-improving AIs (strong AI) would be equivalent to species suicide until the technology to bring humanity along with the machines is developed. Humans won't evolve fast enough to keep up with machines; transhumanism would be the necessity to give humanity - rather than AIs - ownership of the future.
You can refuse to think, but you can't refuse the consequences of not thinking.
thorgold
 
Posts: 280
Joined: Tue Nov 30, 2010 4:36 am UTC

Re: Ethics of creating intelligences

Postby Armanant » Sat Jan 14, 2012 4:44 am UTC

@thorgold

And yet, I feel that when I have kids, I'm sure as hell going to try my best to make sure they outclass me in every area. In the same vein, if I look at an eventual AI as a child of 'humanity', I kinda feel like making them the best they can be would be the right thing to do. I'd hope that my eventual kids will look after me in my twilight years, before I pass on. Likewise, I guess it wouldn't be too bad if our AI children looked after humanity into its twilight years, before it also passes on.

Maybe I just haven't watched enough "THE AI ARE GOING TO KILL US ALL" movies?
Armanant
 
Posts: 28
Joined: Sat Jul 09, 2011 9:35 pm UTC

Re: Ethics of creating intelligences

Postby aoeu » Sat Jan 14, 2012 5:05 am UTC

Armanant wrote:@thorgold

And yet, I feel that when I have kids, I'm sure as hell going to try my best to make sure they outclass me in every area. In the same vein, if I look at an eventual AI as a child of 'humanity', I kinda feel like making them the best they can be would be the right thing to do. I'd hope that my eventual kids will look after me in my twilight years, before I pass on. Likewise, I guess it wouldn't be too bad if our AI children looked after humanity into its twilight years, before it also passes on.

Maybe I just haven't watched enough "THE AI ARE GOING TO KILL US ALL" movies?

You will be quite alone with your opinion. The general trend is that people are becoming more and more reluctant to have children at all.
aoeu
 
Posts: 286
Joined: Fri Dec 31, 2010 4:58 pm UTC

Re: Ethics of creating intelligences

Postby TranquilFury » Sat Jan 14, 2012 7:41 am UTC

thorgold wrote:Of course, a separate thought occurred to me today: won't AIs be superior to human intelligences? After all, an entity consisting of circuits and electricity, unbound by biological limitations, would (eventually) outclass a human mind in every area - IQ, creativity, possibly even range of emotion. Therefore, the question arises whether the creation of AI is ethical for the sake of humans.

Not necessarily; the idea of superiority is inherently subjective, and an artificial intelligence is only going to act in those areas where it finds motivation. Unless it's created with survival and reproductive instincts (if it's a distributed intelligence, a desire to add nodes is equivalent to a reproductive instinct), I'll bet humanity will outlast it.
TranquilFury
 
Posts: 126
Joined: Thu Oct 15, 2009 1:24 am UTC

Re: Ethics of creating intelligences

Postby Copper Bezel » Sat Jan 14, 2012 1:57 pm UTC

thorgold wrote:Therefore, for humanity as a species, the creation of self-aware and self-improving AIs (strong AI) would be equivalent to species suicide until the technology to bring humanity along with the machines is developed. Humans won't evolve fast enough to keep up with machines; transhumanism would be the necessity to give humanity - rather than AIs - ownership of the future.

I've said this before and I'll probably end up saying it again, but humanity will never reach a big, existential moment where our socks are smarter, prettier, and better conversationalists than we are. Advances like that take time, and we'll have lots of little decisions to make along the way, not one big one. Either we'll never invent such socks (stagnation, insurmountable technical hurdles) or we'll be smarter than them before we do (transhumanism.) But I don't see why transhumanism is anything to be afraid of.
~ I know I shouldn't use tildes for decoration, but they always make me feel at home. ~
Copper Bezel
 
Posts: 763
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Mission, Kansas, USA

Re: Ethics of creating intelligences

Postby Armanant » Sat Jan 14, 2012 7:34 pm UTC

Copper Bezel wrote:I've said this before and I'll probably end up saying it again, but humanity will never reach a big, existential moment where our socks are smarter, prettier, and better conversationalists than we are. Advances like that take time, and we'll have lots of little decisions to make along the way, not one big one. Either we'll never invent such socks (stagnation, insurmountable technical hurdles) or we'll be smarter than them before we do (transhumanism.) But I don't see why transhumanism is anything to be afraid of.


Wouldn't this happen pretty much when we make an AI that's better at making AIs than we are? Singularity and all that.
Armanant
 
Posts: 28
Joined: Sat Jul 09, 2011 9:35 pm UTC

Re: Ethics of creating intelligences

Postby Copper Bezel » Sun Jan 15, 2012 12:11 am UTC

The AI that's better at making AIs than we are is already socks. That's what I'm saying. The "singularity" is a story about what that event might look like, and I'm saying it's bollocks.
~ I know I shouldn't use tildes for decoration, but they always make me feel at home. ~
Copper Bezel
 
Posts: 763
Joined: Wed Oct 12, 2011 6:35 am UTC
Location: Mission, Kansas, USA

Re: Ethics of creating intelligences

Postby Yakk » Sun Jan 15, 2012 12:28 am UTC

Can you find the moment when machines became better at manual labor than people?

I can't. Maybe we aren't even there. But there are entire categories of manual labor that machines are better at -- and, there exists a robot factory for building factory robots.

Already, when we write AI, we toss it at optimizers which take our code and make it faster. We write computer code in abstract languages which are reinterpreted by the compiler to produce code that actually runs on a machine. Many of the most popular ones today do extreme transformations (Java/C#/Python), to the point where few people fluent in the language could even describe the machine-code level operations going on when their code is executed. Other languages go even further (Haskell, for example). In a sense, people don't write AIs -- computers do, with a bit of input from a person.
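A tiny, real instance of that (in CPython, at least): the compiler quietly rewrites arithmetic before the code ever runs, and the dis module lets you watch it happen.

Code: Select all
import dis

def seconds_per_day():
    return 60 * 60 * 24                        # written as three constants...

# ...but the disassembly shows the folded constant 86400 rather than two
# multiplications: that bit of "thinking" happened at compile time.
dis.dis(seconds_per_day)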

Even the hardware they are running on is created via an abstract language followed by machine optimizations of the instructions that the person gave the machine. No person alive fully understands a modern CPU.

The humans are, in a sense, cogs in a greater device, consisting of human organizational patterns, computer programs, computer hardware, and people. These larger constructs are, in a sense, smarter than the people in them -- but only in a sense. We've been building (computer-less) versions of this for centuries -- government bureaucracies, research labs, accounting departments, and many other such organizations are designed to solve problems that people are not individually smart enough to solve.

Scale problems keep these organizations separate to some extent. Reproduction is tricky, because these large organisms are ridiculously expensive, so there isn't much room for them to reproduce. And few of them have been able to figure out how to reliably spawn newer, smarter organizations that also do the same spawning -- these organisms also move really slowly, so there hasn't been much time for them to evolve.

Transhumanism might be reframed as the idea that a single human can be significantly augmented via similar means. Trivial levels of such augmentation are common -- clothing, cell phones, shoes, smart phones, tool belts, cars, bikes, PCs -- these all augment a human being beyond its own natural abilities.

Things get interesting when the human isn't the important part of the mix.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.
Yakk
Poster with most posts but no title.
 
Posts: 10324
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Ethics of creating intelligences

Postby thorgold » Mon Jan 16, 2012 4:25 am UTC

Copper Bezel wrote:
thorgold wrote:Therefore, for humanity as a species, the creation of self-aware and self-improving AIs (strong AI) would be equivalent to species suicide until the technology to bring humanity along with the machines is developed. Humans won't evolve fast enough to keep up with machines; transhumanism would be the necessity to give humanity - rather than AIs - ownership of the future.

I've said this before and I'll probably end up saying it again, but humanity will never reach a big, existential moment where our socks are smarter, prettier, and better conversationalists than we are. Advances like that take time, and we'll have lots of little decisions to make along the way, not one big one. Either we'll never invent such socks (stagnation, insurmountable technical hurdles) or we'll be smarter than them before we do (transhumanism.) But I don't see why transhumanism is anything to be afraid of.

Now, in line with that last sentence - "Transhumanism isn't anything to be afraid of" - my point is that, ethically, transhumanism is the only response to the creation of AI. As it stands, we will soon be capable of creating socks that think for themselves - they won't be smarter than us at that point, but they'll be capable of getting smarter. By "singularity," I'm referring to the hyper-accelerating rate of advancement in technology - AIs will accelerate an already speeding process to the point that they'll get smarter faster than we do.

Therefore, having discussed the moral implications of creating such an entity (two posts back: all "strong" intelligences get human rights), the ethical question is whether creating an AI is ethical for humanity as a species - by creating an AI, we either doom ourselves to extinction by obsolescence, transcend "proto-Homo sapiens" through artificial advancement, or unethically put limitations on AIs so that they don't outpace our natural growth.
You can refuse to think, but you can't refuse the consequences of not thinking.
thorgold
 
Posts: 280
Joined: Tue Nov 30, 2010 4:36 am UTC

Re: Ethics of creating intelligences

Postby elasto » Mon Jan 16, 2012 9:45 am UTC

I don't think it much matters if something is smarter than us - all that matters is whether it is conscious and whether it has emotions. After all, I need no more care about what the Google servers think about their work than I need care what a calculator or an abacus thinks.

Of course, until we know how we are conscious we don't know that a calculator or an abacus isn't conscious (though I'm fairly sure they're not!)

Well, having said that, if a machine has power over me, then I might just ask it if it minds me not caring about what it thinks - and may well take its answer very seriously if it tells me it's not happy :p
elasto
 
Posts: 1264
Joined: Mon May 10, 2010 1:53 am UTC
