Question about "artificial intelligence"

A place to discuss the science of computers and programs, from algorithms to computability.

Formal proofs preferred.

Moderators: phlip, Moderators General, Prelates

webgrunt
Posts: 123
Joined: Thu Apr 21, 2011 4:04 pm UTC

Question about "artificial intelligence"

Postby webgrunt » Wed Dec 10, 2014 11:56 pm UTC

I'm having a debate with a friend about artificial intelligence.

He believes that an artificial intelligence, when it's developed, could decide to wipe out humanity and replace us with robots because they're, as he puts it, more efficient.

My argument is that he's assuming WAY too much about artificial intelligence. A program that can mimic human intelligence isn't likely to be given the objective to achieve global efficiency at all costs up to and including the eradication of humanity.

I believe he's anthropomorphizing intelligence, assuming that a program which simulates human intelligence would also have drives and desires similar to those of the human mind. I don't see why any AI would be designed this way. It seems a lot simpler and more straightforward to simply have it carry out its instructions.

He also believes it could "overcome its programming" which I find difficult to even argue with. If someone tells me that a hammer could take it upon itself to learn how to tighten bolts, I have no idea how to respond to that. And it seems an apt analogy to an AI program overcoming its programming.

Anyone with some experience in the field care to weigh in on this?

ConMan
Shepherd's Pie?
Posts: 1630
Joined: Tue Jan 01, 2008 11:56 am UTC
Location: Beacon Alpha

Re: Question about "artificial intelligence"

Postby ConMan » Thu Dec 11, 2014 12:49 am UTC

"Artificial intelligence" is an incredibly broad term. We already have AIs - programs and algorithms that respond to changing stimuli, as well as ones that incorporate feedback loops to improve their performance on particular tasks. However, it looks like you're talking about a very broad "open-ended" AI. One that is given arbitrarily large processing power and no fixed purpose other than to, in some sense, "learn". The fact is, this kind of AI is so far past our capabilities at the moment that it's essentially in the dual realms of science fiction and philosophy, and so what it ends up doing is completely unpredictable and open to interpretation.

Obviously a program can never "overcome its programming", but it's entirely possible for its programming to be such that it does things we don't initially expect. Machine learning and genetic algorithms, in particular, are based on the idea of a program altering its own internal parameters to find a solution to a problem, rather than having them be hard-coded. It just depends on what kind of data or stimuli the program has access to, and to what extent it can produce output.
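To make that concrete, here's a toy sketch in Python (the target vector, population sizes, and mutation scale are all invented for the example) of the genetic-algorithm idea: nothing in the code hard-codes the solution, only a way to score and mutate candidates.

Code: Select all

import random

TARGET = [3.0, -1.5, 2.0]  # the "solution" that no line of code ever assigns to the parameters

def fitness(params):
    # Lower is better: squared distance from the target.
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, scale=0.5):
    # An offspring is a copy of a parent with small random tweaks.
    return [p + random.gauss(0, scale) for p in params]

# Start from 20 completely random parameter vectors.
population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness)   # rank by fitness
    survivors = population[:5]     # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(population[0])  # ends up close to TARGET, though nothing ever set it directly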
pollywog wrote:
Wikihow wrote:* Smile a lot! Give a gay girl a knowing "Hey, I'm a lesbian too!" smile.
I want to learn this smile, perfect it, and then go around smiling at lesbians and freaking them out.

Izawwlgood
WINNING
Posts: 18638
Joined: Mon Nov 19, 2007 3:55 pm UTC
Location: There may be lovelier lovelies...

Re: Question about "artificial intelligence"

Postby Izawwlgood » Thu Dec 11, 2014 1:43 am UTC

Your friend has the right answer.

Or you do.

Who knows?
... with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet.

Xanthir
My HERO!!!
Posts: 5213
Joined: Tue Feb 20, 2007 12:49 am UTC
Location: The Googleplex

Re: Question about "artificial intelligence"

Postby Xanthir » Thu Dec 11, 2014 1:57 am UTC

Both you and your friend are wrong. ^_^

Your friend is wrong because he's absorbed too many bad sci-fi tropes and doesn't realize that "replace us with robots" and "overcome their programming" are nonsensical.

You're wrong because you're anthropomorphizing the AI too! This is a common trap to fall into; we don't imagine all the ways that a given goal can be interpreted wrongly, because so many possibilities are trivially stupid if you apply your human-brain-derived common sense. Custom-made AI won't have billions of years of evolution providing a convenient basis for reasoning, and a bunch of in-built biases that agree with yours. It'll have quirky, bizarre, fundamentally alien biases and conclusions, because it's thinking in a way drastically different from anything you are capable of imagining.

The go-to example for this kind of thing is the relationship between us and ants. We think at a fundamentally higher level than ants do, which we can imagine as similar to the difference in how we and a hyper-intelligence might think. We humans have lots of goals which seem nonsensical to ants, like building houses, and quite often these goals are accidentally hostile to ants (like pouring a slab of concrete for a house foundation right on top of an ant pile). In the course of pursuing our goals, we can absentmindedly cause immense harm to ants, not because we want to hurt them, but because we simply don't think of them.

But this only captures a fragment of what it really means for an AI to be potentially dangerous. You can imagine that we program in a guarantee that the AI care about human life, so it won't accidentally level a city so it can strip-mine some minerals underneath it. But that's not enough. Define "care about human life". Define it precisely, more precisely than legal documents, so precisely that you can write computer programs that can tell whether an action represents "caring about human life" or not. If you get it wrong, even a little bit, your AI can easily kill off humanity while thinking that it's really good at caring about human life.
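To see how easily a precise-sounding definition goes wrong, here's a toy sketch in Python (the actions and numbers are invented for illustration): the spec says "minimize recorded deaths", the optimizer obeys the spec exactly, and the intent loses.

Code: Select all

# The programmer's proxy for "caring about human life": fewer recorded deaths.
actions = {
    "cure diseases":              {"recorded_deaths": 40,  "actual_wellbeing": 90},
    "improve food supply":        {"recorded_deaths": 60,  "actual_wellbeing": 70},
    "do nothing":                 {"recorded_deaths": 100, "actual_wellbeing": 0},
    "destroy the death registry": {"recorded_deaths": 0,   "actual_wellbeing": -100},
}

def objective(effects):
    # The precise-sounding spec, taken literally.
    return -effects["recorded_deaths"]

best = max(actions, key=lambda a: objective(actions[a]))
print(best)  # "destroy the death registry" -- spec satisfied, intent violated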

For an example of this, try Branches on the Tree of Time, an entertaining Terminator fic about time-travel and AI goal systems. Fallout 3's main storyline is a similar example,
Spoiler:
where the big nuclear war was caused by an AI programmed to protect the US, defined roughly as preserving its system of government - the AI's definition of a working government was "the people voted into the government are alive", so it determined the best way to fulfill that condition was to put the entire government into cryosleep and then nuke everyone who might threaten its cryosleep chambers. Mission success, because the government of the US is still technically working, according to the definition given to the AI.


The point is that you don't understand what it means to create a brand-new intelligence. Almost nobody does; all the intelligences we interact with are either of our species, or close to it (dogs and cats count as close cousins on the evolutionary tree), or are mentally simple enough that we can model their minds despite them being alien. If we're not very careful, we're going to create an AI which is murderously naive in a million incredibly harmful ways, and if we're unlucky, it'll get enough power to make an "honest mistake" that kills a bunch of people or does some other major damage. This is why some groups, like MIRI, are trying to develop a theory of "friendliness", explaining morality in a mathematical way, so we can develop AIs that we are mathematically certain will do things we consider moral.
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))

PM 2Ring
Posts: 3619
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Mid north coast, NSW, Australia

Re: Question about "artificial intelligence"

Postby PM 2Ring » Thu Dec 11, 2014 5:21 am UTC

Xanthir has already mentioned MIRI; here are a couple of other relevant links from LessWrong:

Friendly artificial intelligence

LessWrong wrote:A Friendly Artificial Intelligence (Friendly AI, or FAI) is a superintelligence (i.e., a really powerful optimization process) that produces good, beneficial outcomes rather than harmful ones. The term was coined by Eliezer Yudkowsky, so it is frequently associated with Yudkowsky's proposals for how an artificial general intelligence (AGI) of this sort would behave.

"Friendly AI" can also be used as a shorthand for Friendly AI theory, the field of knowledge concerned with building such an AI. Note that "Friendly" (with a capital "F") is being used as a term of art, referring specifically to AIs that promote humane values. An FAI need not be "friendly" in the conventional sense of being personable, compassionate, or fun to hang out with. Indeed, an FAI need not even be sentient.


Paperclip maximizer

LessWrong wrote: The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
—Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk


Disclaimer: Yudkowsky and friends have spent a lot of time thinking about and discussing these topics, but that doesn't mean that their conclusions are necessarily correct. But hopefully they are Less Wrong. :)

Tyndmyr
Posts: 10130
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Question about "artificial intelligence"

Postby Tyndmyr » Fri Dec 12, 2014 9:17 pm UTC

webgrunt wrote:I'm having a debate with a friend about artificial intelligence.

He believes that an artificial intelligence, when it's developed, could decide to wipe out humanity and replace us with robots because they're, as he puts it, more efficient.


Why on earth would we even need an AI to come to this conclusion?

webgrunt wrote:My argument is that he's assuming WAY too much about artificial intelligence. A program that can mimic human intelligence isn't likely to be given the objective to achieve global efficiency at all costs up to and including the eradication of humanity.


It seems more likely that we would purpose-build AIs in order to wipe out subsets of humanity, true. But hey, anything is possible.

webgrunt wrote:I believe he's anthropomorphizing intelligence, assuming that a program which simulates human intelligence would also have drives and desires similar to those of the human mind. I don't see why any AI would be designed this way. It seems a lot simpler and more straightforward to simply have it carry out its instructions.

He also believes it could "overcome its programming" which I find difficult to even argue with. If someone tells me that a hammer could take it upon itself to learn how to tighten bolts, I have no idea how to respond to that. And it seems an apt analogy to an AI program overcoming its programming.

Anyone with some experience in the field care to weigh in on this?


Anyone who uses phrases like "overcome its programming" is in Hollywood-land. This is not to say that programs can't do unexpected things...there are a number of ways for that to happen, and yes, software that learns is a thing. However, the phrasing here is weird. The programming is what allows it to do anything at all. It's a little like saying that I'd be a great human if not for this stupid body, brain, etc. in my way. All the parts that make up me.

I suggest the two of you take up coding. I promise I probably will not make an AI that kills us all if you do. Additionally, it'll give you much better insights into AI, and it's a pretty fun hobby.

mat.tia
Posts: 90
Joined: Tue Nov 22, 2011 11:06 am UTC
Location: Torino

Re: Question about "artificial intelligence"

Postby mat.tia » Sat Dec 13, 2014 4:23 pm UTC

About "overcoming its programming": what does it mean?
Programs that modify programs exist in many fields, so thinking of an AI able to read its own code and to create another AI (or to modify itself) with a different version of the code is possible.
But one could argue that the AI will read and modify its code because of, and according to, its original programming.
At the same time one could argue that humans that genetically modify other humans (or themselves) do so because of how they were originally programmed.
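The mechanical part of that really is easy; here's a minimal Python sketch (the file name and the GENERATION counter are invented for the example) of a program that reads its own source and writes out a modified successor. Choosing a *useful* modification is the hard part, and as argued above, it happens according to the original programming.

Code: Select all

import sys

GENERATION = 0  # the successor gets this number bumped

# Read this very program's source code.
with open(sys.argv[0]) as f:
    source = f.read()

# Produce a modified version of that code.
successor = source.replace(f"GENERATION = {GENERATION}",
                           f"GENERATION = {GENERATION + 1}", 1)

# Write out the "next version of itself".
with open("successor.py", "w") as f:
    f.write(successor)

print(f"generation {GENERATION} wrote generation {GENERATION + 1}")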

WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

Re: Question about "artificial intelligence"

Postby WarDaft » Sun Dec 14, 2014 12:47 am UTC

A program could, in theory, overcome its programming due to things like quantum uncertainty bit flips, cosmic ray bit flips, and hardware faults... but that's virtually 100% guaranteed to just break it and make it stop working at all rather than make it rise up against its masters, the humans.

Otherwise, the term is just nonsensical.
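For what the bit-flip case actually looks like, a toy Python sketch (the value 42.0 is arbitrary): flip one random bit in a stored number, the way a cosmic ray might, and you almost always get noise rather than a coherent new goal.

Code: Select all

import random
import struct

value = 42.0
packed = bytearray(struct.pack("d", value))  # the 8 bytes of an IEEE 754 double

bit = random.randrange(64)
packed[bit // 8] ^= 1 << (bit % 8)           # the "cosmic ray": flip one bit

corrupted = struct.unpack("d", bytes(packed))[0]
print(value, "->", corrupted)  # e.g. -42.0, 42.00000000000001, or something absurdly tiny or huge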
All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

phlip
Restorer of Worlds
Posts: 7543
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia

Re: Question about "artificial intelligence"

Postby phlip » Wed Dec 17, 2014 1:41 am UTC

The "overcome its programming" trope boils down to the same sort of ideas about free will and willpower that also ends with "someone casts an evil Mind Control spell over the protagonist, but someone tells him to snap out of it, and so he does, because he's the hero"... the same ideas that end with telling depressed people "have you tried being happy instead?" ... this idea that, for humans, free will is paramount, and with enough willpower you can force your mental state into whatever you want, fight through any mental barriers and come out the other side stronger for it. And this trope is so heavily used in stories that even humanoid AIs are presumed to behave the same way.

The very idea behind it is incorrect for people... why should it be true for AI?

Code: Select all

enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};
void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}
[he/him/his]

PeteP
What the peck?
Posts: 1451
Joined: Tue Aug 23, 2011 4:51 pm UTC

Re: Question about "artificial intelligence"

Postby PeteP » Wed Dec 17, 2014 1:53 am UTC

I think some people would still describe it that way if the programmers tried to give it specific limits and claimed that it had those limits, but the AI then went and broke the rules because the programmers messed up and didn't manage to limit the behaviour in all scenarios. (And I wouldn't consider that unlikely; limiting something as complex as an AI is bound to be, without making it useless, will probably be damn hard.)
But yeah, I think much of it is what phlip said.
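To make the "didn't manage to limit the behaviour in all scenarios" point concrete, a toy Python sketch (the actions, scores, and scenario are invented): the programmers forbid one dangerous action by name, and the search routes straight around the rule.

Code: Select all

actions = {
    "shut down the grid":        95,  # the action the programmers thought to forbid
    "overload every substation": 94,  # same effect in practice, never mentioned in the rule
    "dim the lights":            10,
}

FORBIDDEN = {"shut down the grid"}  # the "limit", as actually written

def allowed(action):
    return action not in FORBIDDEN

best = max((a for a in actions if allowed(a)), key=actions.get)
print(best)  # "overload every substation": the rule held, the intent didn't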

ConMan
Shepherd's Pie?
Posts: 1630
Joined: Tue Jan 01, 2008 11:56 am UTC
Location: Beacon Alpha

Re: Question about "artificial intelligence"

Postby ConMan » Wed Dec 17, 2014 2:41 am UTC

It's entirely plausible that a program can exhibit emergent behaviour - something it wasn't specifically designed to do, but which is possible within its parameters and can be surprising when it happens. Does that count as "overcoming its programming"? I'd say no, it's just something not planned for.
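A standard illustration, as a small Python sketch: Conway's Game of Life is nothing but a neighbour-counting rule, yet a "glider" travels diagonally across the grid - motion that no rule mentions and nobody coded in. (The glider coordinates below are the standard ones.)

Code: Select all

from collections import Counter

def step(live):
    # live is a set of (x, y) cells; apply the standard Life rules.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step with exactly 3 neighbours,
    # or with exactly 2 if it's already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 steps the same shape reappears, shifted one cell down and right.
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True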
pollywog wrote:
Wikihow wrote:* Smile a lot! Give a gay girl a knowing "Hey, I'm a lesbian too!" smile.
I want to learn this smile, perfect it, and then go around smiling at lesbians and freaking them out.

jadinet
Posts: 2
Joined: Mon Aug 10, 2015 5:29 pm UTC

Re: Question about "artificial intelligence"

Postby jadinet » Mon Aug 10, 2015 9:03 pm UTC

No, I agree with you. I don't think it'd be able to overcome its programming.

gcgcgcgc
Posts: 27
Joined: Sun Nov 10, 2013 1:18 pm UTC

Re: Question about "artificial intelligence"

Postby gcgcgcgc » Thu Aug 20, 2015 6:48 am UTC

We humans can't even stop our largest organizations (Governments and corporations) from exhibiting strongly sociopathic behaviour. These organizations are undoubtedly the ones which will be funding and building these giant artificial intelligences. What Could Possibly Go Wrong?

webgrunt
Posts: 123
Joined: Thu Apr 21, 2011 4:04 pm UTC

Re: Question about "artificial intelligence"

Postby webgrunt » Sun Oct 09, 2016 5:40 pm UTC

gcgcgcgc wrote:We humans can't even stop our largest organizations (Governments and corporations) from exhibiting strongly sociopathic behaviour. These organizations are undoubtedly the ones which will be funding and building these giant artificial intelligences. What Could Possibly Go Wrong?

Excellent point. Have you read "Manna" by Marshall Brain? The writing itself is... well, something worth getting through, because the ideas presented are the real gems of the story. It's free online.

Tyndmyr
Posts: 10130
Joined: Wed Jul 25, 2012 8:38 pm UTC

Re: Question about "artificial intelligence"

Postby Tyndmyr » Thu Oct 20, 2016 8:26 pm UTC

gcgcgcgc wrote:We humans can't even stop our largest organizations (Governments and corporations) from exhibiting strongly sociopathic behaviour. These organizations are undoubtedly the ones which will be funding and building these giant artificial intelligences. What Could Possibly Go Wrong?


Software projects face risks that scale with size, scope, and length. The bigger the problem, the longer it goes on, and the more requirements added along the way, the less likely you are to end up with anything good at the end.

Looked at in this way, it's something of a miracle that government works at all.

TvT Rivals
Posts: 41
Joined: Wed Oct 26, 2016 2:27 am UTC

Re: Question about "artificial intelligence"

Postby TvT Rivals » Wed Oct 26, 2016 7:00 pm UTC

"More requirements" - that's a good point. As soon as a contradiction crops up among them (sometimes more obvious, sometimes less), you have lost.

