Can a computer have free will?

For the serious discussion of weighty matters and worldly issues. No off-topic posts allowed.

Moderators: Azrael, Moderators General, Prelates

Can a computer have free will?

Yes, if humans can, so can computers. 63 votes (55%)
No, only humans (or similar biological entities) can have free will. 17 votes (15%)
No, nothing and no one really has free will. 35 votes (30%)

Total votes: 115

deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Can a computer have free will?

Postby deepone » Tue Jan 22, 2013 7:25 pm UTC

About the poll options: I would suggest picking option 1 if you consider both options 1 and 3 reasonable, since option 1 is more directly relevant to this thread.

This issue is tightly related to the definition of free will, for which there already is a thread, but I hope it's acceptable to start a new thread to discuss this narrower issue in a more constrained context.

When discussing issues like free will it is often problematic to agree on the details of related concepts, such as choice and the self. I'd like to suggest that this thread be used to attempt a discussion where we explicitly ground statements in algorithms and logic commonly used in computer science. That way, we can make it clear what we're talking about. For example, what kind of algorithm would be interesting to consider as making a choice? A simple if-else clause? An evolutionary algorithm? Why?
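To make that concrete: the smallest candidate for a "choice" is a single branch on input. A toy sketch (Python; the thermostat is my own illustrative example, not any real system):

```python
def thermostat_choice(temperature, setpoint=20.0):
    """The simplest candidate for a 'choice': one if-else branch.

    Does selecting between two actions based on input already count
    as choosing, or is it a mere lookup? (Illustrative example only.)
    """
    if temperature < setpoint:
        return "heat_on"
    return "heat_off"
```

An evolutionary algorithm, by contrast, would modify the rule itself over time, which is arguably closer to what we usually mean by choosing.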

Please try to keep issues about free will that are not grounded in computer algorithms out of this thread (try the "Definition of Free Will" thread). If we do that, I think a new thread is warranted.

Myself, I believe that any definition of free will that would make sense for humans would also (potentially) make sense for computers (AIs). If you disagree, please refer to computer algorithms (preferably advanced ones) that you have considered but that you do not think fit the bill.

I would say that anything that has a goal (some condition that is not true that should be made true) has a will, and that the degree to which this goal is independent of the surrounding environment can be taken as a sensible measure of "freedom". A computer certainly can have goals but it may be interesting to discuss what's needed to make them interesting (e.g., sophisticated plans?). Independence is also interesting to discuss. I'd suggest two types of independence to start with: resistance to outside influence here and now (essentially the tendency to change rules/goals depending on input), and dependence on outside influence for initiation (e.g., a programmer). One interesting point is how independence develops over time as an algorithm runs. E.g., to what degree are goals developed through an evolutionary algorithm independent of the initial programming?
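To pin down "has a goal" in code: a minimal sketch of an agent that acts until a currently-false condition becomes true (Python; all names are invented for illustration):

```python
def run_agent(state, goal_test, step, max_iters=100):
    """Toy 'will': act until a goal condition (currently false) holds.

    state: initial world state
    goal_test: predicate defining the goal (a condition to be made true)
    step: the agent's action rule, mapping a state to a new state
    """
    for _ in range(max_iters):
        if goal_test(state):
            return state  # goal achieved
        state = step(state)
    return state  # gave up

# The 'goal' of reaching at least 10, with doubling as the only action:
result = run_agent(1, lambda s: s >= 10, lambda s: s * 2)  # -> 16
```

The independence question then becomes: who supplied `goal_test`, and can the running system rewrite it?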

leady
Posts: 1592
Joined: Mon Jun 18, 2012 12:28 pm UTC

Re: Can a computer have free will?

Postby leady » Tue Jan 22, 2013 7:53 pm UTC

One caveat to my vote: computers, under the current understanding and implementation of their mathematical logic, cannot have free will. I'm open to new frontiers changing that perspective.

ucim
Posts: 6715
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can a computer have free will?

Postby ucim » Tue Jan 22, 2013 8:14 pm UTC

Yes, this issue is tightly related to the definition of free will. Since computers by nature are deterministic, in order to apply the concept of free will to this deterministic machine we would have to look at the computer as a whole. "If it walks like a duck..."

A sufficiently complex system would certainly walk like a duck. But then that poses the question of what is "sufficiently complex", and what this is measured in relation to. I presume it would be measured with relation to people (who would be making the determination), using the scale:
  • It's doing just what I expect.
  • It's doing sorta what I hoped.
  • Why is it doing that?
  • What the f...
  • Head for the hills!
Now while computers are certainly deterministic, they are also increasingly sensitive to initial conditions. Computers boot differently every time (even though they "shouldn't"). Maybe this is because of timing issues: where the hard disk happens to park after spin-down, whether any of a number of scheduled operations are pending, or many other things (I'd love to be enlightened here). They are also increasingly responsive to outside conditions, such as networks and data. They are not simple tools like hammers any more.

They make choices. (at least in the weak sense)

deepone wrote:I would say that anything that has a goal (some condition that is not true that should be made true) has a will, and that the degree to which this goal is independent of the surrounding environment can be taken as a sensible measure of "freedom".
The example that comes most readily to mind is a spreadsheet's goal-seek function. Would that qualify?

I'd say that networked computers (and the people who run them and are dependent on their output) are the most significant AI development there is, and it is developing all by itself. Considered as a single entity, I'd say it definitely has free will. I'd even go as far as to say we probably can't comprehend what "it" is "doing", any more than a stomach cell understands what reading a newspaper is.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

dudiobugtron
Posts: 1098
Joined: Mon Jul 30, 2012 9:14 am UTC
Location: The Outlier

Re: Can a computer have free will?

Postby dudiobugtron » Tue Jan 22, 2013 10:26 pm UTC

There's quite a bit of discussion on whether humans are just complex computers, or whether there's something 'magic' about us that computers could never emulate - specifically this thread:
viewtopic.php?f=12&t=92819

There's quite a lot of reading there, but it does cover a number of the concepts you appear to be interested in.

deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Wed Jan 23, 2013 2:40 am UTC

ucim wrote:Yes, this issue is tightly related to the definition of free will. Since computers by nature are deterministic, in order to apply the concept of free will to this deterministic machine we would have to look at the computer as a whole. "If it walks like a duck..."

A sufficiently complex system would certainly walk like a duck. But then that poses the question of what is "sufficiently complex", and what this is measured in relation to. I presume it would be measured with relation to people ...

I would like it if we could drop the issue of determinism, given that we assume that computers are deterministic (say we discuss an ideal computer). I do agree that looking at a system as a whole, and the complexity contained within that system, are key issues, but I think terms other than determinism would be more suitable. Complexity is interesting in itself, in particular irreducible complexity or compressibility.
ucim wrote:Now while computers are certainly deterministic, they also are increasingly sensitive to initial conditions. Computers boot differently every time (even though they "shouldn't"). Maybe this is because of timing issues, where the hard disk happens to park after spindown, whether any number of scheduled operations are pending, or many other things (I'd love to be enlightened here). They are also increasingly responsive to outside conditions, such as networks and data. They are not simple tools like hammers any more.

This line of reasoning seems to target the determinism aspect of free will by pointing out that computers can be chaotic and (very) difficult to predict. Sure, that may be true, but I don't think it's all that interesting, since it seems to rely on aspects of real computers that differ from ideal computers. You might as well plug a random input (e.g., from radioactive decay) into your ideal deterministic program and say that your freedom comes from this. But what do you think about the "free = independent" interpretation, and explicitly dropping the issue of determinism (taking it for granted for purposes of this discussion)?
ucim wrote:
deepone wrote:I would say that anything that has a goal (some condition that is not true that should be made true) has a will, and that the degree to which this goal is independent of the surrounding environment can be taken as a sensible measure of "freedom".
The example that comes most readily to mind is a spreadsheet's goal-seek function. Would that qualify?

I guess I'm not as familiar with spreadsheets as you are. :) What algorithm is this?
dudiobugtron wrote:There's quite a bit of discussion on whether a humans are just complex computers, or whether there's something 'magic' about us that computers could never emulate - specifically this thread:
viewtopic.php?f=12&t=92819

There's quite a lot of reading there, but it also does cover a number of the concepts you appear to be interested in.

Well, honestly I'm not looking for reading material. These questions have been among my primary interests for more than a decade. I am looking for live discussion and personal opinions, so if you want to highlight something from that thread that is particularly relevant for your view on this issue, please do.

ucim
Posts: 6715
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can a computer have free will?

Postby ucim » Wed Jan 23, 2013 3:14 am UTC

deepone wrote:I would like it if we could drop the issue of determinism, given that we assume that computers are deterministic...
Fine with me. I merely used the statement to justify my holistic approach here. Better to be explicit than to assume, no? And for the record, I also don't like the word "deterministic" because it carries too much baggage. I agree we should explicitly drop determinism, now that it has been explicitly mentioned (and put in its place).

deepone wrote:This line of reasoning seems to target the determinism aspect of free will by pointing out that computers can be chaotic and (very) difficult to predict. Sure, that may be true, but I don't think it's all that interesting since it seems to rely on aspects of real computers that differ from ideal computers.
My point here is simply that computers can exhibit chaotic behavior, which is interesting. This is true of ideal computers in a non-ideal or unpredictable environment, too. One of the differences between living things and computers is that life is analog, while computers are discrete. I am merely pointing out that the impact of this difference is greatly lessened by the computer's interactions with the real world, and by its own complexity. This is more easily seen in the behavior of networked computers, especially those tasked with some goal (minimizing network traffic spikes, for example). So, to me, the difference between life and computers is not a fundamental one but an incremental one.

deepone wrote:I guess I'm not as familiar with spreadsheets as you are. :) What algorithm is this?
A spreadsheet consists of a number of cells arranged (usually) in a grid, with many cells dependent on the values in other cells through formulas which can be quite complex, especially when the values of cells used in a calculation in turn depend on values in other cells. Changing one cell causes a cascade of changes in other cells as their values get recalculated.

You can then ask the spreadsheet to try to make two cells equal, or to make one cell as great as possible, or whatever, by figuring out what the values in all the other cells should be to achieve this result. You might want to maximize profit, or set the sea level equal to the highest treetop, or set a certain rate of temperature increase, and then see what conditions would lead to that result. This is called "goal seeking". The spreadsheet would try various values using its own algorithms to figure out the best approach, and come up with a result.
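In algorithmic terms, goal seeking is root finding over the cell-dependency graph. A toy version using bisection (my own sketch, assuming a monotonic formula; real spreadsheets use their own, more general solvers):

```python
def goal_seek(f, target, lo, hi, tol=1e-9):
    """Find an input x in [lo, hi] such that f(x) ~= target, assuming
    f is monotonic on the interval (simple bisection). A toy stand-in
    for a spreadsheet's goal-seek feature, not any real implementation."""
    increasing = f(lo) < f(hi)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if abs(f(mid) - target) < tol:
            return mid
        if (f(mid) < target) == increasing:
            lo = mid  # answer lies in the upper half
        else:
            hi = mid  # answer lies in the lower half
    return (lo + hi) / 2.0

# What input makes x**2 equal 9?  goal_seek(lambda x: x * x, 9.0, 0.0, 10.0) -> ~3.0
```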

Granted, the user is setting the goal there. But now suppose we network all the computers that have information of relevance (for example, purchases on Amazon vs book prices and demographics) and tell the computer to maximize profit. There will be feedback loops (as prices change, people's book purchasing changes too) which cause behavior (such as closing all the stores in the South) which was not specifically requested, but because the complexity of the system exceeds our own understanding of the system, could be quite surprising.

A board of directors might spend weeks deciding whether or not to close a store, or pull out of an entire region, and in doing so, they would be making a decision, of their own free will. A suitable computer network, given simple but interdependent instructions, could end up doing the same thing. I think that definitely qualifies as making a decision, and as having free will.

Does the computer network "know" what it is doing? Hard to say - you'd have to define what is meant by "know". And although we communicate with individual computers via the terminal, we are not really interacting with the network itself. In the book "Gödel, Escher, Bach" (highly recommended) this is akin to the difference between talking to an ant and talking to the ant colony.

Off topic, but not irrelevant to it, look at bird flocking behavior - very complex flocking, as it turns out, can be explained by each bird having a very simple algorithm, but it's the interactions between them that causes the richness of the behavior.
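(That's the "boids"-style result: each bird follows only simple local rules, and the flock-level behavior emerges. A radically stripped-down 1-D sketch of the cohesion rule alone, purely illustrative:)

```python
def flock_step(positions, cohesion=0.1):
    """One step of a drastically simplified flocking rule: each bird
    moves a little toward the flock's centre. (Real flocking models
    combine separation, alignment and cohesion; this keeps cohesion only.)"""
    centre = sum(positions) / len(positions)
    return [p + cohesion * (centre - p) for p in positions]

# No bird knows about 'the flock', yet repeated local steps pull it together:
positions = [0.0, 10.0, 20.0]
for _ in range(50):
    positions = flock_step(positions)
# the spread shrinks from 20.0 to roughly 0.1
```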

Computers, like people, exist in an environment with which they interact. This interaction is where the interesting parts lie, IMHO.

Jose

deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Wed Jan 23, 2013 3:56 am UTC

ucim wrote:...

I agree with you on much of what you write but I'd still like to consider a more "pure" version of the problem. Imagine an AI that only interfaces with a virtual environment, removing issues of interacting with a chaotic society or an analog reality. (By the way, it is not a given that reality is analog, but that's off topic). Can it still have (as much?) free will?
ucim wrote:Does the computer network "know" what it is doing? Hard to say - you'd have to define what is meant by "know". And although we communicate with individual computers via the terminal, we are not really interacting with the network itself. In the book "Gödel, Escher, Bach" (highly recommended) this is akin to the difference between talking to an ant and talking to the ant colony.

Well, humans don't know what they're doing! (There may be theories that may be correct, but in general we don't know). I don't think knowing what you do has anything to do with free will.
I believe I've read that book, but it was a long time ago and I don't remember many details.
ucim wrote:Off topic, but not irrelevant to it, look at bird flocking behavior - very complex flocking, as it turns out, can be explained by each bird having a very simple algorithm, but it's the interactions between them that causes the richness of the behavior.

Yes, that is a very nice algorithm. :)
ucim wrote:Computers, like people, exist in an environment with which they interact. This interaction is where the interesting parts lie, IMHO

I agree with this to a large degree, in general. But I don't think that it's central to the issue of free will, really. Do you?

jules.LT
Posts: 1539
Joined: Sun Jul 19, 2009 8:20 pm UTC
Location: Paris, France, Europe

Re: Can a computer have free will?

Postby jules.LT » Wed Jan 23, 2013 10:18 am UTC

I think that the way humans think could be modeled in a computer, so it follows that a computer (or at least this model) can have free will if a human can.
A computer/program could also have free will in other ways, depending on how you define free will.

But I don't think we can get much more specific without a precise definition of free will...
Bertrand Russell wrote:Not to be absolutely certain is, I think, one of the essential things in rationality.
Richard Feynman & many others wrote:Keep an open mind – but not so open that your brain falls out

deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Wed Jan 23, 2013 11:41 am UTC

jules.LT wrote:I think that the way humans think could be modeled in a computer, so it follows that a computer (or at least this model) can have free will if a human can.
A computer/program could also have free will in other ways, depending on how you define free will.

But I don't think we can get much more specific without a precise definition of free will...

Well, how about considering a program or an algorithm to be a definition of free will? That is, if we can specify a type of program or algorithm that we would sensibly say has free will, then that can be taken as a definition of free will, and perhaps a more exact definition than can be given in English. Or what do you think? Just as you would arrive at any other definition by coming up with a reasonable description and discussing how it matches what we might want free will to mean, we can do this directly with computer programs. That is, we can ask "is it reasonable to say that this computer program has free will" without any predefined explicit definition, just as we could ask "is this definition xyz of free will reasonable" in English (also without any predefined explicit definition).

I agree that computers could in principle model human thought, but I think that it is interesting to consider "simpler" models. For example, I think it's interesting to discuss to what degree a chess-playing AI has free will, and to formulate directly why that would be a good or bad definition of free will.

Trebla
Posts: 386
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Can a computer have free will?

Postby Trebla » Wed Jan 23, 2013 12:38 pm UTC

Just out of curiosity, how would you describe current human free will differing from current computer free will (or current amoeba free will, or virus free will, or slug free will, but those are OT)? Maybe this question would be better in the free will thread, but since this one is specifically asking about computers compared to humans, it seems understanding the current gap is an on topic question.

jules.LT
Posts: 1539
Joined: Sun Jul 19, 2009 8:20 pm UTC
Location: Paris, France, Europe

Re: Can a computer have free will?

Postby jules.LT » Wed Jan 23, 2013 1:04 pm UTC

I'm not sure the comparison with humans can avoid vastly overlapping the other thread...

One interesting question that falls within the scope of this thread without encroaching too much on the other one is
-> Why doesn't a chess computer have free will? (I think we all agree that it doesn't)
It has goals, makes analyses of given situations, simulates the outcome of different actions and selects one on the basis of this "reflection" (ok, ucim: "selection" has less baggage than "choice"; use that word, please: "weak choice" just feels wrong).

I'd say that it is because it has to do with how its goals are built-in.
The chess computer has the ultimate goal of winning. The (only?) subgoal would be achieving superior strategic positions (this includes the taking of pieces). I don't think it dynamically assigns new goals or subgoals for itself. Would that or a stricter variation of that be enough for us to consider it as having free will?

deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Wed Jan 23, 2013 2:22 pm UTC

Trebla wrote:Just out of curiosity, how would you describe current human free will differing from current computer free will (or current amoeba free will, or virus free will, or slug free will, but those are OT)? Maybe this question would be better in the free will thread, but since this one is specifically asking about computers compared to humans, it seems understanding the current gap is an on topic question.

Going into this may take us off-topic, yes, but very briefly: I consider all forms of intelligence to be about predicting what might happen, and then the key difference becomes what and how much you can predict, e.g., how far into the future. Maybe I'll get back to this from another starting point (focused on computers and intelligent algorithms) later.
jules.LT wrote:The chess computer has the ultimate goal of winning. The (only?) subgoal would be achieving superior strategic positions (this includes the taking of pieces). I don't think it dynamically assigns new goals or subgoals for itself. Would that or a stricter variation of that be enough for us to consider it as having free will?

Before I try to give a longer answer to this - what do you mean by dynamically assigning new goals? Can you imagine an algorithm for that? Or describe it in a way that can be clearly related to algorithms?

jules.LT
Posts: 1539
Joined: Sun Jul 19, 2009 8:20 pm UTC
Location: Paris, France, Europe

Re: Can a computer have free will?

Postby jules.LT » Wed Jan 23, 2013 3:10 pm UTC

deepone wrote:Before I try to give a longer answer to this - what do you mean by dynamically assigning new goals? Can you imagine an algorithm for that? Or describe it in a way that can be clearly related to algorithms?

I mean that the algorithm or being should:
- Have intermediary objectives that advance its ultimate purpose
- Organize its tasks in an appropriate manner in order to advance towards these goals
- Set new goals/subgoals for itself, rehierarchize or abandon them dynamically (as a response to the environment/inputs) in order to better achieve higher-order goals or its ultimate purpose
- Reorganize its tasks to advance towards the new goals
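A minimal sketch of what those points might look like in code, with subgoals as dynamically re-prioritized entries (Python; all names are invented, not any real planner):

```python
class GoalSet:
    """Toy dynamic goal management: subgoals can be added, re-prioritized
    ('rehierarchized'), or abandoned in response to input. Illustrative only."""

    def __init__(self):
        self.goals = {}  # name -> priority (lower number = more urgent)

    def set_goal(self, name, priority):
        self.goals[name] = priority       # add a new subgoal, or rehierarchize

    def drop_goal(self, name):
        self.goals.pop(name, None)        # abandon a subgoal

    def next_goal(self):
        if not self.goals:
            return None
        return min(self.goals, key=self.goals.get)  # most urgent first

agent = GoalSet()
agent.set_goal("win_game", 10)
agent.set_goal("develop_pieces", 1)
# input arrives: the king is threatened, so the hierarchy is reshuffled
agent.set_goal("defend_king", 0)
# next_goal() is now "defend_king"
```

The open question is whether calling `set_goal` from within the program's own logic counts as "setting goals for itself".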

deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Wed Jan 23, 2013 3:38 pm UTC

jules.LT wrote:
deepone wrote:Before I try to give a longer answer to this - what do you mean by dynamically assigning new goals? Can you imagine an algorithm for that? Or describe it in a way that can be clearly related to algorithms?

I mean that the algorithm or being should:
- Have intermediary objectives that advance its ultimate purpose
- Organize its tasks in an appropriate manner in order to advance towards these goals
- Set new goals/subgoals for itself, rehierarchize or abandon them dynamically (as a response to the environment/inputs) in order to better achieve higher-order goals or its ultimate purpose
- Reorganize its tasks to advance towards the new goals

These points seem like they should often be there in a chess AI. Intermediate objectives and task organization sound like plans to me, and rehierarchizing and abandoning old subgoals or reorganizing tasks sound like re-planning. Do you have another view?

I prefer to consider "free will" as a matter of degree, and although a chess AI may have so low a degree of free will as to make the term dubious I do think it is not obvious what the difference in kind is to an imagined advanced AI that we would more readily consider having free will.

LaserGuy
Posts: 4570
Joined: Thu Jan 15, 2009 5:33 pm UTC

Re: Can a computer have free will?

Postby LaserGuy » Wed Jan 23, 2013 3:49 pm UTC

jules.LT wrote:I'm not sure the comparison with humans can avoid vastly overlapping the other thread...

One interesting question that falls within the scope of this thread without encroaching too much on the other one is
-> Why doesn't a chess computer have free will? (I think we all agree that it doesn't)
It has goals, makes analyses of given situations, simulates the outcome of different actions and selects one on the basis of this "reflection" (ok, ucim: "selection" has less baggage than "choice"; use that word, please: "weak choice" just feels wrong).

I'd say that it is because it has to do with how its goals are built-in.
The chess computer has the ultimate goal of winning. The (only?) subgoal would be achieving superior strategic positions (this includes the taking of pieces). I don't think it dynamically assigns new goals or subgoals for itself. Would that or a stricter variation of that be enough for us to consider it as having free will?


I don't think chess computers even go so far as looking for superior strategic positions per se. At its simplest, a chess algorithm is a program that looks at every possible available move and asks the question "After N moves, which of my current moves will give me the highest probability of winning?" The moves it chooses (assuming the algorithms are good) happen to be superior strategic positions because those are the ones that are most likely to lead it to win. While someone with experience in such matters would probably be able to answer better, I strongly suspect that these algorithms don't work in terms of goals or even strategy--the game is completely abstracted into an optimization calculation.
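That optimization has a standard textbook form, minimax search: back up a score from hypothetical futures, with no explicit notion of "goal" at all. An abstract sketch over a toy game tree (not a real chess engine; all names are mine):

```python
def minimax(state, depth, maximizing, moves, value):
    """Abstract minimax: look ahead `depth` plies and back up the best
    achievable score, assuming the opponent also plays optimally.
    `moves(state)` lists successor states; `value(state)` scores a leaf.
    (Illustrative; real engines add alpha-beta pruning, quiescence, etc.)"""
    successors = moves(state)
    if depth == 0 or not successors:
        return value(state)
    scores = [minimax(s, depth - 1, not maximizing, moves, value)
              for s in successors]
    return max(scores) if maximizing else min(scores)

# A two-ply toy tree: we pick a branch, then the opponent picks the leaf worst for us.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = minimax("root", 2, True,
               lambda s: tree.get(s, []),
               lambda s: leaf_scores.get(s, 0))  # -> 3 (branch "a")
```

Note there is nothing here you could point to and call a "strategic goal"; it is value propagation all the way down.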

I think chess is much too limited a problem to be able to say anything about either intelligence or free will. I think you'd find the exercise far more interesting/productive if you could put the computer in a situation where 1) it has to react to events that it has never "seen" before, and 2) the rules available to it are not so rigidly defined: some can be bent or broken or have ambiguities associated with them.

jules.LT
Posts: 1539
Joined: Sun Jul 19, 2009 8:20 pm UTC
Location: Paris, France, Europe

Re: Can a computer have free will?

Postby jules.LT » Wed Jan 23, 2013 4:46 pm UTC

@Laserguy:
No computer can simulate every possibility through to a win or loss. They have to evaluate the value of strategic positions between the current turn and the end of the game.
Chess does present the computer with situations it has "never seen before": it is not possible to encode every possible chess position. The memory would probably take more physical space than the universe has.

deepone wrote:These points seem like they should often be there in a chess AI. Intermediate objectives and task organization sound like plans to me, and rehierarchizing and abandoning old subgoals or reorganizing tasks sound like re-planning. Do you have another view?

I prefer to consider "free will" as a matter of degree, and although a chess AI may have so low a degree of free will as to make the term dubious I do think it is not obvious what the difference in kind is to an imagined advanced AI that we would more readily consider having free will.

I was only talking about dynamic goal-setting.
I agree that free will is a matter of degree, but I think that this only comes into play when we consider how free the will is, not what kind of structure it has.

What kind of procedures would you call "re-planning"? I don't think that the computer can be said to change its objectives: as far as I know, it simulates possible board evolutions and evaluates the strategic value of each, then picks the highest.
Last edited by jules.LT on Wed Jan 23, 2013 4:58 pm UTC, edited 2 times in total.

LaserGuy
Posts: 4570
Joined: Thu Jan 15, 2009 5:33 pm UTC

Re: Can a computer have free will?

Postby LaserGuy » Wed Jan 23, 2013 4:54 pm UTC

jules.LT wrote:No computer can simulate every possibility through to a win or loss. They have to evaluate the value of strategic positions between the current turn and the end of the game.
Chess does present the computer with situations it has "never seen before": it is not possible to encode every possible chess position.


They have an algorithm that evaluates the strength of a move in an arbitrary chess position. It literally doesn't matter where the pieces on the board are positioned: as long as the position is legal, the computer will be able to spit out the best available move to whatever depth it is able to go. You don't need to encode every position.
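Concretely, the core of that is a static evaluation function that can score any legal position it has never seen. The classic toy version just counts material (the piece weights are the conventional ones; the code itself is my sketch):

```python
# Conventional material weights (the king is not scored; losing it ends the game).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(position):
    """Score an arbitrary position by material balance: positive favours
    White (uppercase pieces), negative favours Black (lowercase).
    A real engine adds mobility, king safety, pawn structure, etc."""
    score = 0
    for piece in position:
        if piece.upper() in PIECE_VALUES:
            v = PIECE_VALUES[piece.upper()]
            score += v if piece.isupper() else -v
    return score

# evaluate("QRrp") -> 9 + 5 - 5 - 1 = 8
```

Because it is a pure function of the position, it works identically on positions no one has ever reached before.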

Trebla
Posts: 386
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Can a computer have free will?

Postby Trebla » Wed Jan 23, 2013 4:59 pm UTC

LaserGuy wrote:I don't think chess computers even go so far as looking for superior strategic positions per se. At its simplest, a chess algorithm is a program that looks at every possible available move and asks the question "After N moves, which of my current moves will give me the highest probability of winning?"


At its simplest, yes, but that's what humans do, too, when reduced to the simple final goal of winning. What's the difference between calculating the expected value of 100,000 possible positions against a well-defined algorithm/heuristic (trying to reach intermediate positions that have higher value, weighted against later positions with potentially lower value; modern chess programs are more complex than "look at every possible position N moves out, determine the value of the board, maximize") and a human player analyzing a few dozen positions based on an internal, unknown decision-making process?

As an example of intermediate strategy, a program may go for a position 3 moves away with a possible floor of 60 value (A) rather than one of 80 value (B), because A is higher than 80 in a large number of branches out from it and only hits its minimum of 60 in a very small number of possible future positions. Essentially it is anticipating the likelihood of the opponent making mistakes, a likelihood estimated from how the opponent has performed against the computer's expectations in previous moves. If the opponent has not consistently made the move the computer anticipates is best for them, the computer is more likely to take risks, knowing that this could be sub-optimal if the human plays perfectly.
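That idea, replacing minimax's "assume the best reply" with a weighted guess about the opponent, can be sketched in a few lines (Python; the probabilities and values are invented for illustration):

```python
def risk_adjusted(reply_values, p_best=0.3):
    """Score a candidate move by its possible replies, assuming the opponent
    finds their best reply (the worst value for us) only with probability
    p_best, and otherwise picks among the remaining replies uniformly.
    (Pure minimax is the special case p_best = 1.)"""
    vs = sorted(reply_values)          # vs[0] = opponent's best reply
    if len(vs) == 1:
        return float(vs[0])
    rest = vs[1:]
    return p_best * vs[0] + (1 - p_best) * sum(rest) / len(rest)

# Against a fallible opponent, the risky line A can outscore the safe line B:
line_a = risk_adjusted([60, 100, 100])  # floor of 60, but usually much better
line_b = risk_adjusted([80, 80, 80])    # always 80
```

Estimating `p_best` from the opponent's past moves is exactly the "evolved likelihood" described above.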

jules.LT
Posts: 1539
Joined: Sun Jul 19, 2009 8:20 pm UTC
Location: Paris, France, Europe

Re: Can a computer have free will?

Postby jules.LT » Wed Jan 23, 2013 5:03 pm UTC

LaserGuy wrote:They have an algorithm that evaluates the strength of a move in an arbitrary chess position. It literally doesn't matter where the pieces on the board are positioned: As long as the position is legal, the computer will be able to spit out the best available move to whatever depth is it able to go. You don't need to encode every position.

Doesn't this satisfy your condition that "it has to react to events that it has never "seen" before"?
Also, how would you identify the better move without assigning a value to the possible board states after a move?

The "probability of winning" is almost impossible for a computer to evaluate: humans estimate it mostly by comparing to similar situations they've known in the past, because we are exceedingly good at recognizing patterns. The computer can't do that to the same level; it has to use a different technique.
Bertrand Russell wrote:Not to be absolutely certain is, I think, one of the essential things in rationality.
Richard Feynman & many others wrote:Keep an open mind – but not so open that your brain falls out

User avatar
LaserGuy
Posts: 4570
Joined: Thu Jan 15, 2009 5:33 pm UTC

Re: Can a computer have free will?

Postby LaserGuy » Wed Jan 23, 2013 5:11 pm UTC

Trebla wrote:At its simplest, yes, but that's what humans do, too, when reduced to the simple final goal of winning. What's the difference between calculating the expected value of 100,000 possible positions against a well-defined algorithm/heuristic (and trying to get to intermediate positions that also have higher value, weighted against later positions with potentially lower value... modern chess programs are more complex than "look at every possible position N moves out, determine the value of the board, maximize") and a human player analyzing a few dozen positions based on an internal and unknown decision-making process?


I don't believe humans have free will in the context of playing chess either. Except perhaps at the meta level where a human can decide to play or not, to cheat or not, to try to win or not, to knock over the board and spill the pieces or not. But at the level of actual play, I think that chess is sufficiently algorithmic that it is a poor test for free will.

Doesn't this satisfy your condition that "it has to react to events that it has never "seen" before"?
Also, how would you identify the better move without assigning a value to the possible board states after a move?


No, because it's still a chess board, with chess pieces on it, and the same rules apply every time. The essential qualities of the situation are the same, the method of analysis is exactly the same, and the desired outcome is still the same. The solution method is independent of the configuration of the pieces on the board, but different boards may produce different solutions.

User avatar
jules.LT
Posts: 1539
Joined: Sun Jul 19, 2009 8:20 pm UTC
Location: Paris, France, Europe

Re: Can a computer have free will?

Postby jules.LT » Wed Jan 23, 2013 5:41 pm UTC

I still think that chess is a sufficient approximation of the world for our purpose here.
There is also a finite (if unfathomable) number of possible situations in the real world, and the laws of physics aren't bent or broken on a regular basis.

You do raise an interesting point, though: does the entity being able to dynamically adjust its methods interfere with whether it has free will?
My answer would be no.

User avatar
ucim
Posts: 6715
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can a computer have free will?

Postby ucim » Wed Jan 23, 2013 6:24 pm UTC

deepone wrote:Imagine an AI that only interfaces with a virtual environment, removing issues of interacting with a chaotic society or an analog reality.
Ok, something akin to sim-city running by itself? Each "person" could be considered a separate entity (perhaps even run by a separate computer). This closed, sterile system should qualify. I will note that, since this is an idealized version of the question, the computers would not be subject to real-world idiosyncrasies (such as the boot-up issues I referred to earlier), and every time the system runs from the same initial conditions, it will run exactly the same way.

The question then becomes "can the 'people' in this ideal sim-city have a property of autonomous agency which we would be comfortable calling 'free will'?".

Am I reading this right?

In thinking about the question I muse that...
Spoiler:
  1. Free will is not an objective quality. A definition can make it look objective, but only by hiding the subjective parts in the very words and concepts (goal, choice, imagine etc) used in the definition.
  2. Being subjective, it has more to do with my (the observer's) willingness to let go of the mechanical predictability of an entity's behavior, and this in turn has to do with the relative complexity of the entity's "mind" compared to mine. As I let go of the mechanical aspects of the entity, I interact with it in a more and more abstract manner. I interact with the "whole", not with its innards.
  3. As such, free will has a lot to do with how I will treat the entity. That is, I am likely to treat something differently if I think of it as having free will than if I think of it as being purely mechanical. This eventually bears on the degree of blame and responsibility I assign to it.
  4. There is no question in my mind that computers and their associated software can become complex enough for me to let go of the mechanics, and deal with the entity on an abstract level. Indeed, I'm certain that one day they will be complex enough for this to be the only viable option, especially when independent computers are networked together and interact with each other.
... and conclude that, in the aforementioned scenario, the entities would consider each other to have free will, and if they are complex enough, I would too.

deepone wrote:
ucim wrote:Computers, like people, exist in an environment with which they interact. This interaction is where the interesting parts lie, IMHO
I agree with this to a large degree, in general. But I don't think that it's central to the issue of free will, really. Do you?
I am not at all convinced that it is not central, or at least very related. I view free will as related to relative complexity, but not in the abstract. Specifically, I think of free will as a model of an aspect of the behavior of an entity; a model which is preferred over other models when their interactions (with me, or any other observer) are best mediated at a high level of abstractness (or, put another way, when interacting with them close to the mechanism level is impractical, unrewarding, or unenlightening). Having independent agents (separate computers) interact to form a larger whole introduces this level of complexity (or chaos) more quickly.

jules.LT wrote:-> Why doesn't a chess computer have free will? (I think we all agree that it doesn't)
I wouldn't dismiss it so quickly. Within the context of the program, it might not be unreasonable to say that it does, especially for chess programs that learn from experience. To draw a parallel, the chess computer's ultimate (hardwired, if you will) goal is to checkmate the opponent. To do that, it forms its own subgoals (capture the queen, fork the rook, etc.) based on the evolving game position. A living thing has as its ultimate (hardwired) goal to reproduce. To do that, it forms its own subgoals (find a mate, become attractive, defeat competitors, etc.) Chess is certainly simpler, but I don't see this as a matter of kind, just one of degree. It is also a (probably necessary) consequence of the artificiality of chess, akin to the artificiality of the "ideal computer in an ideal environment".

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

Trebla
Posts: 386
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Can a computer have free will?

Postby Trebla » Wed Jan 23, 2013 6:35 pm UTC

Just to get back away from the chess digression...

What would it look like for a computer to have "free will"? What test (or tests) must it pass before we say "that is no longer a simple (arrangement of) stimulus response, that computer is now better described as having free will"? Could a computer that can pass a Turing test be said to have "free will", or would that be insufficient (and potentially unnecessary)? Relatedly, are we talking about human-level "free will" as the threshold? Do lower animals differ in this property? What's the lowest life form that we would ascribe "free will" to given our current understanding?

I apologize if these sound like dumb questions, but I am lost on all of them and would love some insights to consider.

User avatar
deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Wed Jan 23, 2013 7:33 pm UTC

jules.LT wrote:
deepone wrote:These points seem like they should often be there in a chess AI. Intermediate objectives and task organization sound like plans to me, and rehierarchizing and abandoning old subgoals or reorganizing tasks sound like re-planning. Do you have another view?

I prefer to consider "free will" as a matter of degree, and although a chess AI may have so low a degree of free will as to make the term dubious, I do think it is not obvious what the difference in kind is from an imagined advanced AI that we would more readily consider to have free will.

I was only talking about dynamic goal-setting.
I agree that free will is a matter of degree, but I think that this only comes into play when we consider how free the will is, not what kind of structure it has.

I'm a bit unsure what you include in "structure", but if you consider free=independent then I think that what data is handled and how it is processed (e.g., in the form of goals/plans that are adapted/replaced in response to x, or similar) is important.
jules.LT wrote:What kind of procedures would you call "re-planning"? I don't think that the computer can be said to change its objectives: as far as I know, it simulates possible board evolutions and evaluates the strategic value of each, then picks the highest.

Well, I'm not that familiar with modern (i.e., more than simple searching) chess AIs but isn't it sensible to think of the selected/found preferred board evolutions as plans? Plans that are changed and replaced as the opponent acts. That is, a plan in the general sense that there is a preparedness to act in relation to certain expected developments.
jules.LT wrote:I still think that chess is a sufficient approximation of the world for our purpose here.
There is also a finite (if unfathomable) number of possible situations in the real world, and the laws of physics aren't bent or broken on a regular basis.

You do raise an interesting point, though: does the entity being able to dynamically adjust its methods interfere with whether it has free will?
My answer would be no.

I agree with the first point, and I think the second issue is ... "strange". What is meant by changing methods? If the method in question is general enough it's like asking a brain to stop using chemicals. Sure, most chess AIs are probably coded specifically to play chess and nothing else. Is that the key point?
ucim wrote:Am I reading this right?

Yes, I essentially agree with all of what you wrote there.
ucim wrote:
deepone wrote:
ucim wrote:Computers, like people, exist in an environment with which they interact. This interaction is where the interesting parts lie, IMHO
I agree with this to a large degree, in general. But I don't think that it's central to the issue of free will, really. Do you?
I am not at all convinced that it is not central, or at least very related. I view free will as related to relative complexity, but not in the abstract. Specifically, I think of free will as a model of an aspect of the behavior of an entity; a model which is preferred over other models when their interactions (with me, or any other observer) are best mediated at a high level of abstractness (or, put another way, when interacting with them close to the mechanism level is impractical, unrewarding, or unenlightening). Having independent agents (separate computers) interact to form a larger whole introduces this level of complexity (or chaos) more quickly.

I'm a bit confused, as it seems to me that you are discussing the importance of interaction in two different roles. The first seems to be related to whether an external actor can usefully interact with an entity as if it had an independent will, etc. I think that's certainly related, but I would rather look at it from the other end: "what computational properties of an entity might make this stance reasonable?" The second seems to point to how interaction among parts of a whole increases complexity. That's certainly true and possibly important, but it also seems almost trivial in a sense. I mean, the important part is the complexity, in my mind. Interaction between parts may be an important way to get complexity, but I don't see that this formulation is critical if you accept/assume/postulate that you can get great complexity from algorithms in general. E.g., are the cells in a cellular automaton interacting in this important way? This becomes confusing when interaction is also taken to mean interaction with something external, i.e., input/output to/from the computer.
Trebla wrote:Just to get back away from the chess digression...

What would it look like for a computer to have "free will"? What test (or tests) must it pass before we say "that is no longer a simple (arrangement of) stimulus response, that computer is now better described as having free will"? Could a computer that can pass a Turing test be said to have "free will" or would that be insufficient (and potentially unnecessary)? Relatedly, are we talking about human level "free will" as the threshold? Do lower animals differ in this property? What's the lowest life form that we would ascribe "free will" to given our current understanding?

I apologize if these sound like dumb questions, but I am lost on all of them and would love some insights to consider.

Well, I think that this discussion is working towards suggesting answers to these questions. We don't need to start out answering them. E.g., the discussion about chess AIs is a discussion about whether a computer playing chess is a good answer to your first question. It may be, with the right explanation.

I'm not really interested in the computer passing any tests. I'm more interested in knowing that it is running an algorithm that I consider it sensible to talk about as having free will. For me, that would entail learning from experience to predict possible futures and reacting to stimuli in line with how these fit into internal simulations and preferences. I don't think that the Turing test would be reliable in either direction. It wouldn't convince me on this point. Thresholds in life forms are kinda off topic in this thread, but since we're discussing free will in a chess AI we are at least talking about the possibility of free will at a rather low level compared to most life forms. :)

Trebla
Posts: 386
Joined: Fri Apr 02, 2010 1:51 pm UTC

Re: Can a computer have free will?

Postby Trebla » Wed Jan 23, 2013 7:58 pm UTC

deepone wrote:Well, I think that this discussion is working towards suggesting answers to these questions. We don't need to start out answering them. E.g., the discussion about chess AIs is a discussion about whether a computer playing chess is a good answer to your first question. It may be, with the right explanation.


I guess I was asking in terms of the poll; I figured that if I knew what you were looking for, I could answer it. As is, I don't know how to answer...

There was a riddle posted in the riddles sub-forum some time ago about "twins in a maze" and how they would find each other. Eventually it was clarified to "two identical robots" (absolutely identical features, including activation time, so a "pseudo-random number generator" would return the same thing for each) placed at opposite poles of a perfectly symmetrical (i.e., featureless) sphere with no available inputs to break the symmetry (rotation, the concept of north). Would it be possible for these two to ever meet? I think the thread died after some discussion with no real resolution, but perhaps this would be a good indication of free will: if one of the two robots could simply "decide to walk that way" and the other did not make the same decision, perhaps it could be said to have free will. If the algorithm is rigid enough that the two identical robots would make the same "choice" at every decision point, then maybe it's insufficient to say it had free will. At a glance this sounds reasonable; maybe not, though. Just a thought.
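The symmetry point can be made concrete with a toy (the robot_walk name and the three-way choice are invented for illustration): a "pseudo-random" generator is deterministic, so two byte-identical robots with identical seeds draw identical decisions at every step, and nothing ever breaks the symmetry.

```python
import random

def robot_walk(seed, steps=10):
    """The turn decisions one robot makes; its twin runs the same code
    with the same seed, so it produces the very same sequence."""
    rng = random.Random(seed)
    return [rng.choice(["left", "right", "straight"]) for _ in range(steps)]

# Identical robots, identical 'random' decisions - the symmetry holds forever:
robot_walk(42) == robot_walk(42)   # True, for any seed
```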

User avatar
LaserGuy
Posts: 4570
Joined: Thu Jan 15, 2009 5:33 pm UTC

Re: Can a computer have free will?

Postby LaserGuy » Wed Jan 23, 2013 10:06 pm UTC

jules.LT wrote:I still think that chess is a sufficient approximation of the world for our purpose here.
There is also a finite (if unfathomable) number of possible situations in the real world, and the laws of physics aren't bent or broken on a regular basis.


Not the laws of physics, no. I was more thinking of, say, national laws. Legally, you aren't allowed to steal, for example. But people can and do steal anyway, and there are lots of shades of gray depending on the circumstances. It's a rule that is bent and broken on a regular basis. Free will is a question that is heavily implicated in questions of morality; I think having a system that is so pristine and isolated from that context is probably not sufficient to really begin asking the kinds of questions that free will is meant to answer.

jules.LT wrote:You do raise an interesting point, though: does the entity being able to dynamically adjust its methods interfere with whether it has free will?
My answer would be no.


I'd say that dynamically adjusting your methods is a necessary condition for intelligence (i.e., the ability to learn), and intelligence is a necessary condition for free will.

jigawatt
Posts: 33
Joined: Fri Jul 01, 2011 9:35 pm UTC

Re: Can a computer have free will?

Postby jigawatt » Wed Jan 23, 2013 11:18 pm UTC

We were so worried that machines would start thinking like humans, we never considered what would happen if humans started thinking like machines.

User avatar
ucim
Posts: 6715
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can a computer have free will?

Postby ucim » Thu Jan 24, 2013 2:53 am UTC

deepone wrote:..."what computational properties of an entity might make this stance reasonable?"...
Ability to learn and to modify its behavior based on experience. This both requires and generates a certain level of complexity (related to how much and how well it learns).

As to complexity coming from many sources, this is true. On one level it doesn't matter where it comes from. On another level, complexity from asynchronous interactions is potentially a different kind of complexity, and interactions are the way a system learns. The ability to learn would seem to me to be important in assigning free will to an entity - one that cannot learn can hardly be said to be more than an automaton. If you are looking for objective criteria, I think that one qualifies - even more so than mere complexity.

Jose

User avatar
deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Thu Jan 24, 2013 10:22 am UTC

ucim wrote:
deepone wrote:..."what computational properties of an entity might make this stance reasonable?"...
Ability to learn and to modify its behavior based on experience. This both requires and generates a certain level of complexity (related to how much and how well it learns).

Right. But just learning and modifying behavior is trivial, in a simple form (e.g., "that worked -> +1 weight to trying it next time", etc.). I agree that it is a core issue, but I think it's interesting to consider further the "required structure" of this learning and modifying. Maybe the discussion below is on that track.
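For concreteness, that trivial rule can be written out literally (a deliberately minimal sketch; the function names and the chess-opening example are made up):

```python
import random

def pick(weights, rng=random):
    """Choose an action with probability proportional to its weight."""
    actions = list(weights)
    return rng.choices(actions, [weights[a] for a in actions])[0]

def reinforce(weights, action, worked):
    """'That worked -> +1 weight to trying it next time', taken literally,
    with -1 on failure (floored at 1 so the action stays possible)."""
    weights[action] = max(weights[action] + (1 if worked else -1), 1)

weights = {"opening_a": 1, "opening_b": 1}
reinforce(weights, "opening_a", worked=True)    # won with it: favor it next time
reinforce(weights, "opening_b", worked=False)   # lost with it: stays at the floor
# weights is now {'opening_a': 2, 'opening_b': 1}
```

This is environmentally induced self-modified behavior, yet it has no structure at all - which is exactly why some "required structure" beyond it seems worth pinning down.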
ucim wrote:As to complexity coming from many sources, this is true. On one level it doesn't matter where it comes from. On another level, complexity from asynchronous interactions is potentially a different kind of complexity, and interactions are the way a system learns. The ability to learn would seem to me to be important in assigning free will to an entity - one that cannot learn can hardly be said to be more than an automaton. If you are looking for objective criteria, I think that one qualifies - even more so than mere complexity.

Could you use some other terms to make it clear what kind of interactions you are talking about? For example, evaluating many potential scenarios in parallel (potentially each one quite sophisticated) and then comparing them (or entering them into some kind of competition or selection algorithm), would that fit the bill?

User avatar
jules.LT
Posts: 1539
Joined: Sun Jul 19, 2009 8:20 pm UTC
Location: Paris, France, Europe

Re: Can a computer have free will?

Postby jules.LT » Thu Jan 24, 2013 2:23 pm UTC

deepone wrote:just learning and modifying behavior is trivial, in a simple form. (E.g., "that worked -> +1 weight to trying it next time", etc). I agree that it is a core issue, but I think it's interesting to consider further the "required structure" of this learning and modifying. Maybe the discussion below is on that track.

This is why I mentioned dynamic method adjustment, as in adjustment of the structure of the information treatment rather than just a parameter.
Last edited by jules.LT on Thu Jan 24, 2013 2:37 pm UTC, edited 1 time in total.

User avatar
deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Thu Jan 24, 2013 2:37 pm UTC

jules.LT wrote:
deepone wrote:just learning and modifying behavior is trivial, in a simple form. (E.g., "that worked -> +1 weight to trying it next time", etc). I agree that it is a core issue, but I think it's interesting to consider further the "required structure" of this learning and modifying. Maybe the discussion below is on that track.

This is why I mentioned dynamic method adjustment, as in adjustment of the structure of the information treatment rather than just a parameter.

Ok. This is an interesting direction but I'd like to examine this further. Could you elaborate on the "structure" of information treatment? Are we essentially talking about an algorithm? Perhaps on a higher level, like modules, libraries, classes, etc., that work together and pass data around in an actual program? Like a UML diagram perhaps? (I'm not very good with those, but it was in a course long ago.)

User avatar
ucim
Posts: 6715
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can a computer have free will?

Postby ucim » Thu Jan 24, 2013 2:39 pm UTC

deepone wrote:...just learning and modifying behavior is trivial, in a simple form. (E.g., "that worked -> +1 weight to trying it next time", etc).
Isn't that what people do sometimes? (I got a ticket here last time, -1 to speeding right now). The end result is environmentally induced self-modified behavior. Pretty soon you're talking about real money.

deepone wrote:
ucim wrote:[...]On another level, complexity from asynchronous interactions is potentially a different kind of complexity, and interactions are the way a system learns.[...]
Could you use some other terms to make it clear what kind of interactions you are talking about? For example, evaluating many potential scenarios in parallel (potentially each one quite sophisticated) and then comparing them (or entering them into some kind of competition or selection algorithm), would that fit the bill?
No, simply evaluating many potential scenarios in parallel is not an interaction, it is a thought process. I am highlighting the data-gathering part in the process. To the extent that the behavior is independent of the data, the system has less free will because it has less awareness of its surroundings (which is important to free will - if you don't know what to select from, you can't select). But also, to the extent that the system is totally dependent on the data, there is no free will either. Somewhere in the middle, in the interaction between the data (the observations made by the entity) and the processing of the data (where selections between possible actions are made) is where free will arises.

Persistence of the data ("memory") and of the effects of the data on processing ("learning") are important aspects of this.

This can be accomplished in many ways, most of which I have not even thought of, so I am hesitant to say that one computer architecture possesses it and another doesn't. But it seems to me (at this point) that the ability for data to modify the processing in significant enough ways as to qualify as "learning" is an important part of free will.
Spoiler:
On a fundamental level there is no difference between data and program anyway.
Jose

User avatar
jules.LT
Posts: 1539
Joined: Sun Jul 19, 2009 8:20 pm UTC
Location: Paris, France, Europe

Re: Can a computer have free will?

Postby jules.LT » Thu Jan 24, 2013 2:52 pm UTC

deepone wrote:Could you elaborate on the "structure" of information treatment? Are we essentially talking about an algorithm? Perhaps on a higher level, like modules, libraries, classes, etc., that work together and pass data around in an actual program? Like a UML diagram perhaps? (I'm not very good with those, but it was in a course long ago.)

A good example of dynamic structure adjustment would be the conception of a brand new module according to the need, by the program itself.
I have never been to a serious course in programming, btw; it's all from my little experience (BASIC-like calculator language, HTML, a tiny bit of PHP, some basic classes in Pascal, a small bit of VBA in Excel...), Wikipedia, and discussions on the subject.

User avatar
deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Thu Jan 24, 2013 3:58 pm UTC

ucim wrote:
deepone wrote:...just learning and modifying behavior is trivial, in a simple form. (E.g., "that worked -> +1 weight to trying it next time", etc).
Isn't that what people do sometimes? (I got a ticket here last time, -1 to speeding right now). The end result is environmentally induced self-modified behavior. Pretty soon you're talking about real money.

Sure it is, but if we're happy with this as the only condition then free will in computers is trivial. You're free to take that position of course, but I'd like to reach a more developed "definition".
ucim wrote:
deepone wrote:
ucim wrote:[...]On another level, complexity from asynchronous interactions is potentially a different kind of complexity, and interactions are the way a system learns.[...]
Could you use some other terms to make it clear what kind of interactions you are talking about? For example, evaluating many potential scenarios in parallel (potentially each one quite sophisticated) and then comparing them (or entering them into some kind of competition or selection algorithm), would that fit the bill?
No, simply evaluating many potential scenarios in parallel is not an interaction, it is a thought process.

One hardly excludes the other? And that's my problem with "interaction" as a term here: there is interaction everywhere! E.g., parts of the brain interact with each other, etc. But taking it in that direction goes off topic, as does describing it as a thought process, really. I don't think that's necessary.
ucim wrote:I am highlighting the data-gathering part in the process. To the extent that the behavior is independent of the data, the system has less free will because it has less awareness of its surroundings (which is important to free will - if you don't know what to select from, you can't select). But also, to the extent that the system is totally dependent on the data, there is no free will either. Somewhere in the middle, in the interaction between the data (the observations made by the entity) and the processing of the data (where selections between possible actions are made) is where free will arises.

Persistence of the data ("memory") and of the effects of the data on processing ("learning") are important aspects of this.

I essentially agree with this.
ucim wrote:This can be accomplished in many ways, most of which I have not even thought of, so I am hesitant to say that one computer architecture possesses it and another doesn't. But it seems to me (at this point) that the ability for data to modify the processing in significant enough ways as to qualify as "learning" is an important part of free will.

I think (based on scientific literature) that one aspect that is likely to be important is the capability to run simulations that match the external environment that the AI can act in. For the chess AI this is correctly simulating possible (and likely) evolutions of the game. In general I think a promising approach is essentially lossy compression of experience, in such a manner that a simulated experience similar to what has happened in previous experience can be generated from partial similarities. (E.g., board positions in chess lead to simulations of what may happen that are similar to what has happened before in similar positions). I don't remember the names off the top of my head now, but there are such algorithms.
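One very simple instance of that idea - nothing like the specific algorithms alluded to, just nearest-neighbour retrieval over remembered episodes, with all names invented for illustration - would be:

```python
def similarity(a, b):
    """Jaccard overlap between two feature sets (e.g. features of a board)."""
    return len(a & b) / len(a | b)

class ExperienceMemory:
    """Remember (situation, outcome) pairs; for a new situation, 'simulate'
    by recalling the outcome of the most similar remembered situation."""

    def __init__(self):
        self.episodes = []

    def remember(self, situation, outcome):
        self.episodes.append((frozenset(situation), outcome))

    def simulate(self, situation):
        situation = frozenset(situation)
        best = max(self.episodes, key=lambda ep: similarity(ep[0], situation))
        return best[1]

memory = ExperienceMemory()
memory.remember({"open file", "rook on 7th"}, "won")
memory.remember({"doubled pawns", "exposed king"}, "lost")
# A position never seen before, but *partially* similar to a past one:
memory.simulate({"rook on 7th", "passed pawn"})   # 'won'
```

Actual schemes would compress experience into a generative model rather than storing raw episodes; this only illustrates producing a prediction from partial similarity.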
ucim wrote:On a fundamental level there is no difference between data and program anyway.

Right. This is perhaps particularly relevant if decisions are made based on simulations that are in turn essentially data driven.
jules.LT wrote:
deepone wrote:Could you elaborate on the "structure" of information treatment? Are we essentially talking about an algorithm? Perhaps on a higher level, like modules, libraries, classes, etc., that work together and pass data around in an actual program? Like a UML diagram perhaps? (I'm not very good with those, but it was in a course long ago.)

A good example of dynamic structure adjustment would be the conception of a brand new module according to the need, by the program itself.

Perhaps this really should be considered in relation to the remark about programs being data anyway? You would need an algorithm to construct the new module, i.e., the new module would be data output from another module.
How would you compare this to expanding an artificial neural network (ANN) in response to an error? In some sense, maybe even the "patterns" that form in an ordinary ANN can be considered new modules? They are additions of new functionality in response to the "need" to reduce errors.
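Taken literally, that collapse of module-creation into data output can be sketched as follows (make_module is an invented name; anything real would generate far more than a one-line function):

```python
def make_module(name, expression):
    """A 'module' conceived by the program itself: one module emits source
    text (data) and executes it, yielding a brand new function (program)."""
    source = f"def {name}(x, y):\n    return {expression}\n"
    namespace = {}
    exec(source, namespace)   # the program extends its own structure
    return namespace[name]

# The program equips itself with an operation it was never shipped with:
add = make_module("add", "x + y")
add(2, 3)   # 5
```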

User avatar
ucim
Posts: 6715
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can a computer have free will?

Postby ucim » Thu Jan 24, 2013 5:39 pm UTC

deepone wrote:Sure it is, but if we're happy with ["that worked -> +1 weight to trying it next time"] as the only condition then free will in computers is trivial. You're free to take that position of course, but I'd like to reach a more developed "definition".
It's a matter of degree. The example given is a pretty low degree. In computers it is often implemented with a simple index variable. I don't know how it is implemented in living things; that is an active topic of research. But I'm not convinced that the method of implementation is all that important.

However, it seems that you are looking for that very thing - a method of implementation as a guide to the existence of free will (in computers, anyway).

deepone wrote:I think (based on scientific literature) that one aspect that is likely to be important is the capability to run simulations that match the external environment that the AI can act in.
That may turn out to be the way humans (and other forms of conscious life-as-we-know-it) do this. However, I would not take this as a limiting condition, especially when looking at non-life-as-we-know-it entities such as computers. Look at what-it-does, rather than how-it-does-it (keeping in mind that on one level, what-it-does is actually, on another level, a how-it-does-it). Specifically, simulation is a how-it-does-it on the level of analysis and learning. The lack of simulation is not a deal-breaker for me, if analysis and learning can be successfully accomplished in other ways.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

User avatar
deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Thu Jan 24, 2013 9:33 pm UTC

ucim wrote:
deepone wrote:Sure it is, but if we're happy with ["that worked -> +1 weight to trying it next time"] as the only condition then free will in computers is trivial. You're free to take that position of course, but I'd like to reach a more developed "definition".
It's a matter of degree. The example given is a pretty low degree. In computers it is often implemented with a simple index variable. I don't know how it is implemented in living things; that is an active topic of research. But I'm not convinced that the method of implementation is all that important.

However, it seems that you are looking for that very thing - a method of implementation as a guide to the existence of free will (in computers, anyway).

deepone wrote:I think (based on scientific literature) that one aspect that is likely to be important is the capability to run simulations that match the external environment that the AI can act in.
That may turn out to be the way humans (and other forms of conscious life-as-we-know-it) do this. However, I would not take this as a limiting condition, especially when looking at non-life-as-we-know-it entities such as computers. Look at what-it-does, rather than how-it-does-it (keeping in mind that on one level, what-it-does is actually, on another level, a how-it-does-it). Specifically, simulation is a how-it-does-it on the level of analysis and learning. The lack of simulation is not a deal-breaker for me, if analysis and learning can be successfully accomplished in other ways.

I wouldn't really call simulation a method as much as a principle. I'm not sure where you draw the line between method and principle, but I think there is reason to set up a working hypothesis using principles such as simulation, or something similar. The distinction between what and how also becomes rather blurred in many cases. Are "adapting" and "learning" what or how? I think they're marginally more general than "simulating".

I would say that anything that makes dynamic predictions related to an external environment is simulating aspects of that environment, and I have become convinced that prediction is fundamental (related, perhaps, to the second law of thermodynamics) to life, intelligence and agency in any form - which should include computers with free will.

The real (personal) reason for (starting) this thread is that I'm frustrated with attempts to define "free will" in terms of language. Since I'm quite familiar with computers (and believe in the possibility of strong AI) it seems to me that it should be possible to describe a phenomenon in terms of algorithmic rules and computing science principles that is essentially a working definition of free will. And I wanted to see what you all thought of this.

User avatar
ucim
Posts: 6715
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Can a computer have free will?

Postby ucim » Fri Jan 25, 2013 7:36 pm UTC

deepone wrote:I would say that anything that makes dynamic predictions related to an external environment is simulating aspects of that environment...
... even at the most trivial level? If so, this is not what I mean by simulation, and if not, at what threshold would you start to use the word? Adopting this view, a simple counter simulates meteoric bombardment to some extent, and might even be enough for decisions to be made. However, $impact++; hardly rises to the level of a "real" simulation in my mind.

deepone wrote:The real (personal) reason for (starting) this thread is that I'm frustrated with attempts to define "free will" in terms of language. Since I'm quite familiar with computers (and believe in the possibility of strong AI) it seems to me that it should be possible to describe a phenomenon in terms of algorithmic rules and computing science principles that is essentially a working definition of free will. And I wanted to see what you all thought of this.
I think this is unrealistic. It's almost self-contradictory, as algorithms are arguably the antithesis of free will.

Will (free or otherwise) is the way we see the result of algorithms when we are (abstracted) far enough away from them.

For a way of looking at this, consider the interface. If you don't have access to a program's private functions, you must interact with the program using only the publicly available methods. Whatever it does behind the scenes is "its will". How free that will appears to be has to do with the size of the option space (set of choices available to it), and that is hidden from us. We can only discern the apparent size of this space based on observations of its behavior under differing circumstances. If we delve below the hood and invade the private functions, we could discern the "real" size of the option space, but enough delving into an algorithmic system will ultimately reduce the size of this space to one.
Spoiler:
...absent a random component, but that's a red herring, as randomness is not the source of freedom.
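The interface picture above can be sketched in code. This is a deterministic toy with made-up names, not a claim about any real system:

```python
# From outside, only act() is visible, and the agent appears to choose
# among several moves. The private _decide() shows that for any given
# observation the "real" option space has size one.

def stable_code(observation):
    # Deterministic stand-in for a hash (Python's built-in hash() of
    # strings is salted per process, so we avoid it here).
    return sum(ord(ch) for ch in observation)

class Agent:
    MOVES = ("advance", "retreat", "wait")

    def act(self, observation):       # the public interface
        return self._decide(observation)

    def _decide(self, observation):   # "behind the scenes": its "will"
        return self.MOVES[stable_code(observation) % len(self.MOVES)]

a = Agent()
# Apparent freedom: different observations yield different moves...
print(a.act("enemy near"), a.act("food nearby"))
# ...but the same observation always yields the same move.
print(a.act("enemy near") == a.act("enemy near"))  # True
```

An observer who never sees `_decide` can only estimate the option space from varied inputs; once the private function is open, the space per input collapses to one.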
Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Heartfelt thanks from addams and from me - you really made a difference.

Fire Brns
Posts: 1114
Joined: Thu Oct 20, 2011 2:25 pm UTC

Re: Can a computer have free will?

Postby Fire Brns » Fri Jan 25, 2013 7:48 pm UTC

jigawatt wrote:We were so worried that machines would start thinking like humans, we never considered what would happen if humans started thinking like machines.
I'm concerned that if machines achieved true consciousness and free will, they would begin to question whether humans can have true consciousness and free will.
Pfhorrest wrote:As someone who is not easily offended, I don't really mind anything in this conversation.
Mighty Jalapeno wrote:It was the Renaissance. Everyone was Italian.

Cinemal
Posts: 11
Joined: Thu Jan 24, 2013 9:20 pm UTC

Re: Can a computer have free will?

Postby Cinemal » Fri Jan 25, 2013 8:47 pm UTC

Trebla wrote:There was a riddle posted in the riddles sub-forum some time ago about "twins in a maze" and how they would find each other. Eventually it clarified to "two identical robots" (absolutely identical features including activation time, so a "pseudo-random number generator" would return the same thing for each) placed at opposite poles of a perfectly symmetrical (i.e., featureless) sphere with no available inputs to break the symmetry (rotation, the concept of north). Would it be possible for these two to ever meet? I think the thread died after some discussion with no real resolution, but perhaps this would be a good indication of free will... if one of the two robots could simply "decide to walk that way" and the other did not make the same decision, perhaps it could be said to have free will. If the algorithm is rigid enough that the two identical robots would make the same "choice" at every decision point, then maybe it's insufficient to say it had free will. At a glance this sounds reasonable, maybe not though, just a thought.

This is quite a clever approach. In discursive terms, it works rather like a limit in calculus: the problem was gradually restructured to eliminate any possible way for the robots to meet, while still maintaining the intellectual illusion that meeting was possible.

Consider how it would be thought of if someone suggested this change:

The robots are connected by a stiff mesh which encompasses the globe (think "latitude and longitude lines, made out of metal with the robots welded to it at opposite points").

There is surely a psychological objection to this, as it must completely prevent the robots from actually meeting. However, it should be clear that it actually means that all a robot has to do is move in such a way that it notices the mesh, and it will have acted in a way that the other robot did not. So long as the robots move identically, the mesh would always move with them, and they would have no awareness that it connected to another robot.

So let's return to the brief psychological objection. When one first thinks of the mesh in fixed terms, as something that stops the robots from getting to each other, it seems like it's "cheating" -- like it's too strong a way of keeping them apart. And yet other constraints -- like ensuring that the robots are particle-for-particle identical, and that the globe is sufficiently larger than the robots that they cannot see each other -- which are _intended_ to ensure that the robots never meet, are not considered objectionable, because one can still imagine a meeting happening. But that's just because one part of your mind -- the part that internally constructs a model of a globe-thing with two robot-things at opposite side-things of it -- either isn't able or isn't willing to treat the information that the robots are identical as a strong constraint, in the way that it is able and bound to do when one adds to the model a globe-encompassing-mesh-thing connecting the robot-things. That part of the brain flags the mesh as "preventing free will", or at least "preventing independent motion" -- but that was already prevented by the robots being identical; the mind just didn't want to accept it.

The direction of the argument -- of the constraints added to the robot example -- is that in a context where actions are determined, free will is impossible. And in the real world, the general message, claim and promise from the "smart" people who purvey "scientific knowledge" is that the world operates in a way that science can describe, and that one can know science is right because the events in the world can be predicted by that science. There is no claim that the scientific prediction determines the events of the world -- that would be seen as absurd -- but there is an implicit claim that something determines the events of the world, and that the scientific prediction is a useful model of that something. Thus, if the promises of science are believed to apply universally, then all events are determined, and there is no act of will which is free from that determination.

Of course, one is not required to believe that the promises of science propagandists are as valid as the existing models the program of science has created. If the promise of scientific universality is merely as accurate as the Newtonian model of motion, that leaves plenty of room in the universe for non-deterministic willfulness.

In terms of a computer having free will, it suggests that the only room available for that to happen is within an algorithm which contains fundamentally non-deterministic components -- which every aspect of computer design, production and programming is intended to prevent. Arguably, any computer capable of exercising free will would be considered broken -- and not really a computer.
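The quoted riddle's premise can be made concrete with seeded pseudo-random generators: "randomness" produced by an algorithm cannot break the symmetry, because two identical machines running it make identical "choices". A minimal sketch:

```python
import random

# Two "identical robots" sharing the same pseudo-random seed produce
# exactly the same sequence of "choices", step for step.
def robot_walk(seed, steps=10):
    rng = random.Random(seed)  # deterministic generator, fixed seed
    return [rng.choice(["N", "S", "E", "W"]) for _ in range(steps)]

robot_a = robot_walk(42)
robot_b = robot_walk(42)   # identical twin: same seed, same algorithm
print(robot_a == robot_b)  # True -- their paths never diverge
```

Only a genuinely non-deterministic source (not a pseudo-random one) could make the twins' paths differ, which is the point being argued above.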

User avatar
deepone
Posts: 88
Joined: Mon Jun 16, 2008 9:57 pm UTC

Re: Can a computer have free will?

Postby deepone » Sat Jan 26, 2013 12:14 pm UTC

ucim wrote:
deepone wrote:I would say that anything that makes dynamic predictions related to an external environment is simulating aspects of that environment...
... even at the most trivial level? If so, this is not what I mean by simulation, and if not, at what threshold would you start to use the word? Adopting this view, a simple counter simulates meteoric bombardment to some extent, and might even be enough for decisions to be made. However, $impact++; hardly rises to the level of a "real" simulation in my mind.

Well, I'd say that's mostly because the system simulated in your example is so trivial. I do think that the extent of the simulation is most definitely interesting for our discussion, but I think it's easier to define simulation in a way that includes the trivial, and then discuss what we might want to require of the simulation, than to find some threshold below which it is not a simulation at all.
ucim wrote:
deepone wrote:The real (personal) reason for (starting) this thread is that I'm frustrated with attempts to define "free will" in terms of language. Since I'm quite familiar with computers (and believe in the possibility of strong AI) it seems to me that it should be possible to describe a phenomenon in terms of algorithmic rules and computing science principles that is essentially a working definition of free will. And I wanted to see what you all thought of this.
I think this is unrealistic. It's almost self-contradictory, as algorithms are arguably the antithesis of free will.

Not necessarily. I'd say that's the core of this discussion. If you consider algorithms to be the antithesis of free will then I'd expect you to reject the idea of computers with free will. How can you not?
Spoiler:
I think that those who accept that the human mind is physical/biological and still believe in free will (and there are quite a few) really must accept that there are algorithms describing free will. That's not a novel point, I think, but it would probably take us off topic here to delve into it.

ucim wrote:Will (free or otherwise) is the way we see the result of algorithms when we are (abstracted) far enough away from them.

For a way of looking at this, consider the interface. If you don't have access to a program's private functions, you must interact with the program using only the publicly available methods. Whatever it does behind the scenes is "its will". How free that will appears to be has to do with the size of the option space (set of choices available to it), and that is hidden from us. We can only discern the apparent size of this space based on observations of its behavior under differing circumstances. If we delve below the hood and invade the private functions, we could discern the "real" size of the option space, but enough delving into an algorithmic system will ultimately reduce the size of this space to one.

I think this is only true for a specific subset of algorithms, and describing that subset is what I'm after.
ucim wrote:...absent a random component, but that's a red herring, as randomness is not the source of freedom.

I agree with your conclusion here - but doesn't this also mean that we must have a definition of freedom that is possible to describe with algorithms? Anything that is not random can be generated with an algorithm - that's true by definition, I believe. I.e., randomness is exactly that which cannot be generated by an algorithm: that which cannot be compressed.
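That compression view of randomness is essentially standard algorithmic information theory. In one common formulation (with U a fixed universal machine and |p| the length of program p):

```latex
% Kolmogorov complexity of a string x: the length of the shortest
% program p that makes the universal machine U output x.
K(x) = \min\{\, |p| : U(p) = x \,\}
% A string is called (algorithmically) random when it is
% incompressible, i.e. no program much shorter than x generates it:
K(x) \ge |x| - c \quad \text{for a fixed constant } c
```

So "generated by an algorithm" and "compressible" really do coincide here: a non-random string is exactly one with a generating program substantially shorter than itself.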

