What if the two prisoners were perfect logicians and each also knows that they are both perfect logicians?



Reference, just in case.



You're gonna have to explain your reasoning. I have a feeling that I'll disagree with it.

I guess something like:

"If we are both perfect logicians and know we are perfect logicians, we will always make the same decision when faced with the same data. So we either both cooperate or both defect, and it's better to both cooperate. I know it is better to cooperate, and he knows I know it's better to cooperate, so we both cooperate."


I think the hypothesis is insufficient to achieve superrationality. I think you may have to go as far as specifying that the fact that they are both perfect logicians is common knowledge between them.

Yes, without common knowledge the problem devolves back to "He knows that whatever I do, he is better off betraying, so he is guaranteed to betray, so I must betray to protect myself."

With common knowledge I'm still not sure you get there. I see the argument and it's compelling but I'm not sure it's rigorously valid which it would need to be for a logician to get there.


- jestingrabbit

So, just to lay out what is going on here: there are two kinds of game solvers being discussed here. One is called a rational agent, whereas the other is known as a superrational agent.

Normally when we talk about a game player being "logical" or what have you, we mean that it is a rational agent, i.e. it "always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions." That's very different from the superrational perspective.

I also want to point out that we have done this to death in this subforum. Like, there are few things that have wasted more pages with more futility. So, I want to set ground rules.

1) Say things that are cognisant of the distinction that I have just drawn.

2) Read carefully what others have written.

3) Try to give the best possible interpretation to the statements of others.

-jr



quintopia wrote: I think you may have to go as far as specifying the fact that they are both perfect logicians is common knowledge between them.

That was what I was trying to get at when I said:

each also knows that they are both perfect logicians

Sorry if it wasn't clear.

I was actually not aware of superrationality. Thanks for bringing it up! Interesting read.


Ah. My bad. I thought you were trying to say that "each knowing the other is a perfect logician" and no more than that was sufficient to achieve superrationality.

- Vytron

I don't think Superrationality is achievable in practice even with common knowledge. Suppose we had this variant of the problem:

-What if the two prisoners were perfect logicians and each also knows that they are both perfect logicians? And one of them is told the other has already made a decision on how to act?

This breaks the symmetry, and now the second logician has an advantage. She thinks: "if the other is a perfect logician and I'm a perfect logician, he'd have known that we were going to cooperate and that that was our best possible outcome, so he just cooperated." At this point you can't call her a perfect logician if she makes the mistake of cooperating, because betraying is now better for her!

Note however, that at no point were they told what the other prisoner chose, only that they had already made a decision. Since defecting when the other defects is better for you than cooperating when the other defects, the first agent had already concluded that defecting is the optimal move for the second agent, so it's better to defect first. And we end with the original problem when both defect.

Now, how close in time does the first agent's decision need to be to the second's to cause both of them to defect? My claim is that the distance doesn't matter, because once it goes negative the second agent picks first, the first agent is told the other has already picked, and both defect.

There's no magic simultaneous moment at which both cooperate, so both always defect.

For short, being a perfect logician, and knowing your opponent is a perfect logician, and knowing that this is common knowledge, isn't enough to achieve superrationality, because at the first point in your thought chain where you have concluded the opponent is going to cooperate, your perfectly logical move is to defect.

So it should look like this:

What if the two prisoners were superrational?


But we already knew that, and I don't think there's a way to force conditions to make agents superrational, because rationally it's better to defect regardless of the opponent's choice.


Vytron wrote:For short, being a perfect logician, and knowing your opponent is a perfect logician, and knowing that this is common knowledge, isn't enough to achieve superrationality, because at the first point in your thought chain where you have concluded the opponent is going to cooperate, your perfectly logical move is to defect.

And this is exactly why your problem is different from the original one. You've handed one of them a piece of information that could change their mind. Without that piece of information, there is never any point in your thought chain where you can validly conclude the opponent is going to cooperate. I will admit that any amount of asymmetry could throw off superrationality. This is why superrational agents who also care about Pareto efficiency would request to be put in separate rooms and be given no information about the other's decision process. AND that they be guaranteed that every piece of information they are given about the other player, the other player also receives about them. AND that their final decisions be "locked in" simultaneously.

Superrationality is a specific type of irrationality that may occasionally lead to better outcomes than similar games with all rational players, in the case that every player is superrational.

Basically, superrationality is deductively begging the question. The argument for why cooperate is the logical choice hinges upon the fact that the actor has made the logical choice, and hence the opponent will also make the same choice. But you can't use the fact that cooperate is the logical choice to prove that cooperate is the logical choice!

Alternately, superrationality relies on purposefully stopping your logical train at some point. You say "If I and the other person both choose cooperate, we get a lot of money.". Then, you willfully stop yourself from realizing that you'd personally get even more money by defecting instead at this point, and in doing so, you expect everyone else to stop themselves from realizing this as well.


(∫|p|^{2})(∫|q|^{2}) ≥ (∫|pq|)^{2}

Thanks, skeptical scientist, for knowing symbols and giving them to me.


The reasoning usually looks sort of like this:

-the other person is exactly as intelligent as me and is in an exactly symmetric situation.

-therefore we will make exactly the same decision.

-so I am choosing between CC and DD

-CC gets me more reward than DD

-so I will choose cooperate

At this point, there is no reason for the superrational actor not to consider that defecting would get him even more reward. But then he knows the other person will think of it too, and defecting would move them back to DD.

If you are going to argue that something is begging the question, it should be that first step. Why would an equally intelligent person in a symmetric situation always make the same decision? Or: why, once we know we'll make the same decision, would we not also know what that decision will be (DD)?

But this scenario is often brought up in situations for which the players can prove the other player is like them in some way. For instance, two artificial intelligences capable of advanced reasoning who both know that the other is running on exactly the same underlying algorithms.
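That "same algorithm" case can be made concrete with a minimal sketch (the payoff numbers here are illustrative, not from the thread). If both players are provably the same deterministic procedure run on the same data, only symmetric outcomes are reachable, and the better symmetric outcome is mutual cooperation:

```python
# MY_PAYOFF maps (my move, their move) -> my reward; "C" = cooperate,
# "D" = defect. Values chosen only to satisfy betray > CC > DD > betrayed.
MY_PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
             ("D", "C"): 5, ("D", "D"): 1}

def shared_decision():
    # Whatever this function returns, BOTH players return it, so the
    # only reachable outcomes are ("C", "C") and ("D", "D").
    symmetric = [("C", "C"), ("D", "D")]
    best = max(symmetric, key=lambda outcome: MY_PAYOFF[outcome])
    return best[0]

move_a = shared_decision()
move_b = shared_decision()   # same algorithm, same data -> same move
assert (move_a, move_b) == ("C", "C")
```

Of course, this bakes the symmetry assumption directly into the model, which is exactly the step the rest of the thread disputes.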

-the other person is exactly as intelligent as me and is in an exactly symmetric situation.

-therefore we will make exactly the same decision.

-so I am choosing between CC and DD

-CC gets me more reward than DD

-so I will choose cooperate

At this point, there is no reason for the superrational actor not to consider that defecting would get him even more reward. But then he knows the other person will think of it too, and defecting would move them back to DD.

If you are going to argue that something is begging the question, it should be that first step. Why would an equally intelligent person in a symmetric situation always make the same decision? Or: why, once we know we'll make the same decision, would we not also know what that decision will be (DD)?

But this scenario is often brought up in situations for which the players can prove the other player is like them in some way. For instance, two artificial intelligences capable of advanced reasoning who both know that the other is running on exactly the same underlying algorithms.

Rationality vs superrationality, and various permutations of "perfect logician" stipulations, are all beside the real point, I think. Discussion of this nature is usually prompted by an intuitive feeling that the DD solution has to be wrong, somehow, because of how clearly inferior it is to CC and how CC tends to dominate over DD in analogous real world situations.

The true resolution to this bit of cognitive dissonance is to recognize that true pure prisoner's dilemma situations are extremely rare in real life. Analogous real life situations are far more likely to actually be iterated prisoner's dilemma with an unknown and indefinite number of iterations. In that variation of the problem, the Defect strategy's payout is altered by the ability of the other player to punish you in future iterations, and the lack of knowledge about the number of iterations means that potential for punishment exists (as far as the players know) on every iteration without exception. This results in variations on Tit-For-Tat being the dominant strategies, with the precise optimal variant depending on what strategies the other players use.
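As a rough illustration of that point, here is a short simulation of the iterated game (payoff numbers and round count are invented for the example): mutual Tit-For-Tat sustains cooperation, while Always-Defect gains only a one-round advantage before being punished for the rest of the game.

```python
# PAYOFF maps (player A's move, player B's move) to (A's reward, B's reward).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

assert play(tit_for_tat, tit_for_tat) == (300, 300)      # cooperation sustained
assert play(always_defect, always_defect) == (100, 100)  # mutual punishment
assert play(tit_for_tat, always_defect) == (99, 104)     # TFT loses only round 1
```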


I have no such intuitive feeling. DD is obviously right in a situation where it really and truly is a classical PD. I would most likely defect in such a situation. But the fact remains that DD is not Pareto-optimal, and that's not just an intuition.

- Vytron

quintopia wrote:At this point, there is no reason for the superrational actor not to consider that defecting would get him even more reward.

Yes, but that only applies to superrational actors. Perfect logicians with perfect knowledge and common knowledge about each other's perfect knowledge will conclude that if the other being were to cooperate then defecting is better, and if the other being were to defect then defecting is better, so they would defect.

If superrational actors have to truncate their logic to achieve the better outcome, then their logic is truncated, and therefore not perfect.

And, anyway, I don't think the original PD is about logic; it is about trust. Even in the single-instance scenario, my mother and I, or my sister and I, or a close friend and I, would be certain to cooperate, because we'd trust that we'd not betray each other. Superrational beings have the trust that whatever they decide to do will also be decided by the other being; this is something perfect logicians with common knowledge don't have.

It's like in the blue eyes puzzle: superrational beings can leave the island in n days, where n is the number of possible eye colors, while perfect logicians leave the island in n days, where n is the number of people with their eye color, because if the others haven't left by day n-1, each of them knows they all share the same color.

There's nothing that can make perfect logicians superrational.
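The blue-eyes timing claimed above for ordinary perfect logicians can be sketched as the standard induction (a toy model, not a full epistemic simulation):

```python
# An islander who sees m blue-eyed people leaves on day m + 1 if those m
# haven't all left by day m. With k blue-eyed islanders, everyone
# blue-eyed therefore leaves together on day k.
def departure_day(k):
    """Day on which all k blue-eyed perfect logicians leave (k >= 1)."""
    # Base case: a lone blue-eyed islander sees no one blue-eyed and
    # leaves on day 1. Inductive step: each of k islanders sees k - 1;
    # when those k - 1 fail to leave on day k - 1, everyone deduces
    # their own eye colour and leaves the next day.
    return 1 if k == 1 else departure_day(k - 1) + 1

assert departure_day(1) == 1
assert departure_day(100) == 100   # n people with your eye colour -> n days
```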

I misspoke. That should have read "the perfect logician has no reason not to consider that defecting would get him even more reward," but you are right. There is one more thing they need to have as common knowledge to be superrational: a shared desire to exploit the poser of the dilemma for as much total utility as possible.

The argument was that he would consider defecting, and then consider that the other player would also consider it. Then they would both realize that the other player would also realize that both were going to defect, and that that would fail to exploit the dilemma organizer. So then both players consider cooperating again. This would go around forever, unless both players truncate their consideration somewhere, so they both then think "we're both going to do the same thing; what thing would accomplish our shared goals the best?"


- Vytron

Yeah, if the Prisoner's Dilemma turns into a competition to defeat the dilemma organizer, I don't think it's necessary for them to be perfect logicians or have common knowledge to both cooperate. Common logicians would do it.

- SirGabriel

Vytron wrote:It's like in the blue eyes puzzle: superrational beings can leave the island in n days, where n is the number of possible eye colors

How could superrational beings know their eye color so quickly?

- Vytron


Back on topic:

The problem with superrationality in this instance is that this logic is not complete. Our agent is solving half of the problem, then just guessing rather than solving the other half. Here's what this argument should actually look like:

-The other person is exactly as intelligent as me and is in an exactly symmetric situation.

-Therefore we will make exactly the same decision.

-So I have ruled out CD and DC as possible outcomes, leaving CC and DD as the only remaining options. Now I have to determine which of them is correct.

-I cannot actually influence the other person's actions. Intentionally acting illogically would not magically force the other person to do so.

-So choosing C is strictly inferior to choosing D, as it always has inferior rewards.

-So I have also now ruled out CC, leaving DD as the only option.

-So I will choose to defect.

(and really, we can only actually rule out CD and DC in this way if we've already proven that the optimal strategy is not a mixed strategy)
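The dominance step in that argument can be checked mechanically (the payoff values here are illustrative, chosen only to satisfy B > C > D > S):

```python
# B = betraying, C = both cooperate, D = both defect, S = being betrayed.
B, C, D, S = 5, 3, 1, 0
my_payoff = {("C", "C"): C, ("C", "D"): S,
             ("D", "C"): B, ("D", "D"): D}

# Against either fixed move of the opponent, defecting pays strictly more,
# which is what rules out C for a rational agent who cannot influence the
# other player's choice.
for their_move in ("C", "D"):
    assert my_payoff[("D", their_move)] > my_payoff[("C", their_move)]
```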




lordatog wrote:The problem with superrationality in this instance is that this logic is not complete. Our agent is solving half of the problem, then just guessing rather than solving the other half.

I don't believe the agent is solving only half the problem.

If we accept that superrational agents will always use the same strategy, and that this is common knowledge, then of the two pure strategies CC has a higher expected reward than DD. But we can also allow a probabilistic strategy: let p be the probability of cooperating, C the reward for double-cooperate, D the reward for double-defect, B the reward for betraying, and 0 the reward for being betrayed, with B > C > D > 0. Then there is a formula for the tactic the players should take, and it turns out that

if B > 2C then there is a non-cooperate tactic the superrational agents will use.

At B=101, C=55, D=1 E(x)max = 55 at p=1

At B=101, C=45, D=1 E(x)max ~= 45.55 at p=0.9

Edit to add maths

E(x) = (C+D-B)p^2 + (B-2D)p + D

Completing the square for the above polynomial gives

E(x) = (C+D-B) * [p + (B-2D)/(2(C+D-B))]^2 + D - (B-2D)^2/(4(C+D-B))

The relevant component is the squared term, because E(x) has its extremum where that term equals 0. That extremum is a maximum when B > C+D (the extremum is a maximum when the coefficient of the second-order term is negative). Solving for p gives

p = (B-2D)/(2B-2C-2D) (where B ≠ C+D, which is fine because, as established above, B > C+D in all relevant cases)

subject to 0 ≤ p ≤ 1. Checking the boundaries:

At p = 1: 2B-2C-2D = B-2D, so B = 2C.

At p = 0: 0 = B-2D, so B = 2D.

At p = 0.5: B-C-D = B-2D, so C = D.

So E(x)max is found at an interior 0.5 < p < 1 when B > C+D and B > 2C.
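The algebra above can be sanity-checked numerically against the two examples earlier in the post; this is just a brute-force grid search over p, nothing clever:

```python
# Both superrational agents cooperate independently with the same
# probability p; B, C, D, 0 are the payoffs as defined above, B > C > D > 0.
def expected(p, B, C, D):
    return p*p*C + p*(1-p)*0 + (1-p)*p*B + (1-p)*(1-p)*D

def best_p(B, C, D, steps=10000):
    # Grid search over p in [0, 1] for the maximising probability.
    grid = [i / steps for i in range(steps + 1)]
    return max(grid, key=lambda p: expected(p, B, C, D))

# B = 101, C = 55, D = 1: B < 2C, so pure cooperation (p = 1) is optimal.
assert best_p(101, 55, 1) == 1.0
assert expected(1.0, 101, 55, 1) == 55

# B = 101, C = 45, D = 1: B > 2C, so the optimum is the interior point
# p = (B-2D)/(2B-2C-2D) = 99/110 = 0.9, with E(x) ~= 45.55.
p = best_p(101, 45, 1)
assert abs(p - 0.9) < 1e-3
assert abs(expected(p, 101, 45, 1) - 45.55) < 1e-3
```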


