==The iterated prisoner's dilemma==
{{more citations needed section|date=November 2012}}
If two players play prisoner's dilemma more than once in succession and they remember previous actions of their opponent and change their strategy accordingly, the game is called iterated prisoner's dilemma.
In addition to the general form above, the iterative version also requires that {{tmath|2R > T + S}}, to prevent alternating cooperation and defection giving a greater reward than mutual cooperation.
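Both inequalities can be checked mechanically. A minimal sketch in Python, assuming the usual textbook payoff values T=5, R=3, P=1, S=0 (only the labels T, R, P, S come from the standard formulation; the numbers are illustrative):

```python
T, R, P, S = 5, 3, 1, 0  # the usual textbook values, assumed here

def is_prisoners_dilemma(T, R, P, S):
    """One-shot ordering: temptation > reward > punishment > sucker's payoff."""
    return T > R > P > S

def is_iterated_pd(T, R, P, S):
    """The iterated version additionally needs 2R > T + S, so that mutual
    cooperation outscores alternating cooperation and defection."""
    return is_prisoners_dilemma(T, R, P, S) and 2 * R > T + S

print(is_iterated_pd(T, R, P, S))  # True
```

With T=6 instead, 2R = T + S and the iterated condition fails even though the one-shot ordering still holds.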
The iterated prisoner's dilemma game is fundamental to some theories of human cooperation and trust. On the assumption that the game can model transactions between two people requiring trust, cooperative behaviour in populations may be modeled by a multi-player, iterated, version of the game. It has, consequently, fascinated many scholars over the years. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000. The iterated prisoner's dilemma has also been referred to as the "[[Peace war game|peace-war game]]".<ref name = Shy>{{cite book | title= Industrial Organization: Theory and Applications | publisher=Massachusetts Institute of Technology Press | first1= Oz | last1=Shy |url=https://books.google.com/?id=tr4CjJ5LlRcC&pg=PR13&dq=industrial+organization+theory+and+applications  | year=1995 | isbn=978-0262193665 | accessdate=February 27, 2013}}</ref>
 
If the game is played exactly ''N'' times and both players know this, then it is optimal to defect in all rounds. The only possible [[Nash equilibrium]] is to always defect. The proof is [[Mathematical induction|inductive]]: one might as well defect on the last turn, since the opponent will not have a chance to later retaliate. Therefore, both will defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on.  The same applies if the game length is unknown but has a known upper limit.
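The inductive argument can be sketched numerically: in the stage game defection strictly dominates, and once both players defect in every later round the continuation payoff no longer depends on the current move, so the dominant stage move is played in every round. A loose formalization, assuming the textbook payoffs T=5, R=3, P=1, S=0:

```python
# Row player's payoff for (my move, opponent's move); T=5, R=3, P=1, S=0 assumed.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def dominant_move():
    """'D' strictly dominates 'C' against either opponent move."""
    if all(PAYOFF[("D", opp)] > PAYOFF[("C", opp)] for opp in "CD"):
        return "D"
    raise ValueError("no dominant move in this stage game")

def backward_induction(n_rounds):
    """Subgame-perfect play of an n-round game whose stage game has a
    dominant move: working back from the final round, later equilibrium
    play fixes the continuation payoff, so the dominant stage move is
    chosen in every round."""
    return [dominant_move() for _ in range(n_rounds)]

print(backward_induction(5))  # ['D', 'D', 'D', 'D', 'D']
```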
Unlike the standard prisoner's dilemma, in the iterated prisoner's dilemma the defection strategy is counter-intuitive and fails badly to predict the behavior of human players. Within standard economic theory, though, this is the only correct answer.  The [[superrational]] strategy in the iterated prisoner's dilemma with fixed ''N'' is to cooperate against a superrational opponent, and in the limit of large ''N'', experimental results on strategies agree with the superrational version, not the game-theoretic rational one.
For [[cooperation]] to emerge between game theoretic rational players, the total number of rounds ''N'' must be unknown to the players. In this case "always defect" may no longer be a strictly dominant strategy, only a Nash equilibrium. Amongst results shown by [[Robert Aumann]] in a 1959 paper, rational players repeatedly interacting for indefinitely long games can sustain the cooperative outcome.
According to a 2019 experimental study in the ''American Economic Review'' which tested what strategies real-life subjects used in iterated prisoners' dilemma situations with perfect monitoring, the majority of chosen strategies were always defect, [[Tit for tat|tit-for-tat]], and [[Grim trigger]]. Which strategy the subjects chose depended on the parameters of the game.<ref>{{Cite journal|last=Dal Bó|first=Pedro|last2=Fréchette|first2=Guillaume R.|date=2019|title=Strategy Choice in the Infinitely Repeated Prisoner's Dilemma|journal=American Economic Review|language=en|volume=109|issue=11|pages=3929–3952|doi=10.1257/aer.20181480|issn=0002-8282}}</ref>
===Strategy for the iterated prisoner's dilemma===
Interest in the iterated prisoner's dilemma (IPD) was kindled by [[Robert Axelrod]] in his book ''[[The Evolution of Cooperation]]'' (1984). In it he reports on a tournament he organized of the ''N'' step prisoner's dilemma (with ''N'' fixed) in which participants have to choose their mutual strategy again and again, and have memory of their previous encounters. Axelrod invited academic colleagues all over the world to devise computer strategies to compete in an IPD tournament. The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity for forgiveness, and so forth.
Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, greedy strategies tended to do very poorly in the long run while more [[altruism|altruistic]] strategies did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic behaviour from mechanisms that are initially purely selfish, by [[natural selection]].
The winning [[deterministic algorithm|deterministic]] strategy was tit for tat, which [[Anatol Rapoport]] developed and entered into the tournament. It was the simplest of any program entered, containing only four lines of [[BASIC]], and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his or her opponent did on the previous move. Depending on the situation, a slightly better strategy can be "tit for tat with forgiveness". When the opponent defects, on the next move, the player sometimes cooperates anyway, with a small probability (around 1–5%). This allows for occasional recovery from getting trapped in a cycle of defections. The exact probability depends on the line-up of opponents.
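The rule above fits in a few lines of Python. This is an illustration, not Rapoport's original four-line BASIC entry; moves are encoded as "C"/"D", each strategy sees only the opponent's move history, and the 1–5% forgiveness probability is the range quoted in the text:

```python
import random

def tit_for_tat(opp_history):
    """Cooperate on the first move; afterwards copy the opponent's last move."""
    return "C" if not opp_history else opp_history[-1]

def tit_for_tat_with_forgiveness(opp_history, p_forgive=0.05, rng=random):
    """Tit for tat, except that after an opponent defection the player still
    cooperates with a small probability, which lets both sides escape a
    lock-in of mutual defections."""
    if not opp_history:
        return "C"
    if opp_history[-1] == "D" and rng.random() < p_forgive:
        return "C"
    return opp_history[-1]

def play(strategy_a, strategy_b, rounds):
    """Run a match; each strategy is called with the opponent's history."""
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)
        b = strategy_b(moves_a)
        moves_a.append(a)
        moves_b.append(b)
    return moves_a, moves_b

# Two tit-for-tat players cooperate on every round.
a, b = play(tit_for_tat, tit_for_tat, 10)
print(a == ["C"] * 10 and b == ["C"] * 10)  # True
```

Against an unconditional defector, tit for tat loses only the first round and then defects for the rest of the match.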
By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful.
; Nice: The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does (this is sometimes referred to as an "optimistic" algorithm). Almost all of the top-scoring strategies were nice; therefore, a purely selfish strategy will not "cheat" on its opponent, for purely self-interested reasons first.
; Retaliating: However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such players.
; Forgiving: Successful strategies must also be forgiving. Though players will retaliate, they will once again fall back to cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge, maximizing points.
; Non-envious: The last quality is being non-envious, that is not striving to score more than the opponent.
The optimal (points-maximizing) strategy for the one-time PD game is simply defection; as explained above, this is true whatever the composition of opponents may be. However, in the iterated-PD game the optimal strategy depends upon the strategies of likely opponents, and how they will react to defections and cooperations. For example, consider a population where everyone defects every time, except for a single individual following the tit for tat strategy. That individual is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy for that individual is to defect every time. In a population with a certain percentage of always-defectors and the rest being tit for tat players, the optimal strategy for an individual depends on the percentage, and on the length of the game.
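The population example can be made quantitative. Under the assumed textbook payoffs T=5, R=3, P=1, S=0, each pairing's total over N rounds has a closed form, so the expected payoff of each type as a function of the tit-for-tat fraction q follows directly (a sketch; random pairing is an added assumption):

```python
T, R, P, S = 5, 3, 1, 0  # assumed textbook payoffs

def expected_payoffs(q, n_rounds):
    """Expected per-match totals when a fraction q of a randomly paired
    population plays tit for tat and the rest always defect."""
    tft_vs_tft   = n_rounds * R              # mutual cooperation throughout
    tft_vs_alld  = S + (n_rounds - 1) * P    # exploited once, then mutual defection
    alld_vs_tft  = T + (n_rounds - 1) * P    # one temptation payoff, then all P's
    alld_vs_alld = n_rounds * P
    tft  = q * tft_vs_tft  + (1 - q) * tft_vs_alld
    alld = q * alld_vs_tft + (1 - q) * alld_vs_alld
    return tft, alld

# With very few tit-for-tat players, always defecting earns more;
# once the tit-for-tat share is large enough, the ranking flips.
print(expected_payoffs(0.02, 10))
print(expected_payoffs(0.5, 10))
```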
In the strategy called Pavlov, [[win-stay, lose-switch]], faced with a failure to cooperate, the player switches strategy the next turn.<ref>http://www.pnas.org/content/pnas/93/7/2686.full.pdf</ref>  In certain circumstances,{{specify|date=November 2012}} Pavlov beats all other strategies by giving preferential treatment to co-players using a similar strategy.
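Win-stay, lose-switch is a memory-one rule: repeat the last move after a good outcome (payoff R or T, i.e. the opponent cooperated), switch after a bad one (P or S). A minimal sketch under that reading:

```python
def pavlov(my_history, opp_history):
    """Win-stay, lose-switch: keep the last move if it earned one of the two
    high payoffs (the opponent cooperated), otherwise switch moves."""
    if not my_history:
        return "C"
    last_mine, last_opp = my_history[-1], opp_history[-1]
    won = last_opp == "C"  # an R or T outcome counts as a "win"
    return last_mine if won else ("D" if last_mine == "C" else "C")

# After mutual cooperation Pavlov stays; after being defected on, it switches.
print(pavlov(["C"], ["C"]))  # 'C'
print(pavlov(["C"], ["D"]))  # 'D'
print(pavlov(["D"], ["D"]))  # 'C'
```

Note how two Pavlov players who fall into mutual defection both switch back to cooperation on the next round, which is how the strategy favors co-players using a similar rule.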
Deriving the optimal strategy is generally done in two ways:
* [[Bayesian Nash equilibrium]]: If the statistical distribution of opposing strategies can be determined (e.g. 50% tit for tat, 50% always cooperate) an optimal counter-strategy can be derived analytically.{{efn|1=For example see the 2003 study<ref>{{cite web|url= http://econ.hevra.haifa.ac.il/~mbengad/seminars/whole1.pdf|title=Bayesian Nash equilibrium; a statistical test of the hypothesis|url-status=dead|archive-url= https://web.archive.org/web/20051002195142/http://econ.hevra.haifa.ac.il/~mbengad/seminars/whole1.pdf|archive-date=2005-10-02|publisher=[[Tel Aviv University]]}}</ref> for discussion of the concept and whether it can apply in real [[economic]] or strategic situations.}}
* [[Monte Carlo method|Monte Carlo]] simulations of populations have been made, where individuals with low scores die off, and those with high scores reproduce (a [[genetic algorithm]] for finding an optimal strategy). The mix of algorithms in the final population generally depends on the mix in the initial population. The introduction of mutation (random variation during reproduction) lessens the dependency on the initial population; empirical experiments with such systems tend to produce tit for tat players (see for instance Chess 1988),{{Clarify|date=August 2016}} but no analytic proof exists that this will always occur.<ref>{{Citation|last=Wu|first=Jiadong|title=Cooperation on the Monte Carlo Rule: Prisoner's Dilemma Game on the Grid|date=2019|work=Theoretical Computer Science|volume=1069|pages=3–15|editor-last=Sun|editor-first=Xiaoming|publisher=Springer Singapore|language=en|doi=10.1007/978-981-15-0105-0_1|isbn=978-981-15-0104-3|last2=Zhao|first2=Chengye|editor2-last=He|editor2-first=Kun|editor3-last=Chen|editor3-first=Xiaoyun}}</ref>
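Such a simulation can be sketched as follows: score all pairs round-robin, let the low-scoring half die off, let the high-scoring half reproduce, and occasionally mutate. The strategy pool and all parameters below are illustrative assumptions, not the setup of any particular published tournament:

```python
import random

# Payoffs (row, column); T=5, R=3, P=1, S=0 assumed for illustration.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# A small, illustrative strategy pool; each sees (own, opponent) histories.
STRATEGIES = {
    "tit_for_tat":      lambda mine, opp: "C" if not opp else opp[-1],
    "always_defect":    lambda mine, opp: "D",
    "always_cooperate": lambda mine, opp: "C",
}

def match(name_a, name_b, rounds=20):
    """Total scores of one match between two named strategies."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = STRATEGIES[name_a](hist_a, hist_b)
        b = STRATEGIES[name_b](hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def evolve(population, generations=30, mutation=0.01, rng=random):
    """Round-robin scoring; the low-scoring half dies off, the high-scoring
    half reproduces, with a small chance of random mutation per offspring."""
    for _ in range(generations):
        scores = {i: 0 for i in range(len(population))}
        for i in range(len(population)):
            for j in range(i + 1, len(population)):
                sa, sb = match(population[i], population[j])
                scores[i] += sa
                scores[j] += sb
        ranked = sorted(scores, key=scores.get, reverse=True)
        survivors = [population[i] for i in ranked[:len(population) // 2]]
        offspring = [rng.choice(list(STRATEGIES)) if rng.random() < mutation
                     else rng.choice(survivors) for _ in survivors]
        population = survivors + offspring
    return population

random.seed(0)
pop = evolve(["always_defect"] * 6 + ["tit_for_tat"] * 6 + ["always_cooperate"] * 6)
print(max(set(pop), key=pop.count))  # the most common surviving strategy
```

As the text notes, the final mix depends on the initial population and on mutation; runs like this often end dominated by tit for tat, but that is not guaranteed.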
Although tit for tat is considered to be the most [[robust]] basic strategy, a team from [[Southampton University]] in England introduced a new strategy at the 20th-anniversary iterated prisoner's dilemma competition, which proved to be more successful than tit for tat. This strategy relied on collusion between programs to achieve the highest number of points for a single program. The university submitted 60 programs to the competition, which were designed to recognize each other through a series of five to ten moves at the start.<ref>{{cite press release|url= http://www.southampton.ac.uk/mediacentre/news/2004/oct/04_151.shtml|publisher=University of Southampton|title=University of Southampton team wins Prisoner's Dilemma competition|date=7 October 2004|url-status=dead|archive-url= https://web.archive.org/web/20140421055745/http://www.southampton.ac.uk/mediacentre/news/2004/oct/04_151.shtml|archive-date=2014-04-21}}</ref> Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the score of the competing program. As a result, the 2004 Prisoners' Dilemma Tournament results show [[University of Southampton]]'s strategies in the first three places, despite having fewer wins and many more losses than the GRIM strategy. (In a PD tournament, the aim of the game is not to "win" matches&nbsp;– that can easily be achieved by frequent defection). Also, even without implicit collusion between [[computer program|software strategies]] (exploited by the Southampton team) tit for tat is not always the absolute winner of any given tournament; it would be more precise to say that its long run results over a series of tournaments outperform its rivals. (In any one event a given strategy can be slightly better adjusted to the competition than tit for tat, but tit for tat is more robust). The same applies for the tit for tat with forgiveness variant, and other optimal strategies: on any given day they might not "win" against a specific mix of counter-strategies. An alternative way of putting it is using the Darwinian [[Evolutionarily stable strategy|ESS]] simulation. In such a simulation, tit for tat will almost always come to dominate, though nasty strategies will drift in and out of the population because a tit for tat population is penetrable by non-retaliating nice strategies, which in turn are easy prey for the nasty strategies. [[Richard Dawkins]] showed that here, no static mix of strategies forms a stable equilibrium and the system will always oscillate between bounds. This strategy ended up taking the top three positions in the competition, as well as a number of positions towards the bottom.
这种策略利用了两点:这场比赛允许每个团队提交多个参赛程序,而团队成绩以得分最高的那个程序来衡量(这意味着使用自我牺牲的程序是一种“极限化”(minmaxing)打法)。在只能控制单个参赛程序的比赛中,针锋相对无疑是更好的策略。由于这条新规则,与阿克塞尔罗德影响深远的锦标赛相比,这场比赛在分析单主体策略方面理论意义不大;但它为分析如何在多主体框架下、尤其是存在噪声时实现合作策略提供了基础。事实上,早在这场新规则锦标赛举行之前,道金斯就在《自私的基因》一书中指出,如果允许多个参赛程序,这类策略就有可能获胜,但他也指出,倘若有人提交这类策略,阿克塞尔罗德多半不会接受。这种策略还依赖于绕开囚徒困境中“双方不得交流”的规则——南安普敦的程序正是用开场的“十步舞”来相互识别的,这可以说就是一种交流;这恰恰凸显了交流在改变博弈均势方面的价值。
  −
         +
===随机重复囚徒困境===
   −
在随机重复囚徒困境博弈中,策略以“合作概率”来指定。<ref name=Press2012>{{cite journal|last1=Press|first1=WH|last2=Dyson|first2=FJ|title=Iterated Prisoner's Dilemma contains strategies that dominate any evolutionary opponent|journal=[[Proceedings of the National Academy of Sciences of the United States of America]]|date=26 June 2012|volume=109|issue=26|pages=10409–13|doi=10.1073/pnas.1206569109|pmid=22615375|pmc=3387070|bibcode=2012PNAS..10910409P}}</ref>在玩家''X''与玩家''Y''的一次遭遇中,''X''的策略由一组与''Y''合作的概率''P''指定;''P''是双方此前各次遭遇结果(或其某个子集)的函数。如果''P''只依赖于最近''n''次遭遇的结果,就称之为“记忆-n”策略。于是,一个记忆-1策略由四个合作概率指定:<math>P=\{P_{cc},P_{cd},P_{dc},P_{dd}\}</math>,其中<math>P_{ab}</math>是在上一次遭遇结果为(ab)的条件下,''X''在本次遭遇中合作的概率。例如,若上一次遭遇中''X''合作而''Y''背叛,则<math>P_{cd}</math>就是''X''在本次遭遇中合作的概率。如果每个概率都取1或0,该策略就称为确定性策略。确定性策略的一个例子是针锋相对策略,记作''P''={1,0,1,0},即''X''重复''Y''在上一次遭遇中的动作;另一个例子是“赢-保持,输-转换”(win–stay, lose–switch)策略,记作''P''={1,0,0,1},即如果上一次是“赢”(即cc或dc),''X''就重复上一次的动作,如果是“输”(即cd或dd),''X''就改变动作。已有研究表明,对任何记忆-n策略都存在一个给出相同统计结果的记忆-1策略,因此只需考虑记忆-1策略。<ref name="Press2012"/>
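下面用一段示意性的 Python 代码模拟上述记忆-1策略的随机重复囚徒困境(代码为编者补充的演示,非原文内容;收益取常用值 ''T''=5、''R''=3、''P''=1、''S''=0,满足 ''T''>''R''>''P''>''S'' 与 2''R''>''T''+''S''):

```python
import random

# 记忆-1策略:以上一轮结果(cc, cd, dc, dd,均从玩家自身视角)索引的合作概率。
TFT  = {'cc': 1.0, 'cd': 0.0, 'dc': 1.0, 'dd': 0.0}   # 针锋相对 P={1,0,1,0}
WSLS = {'cc': 1.0, 'cd': 0.0, 'dc': 0.0, 'dd': 1.0}   # 赢-保持,输-转换 P={1,0,0,1}

def play(P, Q, rounds=20000, seed=0):
    """模拟玩家 X(策略 P)与 Y(策略 Q)的随机重复囚徒困境,返回双方的平均每轮收益。"""
    R, S, T, Pu = 3, 0, 5, 1
    pay_x = {'cc': R, 'cd': S, 'dc': T, 'dd': Pu}      # X 视角的收益
    pay_y = {'cc': R, 'cd': T, 'dc': S, 'dd': Pu}      # Y 视角:cd 与 dc 互换
    rng = random.Random(seed)
    state = 'cc'                                       # 假定首轮之前双方都曾合作
    tot_x = tot_y = 0
    for _ in range(rounds):
        x = 'c' if rng.random() < P[state] else 'd'
        y_state = state[1] + state[0]                  # Y 眼中的上一轮结果
        y = 'c' if rng.random() < Q[y_state] else 'd'
        state = x + y
        tot_x += pay_x[state]
        tot_y += pay_y[state]
    return tot_x / rounds, tot_y / rounds
```

例如 <code>play(TFT, TFT)</code> 返回 (3.0, 3.0):两个针锋相对玩家从首轮起始终相互合作。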
   −
如果我们把''P''定义为上述''X''的4元策略向量,把<math>Q=\{Q_{cc},Q_{cd},Q_{dc},Q_{dd}\}</math>定义为''Y''的4元策略向量,就可以为''X''定义一个转移矩阵''M'':其第''ij''项是在上一次遭遇结果为''i''的条件下,''X''与''Y''本次遭遇结果为''j''的概率,其中''i''和''j''是四个结果索引''cc''、''cd''、''dc''、''dd''之一。例如,从''X''的角度看,在上一次结果为''cd''的条件下,本次结果为''cd''的概率等于<math>M_{cd,cd}=P_{cd}(1-Q_{dc})</math>。(''Q''的下标是从''Y''的角度而言的:''X''的''cd''结果就是''Y''的''dc''结果。)在这些定义下,重复囚徒困境可视为一个随机过程,而''M''是一个随机矩阵,因此随机过程的全部理论都可以应用。<ref name="Press2012"/>
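正文中的转移矩阵可以按定义直接构造。下面是一段示意性的 Python 代码(编者补充的演示,非原文内容),并核对正文给出的元素 <math>M_{cd,cd}=P_{cd}(1-Q_{dc})</math>:

```python
import numpy as np

STATES = ['cc', 'cd', 'dc', 'dd']                      # 均按 X 的视角排列

def transition_matrix(P, Q):
    """由 X、Y 的合作概率向量(按 cc,cd,dc,dd 排列)构造 4x4 转移矩阵 M。
    M[i, j] 是上一轮结果为 STATES[i] 时,本轮结果为 STATES[j] 的概率。"""
    M = np.zeros((4, 4))
    for i, s in enumerate(STATES):
        px = P[i]                                      # X 在该状态下合作的概率
        qy = Q[STATES.index(s[1] + s[0])]              # Y 的视角:cd 与 dc 互换
        M[i] = [px * qy, px * (1 - qy),                # 本轮结果依次为 cc, cd
                (1 - px) * qy, (1 - px) * (1 - qy)]    # 以及 dc, dd
    return M

# 任取两组合作概率,核对 M_{cd,cd} = P_{cd} * (1 - Q_{dc}),且每行概率和为 1
P = np.array([0.9, 0.5, 0.8, 0.2])
Q = np.array([0.7, 0.4, 0.6, 0.1])
M = transition_matrix(P, Q)
```

这里 <code>M[1, 1]</code> 即 <math>M_{cd,cd}</math>,等于 0.5 × (1 − 0.6) = 0.2。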
   −
  −
  −
      
随机理论的一个结果是:矩阵''M''存在一个平稳向量''v'',使得<math>v\cdot M=v</math>。不失一般性,可以把''v''归一化,使其四个分量之和为1。<math>M^n</math>的第''ij''项给出:在''n''步之前的遭遇结果为''i''的条件下,''X''与''Y''本次遭遇结果为''j''的概率。当''n''趋于无穷时,<math>M^n</math>收敛到一个取值固定的矩阵,给出遭遇结果为''j''的长期概率,且该概率与''i''无关。换句话说,<math>M^\infty</math>的各行将完全相同,从而给出重复囚徒困境的长期均衡结果概率,而无需显式计算大量交互。可以看出,''v''是<math>M^n</math>(特别是<math>M^\infty</math>)的平稳向量,因此<math>M^\infty</math>的每一行都等于''v'',即平稳向量给出了''X''的均衡结果概率。把<math>S_x=\{R,S,T,P\}</math>和<math>S_y=\{R,T,S,P\}</math>定义为{cc,cd,dc,dd}各结果的短期收益向量(从''X''的角度),''X''和''Y''的均衡收益就可以表示为<math>s_x=v\cdot S_x</math>和<math>s_y=v\cdot S_y</math>,从而可以比较''P''、''Q''两种策略的长期收益。
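平稳向量与均衡收益可以数值求出。下面的示意性 Python 代码(编者补充,非原文内容)取收益 {R,S,T,P}={3,0,5,1},用特征向量求 ''v'',并验证 <math>M^n</math> 的各行确实收敛到 ''v'':

```python
import numpy as np

STATES = ['cc', 'cd', 'dc', 'dd']
S_x = np.array([3, 0, 5, 1])          # X 视角的短期收益向量 {R,S,T,P}
S_y = np.array([3, 5, 0, 1])          # Y 视角:cd 与 dc 的收益互换

def transition_matrix(P, Q):
    """按正文定义构造 4x4 转移矩阵(P、Q 为按 cc,cd,dc,dd 排列的合作概率)。"""
    M = np.zeros((4, 4))
    for i, s in enumerate(STATES):
        px, qy = P[i], Q[STATES.index(s[1] + s[0])]
        M[i] = [px * qy, px * (1 - qy), (1 - px) * qy, (1 - px) * (1 - qy)]
    return M

def stationary(M):
    """返回满足 v·M = v 且分量和为 1 的平稳向量(取 M 转置在特征值 1 处的特征向量)。"""
    w, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmin(np.abs(w - 1))])
    return v / v.sum()

P = np.array([0.9, 0.5, 0.8, 0.2])    # 任取两个内部(概率严格介于 0 与 1 之间)策略
Q = np.array([0.7, 0.4, 0.6, 0.1])
M = transition_matrix(P, Q)
v = stationary(M)
s_x, s_y = v @ S_x, v @ S_y           # 均衡收益 s_x = v·S_x, s_y = v·S_y
M_inf = np.linalg.matrix_power(M, 200)   # M^n 的各行应收敛到 v
```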
====零决定策略====
      +
[[File:IPD Venn.svg.png|right|thumb|upright=2.5|用维恩图 Venn diagram 表示的重复囚徒困境(IPD)中零决定策略(ZD)、合作策略与背叛策略之间的关系。合作策略总是与其他合作策略相互合作,背叛策略总是对其他背叛策略背叛。两者都包含在强选择下稳健的策略子集:当这些策略驻留在种群中时,不会有其他记忆-1策略被选择来入侵它们。只有合作策略包含一个始终稳健的子集:无论在强选择还是弱选择下,都不会有其他记忆-1策略被选择来入侵并取代它们。零决定策略与好的合作策略的交集是宽容的零决定策略集合。勒索策略是零决定策略与非稳健背叛策略的交集。针锋相对位于合作策略、背叛策略和零决定策略的交点上。]]
   −
  −
  −
  −
  −
      
2012年,威廉·H·普莱斯 William H. Press和弗里曼·戴森 Freeman Dyson针对随机重复囚徒困境提出了一类新的策略,称为“零决定”(ZD)策略。<ref name="Press2012"/>''X''和''Y''之间遭遇的长期收益可以表示为某个矩阵的行列式,该矩阵是两个策略和短期收益向量的函数:<math>s_x=D(P,Q,S_x)</math>和<math>s_y=D(P,Q,S_y)</math>,其中不涉及平稳向量''v''。由于行列式函数<math>s_y=D(P,Q,f)</math>关于''f''是线性的,因此有<math>\alpha s_x+\beta s_y+\gamma=D(P,Q,\alpha S_x+\beta S_y+\gamma U)</math>(其中''U''={1,1,1,1})。任何使<math>D(P,Q,\alpha S_x+\beta S_y+\gamma U)=0</math>成立的策略按定义都是零决定策略,其长期收益服从关系式<math>\alpha s_x+\beta s_y+\gamma=0</math>。
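针锋相对本身就是一个零决定策略:取 <math>\alpha=1,\ \beta=-1,\ \gamma=0</math>,它强制 <math>s_x-s_y=0</math>,即无论对手采取何种策略,双方长期收益相等。下面的示意性 Python 代码(编者补充,非原文内容)用平稳向量方法,对随机生成的对手策略数值验证这一点:

```python
import numpy as np

STATES = ['cc', 'cd', 'dc', 'dd']
S_x = np.array([3, 0, 5, 1])
S_y = np.array([3, 5, 0, 1])
TFT = np.array([1.0, 0.0, 1.0, 0.0])          # 针锋相对 P={1,0,1,0}

def transition_matrix(P, Q):
    """按正文定义构造 4x4 转移矩阵(P、Q 为按 cc,cd,dc,dd 排列的合作概率)。"""
    M = np.zeros((4, 4))
    for i, s in enumerate(STATES):
        px, qy = P[i], Q[STATES.index(s[1] + s[0])]
        M[i] = [px * qy, px * (1 - qy), (1 - px) * qy, (1 - px) * (1 - qy)]
    return M

def stationary(M):
    """返回满足 v·M = v 且分量和为 1 的平稳向量。"""
    w, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmin(np.abs(w - 1))])
    return v / v.sum()

# 对 100 个随机的内部对手策略 Q,检验针锋相对强制 s_x - s_y = 0
rng = np.random.default_rng(0)
max_gap = 0.0
for _ in range(100):
    Q = rng.uniform(0.05, 0.95, 4)
    v = stationary(transition_matrix(TFT, Q))
    max_gap = max(max_gap, abs(v @ S_x - v @ S_y))
```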
   −
  −
  −
      
针锋相对是一种零决定策略,它是“公平”的,即不谋求对另一名玩家的优势。然而,零决定策略空间还包含这样的策略:在双人博弈中,一名玩家可以单方面设定另一名玩家的分数,或者迫使一名进化型玩家的收益比自己低某个比例。被勒索的玩家可以选择背叛,但这样会因收益更低而伤及自身。因此,勒索型解法把重复囚徒困境变成了一种<font color="#ff8000">最后通牒博弈 ultimatum game </font>。具体来说,''X''可以选择一种使<math>D(P,Q,\beta S_y+\gamma U)=0</math>成立的策略,从而在某个取值范围内单方面地把<math>s_y</math>设定为一个特定值,而与''Y''的策略无关,这就给了''X''“勒索”玩家''Y''的机会(反之亦然)。(事实证明,如果''X''试图把<math>s_x</math>设定为特定值,可能的取值范围要小得多,只包括完全合作或完全背叛。<ref name="Press2012"/>)
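作为示意,取收益 {R,S,T,P}={3,0,5,1},策略 <math>P=\{11/13,\ 1/2,\ 7/26,\ 0\}</math> 是一个勒索因子为 3 的零决定策略(此数值例子为编者补充的演示,可由 <math>\alpha=1/26,\ \beta=-3/26,\ \gamma=1/13</math> 验证):无论 ''Y'' 采取何种策略,它都强制 <math>(s_x-1)=3(s_y-1)</math>,即 ''X'' 相对于相互背叛收益的“超额收益”总是 ''Y'' 的三倍。下面的 Python 代码对随机对手数值验证这一关系:

```python
import numpy as np

STATES = ['cc', 'cd', 'dc', 'dd']
S_x = np.array([3, 0, 5, 1])
S_y = np.array([3, 5, 0, 1])
P_ext = np.array([11/13, 1/2, 7/26, 0.0])     # 勒索因子为 3 的零决定策略(示例值)

def transition_matrix(P, Q):
    """按正文定义构造 4x4 转移矩阵(P、Q 为按 cc,cd,dc,dd 排列的合作概率)。"""
    M = np.zeros((4, 4))
    for i, s in enumerate(STATES):
        px, qy = P[i], Q[STATES.index(s[1] + s[0])]
        M[i] = [px * qy, px * (1 - qy), (1 - px) * qy, (1 - px) * (1 - qy)]
    return M

def stationary(M):
    """返回满足 v·M = v 且分量和为 1 的平稳向量。"""
    w, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmin(np.abs(w - 1))])
    return v / v.sum()

# 对随机对手策略 Q 检验 (s_x - 1) = 3 (s_y - 1),且勒索者收益从不低于对手
rng = np.random.default_rng(1)
max_err, min_gap = 0.0, np.inf
for _ in range(100):
    Q = rng.uniform(0.05, 0.95, 4)
    v = stationary(transition_matrix(P_ext, Q))
    s_x, s_y = v @ S_x, v @ S_y
    max_err = max(max_err, abs((s_x - 1) - 3 * (s_y - 1)))
    min_gap = min(min_gap, s_x - s_y)
```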
   −
  −
  −
      
重复囚徒困境的一个扩展是进化的随机重复囚徒困境:允许各种策略的相对丰度发生变化,更成功的策略相对增多。实现方式可以是让较不成功的玩家模仿更成功的策略,也可以是把较不成功的玩家从博弈中淘汰,同时让更成功的玩家成倍增加。研究表明,不公平的零决定策略并非进化稳定策略。关键的直觉在于:一个进化稳定策略不仅要能入侵其他种群(勒索型零决定策略可以做到),还必须在面对同类型的其他玩家时表现良好(勒索型零决定玩家在这一点上表现很差,因为它们会压低彼此的盈余)。<ref>{{cite journal|last=Adami|first=Christoph|author2=Arend Hintze|title=Evolutionary instability of Zero Determinant strategies demonstrates that winning isn't everything|journal=Nature Communications|volume=4|year=2013|page=3|arxiv=1208.2666|doi=10.1038/ncomms3193|pmid=23903782|pmc=3741637|bibcode=2013NatCo...4.2193A}}</ref>
   −
  −
  −
      
理论和模拟都证实:一旦超过某个临界种群规模,零决定勒索策略在与更合作的策略的进化竞争中就会落败,因此种群越大,种群的平均收益就越高。此外,在某些情况下,勒索者甚至可能催化合作,帮助打破清一色背叛者与“赢-保持,输-转换”玩家之间的对峙。<ref name=Hilbe2013 />
   −
  −
  −
      
虽然勒索零决定策略在人口众多的情况下并不稳定,但另一种宽松的零决定策略既稳定又稳健。事实上,当人口不算太少的时候,这些策略可以取代任何其他零决定策略,甚至在一系列针对重复囚徒困境的广泛通用策略(包括“获胜-保持-输”的转换策略)中表现良好。亚历山大·斯图尔特 Alexander Stewart和约书亚·普洛特金 Joshua Plotkin在2013年的捐赠博弈中证明了这一点。<ref name=Stewart2013>{{cite journal|last=Stewart|first=Alexander J.|author2=Joshua B. Plotkin|title=From extortion to generosity, evolution in the Iterated Prisoner's Dilemma|journal=[[Proceedings of the National Academy of Sciences of the United States of America]]|year=2013|doi=10.1073/pnas.1306246110|pmid=24003115|volume=110|issue=38|pages=15348–53|bibcode=2013PNAS..11015348S|pmc=3780848}}</ref>宽松的策略会与其他合作的玩家合作,面对背叛,慷慨的玩家比他的对手失去更多的效用。宽松策略是零决定策略和所谓的“好”策略的交集,阿金(2013) <ref name=Akin2013>{{cite arxiv|last=Akin|first=Ethan|title=Stable Cooperative Solutions for the Iterated Prisoner's Dilemma|year=2013|page=9|class=math.DS|eprint=1211.0969}} {{bibcode|2012arXiv1211.0969A}}</ref> Among good strategies, the generous (ZD) subset performs well when the population is not too small. If the population is very small, defection strategies tend to dominate.将这两种策略定义为玩家对过去的相互合作作出回应,并在至少获得合作预期收益的情况下平均分配预期收益的策略。在好的策略中,当总体不太小时,宽松(零决定)子集表现良好。如果总体很少,背叛策略往往占主导地位。<ref name=Stewart2013 />
 
虽然勒索零决定策略在人口众多的情况下并不稳定,但另一种宽松的零决定策略既稳定又稳健。事实上,当人口不算太少的时候,这些策略可以取代任何其他零决定策略,甚至在一系列针对重复囚徒困境的广泛通用策略(包括“获胜-保持-输”的转换策略)中表现良好。亚历山大·斯图尔特 Alexander Stewart和约书亚·普洛特金 Joshua Plotkin在2013年的捐赠博弈中证明了这一点。<ref name=Stewart2013>{{cite journal|last=Stewart|first=Alexander J.|author2=Joshua B. Plotkin|title=From extortion to generosity, evolution in the Iterated Prisoner's Dilemma|journal=[[Proceedings of the National Academy of Sciences of the United States of America]]|year=2013|doi=10.1073/pnas.1306246110|pmid=24003115|volume=110|issue=38|pages=15348–53|bibcode=2013PNAS..11015348S|pmc=3780848}}</ref>宽松的策略会与其他合作的玩家合作,面对背叛,慷慨的玩家比他的对手失去更多的效用。宽松策略是零决定策略和所谓的“好”策略的交集,阿金(2013) <ref name=Akin2013>{{cite arxiv|last=Akin|first=Ethan|title=Stable Cooperative Solutions for the Iterated Prisoner's Dilemma|year=2013|page=9|class=math.DS|eprint=1211.0969}} {{bibcode|2012arXiv1211.0969A}}</ref> Among good strategies, the generous (ZD) subset performs well when the population is not too small. If the population is very small, defection strategies tend to dominate.将这两种策略定义为玩家对过去的相互合作作出回应,并在至少获得合作预期收益的情况下平均分配预期收益的策略。在好的策略中,当总体不太小时,宽松(零决定)子集表现良好。如果总体很少,背叛策略往往占主导地位。<ref name=Stewart2013 />
   −
===Continuous iterated prisoner's dilemma===
Most work on the iterated prisoner's dilemma has focused on the discrete case, in which players either cooperate or defect, because this model is relatively simple to analyze. However, some researchers have looked at models of the continuous iterated prisoner's dilemma, in which players are able to make a variable contribution to the other player. Le and Boyd<ref>{{cite journal | last1 = Le | first1 = S. | last2 = Boyd | first2 = R. |name-list-format=vanc| year = 2007 | title = Evolutionary Dynamics of the Continuous Iterated Prisoner's Dilemma | journal = Journal of Theoretical Biology | volume = 245 | issue = 2| pages = 258–67 | doi = 10.1016/j.jtbi.2006.09.016 | pmid = 17125798 }}</ref> found that in such situations, cooperation is much harder to evolve than in the discrete iterated prisoner's dilemma. The basic intuition for this result is straightforward: in a continuous prisoner's dilemma, if a population starts off in a non-cooperative equilibrium, players who are only marginally more cooperative than non-cooperators get little benefit from [[Assortative mating|assorting]] with one another. By contrast, in a discrete prisoner's dilemma, tit-for-tat cooperators get a big payoff boost from assorting with one another in a non-cooperative equilibrium, relative to non-cooperators. Since nature arguably offers more opportunities for variable cooperation rather than a strict dichotomy of cooperation or defection, the continuous prisoner's dilemma may help explain why real-life examples of tit for tat-like cooperation are extremely rare in nature (e.g., Hammerstein<ref>Hammerstein, P. (2003). Why is reciprocity so rare in social animals? A protestant appeal. In: P. Hammerstein, Editor, Genetic and Cultural Evolution of Cooperation, MIT Press. pp. 83–94.</ref>), even though tit for tat seems robust in theoretical models.
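The intuition about marginal cooperators can be made concrete with one common linear parameterization of the continuous game (an illustrative assumption, not Le and Boyd's exact model): each player chooses a contribution in [0, 1] that costs the giver c per unit and benefits the receiver b per unit, with b > c.

```python
def payoff(x, y, b=2.0, c=1.0):
    """Continuous PD: my contribution x costs me c*x; the opponent's
    contribution y benefits me b*y. Since b > c, mutual full
    cooperation beats mutual defection, yet contributing nothing
    (x = 0) always dominates for the individual."""
    return b * y - c * x

# In a population of pure defectors, a mutant contributing only a
# little more (epsilon) gains almost nothing from meeting its own
# type, so selection can barely favor it:
eps = 0.01
gain_continuous = payoff(eps, eps) - payoff(0, 0)   # (b - c) * eps, tiny
# A discrete tit-for-tat mutant meeting its own type jumps straight
# to full mutual cooperation, a much larger gain:
gain_discrete = payoff(1, 1) - payoff(0, 0)         # b - c, large
```

The gap between `gain_continuous` and `gain_discrete` is the "basic intuition" in the paragraph above: assorting helps marginal cooperators only marginally in the continuous game.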
===Emergence of stable strategies===
Players cannot seem to coordinate mutual cooperation, and thus often get locked into the inferior yet stable strategy of defection.  In this way, iterated rounds facilitate the evolution of stable strategies.<ref>{{cite book|last=Spaniel|first=William|title=Game Theory 101: The Complete Textbook|year=2011}}</ref> Iterated rounds often produce novel strategies, which have implications for complex social interaction. One such strategy is win-stay, lose-shift: if you can get away with cheating, repeat that behavior; if you get caught, switch. This strategy outperforms a simple tit-for-tat strategy.<ref>{{cite journal|last=Nowak|first=Martin|author2=Karl Sigmund|title=A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner's Dilemma game|journal=Nature|year=1993|volume=364|issue=6432|doi=10.1038/364056a0|pages=56–58|pmid=8316296|bibcode=1993Natur.364...56N}}</ref>
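The "repeat if you get away with it, switch if you get caught" rule is a one-step update. A minimal sketch (the 'C'/'D' move encoding is an assumption of this example):

```python
def wsls_next(my_last, their_last):
    """Win-stay, lose-shift: if the last outcome was good for me
    (the opponent cooperated, so I earned R or T), repeat my move;
    otherwise (I earned P or S), switch. 'C' = cooperate, 'D' = defect."""
    won = their_last == 'C'
    if won:
        return my_last                      # stay: cheating went unpunished
    return 'D' if my_last == 'C' else 'C'   # shift: outcome was bad
```

Note that after successfully exploiting a cooperator (my 'D', their 'C') the rule keeps defecting, and only switches back to cooperation once the exploitation is punished (mutual 'D').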
   −
The one problem with the tit-for-tat strategy is that it is vulnerable to signal error: one individual defects in retaliation, but the other misinterprets the retaliation as unprovoked cheating. The second individual then defects in turn, and the pair locks into a see-saw pattern of alternating defection in a chain reaction.
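The see-saw dynamic can be reproduced in a toy model: two pure tit-for-tat players each echo the opponent's last move, and a single flipped move (standing in for a misread retaliation) starts a chain of alternating defections that never dies out.

```python
def play_tft(rounds, flip_at=None):
    """Two tit-for-tat players; optionally flip player A's move once
    at round `flip_at` to model a one-off signal error."""
    ha, hb = [], []
    for t in range(rounds):
        a = hb[-1] if hb else 'C'          # each player copies the
        b = ha[-1] if ha else 'C'          # opponent's previous move
        if t == flip_at:
            a = 'D' if a == 'C' else 'C'   # the single perceived error
        ha.append(a)
        hb.append(b)
    return ''.join(ha), ''.join(hb)
```

Without an error both histories are all-'C'; with one flip, the players echo each other's defections forever in alternating rounds.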
<br>
    
==Real-life examples==
 