In a stochastic iterated prisoner's dilemma game, strategies are specified in terms of "cooperation probabilities".<ref name=Press2012>{{cite journal|last1=Press|first1=WH|last2=Dyson|first2=FJ|title=Iterated Prisoner's Dilemma contains strategies that dominate any evolutionary opponent|journal=[[Proceedings of the National Academy of Sciences of the United States of America]]|date=26 June 2012|volume=109|issue=26|pages=10409–13|doi=10.1073/pnas.1206569109|pmid=22615375|pmc=3387070|bibcode=2012PNAS..10910409P}}</ref> In an encounter between player X and player Y, X's strategy is specified by a set of probabilities P of cooperating with Y. P is a function of the outcomes of their previous encounters, or some subset thereof. If P is a function of only their most recent n encounters, it is called a "memory-n" strategy. A memory-1 strategy is then specified by four cooperation probabilities: <math>P=\{P_{cc},P_{cd},P_{dc},P_{dd}\}</math>, where <math>P_{ab}</math> is the probability that X will cooperate in the present encounter given that the previous encounter was characterized by (ab). For example, if the previous encounter was one in which X cooperated and Y defected, then <math>P_{cd}</math> is the probability that X will cooperate in the present encounter. If each of the probabilities is either 1 or 0, the strategy is called deterministic. An example of a deterministic strategy is the tit for tat strategy, written as P={1,0,1,0}, in which X responds as Y did in the previous encounter. Another is the win–stay, lose–switch strategy, written as P={1,0,0,1}, in which X responds as in the previous encounter if it was a "win" (i.e. cc or dc) but changes strategy if it was a loss (i.e. cd or dd). It has been shown that for any memory-n strategy there is a corresponding memory-1 strategy that gives the same statistical results, so that only memory-1 strategies need be considered.<ref name="Press2012"/>
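As a minimal sketch of how such memory-1 strategies can be simulated (the payoff values 5, 3, 1, 0 and all function and variable names below are illustrative assumptions, not taken from the source), the following Python example plays tit for tat against win–stay, lose–switch and reports each player's average per-round payoff:

<syntaxhighlight lang="python">
import random

# Assumed standard prisoner's dilemma payoffs for player X, keyed by the
# joint outcome (X's move, Y's move): T=5 > R=3 > P=1 > S=0.
PAYOFF_X = {"cc": 3, "cd": 0, "dc": 5, "dd": 1}

# Memory-1 strategies as cooperation probabilities {P_cc, P_cd, P_dc, P_dd},
# keyed by the previous joint outcome seen from that player's own perspective.
TIT_FOR_TAT = {"cc": 1.0, "cd": 0.0, "dc": 1.0, "dd": 0.0}
WIN_STAY_LOSE_SWITCH = {"cc": 1.0, "cd": 0.0, "dc": 0.0, "dd": 1.0}

def move(strategy, prev_outcome):
    """Cooperate ('c') with the probability the strategy assigns to prev_outcome."""
    return "c" if random.random() < strategy[prev_outcome] else "d"

def play(strategy_x, strategy_y, rounds=10000, first=("c", "c")):
    """Average per-round payoffs of X and Y over an iterated game."""
    x_prev, y_prev = first          # moves in the previous encounter
    total_x = total_y = 0
    for _ in range(rounds):
        # X sees the previous outcome as (x_prev, y_prev), Y as (y_prev, x_prev).
        x = move(strategy_x, x_prev + y_prev)
        y = move(strategy_y, y_prev + x_prev)
        total_x += PAYOFF_X[x + y]
        total_y += PAYOFF_X[y + x]
        x_prev, y_prev = x, y
    return total_x / rounds, total_y / rounds

if __name__ == "__main__":
    print(play(TIT_FOR_TAT, WIN_STAY_LOSE_SWITCH))
</syntaxhighlight>

Because both strategies here are deterministic, the randomness only matters for genuinely stochastic strategies, where the four probabilities lie strictly between 0 and 1.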