In a world where survival favors the strongest, the question arises: how do cooperative behaviors develop?
From evolutionary biology to the complexities of international diplomacy, many scenarios can be analyzed through the lens of game theory. These games specify not only the actions and strategies available to each participant but also the corresponding payoffs—the positive or negative outcomes each player receives for every combination of choices. Some games are classified as “zero-sum,” meaning one player’s gain is exactly another player’s loss, while others are not.
A notable example of a non-zero-sum game is the Prisoner’s Dilemma, which presents a compelling situation. The basic scenario involves two “criminals” held in separate cells, unable to communicate with each other.
While there isn’t sufficient evidence to charge them with the most serious offenses, there’s enough to convict both on lesser charges. The authorities simultaneously present each prisoner with a deal: if one testifies against the other while the other stays silent, the betrayer walks free while the silent one serves three years. However, if both betray each other, they each face two years in prison. If they both choose to remain silent, they will each serve just one year for the lesser offense.
The “reward” each player receives can be viewed in terms of years served: if both stay silent, the outcome results in a payoff of -1 for each. If player A betrays player B, A’s payoff is 0 while B’s is -3. In the case of mutual betrayal, both players incur a payoff of -2. So how should the players act to optimize their outcomes?
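The payoffs above can be written out as a small lookup table. This is a minimal sketch: the numbers come from the scenario in the text, while the move labels "S" (stay silent) and "B" (betray) and the name `PAYOFFS` are our own choices.

```python
# Payoff matrix for the Prisoner's Dilemma described above.
# Moves: "S" = stay silent (cooperate), "B" = betray (defect).
# PAYOFFS[(a_move, b_move)] -> (payoff to A, payoff to B), in years served.
PAYOFFS = {
    ("S", "S"): (-1, -1),  # both silent: one year each on the lesser charge
    ("S", "B"): (-3,  0),  # A silent, B betrays: A serves three years, B walks
    ("B", "S"): ( 0, -3),  # A betrays, B silent: the reverse
    ("B", "B"): (-2, -2),  # mutual betrayal: two years each
}

print(PAYOFFS[("B", "B")])  # -> (-2, -2)
```

Writing the game this way makes it easy to check claims about it mechanically, as the next step shows.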
A situation in which each participant’s strategy is the best possible response to the other’s is known as a Nash equilibrium: neither player can improve their own payoff by unilaterally changing strategy. As we are about to see, though, this individually rational outcome is not necessarily a good one.
The challenge is that each player must choose without knowing the other’s intentions. Suppose you plan to remain silent: if your counterpart does the same, betraying them instead would improve your payoff from -1 to 0. And if they plan to betray you, betraying them back improves your payoff from -3 to -2. Betrayal is therefore the better choice whatever the other player does, so the most logical option appears to be betrayal. This reasoning applies to both players, leading both to defect for a combined payoff of -4.
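That argument can be verified by brute force: enumerate all four move pairs and keep those where neither player gains by switching unilaterally. A minimal sketch, reusing the payoff table from the scenario (the labels "S"/"B" and the helper `is_nash` are our own naming):

```python
# Find all Nash equilibria of the two-player game by checking, for each
# pair of moves, that neither player can do better by switching alone.
PAYOFFS = {
    ("S", "S"): (-1, -1), ("S", "B"): (-3, 0),
    ("B", "S"): (0, -3),  ("B", "B"): (-2, -2),
}
MOVES = ("S", "B")

def is_nash(a, b):
    # A's move must be a best response to b, and B's a best response to a.
    a_best = all(PAYOFFS[(a, b)][0] >= PAYOFFS[(alt, b)][0] for alt in MOVES)
    b_best = all(PAYOFFS[(a, b)][1] >= PAYOFFS[(a, alt)][1] for alt in MOVES)
    return a_best and b_best

equilibria = [(a, b) for a in MOVES for b in MOVES if is_nash(a, b)]
print(equilibria)  # -> [('B', 'B')]
```

Mutual betrayal is the game’s only Nash equilibrium, even though it is worse for both players than mutual silence.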
Should both players trust one another and remain silent, their total payoff would be -2. This implication—that the so-called survival of the fittest can yield suboptimal results compared to cooperative strategies—hints at the potential for collaboration.
In a famous series of tournaments run by the political scientist Robert Axelrod around 1980, 62 computer programs engaged in 200 rounds of the Prisoner’s Dilemma. Crucially, these programs could adapt their strategies based on their opponent’s previous actions. Interestingly, self-serving strategies proved less successful than those grounded in altruism. The most successful algorithm, known as tit-for-tat, would cooperate initially and defect only when the opponent had done so in the previous round. Such programs were also forgiving, returning to cooperation once a betrayer began cooperating again.
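The tournament idea can be sketched in a few lines: play the iterated game for 200 rounds, letting each strategy see its opponent’s past moves. This is an illustrative toy, not the original tournament code; the strategy and function names are our own, and the payoffs match the scenario above.

```python
# Iterated Prisoner's Dilemma: tit-for-tat vs. itself and vs. a
# strategy that always betrays, over 200 rounds.
PAYOFFS = {
    ("S", "S"): (-1, -1), ("S", "B"): (-3, 0),
    ("B", "S"): (0, -3),  ("B", "B"): (-2, -2),
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's most recent move.
    return "S" if not opponent_history else opponent_history[-1]

def always_betray(opponent_history):
    return "B"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []          # each player's own past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)          # each strategy sees the other's history
        b = strat_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # -> (-200, -200): sustained cooperation
print(play(tit_for_tat, always_betray))  # -> (-401, -398): one loss, then retaliation
```

Two tit-for-tat players cooperate throughout and each lose only 200 years’ worth of payoff; against a constant betrayer, tit-for-tat concedes a single round and then matches betrayal with betrayal, limiting the damage.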
Thus, while “pure” game-theoretic reasoning can lead to unfavorable outcomes, incorporating a touch of kindness can pave the way for improved results: be generous, but remain vigilant against exploitation. Findings like these help explain how cooperative behaviors can develop even in a world where survival favors the strongest.
These articles will be published weekly at:
newscientist.com/maker
Source: www.newscientist.com