
How did humans become the most successful species? It might be because we are able to cooperate.
In 1984, Robert Axelrod published the groundbreaking book “The Evolution of Cooperation”. In this book, the political scientist explained how cooperation (working with one another for mutual benefit) and even altruism (voluntarily yielding a benefit to a non-relative) are possible in a world where every single person follows their own interests.
Here is how.
There is no doubt that cooperation is of great advantage if you want to achieve great things. But there is a fundamental problem with cooperation: you have to rely on others.
The prisoner’s dilemma (invented around 1950 by the American mathematicians Merrill M. Flood and Melvin Dresher) shows why two completely rational individuals might not cooperate, even if it appears that it is in their best interests to do so.
How can that be?
(I took the following explanation from here and here; here is a 1-minute video version of the dilemma.)
Imagine that the police have arrested two suspects in a crime. The suspects are held in separate cells and cannot communicate with each other. The police officer offers each suspect the opportunity to either remain silent or blame the other. If both suspects remain silent, each will serve only one year in prison. If they blame each other, each will serve two years. If one suspect blames the other while the other remains silent, the silent suspect will serve three years in prison, and the blaming suspect will be set free. The table below shows the possible payoffs (years in prison for A / years for B):

| | B remains silent | B blames A |
|---|---|---|
| **A remains silent** | 1 / 1 | 3 / 0 |
| **A blames B** | 0 / 3 | 2 / 2 |
In such a setting, neither suspect knows what the other will decide. Therefore, the most rational decision from the perspective of self-interest is to blame the other suspect.
For example, suspect A is afraid of remaining silent because in that case he risks three years in prison if suspect B blames him. If suspect A chooses to blame suspect B, he can be set free if suspect B remains silent. However, that is not likely, because suspect B follows the same rationale and will also blame suspect A.
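The reasoning above can be checked mechanically. The sketch below is my own illustration (only the payoff numbers come from the example above): it enumerates the payoff matrix and shows that, whatever the other suspect does, blaming is always the better reply — a so-called dominant strategy.

```python
# Years in prison for (A, B), indexed by their choices.
# "silent" = cooperate with the other suspect, "blame" = defect.
YEARS = {
    ("silent", "silent"): (1, 1),
    ("silent", "blame"):  (3, 0),
    ("blame",  "silent"): (0, 3),
    ("blame",  "blame"):  (2, 2),
}

def best_reply_for_A(b_choice):
    """Return A's choice that minimizes A's prison years, given B's choice."""
    return min(["silent", "blame"], key=lambda a: YEARS[(a, b_choice)][0])

# Whatever B does, blaming is never worse for A: a dominant strategy.
for b in ("silent", "blame"):
    print(f"If B chooses {b!r}, A's best reply is {best_reply_for_A(b)!r}")
```

Running this prints `'blame'` as A's best reply in both cases — which is exactly why two rational suspects end up with two years each instead of one.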
The prisoner’s dilemma game can be used as a model for many real-world situations involving cooperative behaviour.
The merit of Robert Axelrod (and of the evolutionary biologist W. D. Hamilton; the two had published a highly influential paper before Axelrod’s book appeared) was to explain why stable cooperation exists despite the prisoner’s dilemma.
As a first step, the static prisoner’s dilemma was extended to repeated play: two players play the prisoner’s dilemma many times in succession, remember their opponent’s previous actions, and adjust their strategy accordingly. This game is called the iterated prisoner’s dilemma.
This iterated prisoner’s dilemma game became fundamental to theories of human cooperation and trust. It shows that reciprocal altruism can evolve between unrelated individuals.
Axelrod went beyond that: he showed that such a strategy of cooperation not only CAN pay off for both sides but, in practice, actually DOES.
Axelrod set up a computer tournament in which strategies submitted by game theorists competed against each other. The winner was a straightforward strategy submitted by the American mathematical psychologist Anatol Rapoport, called tit-for-tat.
This is the tit-for-tat strategy: an agent using this strategy will first cooperate, then subsequently replicate an opponent’s previous action. If the opponent previously was cooperative, the agent is cooperative. If not, the agent is not.
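Tit-for-tat fits in a single line of code. The sketch below is my own minimal version of the iterated game (the payoff numbers are the standard ones used in tournaments of this kind — reward 3, temptation 5, sucker 0, punishment 1 — and the function names are my own, not Axelrod’s):

```python
COOPERATE, DEFECT = "C", "D"

# Standard iterated-dilemma payoffs: (points for player 1, points for player 2).
PAYOFF = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT):    (0, 5),
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),
}

def tit_for_tat(my_history, opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else COOPERATE

def always_defect(my_history, opponent_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other and return their total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += pa
        score_b += pb
    return score_a, score_b

# Two tit-for-tat players settle into steady mutual cooperation: (30, 30).
print(play(tit_for_tat, tit_for_tat))
# Against a pure defector, tit-for-tat loses only the very first round: (9, 14).
print(play(tit_for_tat, always_defect))
```

The second result illustrates the “retaliation” quality discussed below: tit-for-tat is exploited exactly once, then defends itself for the rest of the game.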
The results of the first tournament were analyzed and published, and a second tournament was held to see if anyone could find a better strategy. Tit-for-tat won again.
By analysing the top-scoring strategies, Axelrod identified several conditions necessary for a strategy to be successful.
The most important condition is that the strategy must be “nice”, that is, it will not defect before its opponent does. However, Axelrod contended, a successful strategy must not be a blind optimist: it must sometimes retaliate. It must also be forgiving. The last quality is being non-envious: not striving to score more than the opponent.
It is more than interesting that our society is heavily based on these qualities.
Richard Dawkins, the British evolutionary biologist, told a moving example of the tit-for-tat strategy in a 1986 documentary called “Nice Guys Finish First”. An unspoken understanding evolved during trench warfare in the First World War. Troops were dug in only a few hundred feet from each other. If a sniper killed a soldier on one side, the other expected an equal retaliation. Conversely, if no one was killed for a time, the other side would acknowledge this implied “truce” and act accordingly. This created a “separate peace” between the trenches.
But there is a problem with the tit-for-tat strategy. Since each side replicates its opponent’s previous action, the strategy can also lead to an unending “death spiral”: not only is cooperation always answered with cooperation, but defection is always answered with defection, too.
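This echo effect is easy to simulate. In the sketch below (my own illustration, not from Axelrod), both sides play tit-for-tat, but one side defects a single time by mistake in round 3. From then on, that one defection bounces back and forth between the players indefinitely:

```python
def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def simulate(rounds=10, noise_round=2):
    """Two tit-for-tat players; A defects once by accident in `noise_round`."""
    hist_a, hist_b = [], []
    for r in range(rounds):
        move_a = tit_for_tat(hist_b)
        move_b = tit_for_tat(hist_a)
        if r == noise_round:  # one accidental defection by A
            move_a = "D"
        hist_a.append(move_a)
        hist_b.append(move_b)
    return "".join(hist_a), "".join(hist_b)

a, b = simulate()
print("A:", a)  # CCDCDCDCDC
print("B:", b)  # CCCDCDCDCD
```

After the single slip, the players alternate defections forever: each one is merely “retaliating” for the other’s last move, yet neither ever returns to full cooperation. This is the death spiral in miniature.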
The latter situation frequently arises in real-world conflicts, ranging from schoolyard fights to wars, which brings us back to the topic of these days: the war in Ukraine.
For Foreign Affairs, an American magazine of international relations, Emma Ashford (Adjunct Assistant Professor at Georgetown University) and Joshua Shifrinson (Associate Professor of International Relations at Boston University) have written down what such a deadly spiral could lead to. It is scary.
The text makes me realize that despite all the necessarily tough steps by the Western world towards Putin and Russia, our heads must remain clear for smart politics. We must do everything possible not to become part of such an escalation spiral. Tit-for-tat can bring out the best in us, cooperation, but it can also lead to the worst.
sources:
https://www.foreignaffairs.com/articles/ukraine/2022-03-08/how-war-ukraine-could-get-much-worse
https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation
https://en.wikipedia.org/wiki/Reciprocal_altruism
https://en.wikipedia.org/wiki/Prisoner%27s_dilemma