
Indirect reciprocity and other ways to initiate cooperation

An interesting way in which cooperation can be promoted was suggested by Martin Nowak and Karl Sigmund in 1998. They reasoned that actors behave prosocially in order to build an altruistic reputation and consequently receive future benefits from third parties. To test this assumption they proposed a model of a population of computer-generated agents facing the choice between helping (cooperating) and not helping other agents. In each encounter, two agents are paired at random; one is assigned the role of donor, the other the role of recipient.

As payoff, the donor bears the cost c of helping, and the recipient receives the benefit b, where b > c. If the donor does not help, both receive 0 points. The accumulated payoff determines the fitness of an agent.
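
To make the payoff structure concrete, here is a minimal sketch of a single donor-recipient encounter; the function name and the concrete values of b and c are illustrative, and the only constraint taken from the model is b > c.

```python
# Minimal sketch of one encounter's payoff bookkeeping.
# b and c are illustrative placeholders; the model only requires b > c.
def encounter_payoffs(helps: bool, b: float = 1.0, c: float = 0.1):
    """Return (donor_payoff, recipient_payoff) for a single encounter."""
    if helps:
        return -c, b   # donor bears the cost, recipient gets the benefit
    return 0.0, 0.0    # no help: both receive 0 points
```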

Each agent has a reputation score s that is known to every other agent. If an agent is chosen as donor and decides to help, its score s increases by one point; if it decides not to help, s decreases by one point. The score of the recipient does not change.
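
The reputation update reduces to a one-line rule. A sketch; the function name is mine, not from the paper:

```python
def updated_donor_score(score: int, helped: bool) -> int:
    """The donor's image score rises by one when helping and falls by one
    when refusing; the recipient's score is never touched."""
    return score + 1 if helped else score - 1
```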

The simulation starts with 100 agents whose reputation scores are randomly assigned between -5 and +5. In addition, each agent is assigned a strategy k, which varies randomly between -5 and +6. An agent with strategy k cooperates only if its counterpart, the recipient, has a reputation of at least k. Agents with strategy -5 therefore cooperate unconditionally, while agents with strategy +6 always defect.
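
The strategy itself amounts to a single comparison. A sketch, with names of my own choosing:

```python
def will_help(strategy_k: int, recipient_score: int) -> bool:
    """Help only if the recipient's reputation is at least as high as k.
    If scores stay within -5..+5, k = -5 always helps and k = +6 never does."""
    return recipient_score >= strategy_k
```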

Agents reproduce in proportion to their payoff-based fitness. Offspring inherit the strategy k of their "parent" but start with a reputation score of zero. The number of encounters per generation is limited, so the chance that an agent meets the same counterpart twice is low. A willingness to cooperate therefore cannot be repaid directly; reciprocity works only indirectly, via reputation.
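
Putting the pieces together, the following is a minimal sketch of such a simulation. It is not the authors' original code; the number of encounters per generation, the payoff values, the clamping of scores to the initial range, and the payoff-proportional resampling scheme are all assumptions made for illustration.

```python
import random

def simulate(n_agents=100, encounters_per_gen=300, generations=100,
             b=1.0, c=0.1, seed=0):
    """Toy image-scoring population in the spirit of the model described above.
    Parameter values are illustrative, not the published ones."""
    rng = random.Random(seed)
    # each agent: [strategy k, reputation score s, accumulated payoff]
    agents = [[rng.randint(-5, 6), rng.randint(-5, 5), 0.0]
              for _ in range(n_agents)]

    for _ in range(generations):
        for _ in range(encounters_per_gen):
            donor, recipient = rng.sample(range(n_agents), 2)
            k, s, _ = agents[donor]
            if agents[recipient][1] >= k:            # image-scoring rule
                agents[donor][2] -= c                # donor bears the cost
                agents[recipient][2] += b            # recipient gets the benefit
                agents[donor][1] = min(5, s + 1)     # reputation rises (clamping is an assumption)
            else:
                agents[donor][1] = max(-5, s - 1)    # reputation falls

        # reproduction proportional to payoff; offspring inherit k, start at s = 0
        lowest = min(a[2] for a in agents)
        weights = [a[2] - lowest + 1e-9 for a in agents]
        parents = rng.choices(agents, weights=weights, k=n_agents)
        agents = [[p[0], 0, 0.0] for p in parents]

    return agents

if __name__ == "__main__":
    final = simulate()
    cooperative = sum(1 for a in final if a[0] <= 0)
    print(f"agents with discriminating or unconditional cooperation (k <= 0): {cooperative}/100")
```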

Nevertheless, depending on the initial number and composition of strategies, cooperation usually becomes established after a few generations.

Nowak, Martin A. / Sigmund, Karl (1998) Evolution of indirect reciprocity by image scoring. Nature 393, pp. 573-577.

Indirect reciprocity among humans

In order to test this result with human subjects, Claus Wedekind and Manfred Milinski confronted students with each other in donor-recipient games resembling a public goods game, arranged so that no player could meet any counterpart twice. Participants played under pseudonyms and decided whether or not to "help" their counterpart by covertly flipping a switch. The reputation score they gained was visibly listed for all players and constantly updated.
The results showed that the students on average behaved cooperatively, and that their investments were higher the higher the reputation score of their counterpart was.

Wedekind, Claus / Milinski, Manfred (2000) Cooperation through image scoring in humans. Science 288, pp. 850-852.

In an extension of the experiment, the students were divided into two groups. One group first played the ordinary public goods game for eight rounds and afterwards played the version in which reputation could be gained for another eight rounds. The other group alternated between the ordinary and the reputation version of the public goods game.
As expected, the cooperativeness of the first group decreased significantly during the first few rounds and was then restored during the reputation rounds. In the second group, where both versions were played alternately, the initial level of cooperativeness was preserved over the whole course of the game.

Milinski, Manfred / Semmann, Dirk / Krambeck, Hans-Jürgen (2002) Reputation helps solve the "tragedy of the commons". Nature 415, pp. 424-426.

Cooperation by punishment

One way to make cooperation more likely is to punish free-riders (defectors). Since this requires some kind of enforcement, punishment is itself costly, which raises the question of who bears these costs. As a consequence, another social dilemma emerges:
Everyone benefits when free-riding is prevented, but will individuals try to free-ride on the collective effort of preventing it?

Punishment of free-riders therefore represents a kind of "second-order public good".

Altruistic punishment

With respect to this problem, Ernst Fehr and Simon Gächter (2002) investigated "altruistic punishment": punishment that is carried out even though it is costly and yields no material benefit to the punisher.

They designed a public goods game experiment with two groups, one of which was allowed to punish a player whom they believed had invested too little; the other group had no punishment option.
The game was repeated six times with changing group composition, so that nobody met the same players twice. Punishment was carried out anonymously.
For each penalty point, the punished player lost 3 Euros and the punisher paid 1 Euro. Punished players increased their investments in subsequent rounds. However, since these rounds were played with new partners, punishers did not profit directly from punishing a defector; instead they bore the cost of punishment (1 Euro). The effect of punishment was to raise the payoff of future groups. For the punisher it remained a purely altruistic act.
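
The cost asymmetry of the punishment stage can be summarized in a few lines. This is only a sketch of the bookkeeping described above; the function, the data layout, and the example numbers are mine, not taken from the experiment.

```python
def apply_punishment(payoffs, assignments, cost_to_punisher=1.0, cost_to_target=3.0):
    """Subtract punishment costs after the contribution stage.

    payoffs     -- dict: player id -> payoff from the contribution stage
    assignments -- list of (punisher, target, penalty_points) tuples
    Each penalty point costs the punisher 1 and the punished player 3,
    matching the ratio described in the text.
    """
    for punisher, target, points in assignments:
        payoffs[punisher] -= points * cost_to_punisher   # punishing is costly for the punisher
        payoffs[target] -= points * cost_to_target       # and three times as costly for the target
    return payoffs

# Example: player "C" assigns 2 penalty points to low contributor "A".
print(apply_punishment({"A": 12.0, "B": 10.0, "C": 10.0}, [("C", "A", 2)]))
# {'A': 6.0, 'B': 10.0, 'C': 8.0}
```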

Despite this, players turned out to punish others intensively. The lower a player's investment relative to the group average, the more heavily that player was punished. As a consequence, players whose investment behavior was closest to the average earned the highest profits.

A comparison of the two groups revealed that the possibility to punish significantly increased the willingness to invest:

[Figure: investments with and without the punishment option]

Fehr and Gächter ascribed this "costly punishment behavior" to "negative emotions" towards defectors. Since defectors are aware of this, they respond to the mere possibility of being punished by increasing their investments.

Fehr, Ernst / Gächter, Simon (2002) Altruistic punishment in humans. Nature 415, pp. 137-140.