24 August 2011

Game theorists offer a surprising insight into the evolution of fair play

These questions of altruism, reciprocity, and competition are grist for the mill in game theory, a branch of mathematics applied to human behavior. Participants in game-theory experiments play pared-down games, with varying degrees of communication among the players, and are given differing rewards for differing outcomes. Players must decide when to cooperate and when--to use a highly technical game-theory term--to "cheat." Game theory gets taught in all sorts of academic programs. And it turns out that social animals, even without M.B.A.'s, have often evolved strategies for deciding when to cooperate and when to cheat. According to Joan E. Strassmann, of Rice University, even social bacteria have evolved optimal strategies for stabbing each other in the back.


Suppose you have an ongoing game, a round-robin tournament that involves two participants playing against each other in each round. The rules of the game are such that if both cooperate with each other, they both get a reward. And if both cheat, they both do poorly. On the other hand, if one cheats and the other cooperates, the cheater gets the biggest possible reward, and the cooperator loses big-time. Another condition is that the players in the tournament can't communicate with one another and therefore cannot work out some sort of collective strategy. Given these constraints, the only logical course is to avoid being a sucker and to cheat every time. Now suppose some players nonetheless figure out methods of cooperating. If enough of them do so--and especially if the cooperators can somehow quickly find one another--cooperation would soon become the better strategy. To use the jargon of evolutionary biologists who think about such things, it would drive noncooperation into extinction.
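
For the mathematically inclined, here is a minimal sketch in Python of the payoff structure just described. The point values are made up; only their ordering matters:

```python
# Illustrative payoffs for one round of the game described above.
# The numbers are invented; only their ordering matters:
# temptation (5) > mutual reward (3) > mutual cheating (1) > sucker (0).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),   # both rewarded
    ("cheat", "cheat"):         (1, 1),   # both do poorly
    ("cheat", "cooperate"):     (5, 0),   # cheater wins big, cooperator loses big-time
    ("cooperate", "cheat"):     (0, 5),
}

def play_round(move_a, move_b):
    """Payoffs for one anonymous, one-shot round."""
    return PAYOFFS[(move_a, move_b)]

# With no communication and no future, cheating dominates: whatever
# the opponent does, the cheater scores strictly better.
for opp in ("cooperate", "cheat"):
    print("vs", opp, "-> cheat:", play_round("cheat", opp)[0],
          "/ cooperate:", play_round("cooperate", opp)[0])
```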


Get cooperation going among a group of individuals, and the group is eventually going to be in great shape. But whoever starts that trend (the first to spontaneously introduce cooperation) is going to be mathematically disadvantaged forever after. This might be termed the what-a-chump scenario. In an every-bacterium-for-himself world, when one addled soul does something spontaneously cooperative, all the other bacteria in the colony chortle, "What a chump!" and go back to competing--now one point ahead of that utopian dreamer. In this situation, a random act of altruism doesn't pay.
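
With those same made-up payoffs, the chump's predicament is easy to tally: one lone cooperator in a colony of cheaters finishes dead last.

```python
# One lone cooperator among nine cheaters, everyone playing everyone
# once, with the same illustrative payoffs as above.
PAYOFFS = {("cooperate", "cooperate"): (3, 3), ("cheat", "cheat"): (1, 1),
           ("cheat", "cooperate"): (5, 0), ("cooperate", "cheat"): (0, 5)}

population = ["cooperate"] + ["cheat"] * 9
scores = [0] * len(population)
for i in range(len(population)):
    for j in range(i + 1, len(population)):
        pa, pb = PAYOFFS[(population[i], population[j])]
        scores[i] += pa
        scores[j] += pb

print("the chump:", scores[0])     # 0: exploited in all nine games
print("each cheater:", scores[1])  # 13: one fat score off the chump, plus 1s elsewhere
```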


Yet systems of reciprocal altruism do emerge in various social species, even among us humans. Thus, the central question in game theory is: What circumstances bias a system toward cooperation?


One well-studied factor that biases toward cooperation is genetic relatedness. Familial ties are the driving force behind a large proportion of cooperative behaviors in animals. For example, individuals of some social insect species display such an outlandishly high degree of cooperation and altruism that most of them forgo the chance to reproduce and instead aid another individual (the queen) to do so. The late W.D. Hamilton, one of the giants of science, revolutionized thinking in evolutionary biology by explaining such cooperation in terms of the astoundingly high degree of relatedness among an insect colony's members. And a similar logic runs through the multitudinous, if less extreme, examples of cooperation among relatives in plenty of other social species, such as packs of wild dogs that are all sisters and cousins and that regurgitate food for one another's pups.
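
Hamilton's insight is usually compressed into a single inequality, now known as Hamilton's rule: altruism can evolve when r × b > c, that is, when the benefit to the recipient, discounted by the genetic relatedness between the two, exceeds the cost to the altruist. A quick sketch, with invented benefit and cost figures:

```python
def altruism_favored(r, b, c):
    """Hamilton's rule: a gene for altruism can spread when the
    benefit to the recipient, discounted by relatedness, exceeds
    the cost to the altruist (r * b > c)."""
    return r * b > c

# In haplodiploid insects, full sisters share roughly 75 percent of
# their genes; the benefit and cost figures below are invented:
print(altruism_favored(r=0.75, b=4.0, c=1.0))  # True: helping kin can pay genetically
print(altruism_favored(r=0.0,  b=4.0, c=1.0))  # False: strangers aren't worth the cost
```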


Another way to jump-start cooperation is to make the players feel related. This fostering of pseudokinship is a human specialty. All sorts of psychological studies have shown that when you arbitrarily divide a bunch of people into competing groups (the way kids in summer camp are stuck into, say, the red team and the blue team), even when you make sure they understand that their grouping is arbitrary, they'll soon begin to perceive shared and commendable traits among themselves and a distinct lack of them on the other side. The military exploits this tendency to the extreme, keeping recruits in cohesive units from basic training to frontline battle and making them feel so much like siblings that they're more likely to perform the ultimate cooperative act. And the flip side, pseudospeciation, is exploited in those circumstances as well: making the members of the other side seem so different, so unrelated, so un-human, that killing them barely counts.


One more way of facilitating cooperation in game-theory experiments is to have participants play repeated rounds with the same individuals. By introducing this prospect of a future, you introduce the potential for payback, for someone to be retaliated against by the person she cheated in a previous round. This is what deters cheaters. It's why reciprocity rarely occurs in species without cohesive social groups: no brine shrimp will lend another shrimp five dollars if, by next Tuesday, when the loan is to be repaid, the debtor will be long gone. And it's why reciprocity also demands a lot of social intelligence: even if the debtor will still be around next Tuesday, that does you no good if you can't tell one brine shrimp from another. Zoologist Robin Dunbar, based at University College London, has shown that among the social primates, the bigger the social group (that is, the more individuals you have to keep track of), the larger the relative size of the brain. Of related interest is the finding that vampire bats, which wind up feeding one another's babies in a complex system involving vigilance against cheaters, have among the largest brains of any bat species.
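
One classic strategy for repeated rounds is tit-for-tat: cooperate on the first move, then do whatever your partner did last. The little simulation below (same made-up payoffs as before) shows why the prospect of payback deters cheaters:

```python
PAYOFFS = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
           ("D", "C"): (5, 0), ("C", "D"): (0, 5)}

def tit_for_tat(partner_history):
    """Cooperate on the first move, then repay the partner in kind."""
    return "C" if not partner_history else partner_history[-1]

def always_cheat(partner_history):
    return "D"

def match(strat_a, strat_b, rounds=10):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(moves_b)   # each player sees only the partner's past moves
        b = strat_b(moves_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(match(always_cheat, tit_for_tat))  # (14, 9): one exploit, then steady payback
print(match(tit_for_tat, tit_for_tat))   # (30, 30): mutual cooperation wins over time
```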


An additional factor that biases toward cooperation in games is "open book" play--that is, a player facing someone in one round of a game has access to the history of that opponent's gaming behavior. In this scenario, the same individuals needn't play against each other repeatedly in order to produce cooperation. Instead, in what game theorists call sequential altruism, cooperation comes from the introduction of reputation. This becomes a pay-it-forward scenario, in which A is altruistic to B, who is then altruistic to C, and so on.
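
Here is one rough way such reputation-driven cooperation might be sketched in code; the player types and the bookkeeping rule are hypothetical, just to show the mechanism:

```python
import random

# Open-book play, sketched crudely: every defection against a
# cooperator goes on a public record, and a "discriminator" extends
# trust only to partners whose record is clean.
PAYOFFS = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
           ("D", "C"): (5, 0), ("C", "D"): (0, 5)}

class Player:
    def __init__(self, kind):
        self.kind = kind       # "discriminator" or "cheater"
        self.bad_rep = False   # the open book
        self.score = 0

    def move(self, opponent):
        if self.kind == "cheater":
            return "D"
        return "D" if opponent.bad_rep else "C"   # trust the clean-sheeted stranger

random.seed(1)
players = ([Player("discriminator") for _ in range(8)] +
           [Player("cheater") for _ in range(2)])

for _ in range(500):   # random pairings; no private history with a partner is needed
    a, b = random.sample(players, 2)
    ma, mb = a.move(b), b.move(a)
    pa, pb = PAYOFFS[(ma, mb)]
    a.score += pa
    b.score += pb
    if ma == "D" and mb == "C":
        a.bad_rep = True   # cheating a cooperator goes on the public record
    if mb == "D" and ma == "C":
        b.bad_rep = True

for kind in ("discriminator", "cheater"):
    group = [p.score for p in players if p.kind == kind]
    print(kind, "average score:", sum(group) / len(group))
```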


So game theory shows that at least three things facilitate the emergence of cooperation: playing with relatives or pseudorelatives, repeated rounds with the same individual, and open-book play. And this is where Fehr and Gächter's new study, a "public goods experiment," comes in. The authors set up a game in which all the rules seemed to be stacked against the emergence of cooperation. In a "one-shot, perfect-stranger" design, two individuals played each round, and while there were many rounds to the game, no one ever played against the same person twice. Moreover, all interactions were anonymous: no chance of getting to know cheaters by their reputations.
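
For reference, the payoffs in a standard public goods game have the following shape; the endowment and multiplier below are common textbook choices, not necessarily the values Fehr and Gächter used:

```python
def public_goods_payoffs(contributions, endowment=20, multiplier=1.6):
    """Each player keeps whatever she doesn't contribute, plus an
    equal share of the multiplied common pot. (Endowment and
    multiplier are typical textbook values, not necessarily the
    ones from Fehr and Gächter's experiment.)"""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

print(public_goods_payoffs([20, 20]))  # [32.0, 32.0]: full cooperation beats...
print(public_goods_payoffs([0, 0]))    # [20.0, 20.0]: ...universal free-riding,
print(public_goods_payoffs([0, 20]))   # [36.0, 16.0]: ...but a lone free-rider does best of all
```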
