Chance Is All: How To Evolve the Social Contract

Interview by Richard Marshall

'A naturalized social contract theory should try to model man as he really is. I see evolutionary game theory as the best tool we have for that job. Evolutionary dynamics may or may not lead to what rational choice would consider an equilibrium.'

'I like the Stag Hunt and I think it deserves more attention than Prisoner’s Dilemma, but I would not rest social contract theory on one game. A real social contract is a complex web of social norms and conventions. We can try to gain insight by isolating some classes of important social interactions, modeling them as games, and studying the games.'

'Simple signaling networks can do computations and perform inferences. Complex signaling networks driven by reinforcement learning have beaten human experts at chess, go and poker. Fred Dretske argued for a reorientation of Epistemology, away from defining knowledge to the study of the flow of information. Anyone who takes the Dretske program seriously needs to think about signaling networks.'

Brian Skyrms is an expert in the evolution of conventions and the social contract, inductive logic, decision theory, rational deliberation, the metaphysics of logical atomism, causality, and truth. Here he discusses the social dynamics/cultural evolution vs rational choice traditions, replicator dynamics modelling, Darwin vs rational choice theories, the role of correlation, game theory and Stag Hunts, equilibrium and the need to keep things real, signalling, David Lewis, Shannon and Kullback, learning and learning with invention, signalling networks, plus some issues of utilitarianism.

3:16: What made you become a philosopher?

Brian Skyrms: If I go back far enough, it was reading science fiction as a kid. Some of it is very philosophical. I first encountered the Liar paradox in science fiction. But I went to college intending to study Electrical Engineering, hoping to get into cybernetics. What got me hooked on Philosophy in college was the influence of Adolf Grünbaum and Nick Rescher. Grünbaum gave an exciting, wide-ranging Philosophy of Science course, and also ran a campus-wide lecture series that brought in distinguished speakers. I remember Feyerabend and Scriven as speakers. Rescher taught logic, which I liked very much, as well as Leibniz and an intellectual history course that was composed of readings from original sources, some of which I met for the first time (e.g. Ibn Khaldun). I followed them both to Pitt.

3:16: You’re the go-to guy on the evolution of the existing implicit social contract, asking how it evolved and how it might continue to evolve. These aren’t the same issues as those Hobbes, Harsanyi and Rawls discuss, which are about what sort of contract rational decision makers would make in a preexisting state of nature. So perhaps it will be helpful for you to sketch for us why you have chosen the issues regarding the social contract that you have, working in a tradition that is more like the one including Hegel, Marx and, centrally, Darwin?

BS: There is the rational choice tradition and there is the social dynamics/cultural evolution tradition. This is one aspect of Spinoza’s contrast between theories of man as we would like to see him, and man as he really is. A naturalized social contract theory should try to model man as he really is. I see evolutionary game theory as the best tool we have for that job. Evolutionary dynamics may or may not lead to what rational choice would consider an equilibrium.

3:16: Tim Williamson has been arguing for philosophy to be more aware of the use of modeling to attack philosophical problems, and you are a contemporary philosopher who has been using modeling from the get-go. You draw on ‘replicator dynamics’ to study the social contract from a fresh perspective – so what is the model of replicator dynamics, and why isn’t it right to think that, given that natural selection will weed out irrationality, a Rawlsian and a Skyrmian will reach the same conclusions?

BS: Replicator dynamics is a pure model of differential reproduction (in biological evolution) or differential imitation (in cultural evolution). It also has close connections to trial and error learning. So it is a good place to start dynamical investigations. But there are modifications and alternatives that can also come into play in the real world. Really good results are those that are robust over a large class of adaptive dynamics. If you look at it closely, natural selection need not weed out irrationality. If members of a population do not meet randomly to interact, evolutionary dynamics can allow irrational behavior to persist, or even take over the population. Here rational choice based game theory and evolutionary game theory come apart. 
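For readers who want to see the model concretely, here is a minimal sketch in Python of discrete-time replicator dynamics for a symmetric two-strategy game. The payoff matrix (an illustrative Stag Hunt), the starting shares, and the step size are my assumptions, not anything from the interview.

```python
import numpy as np

# Minimal discrete-time replicator dynamics for a symmetric 2x2 game.
# Illustrative Stag Hunt payoffs (assumed values): rows = my strategy,
# columns = opponent's strategy.
#   Stag vs Stag = 4, Stag vs Hare = 0, Hare vs Stag = 3, Hare vs Hare = 3
A = np.array([[4.0, 0.0],
              [3.0, 3.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of dx_i/dt = x_i * (f_i - f_bar)."""
    f = A @ x          # expected payoff of each strategy against the population
    f_bar = x @ f      # population-average payoff
    return x + dt * x * (f - f_bar)

x = np.array([0.8, 0.2])   # initial population shares (Stag, Hare)
for _ in range(10000):
    x = replicator_step(x, A)

print("long-run shares (Stag, Hare):", np.round(x, 3))
```

With this matrix the two strategies earn the same when the Stag share is 0.75, so a population starting at 0.8 Stag drifts to all-Stag while one starting at 0.6 would drift to all-Hare: which equilibrium evolution delivers depends on where the population starts, which is one of the places rational choice analysis and evolutionary dynamics come apart.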

The second part of your question seems to presume that rational choice would give you Rawls. Rawls picks extreme risk-aversion (Maximin) as his theory of rational choice. This is quite different from the orthodox view, used by Harsanyi, that takes rationality as maximizing expected payoff. Evolutionary dynamics has even less respect for Maximin than for maximizing expected payoff. One should not expect evolution to deliver either Rawls or Harsanyi.

3:16: What does the contrast between biological evolution of the sex ratio and rational decision theory illustrate about what’s important to the actual evolution of our social contract? Is a key difference the way Darwinian evolution can deal with symmetrical deadlocks in a way that rational choice can’t – and does this help us understand how notions of ‘ownership’ and ‘property’ have developed?

BS: In “Sex and Justice” I was interested in the evolution of the norm of the equal split.

3:16: That's the first chapter of the Evolution of the Social Contract right?

BS: Right. In the symmetric bargaining situation there are an infinite number of informed rational choice (Nash) equilibria. So rational choice does not explain this norm. Evolutionary dynamics makes a lot of progress here. The same is true for the sex ratio. The second part of your question is about resource competition. The standard evolutionary biologist’s model game for resource competition is Hawk-Dove (also known as “Chicken”). If this is the game, then evolutionary dynamics leads to a mixed equilibrium, where lots of individuals get hurt. Maynard Smith pointed out that ownership behavior (fight if owner, yield if intruder) broke the symmetry by creating an expanded game that was better for all. Such ownership behavior is common in biology. This can be seen as a correlated equilibrium convention (a la Vanderschraaf). The correlating device is just put in by the modeler. So the question remains as to how a focus on correlating factors may evolve. Daniel Herrmann and I have a paper forthcoming on this: Invention and Evolution of Correlated Conventions.
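To make Maynard Smith’s point concrete, here is a small hedged sketch in Python. The resource value V and fight cost C are illustrative assumptions (with C > V), and the payoffs are the textbook Hawk-Dove game rather than anything from the forthcoming paper.

```python
# Hawk-Dove with an ownership convention ("Bourgeois": fight if owner, yield
# if intruder).  V and C are assumed values: V = value of the resource,
# C = cost of an escalated fight, with C > V.
V, C = 2.0, 4.0

# Standard Hawk-Dove payoffs to the row player.
def payoff(me, other):
    if me == "H" and other == "H": return (V - C) / 2
    if me == "H" and other == "D": return V
    if me == "D" and other == "H": return 0.0
    return V / 2                                  # Dove vs Dove

# Without the convention, the evolutionarily stable mix has a fraction
# p = V/C playing Hawk; everyone's average payoff at that mix:
p = V / C
mixed_payoff = (1 - p) * V / 2   # each strategy's payoff at the mixed equilibrium
print("average payoff at the Hawk-Dove mixed equilibrium:", mixed_payoff)

# With the ownership convention, each player is owner half the time and
# intruder half the time; owners play Hawk, intruders play Dove, so no
# escalated fights ever occur.
bourgeois_payoff = 0.5 * payoff("H", "D") + 0.5 * payoff("D", "H")
print("average payoff under the ownership convention:", bourgeois_payoff)
```

With these numbers everyone averages 0.5 at the mixed equilibrium but 1.0 under the ownership convention: the role-conditioned, expanded game really is better for all, which is why the evolution of attention to correlating factors matters.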

3:16: What’s the importance of ‘correlation’ in your approach to understanding a Darwinian categorical imperative and how does this help us understand the emergence of altruism and mutual aid?

BS: This comes back to where evolution parts company with rational choice. Suppose the game is just a one-shot Prisoner’s Dilemma. Rational choice says everyone chooses the strictly dominant strategy, to defect. Now suppose a big population, and individuals are randomly matched to play Prisoner’s Dilemma, get paid off in offspring, and repeat. Then evolution eliminates cooperators and agrees with rational choice. But, if by some mechanism, matching isn’t random, but rather cooperators meet cooperators and defectors meet defectors, then it is the defectors who are eliminated, leaving only cooperators. So correlation is the key to evolutionary emergence of altruism. Likewise, negative correlation can explain evolution of spite. This was all seen clearly by William Hamilton a long time ago. It is something that is missed in a pure rational choice approach.
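Here is a hedged sketch of Hamilton’s point: it compares the expected payoffs of cooperators and defectors in a one-shot Prisoner’s Dilemma when matching is assorted. The payoff values and the simple assortment parameter r (meet your own type with probability r, otherwise a random member of the population) are my illustrative assumptions.

```python
# How correlated matching changes the one-shot Prisoner's Dilemma.
# Illustrative assumed payoffs: T=5 (temptation), R=3 (reward),
# P=1 (punishment), S=0 (sucker).
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def fitness(x, r):
    """Expected payoffs to Cooperate and Defect when a fraction x of the
    population cooperates and r is the degree of assortment: with
    probability r you meet your own type, otherwise a random member."""
    meet_C_if_C = r + (1 - r) * x   # chance a cooperator meets a cooperator
    meet_C_if_D = (1 - r) * x       # chance a defector meets a cooperator
    fC = meet_C_if_C * R + (1 - meet_C_if_C) * S
    fD = meet_C_if_D * T + (1 - meet_C_if_D) * P
    return fC, fD

for r in (0.0, 0.3, 0.6):
    fC, fD = fitness(x=0.5, r=r)
    print(f"assortment r={r}: cooperators earn {fC:.2f}, defectors earn {fD:.2f}")
```

With random matching (r = 0) defectors do better, but at r = 0.6 cooperators earn more and would out-reproduce the defectors. The correlation mechanism, not rationality, is doing the work.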

3:16: Game theory lies behind your evolutionary approach to the social contract, but which one? I’d have said prisoner’s dilemma was a good candidate but you say that I’d be wrong. Why not the prisoner’s dilemma, and why instead the stag hunt?

BS: I like the Stag Hunt and I think it deserves more attention than Prisoner’s Dilemma, but I would not rest social contract theory on one game. A real social contract is a complex web of social norms and conventions. We can try to gain insight by isolating some classes of important social interactions, modeling them as games, and studying the games. There are quite a few interesting simple games. We can first study them singly, and then study more complex games gotten by combining them. For instance, combining Division of Labor (to produce a good) with Bargaining (to determine share in the fruits of production), or Stag Hunt with Bargaining, or combining Signaling games with all sorts of other games.

3:16: So how does the stag hunt, and a couple of other games – the ‘bargaining game’ and the ‘division of labour’ game – enable correlated interactions and what’s the effect of these correlations on the evolution of cooperative behaviour?

BS: Cooperation is a lot more general than altruism. As Adam Smith says, “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest.” Most cooperation in society is for mutual advantage. We are not playing Prisoner’s Dilemma games all the time. In fact, they are a minor factor in social organization. Getting mutual advantage going in a complex society with complex and diverse interactions requires various social networks to correlate appropriate participants for a social interaction.

3:16: What do considerations of location, signaling and networks show us about social cooperation, and can they be applied to non-human organisms – would it help us to understand the complex behaviours of animals without backbones, which you might have thought had such small brains as to make cooperation and strategy impossible?

BS: Yes, indeed. Biology is full of examples. I recommend a wonderful review article, “Biofilms, City of Microbes” by Paula Watnick and Roberto Kolter, Journal of Bacteriology (2000).

3:16: Why is it important to understanding real behaviour that we acknowledge the importance of transient phenomena in all this?

BS: It may be theoretically attractive to just solve for equilibrium, but we may never get to equilibrium. In some cases, this may be because the evolutionary dynamics just doesn’t converge to equilibrium, either because it cycles or because it is genuinely chaotic. In other cases, it just takes too long to get to equilibrium, and in the lifetime of a society, it is never at a genuine equilibrium. Another, more subtle reason is that even if we do get to an equilibrium, which equilibrium we get to may be influenced by transient phenomena. I wrote a paper illustrating this with games where play is preceded by pre-play cost-free exchange of signals.

3:16: Signalling, as you’ve already touched on, is important. Your approach to signaling, communication and learning begins by denying any division between natural and non-natural meanings (the sort of thing Grice, Plato and Descartes thought very important) and puts you in with a tradition including Democritus, Aristotle, Adam Smith, Hume, Darwin, Russell, Wittgenstein, Lewis and Millikan. Why is this important?

BS: Grice did emphasize the pragmatic side of communication with his theory of conversational implicature. I wouldn’t group him with the Platonists. But this was set in contrast with literal meaning. This is a real distinction. But I want to say that literal meaning arose out of Pragmatics that gradually got conventionalized and codified in dictionaries and so forth. Pragmatics comes first, it is not just an add-on. Man is just a part of nature, and meaning co-evolved with natural processes of communication. 

3:16: What are David Lewis’s ‘signalling game’ theory and Shannon and Kullback’s theory of information and how are they useful in making all meanings natural?

BS: The basic Lewis set-up goes like this: There are states of the world that the sender can observe, signals that the sender can send to a receiver to communicate what the state of the world is, and acts that the receiver can do, which succeed in one state of the world and fail in the others. Sender and receiver have a common interest in getting the act right for the state. So, basically, the receiver is trying to guess the state on the basis of the signal. If they get to an equilibrium where the guess is always right, this is a convention. There are multiple such conventions possible, and meaning is determined by the equilibrium into which the sender and receiver have settled. That is the basic idea. Lewis signaling games can be extended and generalized in many ways, and a lot has been done along these lines since Lewis. Lewis gave a static, rational choice, equilibrium analysis of these games. Meaning is a function of the equilibrium. I wanted to ask how we get to equilibrium, and gave an answer in terms of dynamics of evolution and learning. Lewis’ analysis was in the spirit of a naturalistic theory of meaning, so a naturalistic dynamics that explains how we get to equilibrium is quite compatible with that spirit. Lewis agreed with this, and saw my work as complementing his.

But what can we say about communication on the way to equilibrium? We can say that there is some information transfer that gets better as we approach equilibrium. That is intuitively correct, but it also can be made precise using the tools of information theory. This is where Shannon and Kullback-Leibler come in. They quantify the information transmitted by a signal. So information transfer comes first, and meaning is something we get in ideal cases. 
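One way to make “how much information a signal carries” precise, in the spirit of the Kullback-Leibler measure just mentioned, is to take the divergence of the probabilities of states given the signal from the prior probabilities of states. A minimal sketch, with made-up numbers:

```python
import numpy as np

# Information carried by a signal, measured as the Kullback-Leibler divergence
# of the state probabilities given the signal from the prior state
# probabilities.  The probabilities below are illustrative assumptions.
def kl_bits(posterior, prior):
    posterior, prior = np.asarray(posterior), np.asarray(prior)
    mask = posterior > 0
    return float(np.sum(posterior[mask] * np.log2(posterior[mask] / prior[mask])))

prior = [0.5, 0.5]                       # two equiprobable states

# A perfectly informative signal: after seeing it, state 1 is certain.
print(kl_bits([1.0, 0.0], prior))        # 1.0 bit

# A partially informative signal on the way to equilibrium.
print(kl_bits([0.75, 0.25], prior))      # about 0.19 bits

# A signal that leaves the probabilities unchanged carries no information.
print(kl_bits([0.5, 0.5], prior))        # 0.0 bits
```

The partially informative signal carries about 0.19 bits, the perfect signal a full bit, and the uninformative one nothing, matching the idea that information transfer improves gradually on the way to equilibrium and that meaning is the ideal case.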

3:16: Another couple of important theories you use are Darwinian models of replicator dynamics, which you’ve already discussed, but also theories of trial and error learning. Your approach is to recast the evolutionary question as a learning question. Can you explain what you’re up to here and why this helps to establish that ‘Democritus was right’, as you put it?

BS: If you are interested in the questions “How did signaling get started?” “How did it develop?” you want to start with dynamics that does not presuppose much. For biological evolution, over evolutionary time, replicator dynamics, replicator-mutator dynamics, and so forth are good places to start. If you want to see if signaling can spontaneously arise during a human’s lifetime, low rationality learning dynamics are good places to start. I started with a kind of reinforcement dynamics. It turned out that individuals repeatedly playing the simplest Lewis signaling game spontaneously learned to signal, and the learning was quite fast. This was what I observed in simulations. Through the good offices of Persi Diaconis, I had already gotten hooked up with Robin Pemantle on dynamics of social network formation. Robin and friends were able to prove that in this game, reinforcement learning converges to a perfect signaling system equilibrium with probability one. Positive results hold good for other low-rationality learning dynamics, such as Probe-and-Adjust, and Win-Stay, Lose-Randomize. Spontaneous emergence of information transfer is a robust result. Democritus held that meanings are conventional, and that they arise by a chance process. These results support his position. 
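Here is a minimal sketch of urn-style reinforcement learning in the simplest Lewis signaling game (2 equiprobable states, 2 signals, 2 acts). The initial weights, run length, and the particular Roth-Erev-style update are illustrative assumptions; the convergence-with-probability-one theorem mentioned above concerns this kind of model, not this particular script.

```python
import random

# Simplest Lewis signaling game: act i succeeds exactly in state i.
N_STATES = N_SIGNALS = N_ACTS = 2

# Urn weights: the sender has one urn per state over signals,
# the receiver one urn per signal over acts.  Start with one "ball" of each kind.
sender = [[1.0] * N_SIGNALS for _ in range(N_STATES)]
receiver = [[1.0] * N_ACTS for _ in range(N_SIGNALS)]

def draw(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

successes = 0
for t in range(1, 100001):
    state = random.randrange(N_STATES)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:                      # common interest: both reinforce
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0
        successes += 1

print("success rate over the run:", successes / 100000)
```

Running this typically shows the success rate climbing toward 1 as one signaling system or the other gets locked in; which system emerges is a matter of chance, which is the Democritus point.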

3:16: How does your model of ‘learning with invention’ help us grasp how signals grow – and is this something that can be extended from humans into the rest of nature?

BS: Thank you for asking this. In the learning that we have talked about, the Lewis signaling game is fixed. You have these states, acts, and signals, and the learning is all about what to do with them in order to communicate. A deeper question is how we learn the game itself. How does the game evolve? One step in this direction is to give a model of inventing new signals. To do this, I modified the basic reinforcement dynamics to allow for invention, using the Hoppe urn model from evolutionary biology. This allows for occasionally trying out something new. If it pays off, it becomes established as an option. This was put to work in a model of inventing new signals. How this applies to non-human species depends on how inventive they can be. One can now start the game with no signals at all. Gradually signals get invented and added to the game, until we get enough (perhaps more than enough) for perfect signaling. In simulations, the dynamics always learns to signal. Furthermore, if one starts the system in a suboptimal equilibrium, it invents its way out and goes on to learn a perfect signaling equilibrium.
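A hedged sketch of the invention idea: the same reinforcement scheme as before, but each sender urn also contains a mutator ball (as in a Hoppe urn), and drawing it introduces a brand-new signal. The weight MU, the number of states, and the bookkeeping details are my assumptions, meant only to illustrate the mechanism.

```python
import random

# Learning with invention: reinforcement plus a Hoppe-urn-style mutator ball.
MU = 1.0                               # assumed weight on inventing something new
N_STATES = 3
n_signals = 0                          # start with no signals at all

sender = [{"new": MU} for _ in range(N_STATES)]      # per-state signal weights
receiver = {}                                        # per-signal act weights

def draw(weighted):
    keys = list(weighted)
    return random.choices(keys, weights=[weighted[k] for k in keys])[0]

for t in range(200000):
    state = random.randrange(N_STATES)
    signal = draw(sender[state])
    if signal == "new":                              # invention step
        signal = n_signals
        n_signals += 1
        sender[state][signal] = 1.0
        receiver[signal] = {a: 1.0 for a in range(N_STATES)}
    act = draw(receiver[signal])
    if act == state:                                 # success: reinforce both
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0

print("signals invented:", n_signals)
```

Starting with no signals at all, the players invent signals as needed and reinforce the ones that work, sometimes carrying surplus synonyms along the way, which matches the remark above that in simulations the dynamics always learns to signal.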

3:16: What are signaling networks and how do they give us a richer view of the dual aspect of signals that you discuss – and what’s the relation of your signaling theory to philosophy?

BS: In a Lewis signaling game, there are essentially 2 players, a sender and a receiver. Lewis allows the receiver to be a group – the audience – but the group acts like a single player. We can have multiple senders and receivers connected in various ways, to form signaling networks. An especially simple one is a chain, where an initial sender sends a signal to another player who in turn sends a signal to a third, and so on until the end of the chain. But there are other simple networks. For instance, there may be many senders who gather information and send it to one receiver, or one sender giving instructions to a team of receivers, on up to arbitrarily complex signaling networks. Simple signaling networks can do computations and perform inferences. Complex signaling networks driven by reinforcement learning have beaten human experts at chess, go and poker. Fred Dretske argued for a reorientation of Epistemology, away from defining knowledge to the study of the flow of information. Anyone who takes the Dretske program seriously needs to think about signaling networks.
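As an illustration of the simplest network beyond a single sender-receiver pair, here is a hedged sketch of a three-player signaling chain learned by the same urn reinforcement as above; the sizes, run length, and the choice to reinforce every link on a success are my assumptions.

```python
import random

# A three-player signaling chain: a sender observes the state and signals to a
# middleman, who signals onward to a final receiver, who acts.  A success
# reinforces every link in the chain.
N = 2                                          # states = signals = acts = 2
sender = [[1.0] * N for _ in range(N)]         # state -> signal weights
middle = [[1.0] * N for _ in range(N)]         # incoming signal -> outgoing signal
receiver = [[1.0] * N for _ in range(N)]       # incoming signal -> act

def draw(w): return random.choices(range(N), weights=w)[0]

wins = 0
for t in range(1, 200001):
    state = random.randrange(N)
    s1 = draw(sender[state])
    s2 = draw(middle[s1])
    act = draw(receiver[s2])
    if act == state:                           # success flows back down the chain
        sender[state][s1] += 1
        middle[s1][s2] += 1
        receiver[s2][act] += 1
        wins += 1

print("success rate:", wins / 200000)
```

In runs like this the chain typically learns to pass the state along so that the final act matches it, which is the sense in which even a very simple network can transmit, and with other wirings transform, information.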

3:16: How does your general framework help us understand happiness and some of the key issues of utilitarianism?

BS: I think that you are referring to my book with Louis Narens, The Pursuit of Happiness. This has a different character from what we have been discussing. It is about measurement of utility, and whether Utilitarianism is meaningful at all. Early Utilitarians worried about these problems, but we think that many contemporary philosophers, both advocates and critics of Utilitarianism, have not taken them seriously enough. There is a connection to game theory and learning dynamics that comes at the end of the book. We investigate how learning dynamics might deliver conventions for interpersonal comparisons of utility in small groups. As in the case of meaning, there are multiple possible equilibria. Such conventions can spontaneously arise, but they are conventional, like the meaning of signals. Nevertheless, they can allow Utilitarians to coordinate their actions.

3:16: What is the role of chance and luck in all of this?

BS: Chance rules all.

3:16: You’ve written many case studies applying your adaptive dynamics theories of cultural evolution and individual learning to social theory – you’ve looked at altruism, spite, fairness, trust, division of labour and signaling, and the fundamental importance of correlation. As a take-home, could you take one of these and show how fruitful it is to understand it using your tools?

BS: This is asking for a lot in a short answer. I wrote a whole book on the Stag Hunt. In a Stag Hunt game, there are two equilibria, a cooperative equilibrium and an uncooperative equilibrium. Likewise, in a generalized multiplayer Stag Hunt, if the other players cooperate, it is best for you to cooperate, and if the other players defect, it is best for you to defect. As a first approximation, this is a typical pattern for social norms (as Cristina Bicchieri points out). Now, instead of a Stag Hunt, consider a variation, Three in a Rowboat. You can row or not. If the other two don’t row, there is no point in you rowing, so this is one equilibrium – no one rows. But 3 rowing doesn’t work well, and neither does 1 rowing. There are equilibria where 2 row. So like the Stag Hunt, there are both cooperative and non-cooperative equilibria, but there is a tension: each would prefer that the other two row. There is an n-person game somewhat like this, called a public goods provision game with a threshold. Say there are 20 players, and if at least 15 pitch in, everyone benefits. More pitching in doesn’t help. If fewer than 15 pitch in, the project fails. So there is a non-cooperative equilibrium, where no one pitches in. And there are cooperative equilibria where 15 pitch in. Lots of joint projects have this structure. As in the Stag Hunt, equilibrium analysis just tells us that they can succeed and that they can fail. So it is important here to have dynamic analyses that tell us more. Quite a bit is known, too much to try to tell you here.
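A small sketch of the equilibrium structure of the threshold game just described, checking which numbers of contributors are Nash equilibria. The benefit B and cost c are illustrative assumptions with B > c; the 20-player, 15-threshold setup is from the answer above.

```python
# Threshold public-goods game: N players, benefit B to everyone if at least
# K pitch in, cost c to each who pitches in.  B and c are assumed values.
N, K, B, c = 20, 15, 10.0, 1.0

def payoff(contributes, n_others_contributing):
    n = n_others_contributing + (1 if contributes else 0)
    return (B if n >= K else 0.0) - (c if contributes else 0.0)

def is_equilibrium(n_contributors):
    """No single player wants to switch, given the others' choices."""
    # A contributor deviates by dropping out; a non-contributor by joining.
    contributor_ok = (n_contributors == 0) or \
        payoff(True, n_contributors - 1) >= payoff(False, n_contributors - 1)
    noncontributor_ok = (n_contributors == N) or \
        payoff(False, n_contributors) >= payoff(True, n_contributors)
    return contributor_ok and noncontributor_ok

for n in range(N + 1):
    if is_equilibrium(n):
        print(f"{n} contributors is an equilibrium")
```

With these numbers the script prints 0 and 15: the project can fail at the all-defect equilibrium or succeed when exactly 15 pitch in, and equilibrium analysis alone cannot tell you which will happen. That is where the dynamic analyses come in.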

3:16: And for the readers here at 3:16, are there five books other than your own that you can recommend that will take us further into your philosophical world?

BS:

William Hamilton, Narrow Roads of Gene Land

Ken Binmore, Game Theory and the Social Contract

H. Peyton Young, Individual Strategy and Social Structure

J. McKenzie Alexander, The Structural Evolution of Morality

Peter Vanderschraaf, Strategic Justice

To help follow up on this interview, Jason McKenzie Alexander has a simulator for many evolutionary game-theoretic models of interest to philosophers and social scientists. It can be found by clicking here.


ABOUT THE INTERVIEWER

Richard Marshall is biding his time.
