Interview by Richard Marshall.

Michael Strevens works on the philosophy of science, where his interests include explanation, complex systems, probability, confirmation, the social structure of science, and causation; the psychology of concepts; and the philosophical applications of cognitive science. He is currently writing a book about philosophical methodology. Here he's brooding on complexity theory and what it must do, on the role of probability in such a theory, on whether such a theory might apply to social as well as physical science, on how complexity relates to chaos, on the nature of scientific explanation, on Cartwright's dappled world, on whether children are Platonic or Aristotelian essentialists, on whether we are all born with a physics genius in our heads, and on whether objective evidence in science is possible. This is a big beast from the deepest depths of the deepest depths...

3:AM: What made you become a philosopher?

Michael Strevens: I took a standard route, beginning with physics, on to mathematics, and then up the great chain of abstraction from the study of things you can bump into to the study of things whose very existence is perpetually in question… with a long detour through computer science, I suppose because virtual reality is poised delicately in between.

Why the impulse to keep climbing higher? Only Plato knows.

3:AM: You’re interested in complexity theory. Can you start by sketching what complexity theories are trying to do?

MS: They try to predict or at least to understand the behavior of complex systems—where complex systems include everything from cities to brains to (considered from the statistical physicists’ point of view) lumps of homogeneous, inert matter composed of multitudes of vibrating molecules.

3:AM: How is it possible for complexity theory to achieve these aims?

MS: A meaningful science of complexity must, I think, describe trends or tendencies or regularities of some sort. That doesn’t mean that it should be able to predict the behavior of particular systems. But it must find some patterns in some classes of systems—or else it simply has nothing to say. The whole enterprise of scientific inquiry into complexity, then, hinges on the existence of trends.

Fortunately, there are a lot of trends to be found. Some are very broad: the second law of thermodynamics is the queen of them all. Many exist, by contrast, only within a very specific context, such as the differences in rates of reproduction that drive evolution by natural selection in particular habitats.

It’s not just the scientists who are lucky to have all of these regularities close by. The possibility of life itself depends on regularities, both to ensure some sort of stability in local environmental conditions and to underwrite those small but persistent differences in reproductive success that make natural selection possible.

3:AM: What role does probability play in complexity theory?

MS: Probability quantifies tendencies; it is the principal tool used to represent tendencies in scientific domains ranging from physics to biology to sociology and economics (or at least, one of the principal tools, and I think first among equals). A simple example is the one-half probability of getting heads on a coin toss, which represents a tendency for the coin, when tossed over and over, to produce heads about one-half of the time. Statistical physics and evolutionary biology use probabilities in much the same way—but instead of producing heads, it’s about (say) producing babies.
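
To make that concrete, here is a toy simulation (just an illustration, nothing from the book): any single toss is unpredictable, but the long-run frequency of heads settles near one half, which is precisely the tendency the one-half probability represents.

```python
import random

def heads_frequency(n_tosses, p_heads=0.5, seed=0):
    """Toss a simulated coin n_tosses times; return the fraction of heads."""
    rng = random.Random(seed)
    return sum(rng.random() < p_heads for _ in range(n_tosses)) / n_tosses

# The longer the run, the closer the observed frequency tends to sit to 0.5.
for n in (10, 100, 10_000, 1_000_000):
    print(n, heads_frequency(n))
```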

It’s tempting to think of these probabilities not just as representational tools but as things in themselves, out there producing the regularities we see—so that there is a property, the probability one-half for heads, inhering in the tossed coin setup and doing the work of generating (typically) about one half heads. I think it is just fine to yield to this temptation. Then the probabilities are not only in the theory but in the world, and by understanding what sorts of things they are—by understanding what physical or biological or psychological properties they are built from—we understand the ultimate basis for the trends and regularities in complex systems that make science, life, and a lot of other things we care about possible. That is the project I undertake in my 2003 book Bigger than Chaos: Understanding Complexity through Probability.

3:AM: Are social systems understandable from this point of view, using the same probability tools as the natural sciences and invoking the law of large numbers? So can we use the same approach with, say, statistical physics, population genetics, and areas of economics?

MS: That is a great unsolved question. In the nineteenth century, scientists and government statisticians began to find fairly stable social trends: rates of marriage, suicide, undeliverable letters and other unfortunate events tended to stay much the same from year to year (though the rates differed from place to place). Further, these patterns could be captured quite well using the mathematics of probability, which was fast maturing at the time. There was great hope for a science of society that would replicate the success of the science of inert matter—a “social physics”.
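
A toy version of what those statisticians observed (made-up numbers, purely for illustration): if each member of a large population independently runs a small risk of some unfortunate event each year, the law of large numbers keeps the aggregate yearly rate almost perfectly steady.

```python
import random

def yearly_rate(n_people, p_event, rng):
    """Fraction of a population to whom an event happens in one year,
    with each person independently running the same small risk."""
    return sum(rng.random() < p_event for _ in range(n_people)) / n_people

rng = random.Random(1)
# A city of 100,000, each person with an assumed 0.2% yearly risk:
rates = [yearly_rate(100_000, 0.002, rng) for _ in range(5)]
print(rates)  # year-to-year rates cluster tightly around 0.002
```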

That hope turned out to be premature. Pinning down social and economic trends in the detail we’d like has turned out to be incredibly difficult. Maybe that’s in part because we want more detail from our theories of people than from our theories of molecules. Maybe because social trends change too fast or depend in too complicated a way on environmental factors. Or maybe there are, in some cases at least, no statistical trends at all. Maybe we need another kind of mathematics, different from probability mathematics, to understand these systems. It’s a wonderful topic. I don’t know if I will ever contribute substantially to it myself—time is running out!—but I hope at the very least to make it a more central topic of philosophical discussion.

3:AM: How does this approach to complex systems relate to chaos? Is it kind of what chaos is?

MS: In a sequence of coin tosses, you have short term unpredictability—you never know whether the next toss is going to be heads or tails—and long term predictability—if you toss the coin for long enough, you can be pretty sure that you will get about one half of each side. It’s the same story for all the complex systems whose behavior can be represented using probabilities. You can’t predict, for example, which rabbit will be eaten by which fox the day after tomorrow, but you can predict the approximate rate of rabbit predation (and this number plays a crucial role in ecological and evolutionary models). Here’s an interesting thought: might short-term unpredictability and long-term stability be linked? Might the source of the unpredictability also be a source of the stability? In my book, I show that the answer is yes. The “chaos” (in the loose sense) that explains why it is so difficult to foresee the outcomes of individual coin tosses or ecological events is a crucial constituent of the package of properties that underwrites long-term stability, hence the viability of the sciences of complex systems and of life itself. All hail Eris!
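
A standard toy illustration of the linkage (not a model from the book) is the chaotic logistic map: two trajectories started a hair apart soon disagree completely, yet the long-run fraction of time each spends in any given region is the same stable number, fixed by the map's invariant distribution.

```python
def logistic_orbit(x0, n):
    """Iterate the chaotic logistic map x -> 4x(1 - x) for n steps."""
    xs, x = [], x0
    for _ in range(n):
        x = 4 * x * (1 - x)
        xs.append(x)
    return xs

a = logistic_orbit(0.2, 100_000)
b = logistic_orbit(0.2 + 1e-9, 100_000)  # a hair's difference at the start

# Short term: after about fifty steps the two orbits bear no resemblance.
print(a[49], b[49])

# Long term: each orbit nonetheless spends about half its time below 0.5.
print(sum(x < 0.5 for x in a) / len(a))
print(sum(x < 0.5 for x in b) / len(b))
```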

3:AM: How is your work on complexity connected to your work on scientific explanation?

MS: Explanation was my first interest. I was wrestling with the special role that probability plays in models of natural selection, where it represents that often small yet persistent and so ultimately decisive tendency for one variant to “out-reproduce” another. Such evolutionary models leave out almost all the details of the causal process whose consequences they describe—all the drama of mating, foraging, predation, disease—boiling it down to a few numbers representing statistical tendencies: birth rates, predation rates, death rates. If explanation is about giving causes, as most of us believe, does that mean that evolutionary models, which pass over so much causal information, are not explanatory?

Setting out to answer this question, I pondered the role of abstraction in scientific explanation. When is it permissible, even obligatory, to omit a causal detail from an explanation? At what point has abstraction gone too far? (Why did the dinosaurs die out? Something killed them. True, but that observation alone does not supply enough causal information to be a proper explanation.) The result was the theory of explanation I present in my book Depth, which turns on the thesis that a completely satisfying explanation of a phenomenon must include only those aspects of the relevant underlying causal process that make a difference to the phenomenon’s occurrence. Abstract away from all the non-difference-makers, then, but stop just before you begin to erase the difference-makers.

A large part of the book is given over to characterizing the appropriate sense of “difference-making”. It culminates by showing how the statistical representations in evolutionary models and other explanatory probabilistic models in science do just what explanations are supposed to do: they extract from a complex underlying causal process the properties of the process that made a difference to the thing to be explained. A model explaining why the famous dark moths replaced the light ones in sooty nineteenth-century forests will exclude details determining which particular moths reproduced or got eaten when and where, citing only the factors that affected rates of reproduction and death, which are what make a difference to a trait’s taking over the population. The probabilities and their dependences—for example, the predation probability and its dependence on coloration—encode precisely this information.
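
Here is a bare-bones sketch of such a model (invented survival probabilities and a textbook selection recursion, not the historical figures or any actual model from the melanism literature): it tracks only the frequency of the dark form and the two rates, says nothing about any individual moth, and still predicts the takeover.

```python
def dark_moth_frequency(generations, p0=0.01,
                        survive_dark=0.8, survive_light=0.6):
    """Track the frequency of dark moths across generations when the
    two forms have different (assumed) survival probabilities."""
    p, history = p0, [p0]
    for _ in range(generations):
        p = p * survive_dark / (p * survive_dark + (1 - p) * survive_light)
        history.append(p)
    return history

freqs = dark_moth_frequency(50)
print(freqs[0], freqs[25], freqs[-1])  # the dark form sweeps toward fixation
```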

In accounting for the explanatory power of probabilistic models of complex systems, I draw heavily on my earlier work on probability and complexity. Without that earlier work, I would not have been able to show that in the systems in question, probabilistic information just is causal difference-making information.

3:AM: Why doesn’t Cartwright’s ‘dappled world’ of science appeal to you?

MS: If Nancy Cartwright’s picture of the way the world works were correct, then science would look like… pretty much what it does look like: a large number of more or less autonomous disciplines or epistemic enterprises, each with its own particular concerns, vocabularies, organizational structures, and methodological practices. According to Cartwright, science is structured in this way because the world is structured in this way. She is arguing against the traditional nineteenth- and twentieth-century view that every phenomenon at every level is the consequence of a single, unified set of laws—the laws of fundamental physics. Cartwright proposes instead that more or less entirely independent principles are at work in every domain, with things becoming rather “unprincipled” at the borders. The laws of fundamental physics, in particular, hold only in very special experimental conditions of the sort created in physics labs.

My program for understanding the behavior of complex systems is premised on the traditional assumption, which entails that trends and regularities in complex systems are consequences of that single set of fundamental laws operating on the various special and rather complex physical structures that constitute biological and social systems.

Why do I stick to the assumption that everything can be traced back to the fundamental laws? I can’t prove it, but to me it is such a compelling vision—to derive everything, or at least everything interesting and important, from this same small set of laws apparently designed solely to police interactions between fundamental particles. I realize, though, that to others my dream is a nightmare which ends with Steven Weinberg shutting down the biology and sociology departments and sending the faculty to eke out their days dusting the superconducting magnets in his Texas-sized supercollider. The unified vision of science entails nothing of the sort, however: biological regularities (for example) owe their existence and nature far more to the complex physical structure of biological systems than to the form of the fundamental laws, and the experts on that structure are biologists. So we will always need specialists in biology to explain biological behavior. Even if the world is not dappled, scientific practice is dappled through and through.

3:AM: Are children Platonic (or maybe Aristotelian) essentialists about plants and animals?

MS: On one view, conceptual development in children tends to recapitulate conceptual development in human history (ceasing, in the absence of higher education, some time before the present). This is not simply idle speculation: you find high-school kids in physics classes applying something that looks much like medieval impetus theory, and younger children struggling to distinguish mass and quantity or heat and temperature in much the same way that early scientific thinkers did. Just as biology went through a long period of essentialist thinking, then, you might wonder whether children, too, go through an essentialist phase, in which they believe that members of each species share a single property, an essence, that causes their distinctive appearances and behaviors—so that all members of the tiger species, say, have an essence, identical in each tiger, that causes stripes, a ferocious disposition, and so on. (Perhaps this essentialist phase lasts throughout adulthood if you neglect to read Stephen Jay Gould.)

But I think that the case of biology is different. Our “naive” species concepts—the concepts that we inherit from our youthful selves—attribute a more wide-open causal structure to organisms, a structure that is compatible with modern anti-essentialist biology. It was a big deal to find out that essentialism was false, but our species concepts were already ready for the discovery. They did not have to be discarded or transmogrified; without alteration, they were simply used to think, quite coherently, a surprising new thought: “There is no essence of tigerhood”. In general, I think that many more of our “naive” concepts are open in this way than most psychologists or philosophers have supposed. Our minds are equipped to make radical new discoveries without radical conceptual change.

3:AM: Are we born with a physics genius in our head? Is this the other part of the Kahneman story about all the stupid mistakes we make when thinking, that we have inbuilt heuristics but also a bundle of experts working in our subconscious?

MS: That’s about right, both in physics and everywhere else: we have a bunch of heuristics that do excellent work but also get tripped up or lost when they’re not on home ground. In the case of probabilistic thinking, in particular, there is a striking dichotomy: we are just terrible at some kinds of statistical inference (as Kahneman and others have shown), but we are incredibly good at estimating basic probabilities in many physical, biological, and social systems—at estimating the probability of heads on a coin toss, for example, but also at estimating (defeasibly!) the survival value of various biological adaptations. Scientists leverage this ability, this “physical intuition”, all the time in deciding how to build models of complex systems, that is, in deciding which factors or properties are likely to be relevant and which not to the behavior they’re trying to model—that is, in distinguishing difference-makers from non-difference-makers.

In my recent book Tychomancy, I propose a theory of the psychological mechanisms underlying this ability and put it to work to explain some spectacular applications of our intuitive but incredibly savvy probabilistic thinking in physics, biology, and statistics, where it made possible (I think) some of the discoveries of James Clerk Maxwell, Charles Darwin, and the founders of the statistics of measurement error. The book is a mix of philosophy of science, history of science, and cognitive psychology, and as you might guess, a companion piece to my earlier book on complexity.

3:AM: Can the notion of objective evidence in science be sustained?

MS: The question of what counts as evidence is, I think, objective. But the question of what weight that evidence should have—the degree to which it should confirm or disconfirm a hypothesis—is not objective, but varies with temperament, institutional culture, and zeitgeist. (Subjective Bayesians are not far off in saying that it all depends on your prior probabilities.) An objective answer to the first question might seem useless without an objective answer to the second. But just that little bit of preliminary objectivity is enough—so I believe—to explain why science is such a powerful method of inquiry.
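
To make the division of labor concrete, here is a toy Bayesian calculation (invented numbers, just for illustration): the likelihood ratio carried by the evidence is held fixed, while the posterior swings with the inquirer's prior.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    post_odds = (prior / (1 - prior)) * likelihood_ratio
    return post_odds / (1 + post_odds)

# The same evidence (a likelihood ratio of 10 in the hypothesis's favor)
# moves inquirers with different priors to very different conclusions:
for prior in (0.01, 0.2, 0.5):
    print(prior, round(posterior(prior, 10), 3))
```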

I haven’t published these thoughts yet; I’ve decided to set them out in what is a new format for me, a book for a general audience, arriving (I hope) fairly soon.

3:AM: And for the readers here at 3:AM, are there five books you could recommend that will take us further into your philosophical world?

MS: Let me list some books that are not philosophy in the traditional sense, but which had a big impact on some of the ideas I have discussed in this interview.
1. Ian Hacking, The Taming of Chance. A history: probability and science—“social physics” in particular—meet in the nineteenth century.
2. Albert Lotka, Elements of Mathematical Biology. An unorthodox but inspiring investigation of the foundations of statistical thinking in population ecology and evolutionary biology.
3. Susan Carey, The Origin of Concepts. How (some of) our concepts got to be so much more sophisticated than their juvenile antecedents.
4. Robert Merton, The Sociology of Science. This volume assembles some of Merton’s most important papers, making a powerful case that science cannot be fully understood as an epistemic institution until it is understood as a social institution.
5. Nietzsche, The Genealogy of Morals. If it didn’t bring it about, it certainly resonated with and exacerbated my taste for inquiring into psychological origins.


ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.

Buy his book here to keep him biding!