Law

Interview by Richard Marshall.


'If the laws of nature prohibit some kind of event (such as increasing the total quantity of energy in the universe, or accelerating a body from rest to beyond the speed of light -- or improving a lightning rod by making it blunter), then it is not merely the case that such an event does not happen. Such an event cannot happen; it is impossible. The laws are not merely true; they could not have been false.'

'Subjunctive facts are not spooky or exotic. We discover them in exactly the same way as we discover various facts “about the actual world”: by engaging in ampliative reasoning from our observations. A subjunctive fact about how the water in my glass would have behaved, if it had a given temperature and pressure, is arguably even less remote from my observations than a non-subjunctive fact about how some water actually behaves in some spatiotemporally very distant intergalactic region with that temperature and pressure. This comparison is obscured if we refer to the former water as existing in a merely possible world, whereas the latter water is our neighbor here in the actual world.'

'Here is a simple example of one kind of non-causal explanation. Why is it that Mother fails every time she tries to divide her 23 strawberries evenly among her 3 children without cutting any (strawberries – or children!)? The answer does not have to do with the particular causal mechanisms that she used to distribute her strawberries. Rather, the explanation is that her success was mathematically impossible: 3 does not go into 23 evenly.'

'A given game may be worth playing at least partly by virtue of its relations to other mathematical games that are independently worthwhile. One of the features that can make a game worthwhile is that it enables mathematical explanations to be given (or demanded) that could not be given (or demanded) before.'

'In both mathematics and science, there are tight connections between being an explanation, being able to render similarities non-coincidental, and being a natural property. These connections contribute toward making all of these varieties of explanation into species of the same genus.'

Marc Lange specializes in philosophy of science and related areas of metaphysics and epistemology, including parts of the philosophy of physics, philosophy of biology, and philosophy of mathematics. Here he discusses the necessity of laws of nature, how that necessity can nevertheless be contingent, whether these laws are immutable, what meta-laws are and what they’re for, laws and objective chance, why laws are laws because they are necessary rather than necessary because they are laws, non-causal explanations in science and maths, explanations by constraint and why we don’t find them in maths, really statistical and dimensional explanations, why non-causal explanations are important in maths, and why, despite their diversity, non-causal explanations really are all explanations. This one needs time and a steady mind’s eye...

3:AM: What made you become a philosopher?

Marc Lange: As a child, I was interested in anything to do with history (adults kept reminding me that I was born on the day that President Kennedy was assassinated) as well as in anything that managed to find its way into the Sunday New York Times, especially science. I still have my family’s Moon-landing issue of the Times, with its headline in famously large type. I remember that when NASA announced that iron had been found in moon rocks, I was very impressed that a category that scientists had developed on Earth turned out to apply even on the Moon. I read a lot of popular history of science (such as non-fiction by Isaac Asimov) and I enjoyed doing science-fair projects, where I could work on my own. My high-school physics teacher was very encouraging (thank you, Mr. Neff!) and even took me once to hear Dirac give a public lecture at Bard College (on how he had predicted the existence of antimatter). When various scientists recommended certain works of philosophy, I tried to read them. For example, Einstein seemed to admire Spinoza, so I tried reading The Ethics, with a lot of help from Stuart Hampshire’s little book. I also liked reading Bertrand Russell, Michael Harrington, and Douglas Hofstadter. Martin Gardner’s articles on mathematics and on the logic of scientific reasoning fascinated me; I was disturbed by the underdetermination of theory by observation. Under these logical circumstances, I wondered, how could science achieve confident, universal knowledge (such as anticipating antimatter and discovering categories applicable even on distant worlds)?


The summer before I went to Princeton, intending to major in physics, I read Nozick’s Philosophical Explanations, which had just come out (it must have been reviewed or advertised in the Times), and I knew that I wanted to take a philosophy class. I gradually began to find my physics classes increasingly frustrating: I liked learning the cute mathematical tricks for solving various equations, but the classes did not seem to take seriously any of the “ultimate questions” that I wanted to hear more about. I admired the textbook for my first “Electricity and Magnetism” class, so I was delighted when I reached the remark, “What is an electric field? Is it something real, or is it merely a name for a factor in an equation which has to be multiplied by something else to give the numerical value of the force we measure in an experiment?” Finally, I remember thinking, here at last was the sort of question that I wanted us to pursue. But the textbook went on to say that “since it works, it doesn’t make any difference. That is not a frivolous answer, but a serious one.” I felt ashamed of my obvious intellectual immaturity and bad taste. (I tell the full story in the Preface to my An Introduction to the Philosophy of Physics: Locality, Fields, Energy, and Mass.) Somehow I mustered up the courage to express my unhappiness to my physics preceptor, Joseph H. Taylor. He suggested that I go up the street and “speak to the philosophers”; this is the sort of question that they look into, he said. I was very pleased to hear this, since I was already taking (and loving) Bas van Fraassen’s introductory course in the philosophy of science. I am very grateful to my teachers – especially to Bas van Fraassen and David Lewis at Princeton, and to Robert Brandom and Wesley Salmon later at Pittsburgh. I was very fortunate to have encountered them and I continue to be deeply inspired by them.

3:AM: You’ve looked at laws of nature, laws of science and so forth. They seem to be necessary in that they’re not just accidents – but they’re not as necessary as logical truths, say. Do you think this? How do you characterise this degree of necessity, which I think you dub ‘natural necessity’?

ML: In 1772, a distinguished committee of the Royal Society concluded that pointed lightning rods work better than blunt ones. However, in 1777, King George III decided to reject the committee’s recommendation; he had sharp conductors replaced by blunt ones on his palace, and he lobbied the Royal Society to rescind its conclusion – at least partly because the committee had been chaired by Benjamin Franklin, and after the Battle of Bunker Hill, the King was reluctant to demonstrate respect for any of Franklin’s views, even about lightning rods. But when the King appealed to the President of the Royal Society, Sir John Pringle, to retract the committee’s decision, Pringle is said to have replied, “Sire, I cannot reverse the laws of nature.”

Pringle’s remark gestures towards not just the objectivity but also the necessity of the natural laws. If the laws of nature prohibit some kind of event (such as increasing the total quantity of energy in the universe, or accelerating a body from rest to beyond the speed of light -- or improving a lightning rod by making it blunter), then it is not merely the case that such an event does not happen. Such an event cannot happen; it is impossible. The laws are not merely true; they could not have been false. Their necessity is variously called “natural”, “physical”, “nomological”, or “nomic” necessity in order to distinguish it from logical, conceptual, metaphysical, mathematical, and other species of necessity. Natural necessity is not possessed by contingent facts that are not laws (that is, by “accidents”), such as (in Reichenbach’s example) the fact (presuming it to be a fact) that all gold cubes in the entire history of the universe are smaller than one cubic meter. There could have been a gold cube exceeding one cubic meter, but as it turns out, none ever exists.

That the laws are naturally necessary, whereas the accidents are not, is associated with another difference between laws and accidents – namely, in their relation to the facts expressed by “counterfactual conditionals”: if-then statements that concern what would have happened under circumstances that never actually come to pass. The laws, being necessary, would still have been true even if other things had been different, whereas an accident has less perseverance under counterfactual antecedents. For instance, since it is a law that no body is accelerated from rest to beyond the speed of light, this cosmic speed limit would not have been broken even if the Stanford Linear Accelerator had now been cranked up to full power. On the other hand, since it is an accident that every gold cube is smaller than a cubic meter, this pattern would (presumably) have been broken if Bill Gates had wanted to have a gold cube constructed exceeding a cubic meter.

Of course, laws would not have remained true under counterfactual suppositions with which they are logically inconsistent (such as “Had a body been accelerated from rest to beyond the speed of light”). But presumably, the laws would still have held under any counterfactual antecedent that is logically consistent with all of the laws. On the other hand, trivially no fact that is an accident is preserved under all of those antecedents (since one such antecedent posits the accident’s negation, and the accident is obviously not preserved under that antecedent!). Making a few details more explicit, we arrive at the following thesis:


It is a law that m if and only if in any conversational context, for any circumstance p that is logically consistent with all of the facts n (taken together) where it is a law that n, it is true that if p had been the case, then m would still have been the case (that is, p □→ m).

(I’m reserving lower-case letters for “sub-nomic” claims -- that is, for claims such as “The emerald at spatiotemporal location … is 5 grams” and “All emeralds are green”, as contrasted with “nomic” claims such as “It is a law that all emeralds are green” and “It is an accident that the emerald at spatiotemporal location… is 5 grams”.)

Even if the thesis above is correct, there is an obvious limitation on how enlightening it can be. That is because the laws appear in this thesis on both sides of the “if and only if.” That is, the thesis picks out the laws by their invariance under a certain range of counterfactual antecedents p (namely, under any p that is logically consistent with all of the facts n where it is a law that n) but this range of antecedents, in turn, is picked out by the laws. However, we can avoid this deficiency by tweaking the thesis. It says roughly that the laws form a set of truths that would still have held under every antecedent with which the set is logically consistent. In contrast, take the set containing exactly the logical consequences of the accident that all gold cubes are smaller than a cubic meter. This set’s members are not all preserved under every antecedent that is logically consistent with this set’s members. For instance, had Bill Gates wanted to have a gold cube constructed exceeding a cubic meter, then such a cube might well have existed and so it might not have been the case that all gold cubes are smaller than a cubic meter. Yet the antecedent p that Bill Gates wants such a cube constructed is logically consistent with all gold cubes being smaller than a cubic meter.

This idea motivates the definition of “sub-nomic stability”:
Consider a non-empty set Γ of sub-nomic truths containing every sub-nomic logical consequence of its members. Γ possesses sub-nomic stability if and only if for each member m of Γ and for any p where Γ∪{p} is logically consistent (and in every conversational context), it is not the case that if p had held, then m might NOT have held (i.e., then m’s negation might have held) -- that is, ~ (p ◊→ ~ m).
Notice that ~ (p ◊→ ~ m) logically entails p □→ m. (In other words, that it is not the case that ~m might have held, if p had held, logically entails that m would have held, if p had held.) Therefore, a set of truths is sub-nomically stable exactly when its members would all still have held -- indeed, not one of their negations even might have held -- under any counterfactual antecedent with which they are all logically consistent.

I have proposed that the set of truths m that are laws forms a sub-nomically stable set, whereas no set of sub-nomic truths that contains an accident (except for the set of all sub-nomic truths) possesses sub-nomic stability. (For example, as we saw, the set containing exactly the logical consequences of all gold cubes being smaller than a cubic meter is unstable since that set is not invariant under the antecedent positing that Bill Gates wanted to have a gold cube constructed exceeding a cubic meter.) By the definition of “sub-nomic stability”, the members of a sub-nomically stable set would all still have held under any sub-nomic counterfactual antecedent with which they are collectively logically consistent. That is, a sub-nomically stable set’s members would all still have held under any sub-nomic counterfactual antecedent under which they could (i.e., without contradiction) all still have held. In other words, a stable set’s members are collectively as resilient under sub-nomic counterfactual antecedents as they could collectively be. They are maximally resilient. That is, I suggest, they are necessary. The laws’ natural necessity consists in their forming a sub-nomically stable set.

3:AM: Isn’t there a paradox in this idea of necessity that is contingent? How do you crush that idea?

ML: Among the sub-nomic truths, the natural laws are necessary in that they collectively have maximal resilience under counterfactual perturbations. But they are contingent in that their necessity is not as strong as the necessity possessed by broadly logical truths (such as logical, mathematical, and metaphysical truths). The sub-nomic broadly logical truths form a stable set since they would still have held under any broadly logical possibility. But the range of counterfactual antecedents under which the broadly logical truths would still have held, in connection with their stability (i.e., their necessity), is broader than the range of counterfactual antecedents under which the laws would still have held, in connection with their stability (i.e., their natural necessity). So broadly logical necessity is stronger than natural necessity.

The strengths of the various species of necessity can be compared because the stable sets must form a natural ordering: for any two sub-nomically stable sets, one must be a proper subset of the other. This can be demonstrated by reductio:

1. Suppose (for reductio) that Γ and Σ are sub-nomically stable, t is a member of Γ but not of Σ, and s is a member of Σ but not of Γ.
2. Then (~s or ~t) is logically consistent with Γ.
3. Since Γ is sub-nomically stable, every member of Γ would still have been true, had (~s or ~t) been the case.
4. In particular, t would still have been true, had (~s or ~t) been the case. That is, (~s or ~t) □→ t.
5. So t & (~s or ~t) would have held, had (~s or ~t). Hence, (~s or ~t) □→ ~s.
6. Since (~s or ~t) is logically consistent with Σ, and Σ is sub-nomically stable, no member of Σ would have been false had (~s or ~t) been the case.
7. In particular, s would not have been false, had (~s or ~t) been the case. That is, ~((~s or ~t) □→ ~s).
8. Contradiction from 5 and 7.
Thus, the sub-nomically stable sets must form a nested hierarchy. The natural laws are just one stable set among others, and natural necessity is just one variety of necessity among others. I believe that some scientists have even taken certain laws to have a stronger variety of natural necessity than other laws possess. For instance, some scientists have regarded the great conservation laws (of energy, momentum, etc.) as modally stronger than the individual force laws (gravitational, electrostatic, etc.) and thus as constraining the particular kinds of forces there could have been. In being modally stronger, the conservation laws belong to a stable proper subset of the set of all natural laws, which is also stable. The hierarchy of natural laws is an important tool in understanding scientific practice.

This picture of necessity as associated with sub-nomic stability identifies what is common to logical necessity and to the various grades of natural necessity, in virtue of which they are all species of the same genus. The identification of necessity with stability explains how the laws could possess a variety of necessity and yet be contingent.

3:AM: Are natural laws immutable on your account?

ML: The laws of nature have long been described as immutable. If the laws are immutable, then I think that their immutability should turn out to be a consequence of whatever distinguishes laws from accidents. I’ll briefly sketch why, on my account, the laws cannot change. As an example, suppose for the sake of reductio that


(1) Between any two electrons that have been at rest, separated by r centimeters for at least r/c seconds, there is an electrostatic repulsion of F dynes
is a law for the universe’s first 10⁻¹⁰ seconds, whereas (for f ≠ F)
(2) Between any two electrons that have been at rest, separated by r centimeters for at least r/c seconds, there is an electrostatic repulsion of f dynes


is a law after the universe’s first 10⁻¹⁰ seconds. Suppose that each of these “laws” m is true of the period when it is a law (in that the universe’s history during that period is logically consistent with m). How, then, would (1)’s lawhood for the universe’s first 10⁻¹⁰ seconds bump into (2)’s lawhood for the rest of the universe’s history? The counterfactuals required for (1) to belong to a stable set conflict with the counterfactuals required for (2) to belong to a stable set, since (1)’s lawhood requires that had two electrons been at rest, exactly r centimeters apart for at least r/c seconds, at some moment when the universe is more than 10⁻¹⁰ seconds old, then the electrons would have experienced at that moment an electrostatic repulsion of F dynes. This is inconsistent with the counterfactual required for (2) to belong to a stable set. So the laws of a given period must be laws forever. (Nothing precludes its being a law that (1) holds for the universe’s first 10⁻¹⁰ seconds and (2) holds thereafter. But then the laws never really change.)

3:AM: What are meta-laws and what are they for?

ML: One of the great innovations of 20th-century fundamental physics, beginning with Einstein’s special theory of relativity, was to reinterpret the fact that the fundamental laws exhibit various symmetries – for instance, that the fundamental laws privilege no particular time or place or spatial direction. These symmetries had previously been interpreted merely as byproducts of what those fundamental laws happen to be. The innovation was to regard the symmetries instead as independent of the fundamental laws by virtue of being imposed on the fundamental laws “from above” – in other words, as being meta-laws governing those fundamental, first-order laws in the same way as those first-order laws govern the world’s sub-nomic facts.

On my view, a symmetry that constitutes a meta-law is invariant under a different range of counterfactuals than a symmetry that is a byproduct of the first-order laws. In particular, a meta-law would still have held, had the first-order laws been different (in any way that is logically consistent with the meta-laws), whereas a byproduct might not still have held, had the first-order laws been different. For example, suppose it is a meta-law that the first-order laws privilege no particular moment. Then had there been an additional fundamental force besides the actual ones (and so had there been an additional fundamental force law besides the laws specifying the electromagnetic force, the nuclear forces, and so forth), then the first-order laws would still have privileged no particular moment. I argue that the meta-laws form “nomically stable” sets -- that is, sets of nomic and sub-nomic truths that are as invariant under nomic and sub-nomic counterfactual antecedents as they could (without contradiction) be. The upshot is then that for any nomically stable set, its sub-nomic members must form a sub-nomically stable set – a set appearing high in the hierarchy of sub-nomically stable sets that I mentioned earlier. Its members possess a strong variety of natural necessity.

In this way, the meta-laws constrain the first-order laws. For example, the great conservation laws follow (within a Hamiltonian dynamical framework) from spacetime symmetries. If the symmetry principles are meta-laws, then the conservation laws (as sub-nomic members of a nomically stable set) will belong to a sub-nomically stable set that does not include the various force laws, for instance. The natural necessity possessed by the conservation laws will then be modally stronger than the natural necessity possessed by the force laws. In this way, the conservation laws become able to constrain the kinds of forces there could have been.
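
As a rough gloss on the kind of derivation gestured at here, a schematic Hamiltonian sketch (an editorial illustration, not part of Lange’s answer) shows why a symmetry of the dynamics yields a conservation law:

```latex
% Schematic Noether-style connection (editorial sketch): in Hamiltonian mechanics,
% any quantity G(q, p, t) evolves according to its Poisson bracket with H.
\[
\frac{dG}{dt} \;=\; \{G, H\} \;+\; \frac{\partial G}{\partial t}.
\]
% If G generates a symmetry of the dynamics, so that \{G, H\} = 0, and G has no
% explicit time dependence, then G is conserved. In particular, time-translation
% symmetry (\partial H / \partial t = 0) yields conservation of energy:
\[
\frac{\partial H}{\partial t} = 0
\;\Longrightarrow\;
\frac{dH}{dt} \;=\; \{H, H\} \;=\; 0.
\]
```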

3:AM: How do you explain laws and objective chances?

ML: Many well-confirmed scientific theories posit objective chances, and these chances (like anything else) are governed by various laws. For instance, a given atom at a given moment may have a 50% chance of undergoing radioactive decay in the next 10² seconds, and this chance is explained by the numbers of protons and neutrons in the atom’s nucleus and a law specifying that any such atom has a half-life of 10² seconds. The relation between laws and objective chances is a datum that any philosophical account of natural law must account for. What is this relation? Suppose that there is a rare radioactive isotope – let’s call it “Is”. Each Is atom has a half-life of 7 microseconds. So it could happen that each Is atom decays within 7 microseconds of being created. But it cannot be a law that they all do; it could only be an accident. Why is that? We have here a chancy fact:
C: Each Is atom has a 50% chance of decaying within 7 microseconds of being created.
Why does C’s nonvacuous truth permit a certain regularity R to hold:
R: Each Is atom decays within 7 microseconds of being created
while C’s nonvacuous truth precludes R from holding as a law?

On my view, R’s lawhood would be connected with R’s membership in a sub-nomically stable set. Sub-nomic stability is a collective affair, not an individual achievement: it is a set, rather than an individual fact, that is sub-nomically stable. Each member of the set depends on the others to limit the range of invariance that it must have in order to qualify as a law. That the laws must form a system helps to account for the relation I’ve just mentioned between laws and chancy facts.

Suppose R is a law. C’s nonvacuous truth is logically consistent with R. Suppose for the sake of reductio that C’s nonvacuous truth is logically consistent not just with R, but with all of the laws. Then because the laws form a sub-nomically stable set, the laws have to be preserved under the counterfactual supposition that C is nonvacuously true. But had C been nonvacuously true, R might not still have held. Had C been nonvacuously true, each of the Is atoms might have decayed within 7 microseconds, but then again, some might just as well not have done so. Thus we have our reductio: if R is a law, there must be some other law that is logically inconsistent with C’s nonvacuous truth. So if R is a law, C must be vacuous or false. Contrapositively, if C is nonvacuously true, then R cannot be a law.
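
A toy simulation makes the “might not still have held” step vivid (an editorial sketch in Python, not anything Lange offers, with a made-up number of atoms): given C, a collection of Is atoms can all decay within 7 microseconds, but typically some do not, so R could at best hold by chance.

```python
# Editorial sketch: given C, each Is atom independently has a 50% chance of
# decaying within 7 microseconds of its creation. The regularity R ("every Is
# atom decays within 7 microseconds") can turn out true, but only as a matter
# of chance: with N atoms its probability is 0.5**N.
import random

random.seed(1)
N = 5            # hypothetical number of Is atoms
trials = 100_000

r_held = sum(all(random.random() < 0.5 for _ in range(N)) for _ in range(trials))
print("fraction of trials in which R held:", r_held / trials)  # close to 0.5**5
print("exact chance of R given C:", 0.5 ** N)                  # 0.03125
```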

3:AM: In answering your own question: ‘Are the laws necessary by virtue of being laws, or are they laws by virtue of being necessary?’ you want to argue that laws are laws because they are necessary, and this necessity comes from ‘subjunctive facts’, but that these are tricky candidates for this job. So can you sketch why you say subjunctive facts are ontologically primitive and responsible for laws?

ML: Various proposed accounts of natural lawhood disagree sharply over how they purport to answer the Euthyphro-style question: ‘Are the laws necessary by virtue of being laws, or are they laws by virtue of being necessary?’ A “Humean” approach, according to which the laws are fixed (somehow) by the sub-nomic facts (which the laws do not help to constitute), tends to regard the laws as deriving their necessity from their lawhood; a fact’s natural necessity just is its being entailed by the laws. But this approach fails to make clear why there is a species of necessity associated with lawhood. By virtue of what do the laws bestow necessity on the facts that logically follow from them? On my view, by contrast, lawhood derives from necessity, which is identified with membership in a non-maximal sub-nomically stable set – and a set’s sub-nomic stability, in turn, is fixed by the subjunctive facts: the facts expressed by subjunctive (including counterfactual) conditionals. This approach makes clear why sub-nomic stability is associated with a species of necessity: because a set’s being sub-nomically stable consists of its being as invariant under counterfactual antecedents as it could (logically possibly) be. This sort of maximal inevitability (i.e., maximal unavoidability) is plausibly what necessity amounts to.

This picture reverses the usual view that the laws help to make various counterfactual conditionals true. On my view, it is the subjunctive facts that are the lawmakers. With facts about what is necessary (and facts about what is a law) riding on the subjunctive facts, the facts about necessities (and laws) become unavailable to help constitute the subjunctive facts. It then begins to look tempting to regard the subjunctive facts as ontological bedrock. This approach becomes even more attractive when various putatively “Humean” facts (such as facts about the instantaneous velocities of various bodies) are revealed to have an irreducibly subjunctive component. Subjunctive facts turn up smack dab in the middle of the “Humean” heartland.

Subjunctive facts are not spooky or exotic. We discover them in exactly the same way as we discover various facts “about the actual world”: by engaging in ampliative reasoning from our observations. A subjunctive fact about how the water in my glass would have behaved, if it had a given temperature and pressure, is arguably even less remote from my observations than a non-subjunctive fact about how some water actually behaves in some spatiotemporally very distant intergalactic region with that temperature and pressure. This comparison is obscured if we refer to the former water as existing in a merely possible world, whereas the latter water is our neighbor here in the actual world.

3:AM: What advantage has your approach, on which subjunctive facts are the lawmakers, got over rival approaches to explaining laws?

ML: The laws’ status as necessary yet contingent presents a significant challenge to many approaches to understanding lawhood. “Humeans”, such as David Lewis, have embraced the laws’ contingency. Indeed, where the laws are deterministic, Lewis is obliged to say that under some ordinary counterfactual antecedent, the laws of nature would have been different. (For instance, if I had arrived at my office a minute earlier, then there would have been a small “miracle” – a small violation of the actual laws.) This strikes me as intuitively implausible and contrary to scientific practice. This view also makes it difficult to see how the laws deserve to be termed “necessary” considering that they have such measly counterfactual resilience. Other accounts of law go to the opposite extreme, embracing the laws’ necessity but failing to do justice to their contingency. Dispositional essentialists, for example, regard the laws as expressing the essences of various natural properties or kinds. Accordingly, the laws are metaphysically necessary. But then the laws fail to occupy a place somewhere between the broadly logical necessities and the accidents. Once again, this seems to me both intuitively implausible and contrary to scientific practice. This view also makes it difficult to see how some laws can possibly transcend others in having a stronger variety of necessity, when necessity has already been “maxed out” right from the start in that all laws possess metaphysical necessity.

In short, if an account purports to understand lawhood in terms that do not themselves essentially involve subjunctive facts, then it is difficult to see how that approach will be able to account for the precise relation between lawhood and counterfactual invariance. I take that relation to be that the laws form a sub-nomically stable set larger than the set of broadly logical necessities but smaller than the set of all sub-nomic truths. If lawhood is cashed out as “F-hood”, where F-hood does not essentially involve subjunctive facts, then I fear that the only way to make F-hood connect to sub-nomic stability will be to insert this connection by hand. This problem afflicts the Armstrong-Dretske-Tooley view of laws according to which a law consists of a “nomic necessitation” relation among universals. How does that relation’s holding generate just the right amount of counterfactual invariance, neither too much nor too little? Any solution is likely to be ad hoc. Any account that does not take subjunctive facts themselves to be the lawmakers is likely to have to resort to adhocery in order to capture the laws’ special relation to counterfactuals.

3:AM: You’ve also thought about non-causal explanations in both science and mathematics. These explanations are contrasted with causal explanations aren’t they? Can you sketch what kinds of explanation we’re looking at here because don’t some philosophers think there aren’t any? Why do you say they are important – and what kinds of non-causal explanation aren’t you looking at?

ML: Scientific explanations that are “causal” derive their explanatory power by virtue of providing some (contextually relevant) information about the causes of the event being explained or, more broadly, about the world’s causal network. Even a law of nature, which is of the wrong ontological category to be caused by anything, can have a causal explanation. For example, take the law that the electrostatic field at various distances r from a long, thin, straight wire with uniform charge density λ is proportional to λ/r. An explanation that derives this law from Coulomb’s law (giving the force between two point charges) is causal because the derivation acquires its explanatory power by tracing the way that the charge elements in the wire causally contribute to the electric field.
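
For reference, the derivation being described runs roughly as follows (a compressed editorial sketch, in Gaussian units): each charge element of the wire contributes a Coulomb field, the components parallel to the wire cancel, and the perpendicular components add up to a field proportional to λ/r.

```latex
% Compressed sketch of the Coulomb-law derivation (Gaussian units): a charge
% element \lambda\,dz at height z along the wire contributes a field whose
% component perpendicular to the wire carries a factor r/\sqrt{r^2+z^2}.
\[
E(r)
\;=\; \int_{-\infty}^{\infty} \frac{\lambda\,dz}{r^{2}+z^{2}}
      \cdot \frac{r}{\sqrt{r^{2}+z^{2}}}
\;=\; \lambda\,r \int_{-\infty}^{\infty} \frac{dz}{(r^{2}+z^{2})^{3/2}}
\;=\; \frac{2\lambda}{r}\,,
\]
% which is proportional to \lambda/r, as the law states.
```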

However, although many scientific explanations qualify as “causal” in this broad sense, some do not. What makes such an explanation non-causal is not that it abstracts from the hurly-burly of particular causal interactions. Many causal explanations do that; they remain “causal” because they derive their explanatory power by describing (perhaps in abstract or general terms) the world’s causal relations. For instance, if we say that one trait rather than an alternative increased in frequency in a population because it was biologically fitter, then we have abstracted away from the particular matings, predations, reproductions, and so forth that occurred. Nevertheless, we have explained by describing the causal landscape responsible for the outcome. Some explanations do not work that way.

Here is a simple example of one kind of non-causal explanation. Why is it that Mother fails every time she tries to divide her 23 strawberries evenly among her 3 children without cutting any (strawberries – or children!)? The answer does not have to do with the particular causal mechanisms that she used to distribute her strawberries. Rather, the explanation is that her success was mathematically impossible: 3 does not go into 23 evenly. The same considerations apply to explaining why no one ever managed to traverse all of the bridges of Königsberg (in Euler’s famous example) or why no one ever untied a trefoil knot. These attempts never succeeded because they couldn’t succeed; success at these tasks is mathematically (or topologically) impossible. Failure is no reflection of what kinds of causal interactions happen to be permitted by the ordinary laws of nature.
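
The modal force of this explanation can be made vivid by brute force (a small Python sketch, not part of the interview): every one of the finitely many ways of handing out 23 whole strawberries to 3 children fails to be an even split, and the arithmetic reason is the nonzero remainder of 23 divided by 3.

```python
# Editorial sketch: enumerate every way of splitting 23 whole strawberries among
# 3 children and confirm that none of them gives each child the same number.
splits = [(a, b, 23 - a - b)
          for a in range(24)
          for b in range(24 - a)]
even_splits = [s for s in splits if s[0] == s[1] == s[2]]

print(len(splits), "possible splits,", len(even_splits), "even ones")  # 300 possible splits, 0 even ones
print("remainder of 23 divided by 3:", 23 % 3)                         # 2 -- the mathematical reason
```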

These examples illustrate only one variety of non-causal scientific explanation. I believe that there are several varieties.

3:AM: You discuss a type of non-causal scientific explanation as ‘explanation by constraint’. How do these explanations work?

ML: The examples that I just gave are all “explanations by constraint” because each works by showing how the fact being explained derives from facts having a stronger variety of necessity than the ordinary laws of nature possess. In the above examples, the facts doing the explaining have mathematical necessity. Other explanations by constraint employ laws of nature having an especially strong variety of natural necessity. For instance, as I mentioned earlier, some scientists have regarded the great conservation laws as modally stronger than the individual force laws and thus as constraining the particular kinds of forces there could have been. On this view, the reason why all of the various kinds of forces are alike in conserving energy, despite their diversity in other respects, is not that the electromagnetic force conserves energy (because of its force law), the nuclear force conserves energy (because of its force law), and so on for each of the various actual kinds of forces – as if it were a giant coincidence that all of the various kinds of force are alike in conserving energy. Rather, there is a common reason why they all conserve energy -- namely, because the law of energy conservation limits the kinds of forces there could have been.

Another notable explanation by constraint uses the spacetime symmetry meta-laws to explain why the conservation laws hold. One of Einstein’s insights (which is obscured by treating light as if it plays a special role in relativity) was that the Lorentz transformations do not arise from electrodynamics (which is concerned with one particular kind of causal interaction), but rather are explained by constraints imposed by spacetime. Relativity, as Einstein emphasized, is a “theory of principle”, not a “constructive theory”.

3:AM: Do we find explanations by constraint in maths as well?

ML: No. Explanations by constraint are scientific explanations. Some of them feature mathematics, but they must be sharply distinguished from explanations in mathematics (rather than science), which have mathematical theorems (or other kinds of mathematical facts) as their targets.

Of course, explanations by constraint and explanations in mathematics are alike in being non-causal explanations. They have other similarities as well. For example, consider the fact that the solutions to the polynomial equation z³ + 6z – 20 = 0 are 2, –1 + 3i, and –1 – 3i, whereas the solutions to z² – 2z + 2 = 0 are 1 – i and 1 + i. In both examples, the solutions that are not real numbers form pairs of complex conjugates (that is, pairs of the form a + bi and a – bi). We might ask whether this is a coincidence of these two examples or no coincidence at all. A proof of the solutions in the two examples that treats the two examples separately would not explain why the solutions in both of these examples are alike in forming complex-conjugate pairs. An explanation of this fact must treat the two examples together and alike – just as an explanation of the fact that two separate kinds of force both conserve energy must treat the two kinds together and alike if their similarity is no coincidence. This is one respect in which an explanation in mathematics can be similar to an explanation by constraint in science.
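
As a quick numerical check of these two examples (a NumPy sketch, not anything in the interview), the non-real roots do indeed pair up as complex conjugates:

```python
# Editorial sketch: compute the roots of z^3 + 6z - 20 and z^2 - 2z + 2 and check
# that the non-real roots come in complex-conjugate pairs.
import numpy as np

for coeffs in ([1, 0, 6, -20],   # z^3 + 6z - 20
               [1, -2, 2]):      # z^2 - 2z + 2
    roots = np.roots(coeffs)
    nonreal = [z for z in roots if abs(z.imag) > 1e-9]
    paired = all(any(np.isclose(z.conjugate(), w) for w in nonreal) for z in nonreal)
    print(np.round(roots, 6), "conjugate-paired:", paired)
```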

3:AM: Other kinds of non-causal scientific explanation are the ‘really statistical’ and ‘dimensional’ explanations. What are these?

ML: A “really statistical” explanation works by identifying the fact to be explained as an instance of some characteristically statistical phenomenon such as regression toward the mean. As an example, consider the fact that although the students who did well on some course’s first exam tended to do well on its second exam, the students who did the very best on the first exam were by and large not the students who did the very best on the second exam. The mere fact that there exists a statistical relation rather than a perfect correlation between the outcomes of the two exams ensures that extreme scores on the first exam tend to be associated with less extreme scores on the second. This is regression toward the mean. This explanation does not identify any particular causes (or even chances), although undoubtedly, in each student’s case, there were causes of that student’s exam performances. Rather, this explanation explains the result as just a statistical fact of life. Unlike a causal explanation, regression toward the mean gives this fact an explanation common to many other facts, such as the fact that the very worst performers on the first exam are by and large not the students who did the very worst on the second exam (and the fact that baseball’s Rookies of the Year tend to undergo a “sophomore slump”). This explanation can thereby unite these cases -- precisely because it fails to engage with their causes.
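
A toy simulation shows the pattern (an editorial Python sketch with made-up numbers): when two exam scores share an underlying ability component plus independent noise, the top scorers on the first exam come out closer to the average on the second.

```python
# Editorial sketch of regression toward the mean: two exam scores that are
# positively but imperfectly correlated, via a shared ability term plus noise.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(0.0, 1.0, n)           # each student's underlying ability
exam1 = ability + rng.normal(0.0, 1.0, n)   # score = ability + exam-specific noise
exam2 = ability + rng.normal(0.0, 1.0, n)

top = np.argsort(exam1)[-100:]              # the 100 best performers on exam 1
print("their mean score on exam 1:", round(exam1[top].mean(), 2))
print("their mean score on exam 2:", round(exam2[top].mean(), 2))  # markedly closer to the mean of 0
```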

I think that the difference between natural selection and random genetic drift is not that they produce different outcomes (there is some finite likelihood that natural selection will result in the less fit alternative increasing in frequency) or that they involve different causal mechanisms (since discriminate selection can occur even in a case of drift). Rather, explanations by natural selection are causal explanations whereas explanations appealing to drift are “really statistical” explanations.

A “dimensional explanation” works by showing how the fact to be explained is a consequence merely of the fact that there exists some (unspecified) relation among some subset of various quantities, a relation that holds regardless of the units in terms of which those quantities are measured. For instance, consider a mass m lying on a smooth table and attached by a spring to a fixed point. Suppose we pull the body, stretching the spring (with spring constant k) to a distance x beyond its equilibrium length, and then let go. The body then oscillates. Why is its period of oscillation independent of x? (Notice that this question does not ask why the body oscillates. We would need a causal explanation to answer that question.) From the fact that the period depends only on x, m, and k, it follows (if the period’s relation to these other quantities holds whatever the units) that the period must be independent of x. That is because x is the only one of these quantities that involves a dimension of length. So if x figured in the relation, then there would be no other quantity available to cancel out its length dimension, whereas the period’s own dimension (time) involves no length at all. This explanation does not work by describing the causes operating. Dimensional explanations thus have the power to unite various cases that, in causal terms, are very dissimilar, but are alike in their dimensional architecture.
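
Here is roughly how the dimensional bookkeeping goes in this example (an editorial reconstruction, assuming for simplicity that the unit-independent relation takes a power-law form):

```latex
% Editorial reconstruction: suppose the period T is fixed by m, k, and x through a
% unit-independent relation of the form T = C\, m^{\alpha} k^{\beta} x^{\gamma}.
\[
[T] = \mathrm{T}, \qquad [m] = \mathrm{M}, \qquad
[k] = \mathrm{M\,T^{-2}}, \qquad [x] = \mathrm{L}.
\]
\[
\mathrm{T} \;=\; \mathrm{M}^{\alpha}\,\bigl(\mathrm{M\,T^{-2}}\bigr)^{\beta}\,\mathrm{L}^{\gamma}
\;\Longrightarrow\;
\alpha + \beta = 0, \qquad -2\beta = 1, \qquad \gamma = 0,
\]
\[
\text{so that } \alpha = \tfrac{1}{2},\; \beta = -\tfrac{1}{2},\; \gamma = 0:
\qquad T = C\,\sqrt{m/k}\,, \text{ independent of } x.
\]
```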

3:AM: Why are non-causal explanations significant in maths?

ML: There are many virtues that mathematicians seek and value in proofs. Some of these virtues are pragmatic or aesthetic, such as brevity and visualizability. Another is fertility: some proofs reveal methods or results that can be used to prove other theorems. Other virtues that some proofs possess have to do with the role that mathematical proofs play in constituting important mathematical discoveries themselves (as when a proof reveals a connection between heretofore unrelated mathematical fields). I believe explanatory power to be such a virtue.

My first undergraduate mathematics course used Michael Spivak’s marvelous calculus textbook. In Chapter 23, Spivak pointed out two Taylor series with the same convergence behavior. He asked why they are alike in this way when they are otherwise so dissimilar. He said that sometimes, this sort of why question has no answer, but in this case, there is an explanation – but it will have to wait until Chapter 26, when complex numbers will be introduced. I was thunderstruck that facts about imaginary numbers could help to explain facts about the real numbers. For an expansion in mathematical ontology to pay explanatory dividends like that turns out not to be unusual. Points at infinity in projective geometry likewise help to explain facts that Euclidean geometry mistakenly portrays as coincidental. (I sometimes feel like I am still just working through the mathematics and science that I encountered in my first year of college!)

If we are Platonists, then we might regard the fact that some mathematical entity’s existence would explain a given mathematical fact – a fact that would otherwise have to be a mysterious coincidence -- as some evidence for the entity’s existence. Our search for mathematical explanations thereby leads us to discover mathematical entities (just as in science, the search for explanations leads us to discover new entities). On a non-Platonist view, on the other hand, perhaps mathematics consists merely of various interrelated “games” of symbolic manipulation. Pure mathematicians have extended their “games” when doing so has appeared to them to be worthwhile on purely mathematical grounds. A given game may be worth playing at least partly by virtue of its relations to other mathematical games that are independently worthwhile. One of the features that can make a game worthwhile is that it enables mathematical explanations to be given (or demanded) that could not be given (or demanded) before.

3:AM: Given that there is a great diversity of non-causal explanations why would it be wrong to suppose that what we had here was nothing more than things that had nothing in common except that they were called ‘explanations’?

ML: I do not think that all scientific explanations (whether causal or non-causal) and all explanations in mathematics acquire their explanatory power in the same way. Nevertheless, there remain many elements common to many kinds of explanation, knitting them together.

For instance, many kinds of explanation are able to make various similarities non-coincidental. This may be done by the explanation’s identifying a common cause or a common explainer that is not a cause. It may be done by the explanation’s appealing to a common dimensional architecture or a common characteristically statistical phenomenon. In mathematics, a result may be rendered non-coincidental by its components having a common mathematical proof or by separate mathematical proofs of a common sort.

Likewise, various properties qualify as natural (that is, as genuine respects of similarity) by virtue of their roles in explanations, whether those explanations are causal or non-causal or even explanations in mathematics. For example, mathematicians sometimes regard all pairs of co-planar lines as alike in having a point of intersection, whether that point is on the Euclidean plane or (in the case of parallel lines) at infinity. What makes this property of having a point of intersection a genuine respect in which all pairs of lines are similar rather than a wildly disjunctive, gerrymandered, grue-like “property” that papers over a real difference between parallel and non-parallel lines? The answer, in my view, is that what makes this property natural is its role in explanations in mathematics. In this way, explanations in mathematics are like scientific explanations, which also render certain properties natural, as when dimensional explanations render natural the property of having a given Reynolds number, for instance.

In both mathematics and science, there are tight connections between being an explanation, being able to render similarities non-coincidental, and being a natural property. These connections contribute toward making all of these varieties of explanation into species of the same genus.

3:AM: And finally, are there five books you could recommend to readers here at 3:AM (other than your own) that will take us further into your philosophical world?

ML:

I have loved Nelson Goodman’s Fact, Fiction, and Forecast from the moment I first encountered it.

The essays in Wilfrid Sellars’s Science, Perception, and Reality have influenced my thinking in many ways.

David Armstrong’s What is a Law of Nature? remains the best introduction to its title question.

Imre Lakatos’s Proofs and Refutations is a bracing antidote to many mathematics textbooks. I gain a lot from thinking about the history of science and so I cannot refrain from recommending two scientific biographies in particular:

Janet Browne’s two-volume biography of Charles Darwin (Vol. 1: Voyaging; Vol. 2: The Power of Place)

and Paul Nahin’s Oliver Heaviside: Sage in Solitude.

ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.
