Interview by Richard Marshall
'When people argue that the idealised nature of scientific models is an obstacle to taking them to be true, or approximately so, my immediate reaction is to say that we can accommodate this feature, if we shift from truth-as-correspondence to this form of pragmatic or ‘quasi’-truth.'
'In answering the question, ‘what are theories?’, we have to go back a further step and answer the question ‘what are propositions?’. One account takes them to be abstract entities in some sense. But that raises a version of the ‘Benacerraf problem’: if propositions are abstract and if abstract entities cannot stand in causal relations with concrete entities (such as ourselves) and if all knowledge is grounded in such causal relationships then how can we have knowledge of propositions? But yet we do seem to have knowledge of theories!'
'Generally I am interested in the extent to which certain moves and devices from the philosophy of art might be usefully exported to the philosophy of science (and vice versa!).'
'Yes, my rather brutal response to this debate about what sorts of things theories and models are, what identity conditions they satisfy and so on, is to chop right through it and insist that the whole discussion got off on the wrong foot by taking theories and models to be things, of some sort, to begin with. Literally, on my view … There Are No Such Things As Theories! So, they are not abstract artefacts, inhabiting World 3, nor are they fictions, nor anything else.'
'That core, then, lies in the claim that the world, as described by modern physics, should be understood not in terms of objects that ‘possess’, in some sense, properties but in terms of the structures from which those properties may be obtained.'
'What I’m saying is that the structure of the world, or, better, just the world, is, in part, a nexus of symmetries and associated laws represented by the relevant mathematics (such as group theory). This is not to say that the structure of the world is mathematical, any more than it is when I assert that I have two custard cream biscuits in front of me.'
'The Viking Approach ... rather than dismiss current, so-called ‘analytic’ metaphysics out of hand for not paying close enough attention to modern science, we should regard it as a kind of ‘toolbox’ containing various devices and moves that we can exploit. Not all of those will be ‘fit for purpose’ but even those that are not, in one particular form, might be refashioned into new shape. '
'In recent years we have seen scientists heeding philosophers again – Wendy Parker regularly gets invited to climate science conferences, for example, and interacts closely with scientists and Samir Okasha has been acknowledged by biologists as having helped clarify issues to do with levels of selection, for example, in the foundations of evolutionary biology. But then in my experience biologists have always been more receptive to philosophical ideas than physicists! Even so, with the resurgence of interest by practising physicists in the foundations and interpretation of quantum theory, following all the work on Bell’s Theorem and the revival of interest in the Many Worlds view and Bohm’s theory and so on, there has been more interaction between physicists and philosophers, certainly much more than when I started my PhD at Chelsea.'
Steven French is interested in philosophy of science, the philosophy of quantum physics, and metaphysics. Here he discusses what scientific theories and models are, what propositions are, the 'Semantic' approach, whether theories and models are like artworks or useful fictions, how we can rely on things that don't exist, scientific practice, whether his approach holds policy dangers, ontic structural realism, what a structure is, the Viking Approach, how Ontic Structural Realism can depend on theories if they don't exist, counterfactuals and laws, and whether scientists should heed the philosophers.
3:16: What made you become a philosopher?
Steven French: The short answer is, physics made me a philosopher! I was studying physics at the University of Newcastle upon Tyne where the department was heavily biased towards applied research (particularly geophysics). So, the classes in quantum mechanics were very much along the lines that Nancy Cartwright portrays in How The Laws of Physics Lie – basically we learned which type of Hamiltonian to ‘take down off the shelf’ and apply to a given situation. But I remember asking the lecturer what it could mean for something to be in a superposition and he directed me to a colleague in the philosophy department, who told me about Michael Redhead, one of the world’s leading philosophers of physics but who at that time worked part-time in the Dept. of History and Philosophy of Science at Chelsea College, University of London. The dept was set up by Heinz Post, himself a physicist by trade who insisted on only admitting to the PhD programme students who had degrees in science or mathematics.
So my first year in the programme was spent taking a crash course on everything from mathematical logic and foundations of probability theory to epistemology and, of course, history and philosophy of science. Basically that one year is the extent of my philosophical education (some might say it shows!).
3:16: A puzzle you’ve explored is about what scientific theories and models are. The current pandemic crisis has brought to everyone’s mind the idea of scientific models as authorities try and work out how to make informed decisions. How best do we assess models and theories – is it about whether they’re true or whether they achieve their policy goal of reducing uncertainty in a more fine-grained way?
SF: There’s been a lot of really interesting work on how best to evaluate models in recent years. Wendy Parker, for example, has a great paper coming out in Philosophy of Science in which she analyses the plausible idea that models should be assessed with respect to their adequacy or fitness for particular purposes. She argues that for a model to be considered as adequate in this sense, we should take into account a range of factors involving not only the relationship with the target system but also the user, the relevant methodology, background circumstances and so on.
One of Wendy’s main interests has been climate change models but it seems to me that her framework could be more or less straightforwardly extended to accommodate the kinds of modelling that have been considered in the current crisis. As Wendy notes, adequacy for purpose differs from representational adequacy in that it involves a broader range of considerations and various aspects of the model typically can’t be assessed independently of one another. Now, it’s the latter sense of adequacy that I’ve mostly been interested in.
Again to get a bit autobiographical, when I moved to Brazil for my first full-time job, I started working with Newton da Costa, a well-known Brazilian logician who, together with a couple of colleagues from Chile, had recently developed a formalisation of the notion of pragmatic truth, broadly along the lines of what Tarski did for the correspondence account. Following Peirce’s insistence that we should '... consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have’ they articulated this notion of pragmatic or ‘quasi’-truth through the technical device of a ‘partial structure’, which Newton and I claimed could also be used to formally represent scientific models. And together with Otávio Bueno and James Ladyman, we argued that it could accommodate the way idealisations work in modelling as well as capturing the various ways that models, both theoretical and empirical, could be related.
So, when people argue that the idealised nature of scientific models is an obstacle to taking them to be true, or approximately so, my immediate reaction is to say that we can accommodate this feature, if we shift from truth-as-correspondence to this form of pragmatic or ‘quasi’-truth. And I think that formal framework could also be extended to embrace the kind of analysis that Wendy Parker presents, so that we can appropriately assess models as they are deployed by epidemiologists and the like, without losing sight of an appropriate notion of truth.
3:16: So one answer to the question what models and theories are is that they’re sets of propositions closed under logical deduction. Can you flesh out this response and say what problems you have with it – in particular issues that arise around the follow up question: so what do you mean by proposition?
SF: For many years that was the standard, or so-called ‘Syntactic’, view of theories and is often associated with the logical positivists (although as Sebastian Lutz and others have pointed out, the likes of Carnap, for example, had a much more extensive set of formal tools at their disposal than just (classical) first-order predicate logic). At the core of this view is the idea that a theory is the kind of thing where you have a limited set of fundamental claims or principles which function as axioms, from which all else follows by (typically classical) deduction. But as Bas van Fraassen noted, in his now classic book, The Scientific Image, the ‘axioms’ that you find in scientific textbooks and papers don’t look anything like these kinds of fundamental claims. It’s also not clear how modelling can be captured on this view – at best models come out as ‘mini-theories’ or ‘theoruncula’ as Braithwaite called them. And finally, van Fraassen argued, the way in which theories and data are related in actual scientific practice is not best captured by what was then the standard view.
Granted, we can now appreciate that the Syntactic View does have the resources to represent these relationships appropriately; at the time, though, for someone like me, coming from a science background, van Fraassen’s points packed a lot of punch! And of course, within that Syntactic View the claims involved are, again standardly, understood in terms of propositions, not least because Einstein’s theory of General Relativity, say, is the same, in some sense, whether it is presented via sentences of English or Portuguese. So, following this line, in answering the question, ‘what are theories?’, we have to go back a further step and answer the question ‘what are propositions?’. One account takes them to be abstract entities in some sense. But that raises a version of the ‘Benacerraf problem’: if propositions are abstract and if abstract entities cannot stand in causal relations with concrete entities (such as ourselves) and if all knowledge is grounded in such causal relationships then how can we have knowledge of propositions? But yet we do seem to have knowledge of theories! An alternative is to adopt some form of ‘fictionalism’, where the idea is to adopt the same stance towards propositions as we do towards entities that feature in novels, tv shows or movies for example. Claims involving such fictional entities are strictly false, as in ‘Buffy is a reluctant hero who wants to live a normal life’, but these entities can serve a useful purpose by functioning as representational aids. This is an interesting approach, not least in the way that some of its advocates have drawn on accounts of fiction from aesthetics and exported them into the philosophy of science.
But it seems to require that the relevant claims about theories should be taken to be strictly false; so when the physicist Paul Davies writes that ‘General Relativity is the cornerstone of cosmology and astrophysics’, he is actually stating a falsehood. That seems a hard pill to swallow, not least because one of the lessons we are supposed to have learned from the downfall of logical positivism is that we should be wary of views that insist that we should not take scientists’ talk literally.
And finally, of course, this view of theories as sets of propositions invites us to see the philosophy of science as just part of the philosophy of language, taking us ‘un mille milles de toute habitation scientifique, [leaving us] isolated in our own abstract dreams’, as van Fraassen, again, so aptly put it.
3:16: Another answer to the question which you say is becoming the new hegemony is the view that theories are families of models – what you call the ‘Semantic Approach’. Can you explain what this alternative is and why, despite its critics, you think this is the best way of answering your question about what theories and models are, even though you don’t think that characterizing theories as partial structures means they really are such structures?
SF: So, as I noted above, what are called ‘axioms’ in a scientific text don’t seem to correspond to what a logician would call axioms. Instead, van Fraassen, Ron Giere and others suggest that they should be understood as describing certain models, and according to the Semantic or Model-Theoretic Approach, a theory should be understood in terms of such a family of models. How philosophers of science formalise this notion of ‘model’ then varies. Patrick Suppes, who first presented this view in his studies of mechanics, used set theory, famously declaiming that ‘philosophy of science should use mathematics, not meta-mathematics’. van Fraassen himself prefers to use the framework of ‘state spaces’, following a line initiated by Weyl and further developed by Beth. On the Suppesian account, which da Costa and I followed, a model can be (crudely, perhaps) represented as ⟨A, R⟩, where A is the relevant set of entities, such as electrons, genes or whatever, and R is the set of relations that hold between them (and you might want to stick other things in there, like functions and so on …). You can then use the resources of set theory to characterise the relationship between a theory and the relevant data, say, in terms of the appropriate empirical substructures being embedded in the theoretical structures (instead of talking about theoretical claims being related to empirical ones via ‘correspondence rules’ or ‘bridge principles’, as on the Syntactic View). da Costa’s ‘partial structures’ take the relations to be ‘partial’ in the sense that the R is effectively decomposed into those relations that we know hold between the elements in the set A, those we know don’t hold and those for which we don’t know whether they hold or not.
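The three-way decomposition just described can be sketched informally in code. This is purely my illustrative gloss on the idea, not da Costa and French's formalism (the class and attribute names are my own invention): a domain A together with a binary relation split into the tuples known to hold, those known not to hold, and those whose status is open.

```python
from itertools import product

class PartialStructure:
    """Toy sketch of a partial structure <A, R>: the relation R over
    domain A is decomposed into three disjoint parts."""

    def __init__(self, domain, holds, fails):
        self.domain = set(domain)
        self.holds = set(holds)   # R1: pairs known to stand in the relation
        self.fails = set(fails)   # R2: pairs known not to
        # R3: pairs whose status science has not (yet) settled
        self.unknown = {p for p in product(self.domain, repeat=2)
                        if p not in self.holds and p not in self.fails}

# Toy example: a two-element domain with one settled pair each way;
# the remaining pairs fall into the 'unknown' third component.
ps = PartialStructure(domain={'a', 'b'},
                      holds={('a', 'b')},
                      fails={('b', 'a')})
print(sorted(ps.unknown))  # -> [('a', 'a'), ('b', 'b')]
```

On this picture, theory change can be modelled as tuples migrating out of the `unknown` component as inquiry proceeds, which is one way of seeing why partial structures suit the "open-ended" character of scientific practice that French describes.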
He and I, together with Otávio and James then argued that this gives us an even better way of capturing a whole range of features of scientific practice, from the way that theories develop and evolve, to the complex hierarchy of structures that take us from the empirical up to the most theoretical. We’ve also looked at various cases from the history of science, such as the development of the first theory of superconductivity, which we’ve suggested can be nicely understood using partial structures. And more recently Otávio and I have argued that the supposed mystery of the applicability of mathematics to science, particularly physics, simply evaporates once we cast the relevant episodes – drawn from the history of quantum theory – into this framework. Because of all this, and granted that the ‘Syntactic’ alternative has more resources than it is often credited with, I still think that the Semantic Approach represents a range of core features of scientific practice more perspicuously, perhaps because as James Ladyman once put it, it ‘wears the structures on its sleeve’, and physics, at least, is all about the structures!
Having said that, from the very beginning of our work together, Newton and I were wary of reifying them, of taking theories to be set-theoretic structures, whether partial or otherwise. Rather, we suggested, they should be regarded as simply a formal device that allows us to better characterise and represent the above features. In our book, Science and Partial Truth (OUP 2003) we even sketched the outlines of an alternative approach based on category theory, the point being that philosophers should choose whatever device works, in terms of appropriately capturing those features of scientific practice they’re interested in. My most recent book, There Are No Such Things As Theories (OUP 2020) follows that general line.
3:16: So is it helpful to think of models and theories as artworks or abstract artifacts to try and understand their ontological status?
SF: I think we have to be careful thinking of them that way as the comparison can be fruitful in some respects but also misleading in others. Generally I am interested in the extent to which certain moves and devices from the philosophy of art might be usefully exported to the philosophy of science (and vice versa!). You can see this very clearly in the extensive discussions of representation in science, where Otávio and I have tried to apply the partial structures approach and where models and theories are quite explicitly compared to certain kinds of depictive paintings so that analyses of the representational role of the latter from aesthetics are then carried more or less directly across. But I also think that not enough consideration is given to the issue of when, exactly, such transfer is appropriate and when it is not. Sometimes, for example, certain cases are drawn on from art and put forward as counter-examples to particular accounts of representation in science, such as the partial structures approach.
But then it’s not always clear that these purported counter-examples are relevant to the scientific case. More interestingly, I think, artworks also differ from scientific theories and models in crucial ways, such as their use of symbolism, for example. So, in the novel Jane Eyre, Jane is locked in a red room which can be taken to symbolize a womb. In Dutch ‘vanitas’ art a peeled lemon symbolizes the bitterness of life. And in a particularly striking example, Holbein’s The Ambassadors incorporates a skull, anamorphically rendered in the foreground, again reminding the viewer of the inevitability of death. (Even more interestingly, some have suggested that Holbein painted it this way to force the viewer to shift perspective on the scene as a whole, encouraging speculation that there is some hidden meaning.) Of course, models and theories use symbols – ‘v’ for velocity, ‘s’ for spin and so on – but not in this sense, of deliberately hiding some deeper meaning.
Accounts of representation in art that accommodate the former use of symbolism might not be so appropriate when it comes to representation in science. Theories are also often compared to musical and literary works – just as Beethoven’s Fifth Symphony is not to be identified with Beethoven’s actual score, so the theory of General Relativity is not to be identified with the actual paper that Einstein presented to the Prussian Academy of Science. This has encouraged some philosophers to identify both theories and artworks with abstract objects, in some sense. Popper, for example, famously placed theories alongside musical and literary works, such as Beethoven’s symphony and Hamlet, in his ‘World 3’ of ‘intelligibles’, or ‘ideas in the objective sense’. But then, with creativity now understood as discovery, the nature of the relationship between the artist or scientist and ‘their’ work becomes problematic.
Amie Thomasson argues that such works should be regarded as ‘abstract artefacts’, brought into being and sustained by the intentions of their ‘creators’. I think that’s a really interesting view but it is still difficult to see how to accommodate the various heuristic processes involved in both cases – Beethoven produced numerous ‘sketches’ in the development of his symphony and Einstein’s detours and false starts on the way to his General Theory are well-known. If all of these proto-works ‘live’ in World 3 then not only is it going to be really cluttered (!) but something more needs to be said about how the heuristic moves followed in practice correspond to the relevant inter-relationships in the abstract realm. Ultimately I think these sorts of accounts raise too many problems and we should adopt an entirely different stance towards the ontological status of both artistic and scientific works.
3:16: Why not look at them as useful fictions?
SF: As I mentioned above, this is another example of an interesting and potentially very useful export from the philosophy of art. A number of philosophers of science, such as Roman Frigg, James Nguyen and Adam Toon have drawn on Kendall Walton’s view of literary fictions as props in games of make believe to argue that model descriptions should be understood in the same way. On this view, to say that a scientific model possesses certain properties is just to claim that within a certain game of make believe we should imagine the system being modelled as having these properties.
This sort of view has been developed in some detail over the past few years, giving rise to some insightful and sophisticated accounts that are pretty ‘light’ when it comes to their metaphysical baggage. Still, I am uncomfortable with it, not least because it requires us to understand scientists’ apparently straightforward talk about models and theories in terms of their engaging with a game of pretence and as I’ve already indicated, we’ve long since learned to be cautious about any view that enjoins us not to take scientists’ talk literally.
I’m also mindful of Michael Weisberg’s point (in his Simulation and Similarity, OUP 2013) that scientists often use models involving very general properties of infinite populations, as in population biology, and it is difficult to see how these can be understood as games of make believe involving the imagination (but having said that, I also think there is more to say about the nature of imagination in science – my student Alice Murphy has a nice paper – ‘Towards a Pluralist Account of the Imagination in Science’ – coming out in Philosophy of Science on this).
3:16: So you conclude that they don’t actually exist – which seems to be a very disturbing claim given that, for example, all the politicians fighting the pandemic are saying they’re relying on the scientific theories and models about the virus. How do you explain what we’re doing when we’re relying on something you say doesn’t exist?
SF: Yes, my rather brutal response to this debate about what sorts of things theories and models are, what identity conditions they satisfy and so on, is to chop right through it and insist that the whole discussion got off on the wrong foot by taking theories and models to be things, of some sort, to begin with. Literally, on my view … There Are No Such Things As Theories! So, they are not abstract artefacts, inhabiting World 3, nor are they fictions, nor anything else. This all goes back to some work I did a while ago with Pete Vickers (his book Understanding Inconsistent Science (OUP 2013) takes a similar stance towards supposedly inconsistent theories) and we also imported a device, this time from metaphysics, which Ross Cameron used to support his view that there are no such things as musical works: this is a form of truthmaker theory which allows that ‘x exists’ might be made true by something other than x and so you can take ‘a exists’ to be true, without being ontologically committed to a.
Metaphysical nihilists use this to claim that ‘this table exists’ is true, whilst also maintaining that, ultimately, there is no table, just the relevant metaphysical simples – in this case elementary particles, say – arranged in a ‘table shaped’ way (obviously that particular example needs some unpacking to take account of the relevant physics). Likewise, scientists’ talk about theories is made true, not by the properties of some abstract entity or their engagement with a game of make believe but by the relevant practices. So, for example, the claim that ‘quantum mechanics is empirically adequate’ can still be taken to be true but what makes it so is not some feature of quantum mechanics regarded as some thing or entity, whether abstract artefact or fiction or whatever, but rather the relevant features of scientific practice. In this case, these practices will obviously have to do with what goes on in the laboratory as well as everything that is involved in bringing theoretical claims and data together. Likewise, the claim that ‘Einstein’s General Relativity is the most beautiful of all theories in modern physics’ should be understood not in terms of attributing the aesthetic quality of beauty to some thing but as being made true by the relevant practices; that is, the manner in which a certain set of symbols are set down, are interpreted, and so on.
I think that could offer a new perspective on the attribution of aesthetic qualities to scientific theories in general, something I’ve become interested in through the work of Milena Ivanova and others. So, just as the metaphysical eliminativist insists that there are no tables, just ‘table shaped’ collections of elementary particles, so I argue that there are no theories (or models), just theory shaped bits of practice! However, I don’t think that should disturb anyone (except maybe Popperians and fictionalists), because to say that there are no theories in the above sense shouldn’t be taken to undermine the efforts of scientists in developing and presenting theories and models, including those used to model the current pandemic. What makes it true that a given model is adequate-for-purpose, say, are the relevant practices involved in developing and testing and applying the model and insofar as we can rely on those practices, we can rely ‘on’ the model (even though there is no such thing!).
3:16: How does looking at scientific practice help us understand your approach, and how can models and theories represent if they aren’t there, and how can they be true?
SF: Just to be clear, I’m not saying that theories and models are to be identified with ‘collages’ of practices, as some have suggested. Nevertheless, by taking these practices to be the truthmakers of claims (purportedly) about theories our consideration of these claims then leads us to a closer examination of these practices. And that should bring the philosophy of science into even closer alignment with the history of science.
When you look at the relevant history you can appreciate just how complex and diverse those practices are. So, not surprisingly, I’ve long been interested in the history of quantum mechanics and when you look closely at that you can see that the issue of what counts as ‘the’ theory runs throughout, with formulations by the likes of Dirac and Weyl vying with those of Heisenberg and Schrödinger, not to mention the on-going disputes over which interpretation to choose. But even in the case of classical mechanics, Mark Wilson has argued for years now that we shouldn’t take it to be a homogenous, monolithic block but rather a set of ‘quilt-like’ assemblages, that he calls ‘theory facades’, linked by various ‘ladders’ and ‘lifts’. By focusing on the theories and how they should be delineated and identified we generally lose sight of this richness and complexity.
Now of course, the question comes up: what sense can we make of talk of a theory or model representing a system if theories and models don’t actually exist?! Well, where is such talk situated? Typically, in the discourse of philosophers of science and (broadly) reflective scientists, and what’s going on here is that a particular meta-level construction is undertaken, which is taken to be ‘the theory’ and also taken to stand in a particular representational relationship with the relevant system, also appropriately represented. As far as philosophers of science are concerned, that meta-level construction might be put together in terms of either the Syntactic or Semantic Approaches and if the latter, in its partial structures form, the representational relationship might then be characterised in terms of partial isomorphisms which strictly only hold between the relevant formal structures. So, when it comes to the claim that, say, ‘Model M is a faithful representation of system S’, the relevant truthmakers should be understood as including certain philosophical or, again, broadly reflective practices as well as scientific ones.
Likewise, when it comes to truth. At the level of the discourse of science, truth – if you’re a ‘standard’ realist – should be cashed out in correspondence terms. (Note that to say theories don’t exist is of course not to say that the relevant entities, such as electrons, don’t exist either!). But at the level of philosophical discourse, we need to be careful not to beg the question against the anti-realist (about entities). So, we shouldn’t say that the relevant truthmakers for the claim that ‘quantum mechanics is true’ are the relevant scientific practices; rather we should expand the set of truthmakers to include the practices of realist inclined philosophers of science, which will include such moves as inference to the best explanation and so on. So, again, we can still talk about truth and representation when it comes to theories and models – we just have to be a little careful about what makes that talk itself true.
3:16: It’s interesting to see how different cultures seem to approach theories and models differently: in the USA there seems to be quite a bit of science denying which, arguably, has characterized some policy responses - whereas in Norway and Germany the approach towards the science has been more flexible and seemingly more understanding of what science theories can and can’t do. Why do you think that your question about what models and theories are matters – and do you think that maybe there are political and policy dangers in boldly saying that there aren’t any – a bit like when Nancy Cartwright claimed that all science lies?
SF: Actually, I’m not sure it does matter that much for that debate. I don’t think Cartwright’s claim that the laws of physics lie (in the sense that they shouldn’t be regarded as ‘telling the truth’, which, she argued, is what the lower-level ‘phenomenological’ laws do) actually held any policy dangers. On the contrary, I think she would claim that shifting our emphasis away from the high-level laws that we philosophers typically focus on to the messy lower-level principles that, she maintains, do the bulk of the explanatory work, is a good thing! Likewise, my denial of the existence of theories shouldn’t offer any comfort to the science deniers, given my insistence on practices as the truthmakers of claims purportedly about those theories. That’s where all the action is, and those practices can be used to slap the science deniers about the head just as easily and forcefully as the theories and models themselves, if not more so!
3:16: Boldly – again – you have defended ontic structural realism. This is a rather cool but at the same time wildly counter-intuitive position. Can you sketch for us what the claim is and why your version is better than rival structural realisms such as Humean structuralism which takes structures to be categorical or Chakravartty’s semi-realism?
SF: I honestly don’t find it ‘wildly counter-intuitive’! Maybe that’s because of my physics background and the significance of symmetry and group-theoretic structure for my PhD research. Let me drop into autobiographical mode again: before I moved to Leeds I received a letter from James Ladyman, asking me if I’d supervise his PhD based on the idea that we shouldn’t just claim that all we know is structure but that all there is, is structure. Now at the time I was collaborating with Décio Krause, another student of Newton da Costa’s, who was developing a logical and set-theoretic framework to accommodate the claim that quantum particles should not be regarded as individuals, in some sense. This claim is supposedly supported by a certain principle, fundamental to quantum physics, known as Permutation Symmetry. However, Michael Redhead and I had argued that, on a certain understanding, Permutation Symmetry could also be taken to support the alternative view of these particles as individuals (albeit, as we also maintained, Leibniz’s Principle of Identity of Indiscernibles can’t be taken to apply to them – I’ll come back to this).
So here it seems we had a particularly acute case of the underdetermination of metaphysics by physics. For me, James’ idea in that letter offered the basis of a response to this underdetermination: we shift our ontological focus away from the idea that quantum particles are objects, with the attendant issue of whether they should be metaphysically understood as individuals or not, to the underlying structures, such as that captured by Permutation Symmetry, which is typically expressed using the mathematics of group theory. And roaring down the A1 in James’ car to the regular philosophy of physics seminars at Cambridge (where Michael had moved to), he and I talked and argued about this (and partial structures and a whole bunch of other stuff!), back and forth, and even though our views have diverged since, I like to think that the core of ontic structural realism, as I see it, was laid down in those car trips (James of course then published his classic 1998 paper, ‘What is Structural Realism?’ and his subsequent book with Don Ross, Everything Must Go, OUP 2007).
That core, then, lies in the claim that the world, as described by modern physics, should be understood not in terms of objects that ‘possess’, in some sense, properties but in terms of the structures from which those properties may be obtained. Consider perhaps the two most fundamental kind properties into which all elementary particles can be divided: bosons and fermions (it turns out there are other theoretical possibilities – I’ll also come back to that). Photons, for example, belong to the former as do the other fundamental force carriers; electrons, on the other hand, are fermions and the structure of atoms and indeed of matter in general ultimately comes down to that fermionic nature. These correspond to two of the representations of the permutation group – there are others, as I just noted, corresponding to so-called ‘paraparticles’ which don’t seem to be realized in nature (this theoretical possibility looms large in my PhD thesis) – and so there is a sense in which these properties ‘drop out’ of Permutation Symmetry.
Or consider the fundamental properties of charge and spin – these likewise ‘drop out’ of the relevant symmetry for relativistic field theories, described by the Poincaré group, whose representations can be used to classify all the elementary particles. The point is, it is the structure, as represented by these symmetry groups, that yields these fundamental properties. Of course, just as the object-oriented realist should say something about what is meant by objects ‘possessing’ properties or properties being ‘instantiated’ in objects so the structural realist should explicate the sense in which symmetry structures yield properties or the latter ‘drop out of’ the former. Kerry McKenzie and Jo Wolff have argued that the usual notions of dependence aren’t up to the metaphysical job and I’ve suggested that we might appropriate Jessica Wilson’s work on determinability to both allow for symmetry structures as determinables to be in our fundamental base and to capture the relationship between such structures and the relevant properties.
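The permutation story sketched here can be made concrete with a toy Python example (an illustrative sketch only, not anything from the interview): the two one-dimensional representations of the two-particle permutation group correspond to states that pick up a factor of +1 (bosonic) or −1 (fermionic) under particle exchange, and the Pauli Exclusion Principle drops out of the antisymmetric case.

```python
# Toy model: a two-particle state is a dict mapping labelled
# configurations (a, b) to amplitudes. Permutation Symmetry requires
# physical states to be eigenstates of the exchange operator P, with
# eigenvalue +1 (bosons) or -1 (fermions).

def exchange(state):
    """Apply the two-particle permutation operator P."""
    return {(b, a): amp for (a, b), amp in state.items()}

def combine(s1, s2, sign):
    """Form s1 + sign*s2, dropping zero amplitudes (unnormalised)."""
    out = dict(s1)
    for cfg, amp in s2.items():
        out[cfg] = out.get(cfg, 0) + sign * amp
    return {cfg: amp for cfg, amp in out.items() if amp != 0}

# Particle 1 in mode 'x', particle 2 in mode 'y':
psi = {('x', 'y'): 1}

symmetric = combine(psi, exchange(psi), +1)      # bosonic: P|s> = +|s>
antisymmetric = combine(psi, exchange(psi), -1)  # fermionic: P|a> = -|a>

assert exchange(symmetric) == symmetric
negated = {cfg: -amp for cfg, amp in antisymmetric.items()}
assert exchange(antisymmetric) == negated

# Pauli exclusion drops out: two fermions in the same
# mode antisymmetrise to the zero state.
same_mode = {('x', 'x'): 1}
assert combine(same_mode, exchange(same_mode), -1) == {}
```

The point of the sketch is only that the symmetric and antisymmetric combinations exhaust the one-dimensional eigenstates of exchange; the higher-dimensional representations of larger permutation groups are the 'paraparticle' kinds mentioned above.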
Now, when it comes to the comparison with other forms of structural realism, in The Structure of the World I tried to adopt what I call a ‘big tent’ approach, embracing the kind of Humean version that Holger Lyre, for example, has developed. But in the end I just don’t think we should take structures to be categorical in that way. First of all, going back to James’ 1998 paper, there he emphasized the modal nature of relations among phenomena and how this established what might be called ‘clear blue water’ between structural realism and van Fraassen’s constructive empiricism. Secondly, Kerry McKenzie has argued that the kinds of properties that are described by modern physics, properties such as charge and spin, cannot be regarded as categorical, if by this we mean properties that are independent of their nomic role, with that role defined, in part, by the functional form of the law, because the laws of quantum physics cannot be, as she puts it, ‘shoehorned into functional form’. And finally, I think the Humean struggles to accommodate symmetry principles as fundamental, although my former student Callum Duguid has made some useful moves in that direction. Ontic structural realism is perhaps closer to Anjan Chakravartty’s ‘semi-realism’, at least insofar as both positions take modality to be ‘in’ the world. Where they differ is that semi-realism still retains a notion of object as the seat of causal powers.
Now I think that the notion of structure can also serve as such a ‘seat’ but I struggle with this Aristotelian idea that things have powers. Certainly, the standard account of these powers or dispositions in terms of stimulus and manifestation conditions just doesn’t mesh well with current physics, despite what some of the adherents of this view maintain (the very idea of such conditions seems wedded to a classical view of the world and it’s hard to see how a symmetry could be ‘stimulated’!). Nevertheless, Barbara Vetter has an interesting account of potentiality that I think could be appropriated by the structural realist and Anjan’s own view is certainly more sophisticated than the standard dispositionalist account. Indeed, he and I have gone back and forth about this over the years and as I say in the book, you can see ontic structural realism as a kind of ‘reverse-engineered’ dispositionalism: whereas the latter starts with the properties, imbues them with modally informed powers, which then yield the relevant laws, structural realism takes the laws and symmetries as fundamental and inherently modal and obtains the properties from them.
3:16: So what is it you’re saying a structure is? If it’s purely a mathematical/logical abstract equation or set of equations then surely it leaves out the ‘whatever-it-is-but-not-abstract’ reality? How does what you call the Semantic Approach help?
SF: What I’m saying is that the structure of the world, or, better, just the world, is, in part, a nexus of symmetries and associated laws represented by the relevant mathematics (such as group theory). This is not to say that the structure of the world is mathematical, any more than it is when I assert that I have two custard cream biscuits in front of me. James and I tried to knock such guilt-by-association allegations of Platonism on the head years ago! There simply is no ‘whatever-it-is-but-not-abstract’ reality left out on this view – what we are doing is describing the world as it fundamentally is and allowing our metaphysics to follow the physics. Now, I just said that the structure of the world is in part just these symmetries and laws, appropriately inter-related. I also noted above that Permutation Symmetry, for example, gives us more than we need, as it were, in the form of particle kinds that don’t seem to be realized in this, the actual world (my one and only publication in a physics journal looked at one aspect of the statistical behaviour of these ‘paraparticle’ kinds).
So we need to pin the structure down, as it were, to the ‘outcomes’ we actually observe. Jessica Wilson, again, has a nice phrase to describe these outcomes – ‘existential witnesses’. So, the bosonic and fermionic representations of the permutation group characterize the existential witnesses that effectively tell us which of the worlds corresponding to all the different permissible particle kinds we actually live in. That means that ‘the structure’ has to go beyond the symmetries and laws in order to be tied down or actualized in this way. Here again I’ve indulged in a spot of appropriation, taking out of its neo-Kantian context Cassirer’s idea of structure as a non-hierarchical, mutually informing and interlocking ‘Parmenidean sphere’ of symmetries, laws and measurement outcomes.
As for the Semantic Approach, on the one hand, as James said and as I mentioned above, it seems the appropriate framework in which to articulate ontic structural realism because it effectively wears the structure ‘on its sleeve’. On the other, it still seems to incorporate a commitment to objects, in the form of the set A in the simple formulation <A, R>. Here’s where we see the Big Split between structuralists! Those in the ‘moderate’ faction prefer to retain objects in their metaphysical pantheon, albeit in a minimalist, ‘thin’ form, in the sense that all the relevant properties, and even the objects’ identity, are given by the structure. So the elements of ‘A’ become just placeholders. Those, like me, on the so-called ‘radical’, eliminativist side prefer to dispense with the objects entirely – after all, a placeholder is only standing in for something else and metaphysically they’re otiose. We can still talk about elementary particles, say, as if they were objects, using, again, Ross Cameron’s truth-maker device – in this case what makes it true that ‘the electron has charge e’ are the relevant features of the structure, taken as fundamental (in the case of charge those features will have to do with Poincaré symmetry for example).
But for the eliminativist, the Semantic Approach now may not seem so perspicuous. One option is to come up with an entirely new formal framework for the expression and presentation of ‘pure’ structure that avoids any mention of objects, metaphysically or formally, from the word go. Now that’s a big ask and although several folk have tried, from Arthur Eddington, back in the 1930s, to Fred Muller more recently, none have been successful so far as I can see. Another possibility is to use category theory as our meta-level representational framework, since this also shifts the focus from objects to relations. Elaine Landry, Dean Rickles and Jonathan Bain have pushed for such a shift for many years now but even though I’m sympathetic, I’m just not convinced it’s got the resources to do what we want such a framework to do, or at least not straightforwardly and sufficiently clearly and at an appropriately fine-grained level (Vincent Lam and Christian Wüthrich have come out with some pretty convincing criticisms of the category theoretic approach).
That leaves us with set theory but we can maybe sidestep the accusation of question begging by simply insisting that although we write the presentation of structure from left to right, in the form <A, R>, it should be read ontologically, from right to left. In other words, don’t let orthographic convention dictate your metaphysics! So, despite appearances, the A can then be understood as metaphysically secondary to the R, where the relationship between them can perhaps be captured in terms of determinability or maybe some form of dependence that gets around McKenzie’s and Wolff’s objections.
But let me just say a couple more things: if you’re a scientific realist, I don’t think it’s enough just to say ‘Oh yes, I believe there are electrons’. You owe some account of what they are, metaphysically speaking. But then if you adopt the metaphysics of objecthood, you run into this problem that your putative objects’ ‘individuality profile’, as Katherine Brading nicely put it – that is whether the objects should be regarded as individuals or non-individuals – is underdetermined by the physics. Again, as James Ladyman said, a realism that recommends belief in entities with such a metaphysically ambiguous status is really an ‘ersatz’ form of realism!
So, here’s where the to and fro begins: first, you need to make the naturalistic move and look to current science, in this case, modern physics. And I think you need to look at it in its ‘au naturel’ form, as presented in the relevant papers and textbooks or whatever, not formalized or otherwise gussied up à la Quine, because that formalization itself may be metaphysically loaded in certain ways, e.g. by reintroducing objects again, even if only ‘thinly’, and you’ll be off on the wrong foot once more. And as I’ve said, once you look at quantum field theory and the so-called Standard Model of elementary particle physics, the fundamental role of symmetries looms large. Then you need to make the appropriation move and both look to relevant historical antecedents – for me those can be found in the work of Cassirer and Eddington – and rummage through current metaphysics for useful devices, such as dependence or determinability to ‘flesh out’ that physical picture.
So, although there’s a sense in which the structure of the world is indeed presented in the relevant theories and models, if we want to go beyond simply pointing to the laws and symmetries and insisting ‘That! That’s the structure of the world!’, we need to articulate those interlocking laws and symmetries in appropriate metaphysical terms. The issue then is where do we find those terms. One option is to go completely naturalistic, eschew all current metaphysics and come up with an entirely new structuralist metaphysics. But that route is fraught with danger. As I already mentioned, Eddington tried to do something along those lines and the book he was working on when he died, Fundamental Theory, is pretty much incomprehensible. The alternative is to pick and choose from what’s currently available, perhaps adjusting and beating into shape where necessary.
3:16: What’s the Viking Approach – and how does it help mitigate what looks like the disastrous consequences of eliminating objects? How can anything we say about objects be true if you’ve eliminated them?
SF: The Viking Approach is basically that move of appropriation. I called it that because of the popular image of the Vikings as plundering and pillaging, with the metaphysicians cast in the hapless role of the local peasants (the Vikings had a big impact here in the North of England)! But in subsequent work with Kerry McKenzie she persuaded me to call it the ‘Toolbox Approach’, the idea being that rather than dismiss current, so-called ‘analytic’ metaphysics out of hand for not paying close enough attention to modern science, we should regard it as a kind of ‘toolbox’ containing various devices and moves that we can exploit. Not all of those will be ‘fit for purpose’ but even those that are not, in one particular form, might be refashioned into a new shape.
So, take the example of the Principle of Identity of Indiscernibles again – despite what Redhead and I argued, Simon Saunders and Fred Muller have recently shown that a form of the Principle, suitably modified, can in fact be made to work, albeit subject to certain caveats. And the form of truth-maker theory that Ross Cameron has used so effectively can also be regarded as just such a tool by the eliminativist structural realist: there are no objects in our metaphysical pantheon but what we say ‘about’ them is made true by features of the fundamental structure. So, the claim that ‘electrons are fermions’ is made true by Permutation Symmetry, as is the claim ‘this table is solid’ (since solidity is ultimately due to Pauli’s Exclusion Principle, which drops out of Permutation Symmetry).
3:16: If ontic structural realism depends on scientific models and theories, in particular physics, and according to you those theories and models themselves don’t exist, then how can they be appropriately metaphysically understood via notions of dependence, which you say they must be, unless ‘appropriately metaphysically understood via notions of dependence’ refers to something non-abstract, concrete stuff? I guess this is asking why you think abstract laws are acceptable as our ‘fundamental base’?
SF: Ok, so ontic structural realism ‘depends’ on scientific theories only in the sense I indicated above; that is, not in a metaphysical sense, but in the sense that philosophers should look to modern science for our best view of how the world is, if you’re a realist, or how the world could be, if you’re of the anti-realist tendency.
But that doesn’t mean you should reify the theories themselves, of course – again, our talk about them, as in claims like ‘The Standard Model is underpinned by certain fundamental symmetries’ is made true by certain features of the relevant scientific practices. What the ontic structural realist urges is the reification of the laws and symmetries described by and presented in those theories. And what we call ‘objects’, in the form of elementary particles and so on are then dependent on these, in some sense, or related to them via determinability or whatever. It’s ultimately because of the role they play in modern physics that I think these laws and symmetries belong in our fundamental base.
However, I wouldn’t say that those laws and symmetries are abstract! I think we have to be careful with the whole abstract-concrete distinction here. On the one hand, I don’t think the world as structure is mathematical, as I’ve already emphasized, or abstract. On the other, I think ‘concrete’ too often gets explicated in terms that beg the question against structuralism. One way of approaching this issue might be through Humeanism: the Humean takes the laws as presented in our ‘best’ theories as descriptive of the regularities that there are ‘in’ the world (or better, that make up the world). The ontic structural realist says something similar, extending this move to symmetries as well.
Where they differ, as I’ve already indicated, is with regard to the supposed categorical nature of properties and the nature of modality. For me (and James and Nora Berenstain as well) the structure of the world is modal. However, unlike in the case of dispositionalism, as I said above, that modality doesn’t reside in certain powers but in the laws and symmetries themselves, regarded, perhaps, as primitively modal. In that sense this view of laws sits somewhere between the Humean and dispositionalist views.
3:16: How do you think the understanding of counterfactuals and laws as characterized in contemporary physics, and perhaps even chemistry and biology too, supports the ontic structural realist view?
SF: Whatever view of laws you hold, something has to be said about their apparent modal nature. The Humean effectively outsources that to some account of possible worlds, perhaps Lewisian or at least an ‘ersatz’ version. The dispositionalist grounds it, as I said, in the fundamental powers or dispositions in terms of which properties are characterized. The ontic structural realist takes it to be a primitive feature of the structure of the world, including both laws and symmetries of course – Permutation Symmetry, to use that example again, encodes the possibility of these other particle kinds. And however you understand such symmetries, whether from a structuralist perspective or not, that ‘encoding’ of counterfactual possibilities helps us understand, for example, how symmetry principles can explain certain phenomena, as Juha Saatsi and I have argued. Coming back, briefly, to the nature of theories and models, I also think there are interesting connections here with recent work by Michela Massimi in which she argues that models are ‘perspectival’ in this sense, I think, of encoding further possibilities that can then be explored.
The bottom line is that the nature of laws and symmetries as given in modern physics is hard to accommodate within either the Humean or dispositionalist views but I think it meshes well with ontic structural realism and in that way supports the latter. As for chemistry and biology, in The Structure of the World, I sketched how structural realism might be extended into those areas but it is tricky as you don’t have that nexus of laws and symmetries in biology, for example, as you do in physics. Still, I think you have analogs in the form of the Price Equation, for example, and certain kinds of models on the back of which a form of ontic structuralism fit for biology could be constructed – there are issues to do with the identity conditions for organisms that I think could likewise motivate that sort of position, although my colleague Ellen Clarke disagrees and she’s the real expert on those issues!
There’s also been some interesting work on extending a structuralist account to computational neuroscience by Majid Davoody Beni although again I’m not at all an expert in that area.
3:16: Ok, so all of this is, as I said, very bold and very cool metaphysics. It certainly pushes against those who are suspicious of metaphysics and philosophy generally having anything useful to say about the world now that science is so well developed. It’s clear that much of your metaphysics is done with a great deal of knowledge of contemporary science. Philosophy as you’re doing it is taking heed of science. As a take home can you say why science should heed philosophy?
SF: I think many would acknowledge that science used to heed philosophy much more, and certainly in the first few decades of the twentieth century when the foundations of modern physics were being laid down. Some of that influence has been ‘effaced’, as Tom Ryckman puts it, as in the case of Hermann Weyl and his work in the foundations of space-time theory or Fritz London and the foundations of quantum mechanics and superconductivity, both motivated by Husserlian phenomenology (London’s phenomenological approach to quantum theory is a current interest of mine).
But in recent years we have seen scientists heeding philosophers again – Wendy Parker regularly gets invited to climate science conferences, for example, and interacts closely with scientists and Samir Okasha has been acknowledged by biologists as having helped clarify issues to do with levels of selection, for example, in the foundations of evolutionary biology. But then in my experience biologists have always been more receptive to philosophical ideas than physicists! Even so, with the resurgence of interest by practising physicists in the foundations and interpretation of quantum theory, following all the work on Bell’s Theorem and the revival of interest in the Many Worlds view and Bohm’s theory and so on, there has been more interaction between physicists and philosophers, certainly much more than when I started my PhD at Chelsea. It’s quite common now to see scientists and philosophers of science together in conference symposia - Dean Rickles, Juha Saatsi and I put one together at the Biennial Meeting of the Philosophy of Science Association on structuralist approaches to quantum gravity with mathematical physicist John Baez and Lee Smolin, with the aim of trying to indicate new ways forward in that area (although I’m not sure how successful we were, to be honest!).
More recently, Sean Carroll, a well-known theoretical physicist at Caltech has collaborated on a paper about the Many Worlds interpretation with ‘Chip’ Sebens, a philosopher of physics, that was published in The British Journal for the Philosophy of Science. That’s a really encouraging example of how fruitful this sort of interaction can be. So I think science should heed philosophy primarily to help gain clarity on a range of issues, whether to do with the evaluation of climate change models or shedding new light on the foundations of physics – in effect, I think scientists should adopt the Viking Approach and look at philosophy as a toolbox offering an array of moves and devices to help them understand the way the world is! And philosophers of course, should be a part of that, helping them select those tools as well as developing new ones.
3:16: And are there five books you can recommend other than your own that will take us further into your philosophical world?
Ernst Cassirer, Determinism and Indeterminism in Modern Physics, Yale University Press 1956 (1936)
Arthur Eddington, The Nature of the Physical World (Gifford Lectures, Edinburgh University, 1927), Cambridge University Press, 1928
Bas van Fraassen, The Scientific Image, Oxford University Press 1980
Jorge J.E. Gracia, Individuality. SUNY Press, 1988.
Hermann Weyl, Philosophy of Mathematics and Natural Science, Princeton University Press 1949
Gracia’s book opened my eyes to the complex history of debate over issues to do with identity and individuality and van Fraassen’s not only introduced me to the Semantic Approach but also of course challenged me as a realist (his subsequent book Quantum Mechanics: An Empiricist View ties together all three themes – the Semantic Approach, identity of quantum particles and the underdetermination challenge to scientific realism). The works by Cassirer, Eddington and Weyl all offer different structuralist perspectives (Eddington is just so ‘out there’!) and all take the physics very seriously. Weyl’s in particular is an absolute tour-de-force.
ABOUT THE INTERVIEWER
Richard Marshall is biding his time.