Interview by Richard Marshall.


Stathis Psillos is the philosopher of science who thinks about why it's best to be a scientific realist, about structural realism, about John Worrall, Frank Ramsey, Henri Poincaré, about scientific theories as growing existential statements, about the metaphysical, semantic and epistemic aspects, about epistemic optimism, about Larry Laudan, Wilfrid Sellars, about truth, about Nancy Cartwright's version of realism, about causation and explanation, about the problem with powers, about causal descriptivism and about why we should heed what philosophers say. Let's go...

3:AM: What made you become a philosopher?

Stathis Psillos: It was a combination of science and politics. I wanted to study philosophy, but in Greece philosophy was (and in many quarters still is) tied to philology with anti-science overtones. I opted to study physics instead. From early on in my studies I was engaged in various philosophical projects—mostly concerning scientific method (I was struggling with a copy of Hume’s first Enquiry) and the transition from classical mechanics to quantum mechanics (I was fascinated with Aristotle’s account of potentiality and its possible use in QM). This was the early 1980s and in the European south there was lots of hope for progressive change of society. I was (and still am) a political mammal. I was motivated by Karl Marx’s Thesis 11: “The philosophers have only interpreted the world in various ways; the point, however, is to change it”. But soon I realized that Marx put the word ‘only’ in this statement for a purpose: the world should be interpreted and changed. This was (and still is) philosophy for me: the space in which human knowledge and praxis meet, interact and get criticised. Philosophy is essentially emancipatory. It is against intellectual passivity; against the tyranny of authority. By the late 1980s, I knew what kind of intellectual life I wanted to have and I was extremely fortunate to get it. I couldn’t be more thankful to all those who have made this adventure possible.

3:AM: You defend a version of scientific realism. You say there’s a pedigree to the debate between scientific realists and anti-realists that goes back to Pierre Duhem and Henri Poincaré at the end of the 19th century. They were discussing atomism and were structuralists, weren’t they, i.e. they thought a scientific theory represents the structure of the world. Is some kind of structural realism where you stand, with a Ramsey-inspired epistemic structuralism which says theories are ‘growing existential statements’?

SP: I have been a critic of structural realism because I think it draws an epistemic dichotomy where there is an epistemic continuum. The claim, as is now well known, is that of the unobservable world we (can) know only its structure. This little word ‘only’ (again!) creates an epistemic barrier. There is supposed to be something beyond this barrier to which there is no epistemic access. But what exactly is this? There are various suggestions, but none of them is very convincing for they presuppose that the two sides of the alleged barrier (the structure and what is non-structure) are in disconnection. So what is this allegedly inaccessible non-structure? Is it things qua bare particulars? I see no motivation in positing them. And if we do (for some reason or other) they are inaccessible by default. Is it the intrinsic properties of various entities posited by theories? For them to be inaccessible it must be the case that the relations in which various entities stand to each other do not supervene on the intrinsic properties of these entities. But again, even if some relations do not supervene on intrinsic properties (e.g. entangled states in Quantum Mechanics), it does not follow that all relations fail to supervene on intrinsic properties. To argue that all relations are non-supervenient on intrinsic properties is, in effect, to go for the existence of things-in-themselves; hence for a version of neo-Kantianism.

In the version of structural realism resuscitated by John Worrall in the late 1980s, the emphasis was on mathematical structure. But you do not get any serious predictions from mathematical structure in and of itself; hence if credit is to be given to a theory in virtue of its predictive success, the credit should go (at least partly) to the theoretical assertions of the theory (including its mathematical structure). Ramsey-sentence structural realism became a contender in the middle 1990s, though this approach had been discussed in the 1930s after the posthumous publication of Frank Ramsey’s paper on theories. The advantage of a Ramsey-sentence structural realism is that the structure of the theory is equated with its logical-deductive structure. To present a theory, then, is to assert that there are entities which realise this structure. But as the mathematician Max Newman proved back in the late 1920s, and as Richard Braithwaite noted in the late 1930s, the mere claim that there are entities which realise the (logical-deductive) structure of an empirically adequate theory is trivial, in the sense that the only information it offers is that the set of realisers has (and should have) the right cardinality.
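
To put the two technical notions just mentioned in standard textbook form (a gloss of my own, not Psillos's wording; the symbols Θ, T, O and X are illustrative notation): the Ramsey sentence of a theory Θ(T1,…,Tn; O1,…,Om) replaces each theoretical term Ti by an existentially bound variable,

\[ R(\Theta):\quad \exists X_1 \ldots \exists X_n \; \Theta(X_1,\ldots,X_n;\, O_1,\ldots,O_m), \]

and the Newman point is that any domain of the right cardinality can always be equipped with relations satisfying the purely structural part of such a claim; so, beyond its observational consequences, the bare assertion that "there are entities realising the structure" constrains the world only up to cardinality.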

In my recent attempts to better understand Henri Poincaré’s structuralism—which is very different from a Ramsey-sentence structuralism—I realised that Poincaré, like many of his contemporaries, was a relationist. That is, he thought that scientific theories can at best represent mathematically relations among things and that though relations require relata, the properties of the relata do not fix the relations. This view, as I noted above, had a Kantian pedigree and origin—the relata were equated with the things themselves. At the same time, however, Poincaré intended to ground the objectivity of science on the historically certified fact that mathematically expressed relations survive radical theory-change in science. These surviving relations were taken to express real relations among unobservable entities in the world. He was adamant that they survive because they represent real relations and not the other way around. The key issue, I think, is the status of relations vis-à-vis their relata and their properties. A sharp epistemic dichotomy between structure and things (non-structure) requires a sharp across-the-board disconnection between relations and their relata.

In my own thinking of this matter, I found it useful to think of scientific theories as ‘growing existential statements’ in the sense that in adopting a scientific theory, we are committed to the existence of entities that make our (interpreted) theory true and, in particular, to the existence of unobservable things that cause or explain the observable phenomena, but at the same time we leave open the possibility that the theory might not be uniquely realised or that it might fail to be (fully) realised. Treating theories as ‘growing existential statements’ allows us to overcome the view that only the structure of the unobservable world can be known, or that only the relational properties of the realisers are knowable or that we are cognitively shut off from their intrinsic properties. As theories grow, and as they become more and more predictively and explanatorily successful, we get to know more and more about what these entities are and how they relate to each other. In a conciliatory moment, I called this view ‘modified structuralism’, but I think it boils down to scientific realism, pure and simple.


3:AM: According to you, scientific realism has a metaphysical, a semantic and an epistemic stance or thesis. So what are the three key stances you propose?

SP: I took it that the metaphysical stance of scientific realism is a declaration of independence. Qua realism, scientific realism ought to view the world that science aims to chart as being mind-independent. This is, in the first instance, an anti-idealist position. But I wanted to broaden it so that it blocks Dummettian and (middle) Putnamian verificationism—the view that, simply put, truth is warranted assertibility (or that truth is essentially epistemically constrained). These modern versions of anti-realism are anti-idealist and yet there is a sense in which they compromise the independence of the world from the knowing subject by making it the case that the content of the world extends as far as whatever is knowable. (Naturally, there are different versions according to how we complete the sentence above: knowable by whom?) Modern anti-realism does not deny that electrons etc. exist. But it renders the world logically and conceptually dependent on a set of epistemic conditions and practices. Being realism, scientific realism respects the possibility of divergence between what there is in the world and what is licensed as existing by a suitable set of epistemic practices, norms and conditions. For scientific realism, when truth is attributed to the theory, this is a substantive attribution which is meant to imply that the theory is made true by the world, which, in its turn, is taken to imply that it is logically possible that an accepted and well-confirmed theory might be false simply because the world might not conform to it. Though the concept of truth is involved in this characterisation of scientific realism, it is meant to unpack its metaphysical component, viz., the independence of the world.

I argued however that scientific realism has a semantic component too, since it is meant to be a view about scientific theories. The realist view here is that scientific theories should be understood literally as purporting to describe the deep unobservable structure of the world and hence that theoretical terms and concepts should not be re-interpreted as referring to observable entities and various operational procedures. This view was meant to block reductive empiricist accounts of the meaning of theoretical terms and instrumentalist views, which took theories to be mere instruments for prediction and control. Finally, I argued that scientific realism involves a kind of epistemic optimism in the sense that it takes science to succeed in revealing what the world is like and that there are good reasons to take theories to be true or truthlike. The need for this separate stance arises from the fact that one can adopt semantic realism (that is, one can take theories to purport to refer to unobservable entities) and yet suspend one’s judgement as to the truth of the theories and adopt an agnostic stance. Van Fraassen’s constructive empiricism is such a view and the epistemic stance of realism is meant to stand against it. The famous (or infamous) no-miracles argument is meant to ground this kind of epistemic optimism by offering (roughly put) the truth of theories as the best explanation of their empirical successes.

3:AM: The metaphysical claim is obviously one that anti-realists will dispute. Idealists, phenomenalists, verificationists, instrumentalists and so forth all have arguments to dispute the idea of a mind-independent reality. How do you defend the position against their onslaught?

SP: There is no single argument against all of these positions. Verificationists are not idealists—they do not think that only ideas and minds exist. The argument against verificationism is that the epistemic conceptions of truth are inherently unstable. Verificationists tie truth to warrant and justification: truth cannot outrun warranted assertibility. But it is conceivable that any kind of justification we might have for an assertion might be revoked in the future. So we either end up with a repugnant relativism (true is whatever is justifiably believed currently) or we render the required conditions of justification so ideal and remote that it is hard to tell if and when they are ever met. Simply put, verificationism does not honour the intuition that truth does not have a ‘best before’ date. Reductive empiricists do not deny the mind-independence of the world; but they reduce the content of the world to whatever is observable. So they aim to translate talk employing theoretical terms into talk employing only observational terms and predicates. The problem with this move is its patent failure. Everybody now agrees that theoretical concepts have excess content, that their meanings are fixed by the theories in which they are embedded and that they have putative factual reference. In this setting, the irreducible existence of unobservable entities is enough for their independent existence. Interestingly, idealism has no traction any more. But there are neo-idealists—aka social constructivists. Against them, it can be argued that the world shows its teeth, as it were, in resisting our attempts to appropriate it conceptually and cognitively; or more broadly to ‘construct it’. In other words, the world that science studies is not (socially or otherwise) constructed because it puts up significant and diachronic resistance to our attempts to ‘construct’ it. In this setting, resistance is enough for independent existence.

3:AM: The semantic claim is positioned to dispute claims of eliminative instrumentalism and reductive empiricism. Can you say what these positions claim first, and why semantic realism is better?

SP: I have already highlighted the gist of these positions. I am not sure there are defenders of these two positions anymore and this tends to make us oblivious to the fact that semantic realism (now almost unanimously accepted) has been a hard-won position. Eliminative instrumentalists (arguably, Ernst Mach was one of them) aimed to eliminate theoretical concepts and to equate the content of scientific theories with whatever they say about the observable world. Reductive empiricism (toyed with by Rudolf Carnap c. 1935) moved beyond instrumentalism in admitting that scientific theories are truth-conditioned and perhaps true descriptions of the world, but their truth-conditions were understood reductively: they were supposed to be fully defined by means of an antecedently understood observational vocabulary.

Semantic realism—the view that theoretical assertions are no less meaningful than observational ones and that they have excess content over whatever can be captured in an observational language—became the dominant view from the early 1950s on, when it was realised that the evidence for the truth of a theoretical claim (which evidence is normally observational/empirical) should not be confused with the conditions under which this theoretical claim is true. The theoretical claim might well be about unobservable entities in the world (its truth-maker, as it were, might be a state-of-affairs that involves unobservables) and accepting it as true implies commitment to the existence of these unobservable entities. Ultimately, the chief argument for semantic realism is that in matters of meaning and reference, there should not be double standards: one set of standards for an allegedly antecedently understood observational language and another set for an allegedly problematic theoretical language. The language of science is one and a single (referential) semantic theory should be enough for it.


3:AM: The third stance you label a kind of ‘epistemic optimism’ and it flies in the face of the pessimism of scepticism and agnosticism. Why so optimistic? Bas van Fraassen is an agnostic. What’s wrong with a more pessimistic agnosticism or even a Humean pessimism that allows science but without any claims of thick epistemic access? Wouldn’t that fit with your neo-Humean approach to causation in terms of regularities?

SP: Agnosticism is always a safer position. If you suspend your judgement about X you are committed neither to the truth of X nor to its falsity. Optimism is a lot riskier. But there are two kinds of consideration that make me go for optimism. The first is that I am moved by the explanationist defence of realism. I still think that admitting that theories are on the right track (true or truthlike) is by far the best explanation of their amazing predictive success—especially of their impressive (and otherwise inexplicable) capacity to yield novel predictions; to be, as Duhem put it, ‘prophets for us’. There is no logical compulsion in accepting (as true) the best explanation of a certain phenomenon or fact. But a) this kind of practice is pervasive in science; and b) its reliability is theoretically defensible. At least this is what I have tried to argue in various parts of my work. The second kind of consideration comes from the value of science. I am not saying that anti-realists do not value science. But I do think that part of the (inherent) value of science is that it promises to offer us understanding of the world—to extend the world far beyond whatever is revealed to us by the senses. This extension would be illusory if we were never in a position to tell that it is warranted. My problem with agnosticism is not that it counsels caution in what we accept as true or real but that it issues in a blanket suspension of judgement—whatever is unobservable is such that we can never be in a position to have a warranted belief about it. And I have argued in many places that this kind of view is unjustified.

The second part of this question is tough. I have chosen to defend scientific realism (an ontically inflated view) and neo-Humeanism (an ontically deflated view). I have called my position ‘Scientific realism with a Humean face’. I understand that most of my fellow realists go for stronger neo-Aristotelian positions when it comes to the ontic structure of the world. I still do not see a tension in the position I want to defend. The kernel of neo-Humeanism is the denial of necessary connections in nature; or the denial of regularity-enforcers, as I like to put it. My prime reason for not going all the way to neo-Aristotelianism is that it is not required by the scientific image of the world. Actually, natural necessities, powers, dispositional essences and the like inflate our ontological categories without offering better explanations of scientific facts than the leaner neo-Humean ontological categories. Explanation has to stop somewhere; hence, there always will be unexplained explainers. I just refuse to play the game of positing regularity-enforcers. Let’s say the enforcers are powers that various entities have. What is the explanation of their action? Or of being there in the first place? We just push the issue one (or more) step(s) backwards. So: I can be optimistic about atoms or electrons or DNA molecules or the continental drift. But I do not think there are good explanatory reasons to be optimistic about universals or powers or essences.

3:AM: Larry Laudan disagrees with you and argues for a position that starts with pessimistic induction. What does Laudan argue and how do you and someone like Philip Kitcher counter?

SP: Laudan gave a new lease of life to anti-realism (better put: to resisting realism) by making use of the history of science. As my understanding of the realism debate matured, I realised that the realism debate has been fought many times over in the history of science. One distinctive incarnation of the realism debate was in the late nineteenth century in relation to the atomic conception of matter. Back then, scientists and philosophers of science (in most cases, it was difficult to tell them apart) became sensitive to the history of science. This was a dual endeavour: to understand past science in its own terms (see Duhem’s ground-breaking work on medieval science) but also to draw lessons from history about current science. If science lives in history, then its history should be relevant to how it lives now.

In this rich context, there were appeals to the track record of past scientific theories in order to question the very ability of science to reveal the true nature of the world behind the phenomena. The popular claim was that scientific theories had repeatedly failed in the past to reveal the nature of the world and as a result of this they were subsequently abandoned. The conclusion that was drawn was that the then dominant scientific theories would also fail and get abandoned. This kind of historical argument was extremely popular and it caught the public eye in the so-called ‘bankruptcy of science’ debate that took place in France in the last two decades of the nineteenth century. Leo Tolstoy and Ferdinand Brunetière offered versions of it. Among scientists, it was Ludwig Boltzmann who dubbed it the ‘historical principle’ and took issue with it. Henri Poincaré developed his relationism (at least partly) as a response to this argument in a famous address at the International Congress of Philosophy in Paris in 1900. Pierre Duhem too developed an anti-explanationist strategy to rebut it. A thought that emerged in the early twentieth century was that scientific revolutions (as Boltzmann called them) need not threaten an objectivist stance towards science insofar as there is substantial continuity in theory change and this continuity relates to parts of scientific theories that do play a role in the generation of the empirical successes of the theories.

Laudan, to his great credit, is a philosopher of science immersed in the history of science and in the early 1980s he put forward what came to be known as a pessimistic induction against realism. The gist of the argument is no different from the one outlined above: a number of past theories have been proven false and were abandoned; hence, it is likely that current theories, despite their empirical success, will prove false and will be abandoned in the future. This argument (whose form I have oversimplified) was meant to undercut the realists’ reasons for epistemic optimism. For my generation of philosophers of science, Laudan’s paper (together with Worrall’s reply to it in 1989) was a wake-up call. It was imperative not just to analyse the structure of Laudan’s argument and to try to offer conceptual reasons to resist it, but also to engage with real theories in the history of science and to try to show how there is substantial continuity (and of the right sort) in theory-change. My early work on realism (my dissertation in 1994 and subsequent work) was dedicated to this task. I advanced what I called the ‘divide et impera’ move and I tried to substantiate it with historical cases, such as the caloric theory of heat and the theories of the optical ether. I tried to analyse the empirical successes of past theories and to show that the parts of those theories that were responsible for these successes, far from being abandoned, were accommodated within successor theories. At roughly the same time Philip Kitcher developed a similar idea but he framed it differently, as a distinction between presuppositional posits and working posits.


3:AM: How do Wilfrid Sellars's arguments - and later developments of them - about a layer-cake version of scientific explanation come into all this?

SP: I read Sellars when I was working on my dissertation in the early 1990s. But I decided not to include a discussion of his arguments for realism in my dissertation. This was a mistake that I rectified (I hope) later on. Sellars made scientific realism a respectable position by arguing that the scientific image of the world is the true(r) image of the world (truer than the manifest image). At the same time, he offered a forceful argument for taking seriously theories that posit unobservable entities. BS (Before Sellars), the dominant view of explanation was hierarchical and layered. Theories explain observational (inductive) generalisations and observational generalisations explain individual observable facts. Given this picture, it is always possible to dispense with the upper layer, that is, the layer of theoretical explanations. All the serious explanatory work, as it were, is done by the middle layer of inductive generalisations. AS (After Sellars), the view that prevailed was that theories explain directly the lower layer of observable facts, without having to rely on the middleman of inductively established empirical generalisations. In fact, theories explain why the inductively established empirical generalisations hold only approximately or with exceptions. AS, theoretical explanation—that is explanation in terms of whatever entities are posited by scientific theories—was taken to be indispensable in understanding why the observable world is the way it is.

3:AM: And why do you think John Worrall's attempt to put together the ‘no miracles argument’ with the ‘pessimistic induction’ argument fails?

SP: John opened up a new way of thinking about the relation between the explanationist defence of realism and the history of science. Being himself historically sensitive, partly because of his teacher Lakatos and partly because of Poincaré, he strove to reconcile scientific realism with the historical record of science. Influenced by his structuralist reading of Poincaré, he took it that the only continuity there is as theories change is at the level of mathematical equations. And then reflecting on the objectivist tendencies of Poincaré, he took it that the retained mathematical equations correctly represent the structure of the world—real relations among otherwise unknowable things. There are various issues with Worrall’s understanding of Poincaré’s relationism, but the truth is that Worrall tried to canvass a selective realist position which exploited both the ‘no miracles’ intuition, as Worrall puts it, and the force of the pessimistic induction. My chief disagreement with Worrall’s structural realism is that I cannot see how he can exploit the no miracles argument and at the same time restrict it to the level of mathematical equations. What gets credit from the empirical successes—especially the novel predictions—of scientific theories are various theoretical claims made by the theories (those that are essentially involved in the generation of the empirical successes, as I noted earlier) and not just the mathematical equations which, as they stand, do not entail any predictions. When Worrall adopted the Ramsey-sentence version of structural realism, my major objection was that this kind of position falls foul of the Newman problem and that any attempt to overcome it undermines structuralism.

3:AM: You argue that your scientific realism is enmeshed with another position, which argues that truth is a non-epistemic concept. What do you mean when you make that claim, and why is it significant to the overall position regarding scientific realism?

SP: Answering an earlier question, I noted that a non-epistemic account of truth secures the realist claim that the world is mind-independent. But, you might wonder, why can’t a scientific realist be a verificationist about truth? Why can’t you believe in electrons and in truth being essentially epistemically constrained? Of course there are philosophers who adopt this joint view. But I wanted (and still want) to insist that they are not scientific realists—strictly speaking. Recall the debate over the pessimistic induction that I noted above. This debate has driven home the point that if there is continuity in theory-change, this has been a considerable achievement, emerging from among a mixture of successes and failures of past scientific theories. A realist non-epistemic conception of truth, and in particular the possibility of divergence that this secures, does justice to this hard-won fact of empirical success and convergence. Given that there is no guarantee that science converges to the truth, or that whatever scientists come to accept in the ideal limit of inquiry or under suitably ideal epistemic conditions will (have to) be true, the claim that science does get to the truth (based mostly on explanatory considerations) is quite substantive and highly non-trivial. If, on the other hand, the possibility of divergence is denied, the explanation of the success of science becomes almost trivial: success is guaranteed by a suitably chosen epistemic notion of truth, since—ultimately—science will reach a point at which it will make no sense to worry whether there is a possible gap between the way the world is described by scientific theories and the way the world is.

In my own work, I have presupposed a substantive account of truth. I have assumed, without defending, something like a correspondence theory. But there is a very serious line of thought which takes a deflationary approach to the truth-predicate and the concept of truth; roughly put, the idea is that truth is not a substantive property but a linguistic device for making certain generalisations (e.g., ‘Whatever Plato said is true’). In a certain sense, this formal account of truth is a non-epistemic account (at least, it is not an epistemic account). The issue of which is the correct theory of truth is independent of the issue of realism; that is, we certainly have independent grounds to adopt or reject a certain theory of truth. I tend to think that there are good reasons to reject the deflationary approach as a comprehensive theory of truth. In any case, a deflationary account of truth (assuming that it is independently acceptable) would be suitable for realism, provided that it found the resources to accommodate the possibility of divergence. A lot depends on how deflationists understand the notion of acceptance (to say that p is true is to accept p) and how they dissociate it from justification.

3:AM: Do you agree with Nancy Cartwright's version of realism, with its motto of ‘most likely cause’, and the wedge she drives between explanation and truth that makes laws of nature either true but non-explanatory or explanatory but false?

SP: If we know the likeliest cause, we should certainly infer it. But the whole issue is that we are not often in a position to know the likeliest cause and we have to rely on explanatory and other evidential considerations to rank potential causes (or explanations) in terms of likelihood. The key idea behind Inference to the Best Explanation is that explanatory considerations guide (that is, they should guide) judgement of likelihood and concomitant inferences. Inference to the best explanation faces its share of problems. But if we allow ampliative inferences at all (and I cannot see how we can do any justice to our inferential practices if we do not) we have to face the fact that they cannot be made to be algorithmic; judgement plays an indispensable role in them. As Charles Peirce was perhaps the first to note, evaluating ampliative reasoning requires a two-dimensional framework in which we examine and balance two kinds of desiderata: what he called ‘uberty’ (“value in productiveness”) and security. Now, Cartwright was right in criticising the Deductive-Nomological model of explanation (which was widely known as the ‘covering-law’ model) and in defending a broader conception of causal explanation. But I think she was wrong to dissociate causal explanation from nomological explanation.

Causes act lawfully and causal laws are still laws. At least those causes that science can tell us about are regular causes. There are various subtle issues here about causation that cannot be fully canvassed. I think, for instance, that we cannot make sense of singular causes—though others disagree. The alleged reason that laws of nature are not explanatory is that laws are simple (and perhaps idealised) whereas the actual phenomena are too complex (to be covered by simple laws). Cartwright’s example is about a charged particle that moves under the influence of two forces: the force of gravity and Coulomb’s force. Taken in isolation, neither of the two laws (i.e., Newton’s inverse-square law and Coulomb’s law) can describe the actual motion of the charged particle. She then concludes that each law loses either its truth or its explanatory power.

I do not see things this way. For one, there could be (in fact, it is likely that there is) a complex law that governs the motion of massive and charged particles. After all, fundamental forces are unified, according to current theory. More importantly, however, there is no tension between truth and explanation. The two laws, taken in isolation from each other, are true of the motion of massive particles and of charged particles respectively. They are also explanatory of them, in the strict sense of ‘covering’. The complex phenomenon is governed by both of them jointly and hence it cannot be covered by each of them separately. This does not imply that they lose their explanatory power. They still explain how the particle would behave if it were just massive and not charged or if it were charged but not massive. To demand that each of them be explanatory in Cartwright’s sense, that is, that each of them separately should somehow cover the actual complex effect, is to demand of them something they cannot do.
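
For concreteness, the physics behind the example can be put in textbook form (standard mechanics, not a quotation from the interview; m, q, G, k and r are the usual masses, charges, constants and separation): each law correctly gives its own component force, and the actual trajectory is fixed by their sum via Newton's second law,

\[ \mathbf{F}_{\mathrm{grav}} = -\,G\,\frac{m_1 m_2}{r^2}\,\hat{\mathbf r}, \qquad \mathbf{F}_{\mathrm{Coul}} = k\,\frac{q_1 q_2}{r^2}\,\hat{\mathbf r}, \qquad m_1\,\ddot{\mathbf r} = \mathbf{F}_{\mathrm{grav}} + \mathbf{F}_{\mathrm{Coul}}, \]

so neither law on its own 'covers' the combined motion, though each remains true of the contribution it governs.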


3:AM: And do you dispute her hyper-realism about capacities and powers because they don’t fit in with your scientific realism? There’s quite a lot of powers talk about at the moment. What’s the problem with it from your perspective?

SP: Indeed! Thomas Aquinas and Aristotle would have been very happy with this turn of events in philosophy of science and metaphysics. Powers, then and now, are meant to be explanatory principles. They are (were) posited to explain action and change in nature. They are (were) meant to ground natural necessity; to ensure connections (relations of entailment and repugnance) between seemingly distinct properties. Powers, Aquinas said, are distinguished according to “their acts and objects”: “A power as such is directed to an act”. This view was dominant for centuries, as is well known. It came under attack in the seventeenth century and it was slowly but steadily replaced by the view that the world is law-governed and hence that activity and change in nature are subject to laws. The revolt against powers, as I call it, did not aim to dislodge powers altogether; rather it aimed to remove their sui generis character and to ground them in the so-called categorical (structural) properties of things and the laws they obey.

The problem with powers is still the same: they are sui generis explanatory principles. They are widely taken to be grounded in the natures of things (another key neo-Aristotelian concept) but this pushes the problem one step back; for natures are taken to be sui generis. I trust modern science and I take for granted what it tells us about, say, the structure of the water molecule and the properties of water—which are grounded in the properties of the water molecules (collectively understood). If we call this the nature of water, I am very happy with this. But the power of water to dissolve salt is not sui generis. It is fully grounded in the properties of water molecules and the relevant laws. That this power exists even if it is not exercised (a certain glass of water might never exercise its power to dissolve some salt if I drink it first) should be simply taken to imply that the ascription of a power to an entity does not require that it be actually exercised but only that it could be exercised under suitable conditions and given the properties of this entity and the relevant laws. I am fully aware that it is not quite possible to define powers or dispositions in terms of conditionals. But I am worried that most counterexamples involve cases of wizards and witches, magic spells and supernatural interventions. Could it be that all properties are powers? The answer is negative, I think, for a reason that Aristotle put forward ages ago when he introduced the distinction between potentiality and actuality as modes of existence: no potency without act.

3:AM: You have a theory of reference for theoretical terms you call ‘causal-descriptive’. Does this result in a theory being almost true rather than full-on true? If it does, isn’t this a concession to the other side?

SP: Causal descriptivism is a view of reference I defended, influenced by David Lewis and Fred Kroon. The causal theories of reference revolutionised the way we think about the reference of proper names and theoretical terms. But they told only part of the story of reference fixing: the causal agent and the causal connection. It is well known that pure causal theories make referential success and referential continuity in theory-change almost guaranteed—if indeed there are causal agents which caused the phenomena that led scientists (or whoever) to introduce a new theoretical term to refer to the cause(s) of these phenomena. Descriptive theories of reference took a lot of battering. But they capture a significant part of reference-fixing: the use of descriptions to identify the referent. The basic negative idea behind causal descriptivism is that the reference-fixing mechanism should neither be just a non-conceptual causal relation to the referent, nor just a conceptual description-based relation to the referent. The basic positive idea is that the reference-fixing mechanism should be a genuine hybrid. The causal relation with the referent plays an indispensable role in reference-fixing. But the causal relation is not enough. Reference is fixed by means of descriptions of the causal role attributed to the putative referent. I think this account, which I have defended in some detail in my work, does justice to the thought that a plausible general condition on reference-fixing should (as far as possible) combine two elements (that need not coincide in all cases), namely a causal element and a cognitive one. These two elements coincide in (non-malign) cases of perceptual access to objects and concomitant reference-fixing acts. When there is no perceptual access (as in the case of unobservable entities) the descriptive element is meant to offer some identifying marks of the putative referent, and in particular those marks by means of which it is supposed to play its causal role.

Regarding the full truth or not of theories, I would say that we should have learned to live with the fact that all theories are (born) false. Falsity is cheap, in many ways. Theories employ idealisations, abstractions, simplifications, approximations etc. In many typical cases, there are no corresponding entities in the actual world. But this has to do with the limits of representation and the need to generalise and to offer simple and comprehensive models and laws. Given this, the question is: can false theories be partially or approximately true, or truth-like? Or, can some of them be truer than others? I have tried to answer this question in the positive, though it is notoriously hard to formalise the notion of partial or approximate truth. Formalisation is conducive to clarity but I firmly believe the complex structure of most interesting philosophical concepts (and their inevitable interconnections) resists formalisation.

3:AM: Your philosophical struggle over the years has consisted in trying to resist metaphysics on the one side and instrumentalism on the other in order to remain a scientific realist. But a scientist might say that there’s no point to any of this - that science exhausts itself and philosophy has nothing to add to whatever the science says. So why should we heed the philosophers?

SP: I have the greatest respect for science and I have defended its epistemic credentials over the years. But I do not think that science exhausts itself. I have immersed myself in the scientific tradition of the late nineteenth century and the early twentieth century and I am fully persuaded that if it was put to the most eminent scientists of that time that ‘science exhausts itself’ they would reply with an incredulous stare (to say the least). For them, practising science and doing philosophy were the two sides of the same coin; and the coin was understanding the world we live in and understanding what makes science a special way to achieve this understanding. This kind of tradition of philosophers-scientists goes very far back in time and it represents some of the finest moments of the history of philosophy and the history of science. Nowadays, it is hard to be a serious scientist unless you devote yourself to science and a serious philosopher unless you devote yourself to philosophy (partly because of the over-specialisation of both science and philosophy). But it does not follow from this that the two areas can grow in isolation from each other. They never have; they never will. Philosophy might not be able to yield further facts about the world. In this sense, the facts are revealed by science. But what science says about the world is not always and invariably clear and uncontroversial. There are issues of interpretation of theories, of explication of concepts and most importantly of criticism. Philosophy of science performs all of these functions vis-à-vis science and it thereby acts as the platform on which science is analysed and scrutinised. Those scientists who proclaim the end of philosophy proclaim, without being aware of it, the end of science. For science is not just about getting the facts right; it is also about making sure that the facts that have been got are right. And this second issue is where philosophy becomes indispensable.

There is another reason why science does not exhaust itself. Philosophy offers the space in which the various partial images of the world provided by the individual sciences are fused together into a stereoscopic view of reality. The various sciences offer us perspectives on reality. They employ different kinds of taxonomic categories and conceptualise the world by means of different structures of concepts. Still, there is a presumption—to say the least—that the various perspectives offered by the various sciences or theories are perspectives of the same world. Hence there is the need to put together the scientific image of the world; to look at the various interconnections among the ‘partial’ images generated by the individual sciences; and to clear up tensions and conflicts. This is precisely a kind of job that philosophy of science is meant to do. It offers a more global (but not absolute) perspective on reality—for seeing the whole picture.

Isn’t there bad philosophy of science? I think there is. To be somewhat provocative, I would say that the philosophy of science which disregards science altogether is as bad as the philosophy of science which succumbs to science (especially in the form of intelligent commentaries on scientific findings). Critical engagement with science as it is practised now and as it has grown through its history seems to me to be the right attitude. Philosophy retains its independence but forfeits its foundational role.

3:AM: And for the curious here at 3:AM, what books should we be reading to go further into your philosophical world?

SP: To see where I come from, I would recommend Aristotle’s Posterior Analytics, Hume’s Treatise of Human Nature, Poincaré’s Science and Hypothesis and Duhem’s The Aim and Structure of Physical Theory. I have also been heavily influenced by the writings of Richard Boyd and of (the early) Hilary Putnam. To see where I am heading, I would recommend Aristotle’s Metaphysics, Malebranche’s Elucidation Fifteenth (of the Search After Truth), Carnap’s Empiricism, Semantics and Ontology, Collingwood’s An Essay on Metaphysics and Gerd Buchdahl’s Metaphysics and the Philosophy of Science. My own philosophical world has recently expanded into hitherto uncharted territories (for me, I mean): medieval philosophy, especially in connection with theories of induction. I am finishing a book on empiricism and I have started a book on the conceptual history of induction from Aristotle to Quine. I have also been studying the two famous Principia of the seventeenth century, Descartes’ and Newton’s.


ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.

Buy his book here to keep him biding!