Interview by Richard Marshall.

'What I want to do in my book is to defend the rationality of commitment to do difficult things—paradigmatically of commitment to resist temptation. This is neither a psychological matter nor quite a logical matter but, one might say—though I don’t much like the word—a normative matter.'

'Promising to try is, I think, a non-starter. Imagine saying to the person you love, “I promise to try to be faithful to you.” (You may as well add, “And if I am unfaithful, it will of course be unwillingly.”) —Doesn’t this promise ring hollow? We know that being faithful may be difficult—that it may require persistence in the face of temptation—and that some people fail to keep such promises. However, to weaken our promise to merely trying seems somehow wrong. The problem is: What could be wrong with it? After all, what more can we do than try?'

'In thinking about agency, I look to Sartre, but in thinking about trust, I look to Strawson. I see a parallel between those philosophers: both show, in different ways, why there is something wrong with looking at people—ourselves or others—as objects of prediction. I think that Sartre is particularly perceptive in matters of the first person, and I think that Strawson is particularly perceptive in matters of the second person. I am not so sure about Sartre on the second person, nor about Strawson on the first person.'

Berislav Marušić's main research interests lie at the intersection of epistemology, ethics and the philosophy of mind. He is also interested in the nature of reasons and emotions, the philosophy of perception, existentialism and the history of modern philosophy. Here he discusses promising and impossibility, promising and difficulty, rationality, why promising to try is bad faith, why non-cognitivism won't help, why seeing the problem in terms of practical knowledge won't help either, why evidential constraints are the big challenge - but they don't help either, the importance of Kant's notion of freedom and Sartre's notion of bad faith, Strawson's 'participant stance', the rationality of commitment to difficult things, why attempts to understand ourselves from an objective scientific stance distort, and why he is a domesticated existentialist. This goes far...

3:AM:What made you become a philosopher?

BM:I suppose I’ve always wanted to be a philosopher. When I was a kid, I had the theory that everything was made of tiny grains of sand. But I was bothered that mayonnaise didn’t seem to fit my theory. So I chose to become a philosopher.

My challenge, in becoming a philosopher, was to trust philosophy enough to not feel the need to do something in addition to it. I started out wanting to be a philosopher and a writer, but my writing was too programmatic. I wanted to be a philosopher and a physicist, but the physics I was studying was too hard and had too little to do with philosophy. I wanted to be a philosopher and a literary critic, but I would read literature as if it was analytic philosophy and was annoyed by its ambiguities. Eventually, I reconciled myself to the thought that philosophy was enough.

I’ve also had an important role model: My grandmother, Heda Festini, is a philosopher and has written books on existentialism, Wittgenstein, and the history of Croatian philosophy.

3:AM:Now in your book you’ve analysed a problem that seems related to a problem I got into trouble with: I once argued that we couldn’t intend to do something that we know is impossible – people came back and said, “we do that all the time.” You don’t talk about this but about promising: you look at whether we can promise to do something when we have evidence that there’s a good chance that we won’t do it. It seems clear to me that you can’t, but again, we do it all the time. So first, could you set out why at first it looks like something is going wrong when we make that kind of promise?

BM:I think there is a big difference between promising to do something that we know is impossible and promising to do something that we know is difficult to do. That is because what is impossible is not up to us. It is not subject to our agency—and so it is not something that we can commit to doing. But something that is difficult to do may nonetheless be entirely up to us. The difficulty may in no way undermine our freedom. For example, it may be extremely difficult to resist temptation, but temptation won’t take matters out of our hands. Quite to the contrary, temptation works through our agency, rather than around it (to borrow a formulation from Richard Holton), so that when we give in, we do so, precisely, through an exercise of our freedom.

I am especially interested in the possibility of promising to do something when we have evidence that there’s a good chance that we won’t follow through. The problem, as I see it, is clearest if we consider what to believe about our future course of action when we make such a promise. If we believe, as our evidence suggests, that there’s a good chance that we won’t follow through with our promise, we seem to be insincere in making it. After all, if we believe that there’s a good chance that something won’t happen, it seems that we can’t sincerely say that it will happen. However, if we believe that we will follow through, we seem to be irrational. After all, if we have evidence that we might fail to do something, it seems that we can’t rationally believe that we will do it; we can only rationally believe that we will possibly or probably do it, depending on what our evidence is. So it seems that we must be insincere or irrational when we promise against the evidence. Yet whether we are insincere or irrational, we seem to be irresponsible, because we are liable to mislead the person we are making our promise to. Therefore, as you say, something must be going wrong—or so it seems.

3:AM:So if this is right, does it follow that we are being irrational if we go on and make this kind of promise? And is that irrationality the same as being irresponsible? Is it a problem that at first blush is a psychological error or a logical one, or both, or something else?

BM:I think that it does not follow! The ambition of my book Evidence and Agency is to explain why we needn’t be irrational or irresponsible if we make this kind of promise. We needn’t be irrational or irresponsible, though, of course, we often are. However, in the good case, the mere fact that we have evidence that there’s a significant chance that we won’t do something (if we promise to do it) does not imply that we are irrational or in any way irresponsible when we make the promise.

What I want to do in my book is to defend the rationality of commitment to do difficult things—paradigmatically of commitment to resist temptation. This is neither a psychological matter nor quite a logical matter but, one might say—though I don’t much like the word—a normative matter.

3:AM:You reject one approach to overcome the problem, the one that appeals to ‘trying’, don’t you? Can you say what this proposal is, and why it doesn’t work?

BM:Some people think: “We can’t responsibly promise against the evidence. We can only responsibly promise to try.” (Or, more precisely, “We can’t responsibly promise to do something when we have evidence that there’s a significant chance that we’ll fail to follow through.”)

However, promising to try is, I think, a non-starter. Imagine saying to the person you love, “I promise to try to be faithful to you.” (You may as well add, “And if I am unfaithful, it will of course be unwillingly.”) —Doesn’t this promise ring hollow? We know that being faithful may be difficult—that it may require persistence in the face of temptation—and that some people fail to keep such promises. However, to weaken our promise to merely trying seems somehow wrong. The problem is: What could be wrong with it? After all, what more can we do than try?

Here is why I think it’s wrong to merely promise to try to be faithful: It makes sense to merely promise to try to do something, rather than outright promise to do it, if we anticipate that we might be prevented from doing it. If I think that my train might be late, it makes sense to merely promise to try to meet you at 2pm—since something that is outside of my control might prevent me from being there by 2pm. But being faithful is entirely up to us. And this means that we cannot be prevented from being faithful. The only way we will fail to be faithful is if we don’t try, or don’t continue trying. But since the mere promise to try is expressly a weaker commitment than the outright promise, it suggests that we are not prepared to continue trying, and, therefore, are not prepared to actually do it.

Indeed, I think that the promise to try to be faithful can be deceptive, since it implicitly suggests that we could be prevented from being faithful. In this way, it presents something that is entirely up to us as if it were not—and so it exhibits something akin to what Jean-Paul Sartre calls bad faith.

Matters are more complicated when it comes to promising to do things that are not entirely up to us—such as promising to run a marathon. In those cases, it might make sense to promise to try, since in those cases we might be prevented from doing what we would outright promise to do. But even then we must be mindful of the possibility of bad faith! Sometimes the promise to try simply hides a choice under the veil of our susceptibility to circumstances beyond our control.

3:AM:Why don’t you think appeals to non-cognitivism work either to get us out of the problem? (Can you quickly sketch what non-cognitivism is for the non-specialist reader?)

BM:Non-cognitivism, in the present context, is the view that we can intend to do something without believing that we will actually do it. A non-cognitivist might maintain that sincerely promising to do something does not require believing that we will succeed. It only requires intending to do it. And we can intend to succeed without believing in success. For example, suppose that you intend to be faithful to the other person, but you aren’t absolutely sure that you will succeed. In this case, it might seem that you can intend to be faithful without believing that you will be.

I don’t think that this will work. That is because our intentions inform our outlook of what will happen; they are, as Michael Bratman argues, elements in our plans that, together with our beliefs, “fit together into a consistent model of [our] future.” This implies that our intentions and beliefs have to be consistent. And this implies (though Bratman doesn’t think so) that, if we outright intend to do something, we ought, at pains of inconsistency, also outright believe that we will do it.

The easiest way to see this is to consider how we would express an outright intention. Suppose you outright intend to go to Sam’s party. You would say, “I’ll go to Sam’s party.” (Saying, “I intend to go,” usually expresses something that falls short of an outright intention—a partial or conditional intention.) Similarly, if you outright intend to be faithful to someone and thus sincerely promise to do so, you’ll say, “I’ll be faithful to you, I promise.” This does not leave open the question of what you will do. So you can’t, without some form of inconsistency, add, “But there is a good chance that I won’t.”

The conclusion to draw from this is perhaps that intentions are beliefs, or involve beliefs. But one could also think there is a strong ‘normative’ relation between intentions and beliefs—so that if you outright intend to do something, you should outright believe that you will do it, at pains of inconsistency. On such a view, my arguments would be compatible with non-cognitivism. Since I don’t think that the main problem depends on whether cognitivism or non-cognitivism is true, I am happy to leave it open.

I should add that I sometimes worry that I have not properly grasped the non-cognitivist view. But at other times, non-cognitivists like Bratman strike me as deeply cognitivist. How could intentions be fundamentally different from beliefs if they are elements in our plans that, together with our beliefs, “fit together into a consistent model of [our] future”? Isn’t a ‘model’ of our future our view of what will happen?

3:AM:Does seeing the problem in terms of practical knowledge help?

BM:Some people think so. And although I reject this approach, I am actually quite sympathetic to it.

Let me first say a few words about ‘practical knowledge’. Elizabeth Anscombe argues that as agents, we enjoy a distinct kind of knowledge of what we are doing and of what we will do—practical knowledge. In particular, in deciding to do something, we can come to know that we will do it. When we come to know it in this way, our knowledge is not based on evidence but is grounded in our decision. For example, when in a café you decide to have an espresso, you gain knowledge that you will have an espresso in virtue of deciding to do so. Your knowledge does not rest on the observation that whenever you are in a café, you end up having an espresso—as a detective’s knowledge might—but flows directly from your decision.

With a conception of practical knowledge in hand, we might say that in sincerely promising to do something, you decide to do it, and so you acquire practical knowledge that you will, indeed, do it. Moreover, if you know that you will do what you are promising to do, you can rationally believe that you’ll do it despite your evidence. After all, it is plausible that knowledge is sufficient for rational belief.

The trouble with this response is that when we promise against the evidence, we don’t seem to have practical knowledge that we will follow through with our promise. That is because the evidence is a defeater for our practical knowledge: it prevents us from having knowledge.

To see this, consider a very different kind of case, involving mathematical knowledge. Suppose you calculate something by longhand. Your calculation is accurate and you have made no mistake. It seems that you have mathematical knowledge of the result of your calculation; your knowledge of the result does not rest on evidence or observation but is grounded in the calculation alone. However, if you acquire evidence that there is a good chance that you made a mistake—say, because you often make mistakes when you perform such calculations, or because people who are in many respects like you often make mistakes—this will defeat your mathematical knowledge, even if your calculation was perfect.

Analogously, your evidence that there is a good chance that you will fail to do what you are promising to do will defeat any practical knowledge that you might have had when you made your promise. Your situation would be analogous to a case in which you proved something mathematically but had empirical evidence that you made a mistake.

In short, the appeal to practical knowledge won’t help, because as long as we are promising to do something that we know will be difficult to do, and we have evidence that there’s a significant chance that we’ll fail to follow through with our promise, we can’t have practical knowledge that we will do it—since our evidence defeats any knowledge we might have had.

However, I do think that the appeal to practical knowledge contains an important insight: It recognizes that the grounds on which we, as agents, form our view of what we will do are different than the grounds on which mere observers form their view of what we will do. As agents, our grounds are fundamentally practical. This is the starting point for my own solution to the problem, which I will discuss shortly.

3:AM:Why don’t evidential constraints help?

BM:This, I think, is the main challenge.

Let me explain how one might appeal to evidential constraints to address our problem. One might hold that there is an evidentialist constraint on promising: In order to responsibly promise to do something, we must have adequate evidence to rationally predict that we will actually follow through. Therefore, when we have evidence that there’s a significant chance that we won’t follow through, we cannot make a promise responsibly.

I think that an evidential constraint on promising is not plausible. Here is why: Such a constraint would imply that something can speak in favor of doing something only to the extent that it constitutes evidence that we’ll do it if we decide to do it. But that is implausible: It can be tremendously important for us to do something—and to be resolute and resist temptation—even though we are not in a position to predict that we will succeed if we decide to do it. In short, it can be practically rational to do something that we know will be difficult to do. An evidentialist constraint gives us a mistaken view of the strength or significance of reasons for action.

I think that the point is essentially the same if, instead of the evidentialist constraint, we consider a ‘practical knowledge’ constraint—if we hold that to responsibly promise to do something, we must have practical knowledge that we will follow through. Such a constraint would also rule out the possibility of responsibly promising or rationally deciding to do something that we know will be difficult to do. It is not a plausible constraint on the strength of our practical reasons—even if it does not amount to an evidential constraint.

3:AM:So you go to Kant and Sartre to set up a solution. Can you say how their take on agency helps us?

BM:Kant’s dictum is that we act under the idea of freedom. Sartre’s view is that we are in bad faith when we deny our freedom. I think that both Kant’s and Sartre’s insights have considerable epistemological significance. Here is how I propose to understand it: When it is up to us to do something, then we can, and should, settle the question of what we will do in light of our practical reasons—reasons that show it to be worth doing. Since it might be extremely important for us to do something difficult, we can have excellent practical reasons to do it even if we don’t have evidence in light of which we can rationally predict that we will follow through with our decision. In those cases, we can rationally believe against the evidence: We can believe that we will do something difficult, even though we have evidence that there’s a significant chance that we will fail to follow through. If, however, we look to our evidence to settle the question of what we will do, when matters are up to us, we deny our freedom and we exhibit something akin to bad faith.

That’s the core of my view.

3:AM:You make a distinction between decisions and predictions that is crucial here. Can you say how the distinction is established and how it works?

BM:Following Kant and Sartre, I think that we have two ways of settling the question of what we will do: we can decide what to do, or we can predict what we will do. (Sometimes the distinction is drawn in terms of two ‘perspectives’ or ‘standpoints’—a practical and a theoretical one, or a first-personal and a third-personal one.) I propose to spell this out in terms of the kinds of reasons in light of which we could arrive at an answer to the question of what we will do: We can settle the question in light of practical reasons or in light of evidence. If we settle it in light of practical reasons, our view of what we will do is, or rests on, a decision. If we settle the question of what we will do in light of our evidence, our view of what we will do is a prediction.

My main claim is that if matters are up to us, then we can and should decide what we will do—that is, settle the question of what we will do in light of our practical reasons. Otherwise we exhibit something akin to bad faith. And if it is very important for us to do something difficult, we may well rationally decide—and responsibly promise—to do it, despite our evidence.

I hasten to add a very important point: We can’t be practically rational if we ignore our evidence. If we made decisions while ignoring all information about the likelihood of succeeding, we’d be—stupid.

The hard question is how to consider evidence in our decision-making without taking it as grounds for a prediction—especially if what makes something a prediction is the fact that it is based on evidence.

The answer to this hard question is that there are two different ways in which we can take into account evidence about what we are likely to do. We can take it into account as the basis for prediction. Or we can take it into account as considerations about how difficult it will be to do what we are deciding to do.

Let me illustrate the point with an example. Suppose you are considering running a marathon with a limited number of starting places. The starting places are assigned through a lottery. Now suppose that consideration of likelihoods reveals the following: Your chance of getting a starting place is pretty good—say 80%. Your chance of actually running the marathon (or finishing with a certain time), conditional on getting a starting place and the world cooperating with your plans, is also pretty good—say also 80%. However, these two uncertainties are very different: Whether you get a starting place is not up to you, and so you can’t decide to get a starting place. You have to make your decision to run the marathon conditional on winning a starting place in the lottery—and, in general, conditional on the world’s cooperation. But whether you actually run the marathon is, we may suppose, entirely up to you. (Let us ignore the possibility of getting injured, or hit by a car or a meteorite. Those are ways in which the world must cooperate.) But if it is entirely up to you whether you run the marathon, then you cannot make your decision to run it conditional on—your running the marathon! You have to regard the uncertainty that arises from the possibility of failing to run the marathon in a different way than the uncertainty that arises from the possibility of not getting a starting place, or of the world not cooperating.

However, you cannot make a good decision to run the marathon if you ignore the difficulty for you of doing it. You should be aware that running a marathon will require resolve and persistence. So, if you make a good decision, you will take into account the difficulty: You will consider whether this difficult project is really worth the effort. And, if it is, you will train hard in advance of the race (for a very long time!), and you will be mindful of the need for resolve during the race. In this way, you will not ignore the evidence of difficulty. But you will also not use it as the basis for prediction.

In sum, when matters are up to us, we can and should settle the question of what we will do in light of our practical reasons, while being mindful of the difficulty of what we are deciding to do.

3:AM:So is Sartre asking us to trust against the evidence, and is this trust in the context of a promise the kind that Strawson talks about in terms of his ‘participant stance’? Is this what you’re calling ‘doxastic partiality’?

BM:The connection between Sartre and Strawson is not quite this tight. In thinking about agency, I look to Sartre, but in thinking about trust, I look to Strawson. I see a parallel between those philosophers: both show, in different ways, why there is something wrong with looking at people—ourselves or others—as objects of prediction. I think that Sartre is particularly perceptive in matters of the first person, and I think that Strawson is particularly perceptive in matters of the second person. I am not so sure about Sartre on the second person, nor about Strawson on the first person.

I think that the topic of trust is extremely important in the context of promising. When we make promises, we don’t make them into the void; we make them to someone or other. In promising, we offer the other an answer to the question of what we will do (whether this is the point of promises or not). That is why it matters how others—especially those to whom our promises are addressed—will view what we will do. But others don’t share our privilege of settling the question of what we will do in light of our practical reasons. Precisely not—since matters are up to us, not them. How, then, could they trust us if they share our evidence?

Here I propose to extend Strawson’s insights into epistemology. We normally don’t encounter another person as an object of prediction—or, to quote Strawson, “as an object of social policy; as a subject for what, in a wide range of sense, might be called treatment; as something certainly to be taken account, perhaps precautionary account, of; to be managed or handled or cured or trained; perhaps simply to be avoided.” In particular, when someone promises us something, we normally don’t encounter her promise as a piece of evidence in light of which we settle the question of what she is likely to do. Rather, we normally regard her promise as an occasion to trust or distrust her. And if we do trust the other, then we take her at her word—we believe her—rather than settle the question of what will happen in light of our evidence.

My aim in the last chapter of Evidence and Agency is to argue that just as an agent’s view of what she will do is grounded in her practical reasons, another person’s view of what an agent will do could be grounded in interpersonal reasons—paradigmatically, the agent’s word. When this is the case, the recipient of the promise will believe the person making the promise—and so her belief won’t rest on evidence. (If it rested on evidence, there would be no need to believe the other; the facts would speak for themselves.) This is why someone who is close to the person making the promise can have a view of what she will do that is distinct from that of an impartial observer. In this sense, she is doxastically partial to the person making the promise.

Of course, trust is not always justified, and not everyone’s word about everything will always constitute a good interpersonal reason. We have to take into account other people’s trustworthiness, just as in our own practical deliberation we have to take into account the difficulty of our possible projects. Nonetheless, our view of what others will do, when they make promises to us, can, at least in the good case, be grounded in reasons that are fundamentally different from evidence.

3:AM:Why is it important to establish the rationality of committing to do difficult things – is this a position that contemporary philosophy has had trouble with?

BM:It is important, because it is a problem of life: our commitments to do difficult things are of central importance to us. Much of the fabric of our relationships rests on commitments that we have made to others and that they have made to us. The fabric of society rests on implicit mutual commitments and implicit mutual trust. It is a tremendous problem if commitments to do difficult things are condemned as irrational. Being faithful, being resolute, and being just are often difficult. I think it is of utmost importance to vindicate the thought that we can rationally commit to do difficult things and trust others to follow through on their commitments.

I do think that contemporary philosophy has particular trouble with this. That is because it seems eminently reasonable to form our view of everything in light of our evidence. By extension, it seems eminently reasonable to form our view of what we will do, and of what others will do, in light of our evidence. But this, I think, leaves out the distinct perspective we enjoy as agents and as participants in relationships.

3:AM:Why is it also important to kick against the view that evidential and other epistemic considerations solely determine what it is rational to do?

BM:I think it is important, because what it is to be an agent is to act in light of what we value, or see as good, or desire—and not in light of our evidence about what we will do. This is the epistemological import of the principle that we act under the idea of freedom.

3:AM:And does this view support the idea that we distort notions of agency if we look at ourselves and others as impartial scientific observers?

BM:Yes; it is of a piece with it. I think that a fully impartial, scientific view of ourselves is a view in which only the evidence matters. If we adopt such a view, our promises and decisions are merely predictors of our behavior, and others’ decisions and promises are merely predictors of theirs. This is not entirely inaccurate: Decisions and promises are predictors of behavior. But it is distorting: My decision is, for me, not a piece of evidence in light of which I settle what I will do. My decision is part of my view of what will happen: my decision embodies my take on my future, insofar as that future is up to me. Similarly, your promise is, for me, not a piece of evidence in light of which I settle what you will do. Your promise is an occasion for me to trust you.

3:AM:You see all this work as a defense of commitment and paradigmatically, the resistance to temptation. It seems clear that there is a strong ethical and existentialist approach in this – Sartre being a key figure makes this kind of obvious. But I will ask – are you an existentialist – and do you think existentialism and Sartre still have many legs for contemporary philosophers? And is this position then one that is incompatible with a fully naturalistic view?

BM:I suppose I am an existentialist of sorts: I hold that it is our privilege as agents to settle questions about our future in light of our practical reasons, not our evidence. This is the privilege of freedom. It is at odds with a fully naturalistic view of ourselves.

Perhaps I am a domesticated existentialist—an existentialist raised on analytic philosophy. Unlike Sartre and other existentialists, I like to talk about reasons, and I reject the theory of radical choice. Also, I have a more favorable view of other people and a more optimistic view of love.

3:AM:And finally, are there five [six?] books that you could recommend to readers here at 3:AM that would take us further into your philosophical world?

BM:

My favorite book of philosophy is Richard Moran’s Authority and Estrangement. It is the inspiration for Evidence and Agency. It also opened my philosophical world to Sartre’s Being and Nothingness and Anscombe’s Intention.

I learned a great deal from Michael Bratman’s Intention, Plans and Practical Reason and Richard Holton’s Willing, Wanting, Waiting.

Tim Williamson’s Knowledge and its Limits was very important in my philosophical education and is a book I admire.

Sebastian Rödl’s Self-Consciousness fills me with hope about how much more there is to discover.

And Pamela Hieronymi’s Minds that Matter—still in progress—has given me the central concepts with which I think.

I must end by mentioning my very favorite philosophical article, which I tell everyone to read—Strawson’s “Freedom and Resentment.” If you haven’t read it yet, do it now!

ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.

Buy his book here to keep him biding!