Interview by Richard Marshall.

'The question of irrelevant influences on belief has been an obsession of mine since I was in high school and is one that I am still wrestling with in my current work. Here’s the basic problem (the first formulation of it that I’m aware of is in Al Ghazali’s “The Deliverance from Error,” written in the early twelfth century). We see that people who grow up with liberals tend to be liberal, people who grow up amongst Christians tend to be Christian, people who get PhDs tend to take up the general intellectual viewpoints of their advisors, and so on. It would be hubristic to suppose that our own beliefs weren’t subject to these kinds of social influences. So if I know, for example, that I have the political commitments that I do because I was raised or educated in a certain environment, and that, had I been raised or educated elsewhere, I would hold the opposite views, should that, in and of itself, lead me to radically doubt my position?'

'There had better be a close connection between what we conclude about what’s rational to believe, and what we expect to be true. But it turns out to be very tricky to say what the nature of this connection is! For example, we know that sometimes evidence can mislead us, and so rational beliefs can be false. This means that there’s no guarantee that rational beliefs will be true.'

'If you calculate a tip when you’re drunk, you should think of yourself as taking a belief-gamble – you’re forming a belief in a way that gives you a 50% shot at getting things right and a 50% shot at getting things wrong. This is the equivalent of guessing. Since the way in which we care about the accuracy of our beliefs prohibits guessing, this way of caring about accuracy also prohibits forming beliefs while drunk. A really interesting question is why we don’t like gambling on our beliefs, when we’re happy gambling on all sorts of other things.'

Miriam Schoenfield is an assistant professor in the Department of Philosophy at The University of Texas at Austin. Between 2015 and 2017 she’ll be doing a Bersoff Fellowship at New York University. Her primary research interests are in epistemology but she also has interests in ethics and normativity more broadly. Here she discusses the rationality of rationality, permissivism, irrelevant influences, the connection between rationality and accuracy, higher-order evidence, imprecise credences, calibrationism, internalism, decision theory and parity, and ontic vagueness. Go figure...

3:AM: What made you become a philosopher?

Miriam Schoenfield: When I arrived at college I felt that the best way to figure out what the world was all about was to study a subject matter that wouldn’t be subject to doubt – and mathematics, I thought (erroneously, I now believe), fit the bill. I did take some philosophy classes though. I had a conversation with the professor who taught one of my first classes – Palle Yourgrau – and I told him why I thought math was great and philosophy was suspect. He pointed out that there might be a trade-off between truths that were important and interesting on the one hand, and truths that one can be certain about, on the other. This conversation had a significant impact on my thinking, and by the end of college, I felt that philosophy offered me the opportunity to apply the analytical tools that I was so enchanted by to much broader questions, and that excited me.

3:AM: One of the areas you’re working in is epistemic normativity. How rational is our rationality, and how can we assess it without falling into the dilemma of using rationality to assess itself?

MS: This is a difficult question. But I like to think of things this way: we find ourselves in the world, trying to make sense of it. We regard some ways of responding to evidence as better than others. For example, we generally think that it’s irrational to believe that a coin will land Heads if the only information you have is that the coin is fair. That’s a simple case, but some cases are much more complicated. For example, does the order we see in the universe provide a good reason to believe that the universe was designed? This is subject to much debate. I like to think of the study of normative epistemology as just the study of how to think: how to figure out what the world is like given the information we’re presented with. Now, of course, one thing we might do is question our ability to do this properly – maybe the conclusions we reach will take us further from the truth rather than closer to it. This is a possibility, and how seriously to take this possibility is itself a question in normative epistemology. But I see us as faced with a choice: to inquire or not. If we choose to inquire, then we have no choice but to start somewhere. And so any inquiry, I believe, requires some degree of self-trust.

3:AM: I guess it doesn’t sound bad to say there’s more than one way to rationally respond to evidence if we think evidence is partial, for example, but it seems less obviously right if parties have all the salient facts from the evidence. Is permissivism true even when agents are omniscient?

MS: If by “omniscient” you mean agents know the truth about everything, then it is indeed hard to make sense of the idea that there are multiple rational ways for such agents to respond to evidence. After all, these agents won’t really be doing any responding to evidence – they already are in possession of all of the evidence and have already come to know the truth about everything! The claim that there is more than one rational way to respond to evidence is most directly applicable to cases in which two agents possess the same evidence and are trying to figure out something they are currently uncertain about. My view is that they might reach different conclusions on the basis of the same information, but both be rational.

3:AM: Your argument looks at a couple of principles that make irrelevant influences troublesome. What are these and why do they cause problems for beliefs?

MS: The question of irrelevant influences on belief has been an obsession of mine since I was in high school and is one that I am still wrestling with in my current work. Here’s the basic problem (the first formulation of it that I’m aware of is in Al Ghazali’s “The Deliverance from Error,” written in the early twelfth century). We see that people who grow up with liberals tend to be liberal, people who grow up amongst Christians tend to be Christian, people who get PhDs tend to take up the general intellectual viewpoints of their advisors, and so on. It would be hubristic to suppose that our own beliefs weren’t subject to these kinds of social influences. So if I know, for example, that I have the political commitments that I do because I was raised or educated in a certain environment, and that, had I been raised or educated elsewhere, I would hold the opposite views, should that, in and of itself, lead me to radically doubt my position? When the question is posed this way a lot of people find a “yes” answer intuitive. At the same time, the beliefs in question (religious, political, philosophical) are some of the beliefs people feel most strongly about. They inform and structure our lives in important ways and very few people are inclined to actually abandon them when they realize that they hold the beliefs they do because of some series of chancy events that led them to live here or go to school there. So my question is: how do we confront the fact that our beliefs were influenced in this way? Is there a way to maintain these beliefs that are so important to us with integrity and in a way that is intellectually honest?

3:AM: How do you argue against these principles and what role does permissivism play in this?

MS: These are issues I’m in the process of working on right now, so I don’t have a fully developed view on the matter, but here’s how I’ve been thinking about things. Suppose that with respect to some of these important questions, there is more than one rational response one might adopt. Perhaps, for example, given the evidence we share, it’s rational to be a theist (a believer in God), an atheist, or an agnostic. Now imagine that you were all alone, working through the evidence in pristine conditions – no suspicious influences are at play. If, given your evidence, you could rationally be a theist, an atheist, or an agnostic, then, in these pristine conditions, what will determine which position you end up with? What I have suggested in some of my work is this: since rationality won’t tell you which of these to be, some arational factor will lead you to adopt one position rather than the other. Maybe it will be a matter of pure chance – a matter of which way the neurons happened to be bumping around in your head. Who knows? The point is, in such cases, it will inevitably be, in some sense, arbitrary which position you end up with since, by stipulation, in these cases, rationality won’t help you decide. Now, if, absent any social influences, it would still be arbitrary which position you end up with, it shouldn’t matter, I’ve claimed, that, as things actually are, your position was brought about by various influences like your upbringing, your education, and so on. True – the fact that your beliefs were caused in this way means that, in a certain sense, it was arbitrary which position you ended up adopting, but if the case permits multiple rational responses, the thought is, this sort of arbitrariness is unavoidable – there is no escape from it.

3:AM: Near to these concerns is the connection between rationality and accuracy. You present two alternatives for understanding the connection: one involving the assumption of rational omniscience, and one that doesn’t. Can you sketch out what you argue here and again suggest why non-philosophers ought to pay heed?

MS: As I mentioned earlier, I think that the point of the study of rationality, and of normative epistemology more generally, is to help us figure out how to inquire, and the aim of inquiry, I believe, is to get at the truth. This means that there had better be a close connection between what we conclude about what’s rational to believe, and what we expect to be true. But it turns out to be very tricky to say what the nature of this connection is! For example, we know that sometimes evidence can mislead us, and so rational beliefs can be false. This means that there’s no guarantee that rational beliefs will be true. The goal of the paper is to get clear about why, and to what extent, it nonetheless makes sense to expect that rational beliefs will be more accurate than irrational ones. One reason this should be of interest to non-philosophers is that if it turns out that there isn’t some close connection between rationality and truth, then we should be much less critical of people with irrational beliefs. They may reasonably say: “Sure, my belief is irrational – but I care about the truth, and since my irrational belief is true, I won’t abandon it!” It seems like there’s something wrong with this stance, but to justify why it’s wrong, we need to get clear on the connection between a judgment about a belief’s rationality and a judgment about its truth. The account I give is difficult to summarize in just a few sentences, but I can say this much: what we say about the connection between what’s rational and what’s true will depend on whether we think it’s rational to doubt our own rationality. If it can be rational to doubt our own rationality (which I think is plausible), then the connection between rationality and truth is, in a sense, surprisingly tenuous.

3:AM: What do you mean by ‘higher-order evidence’ here?

MS: I understand higher-order evidence as evidence that bears on our own ability to evaluate evidence. For example, learning that I have implicit racial or gender biases is higher-order evidence: it tells me that I won’t be very good at evaluating evidence when race or gender is involved. Learning that I’ve been awake for a long time and that I tend to make mistakes when I’m sleep-deprived is another example of higher-order evidence.

3:AM: Linked to the issue of accuracy, you’ve examined what you call ‘imprecise credences’. What are these and why should accuracy-centred epistemologists reject them?

MS: Sometimes when asked how confident we are in something we use a number. For example, I might say, based on the weather report, “I’m 90% confident that it will rain today.” Or, if I’m in a 100-person fair lottery, my confidence level might be 99% that I’ll lose. But sometimes, it doesn’t seem like there are particular numbers that can represent how confident you are. Suppose someone asked you how confident you are that the woman sitting next to you on the bus is named “Sarah.” Some people think it’s implausible that you’re, say, exactly 4.3% confident. Rather, they think that your confidence is, and should be, represented as a range. Maybe you’re somewhere between 1% and 10% confident that her name is Sarah. These ranges of probabilities are called imprecise credences. What I’ve argued is that if you’re concerned with having credences that accurately represent the world, you shouldn’t think it’s better to have imprecise credences than to have a precise credence. Intuitively, the thought is this: it’s hard to make sense of the idea that you’re closer to getting things right if your confidence is represented by a range, such as 40%-60%, than if you are exactly at the midpoint of that range (in this case 50% confident).
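To make the accuracy comparison concrete, here is a minimal sketch. It scores credences with a Brier-style (squared-distance) measure of inaccuracy; that choice of measure is an assumption of the illustration, not something fixed by the discussion above.

```python
# Compare the inaccuracy of the precise midpoint credence with the
# credences at the edges of an imprecise range, under both possible truths.

def brier_inaccuracy(credence: float, truth: int) -> float:
    """Squared distance from the truth (0 = false, 1 = true)."""
    return (credence - truth) ** 2

low, high = 0.40, 0.60           # the imprecise range 40%-60%
midpoint = (low + high) / 2      # the precise alternative: 50%

for truth in (0, 1):
    mid_loss = brier_inaccuracy(midpoint, truth)
    low_loss = brier_inaccuracy(low, truth)
    high_loss = brier_inaccuracy(high, truth)
    print(f"truth={truth}: midpoint={mid_loss:.2f}, "
          f"low end={low_loss:.2f}, high end={high_loss:.2f}")

# Whichever way the world turns out, no point in the range beats the
# midpoint in both cases at once: 0.40 does better if the claim is false
# but worse if it is true, and 0.60 the reverse.
```

On this way of measuring accuracy, spreading your opinion over the range never puts you in a better position, accuracy-wise, than simply sitting at the midpoint, which is the intuition the argument trades on.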

3:AM: Calibrationism involves a view about higher-order evidence whereby credences are calibrated to one’s expected degree of reliability. Is that right? Could you first sketch out what this means and give some examples of what is going on and what makes the position attractive?

MS: Here’s an example: Suppose you’re told that when you try calculating tips at restaurants while you’re extremely drunk, you only calculate correctly half of the time. Calibrationism says: given this knowledge, if you find yourself extremely drunk at a restaurant and you calculate the tip, you should only be 50% confident that you got it right (even if in fact you did get it right). A lot of people find this judgment obvious and the intuitive plausibility of these sorts of judgments is part of what makes the view attractive. I actually think it’s surprisingly difficult to motivate these judgments from a theoretical perspective. But the reason I think these judgments are correct is roughly this: the way in which we care about the accuracy of our beliefs prohibits taking what I call “belief gambles.” That is, usually, we’d rather stick to thinking, say, that there’s a 50/50 chance of a coin landing heads, than just forming a belief by guessing that it will land heads. If you calculate a tip when you’re drunk, you should think of yourself as taking a belief-gamble – you’re forming a belief in a way that gives you a 50% shot at getting things right and a 50% shot at getting things wrong. This is the equivalent of guessing. Since the way in which we care about the accuracy of our beliefs prohibits guessing, this way of caring about accuracy also prohibits forming beliefs while drunk. A really interesting question is why we don’t like gambling on our beliefs, when we’re happy gambling on all sorts of other things. I think there are interesting things to say about this question, and some of my current work addresses this issue directly.
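The drunk-tip case can be put in expected-accuracy terms. The sketch below again uses a Brier-style loss, which is an illustrative assumption; the point is just that, when you know you are only 50% reliable, fully trusting your own calculation (the belief gamble) does worse in expectation than calibrating your confidence to 50%.

```python
# Expected inaccuracy of two policies when your drunken tip calculations
# are only right half the time.

RELIABILITY = 0.5  # chance the calculation came out correct

def brier_loss(credence: float, correct: bool) -> float:
    """Squared distance between your confidence and the truth."""
    truth = 1.0 if correct else 0.0
    return (credence - truth) ** 2

def expected_loss(confidence_in_answer: float) -> float:
    """Average the loss over the two ways the calculation could have gone."""
    return (RELIABILITY * brier_loss(confidence_in_answer, True)
            + (1 - RELIABILITY) * brier_loss(confidence_in_answer, False))

print(expected_loss(1.0))   # belief gamble: fully trust the answer -> 0.5
print(expected_loss(0.5))   # calibrate to expected reliability     -> 0.25
```

With a proper scoring rule like this one, matching your confidence to your expected reliability is what minimizes expected inaccuracy, which is one way of cashing out why guessing looks bad to someone who cares about accuracy.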

3:AM: What dilemmas does calibrationism face, and how do you propose to solve them?

MS: Here’s the problem: Calibrationism says that if you expect to do pretty poorly you shouldn’t be very confident in your conclusions. But it also says that if you expect to do pretty well, you should be pretty confident in your conclusions. Intuitively, though, there are many cases in which we expect we’ll do pretty well – we’re generally pretty reliable – but in some particular case we reason poorly. We tend to think that, even if we expect to do well, if we reach a conclusion on the basis of poor reasoning, our belief is irrational. But calibrationism seems to say otherwise: that it’s perfectly rational to believe whatever crazy thing you concluded so long as you had good reason to think that, in general, you’re pretty reliable in that domain. Another way of putting the point is that calibrationism seems to make being rational “too easy.” Pretty much the only thing you have to do to be rational is match your confidence in your conclusions to how reliable you expected yourself to be.

How to solve the problem? I’ll tell you what I think now (which isn’t quite what I thought when I wrote the paper). These days, I’m not that interested in judging who is and isn’t rational from a third-personal perspective. I’m more interested in thinking about what kind of advice is helpful for people who are trying to figure out what to think about the world when their goal is to end up with an accurate portrayal of how things are. The advice that says “match your degree of confidence to your expected degree of reliability” is good advice for those trying to achieve accuracy. The thought that certain beliefs formed by following this advice are, intuitively, irrational is only problematic if we think that the best advice should coincide with our intuitions about how to evaluate the rationality of beliefs from a third-personal perspective. I don’t think that these should coincide, but this is a substantive claim that requires significant argumentation.

3:AM: Internalists have claimed that they have internal states to which they have a special kind of epistemic access. First of all, can you sketch out why internalists think it so important that the access they have is special, and what empirical evidence and philosophical arguments have made their claim for this luminosity dubious?

MS: Internalists think that it’s easy to be wrong about some things, but very difficult, or even impossible, to be wrong about others. I might be wrong in thinking that there are three people outside of my window, but could I be wrong in thinking that it appears to me as if there are three people outside my window? I might be wrong in thinking that I broke my leg, but could I be wrong in thinking that my leg hurts? Many internalists think that certain facts about our own mental life are not facts that we can easily get wrong. The reason that this is important is that there is a question about the extent to which our knowledge can be based on a completely solid foundation of beliefs: beliefs that it would make no sense to doubt. If there are certain beliefs we have that it doesn’t make sense to doubt, as these internalists think there are, then beliefs about our own mental states can serve as the foundation for all of our other beliefs. However, in recent years, many people have questioned whether we really are so good at knowing what is going on in our own mental life. Certainly there are some mental states that people think we might be wrong about: Maybe I’m angry at my friend even though I don’t know that I am. Arguably, most people in the United States harbor subconscious racist and sexist attitudes that they are not aware of. Now, internalists needn’t say that all mental states are easy to know about: it’s enough that a certain class of such states are ones that we have special access to. But some empirical work has suggested that even states like how things look to me, or whether I’m in pain, are things I can be wrong about. The philosophical arguments against the internalist claims about special access are somewhat complex so I won’t get into those here. I’ll just say that the issue remains controversial.

3:AM: Why do you think the privileged access claim is not required by the internalist? Does this put paid to the private language argument of Wittgenstein?

MS: Different people define internalism differently, but I take the core insight of internalism to be this: what it makes sense for a person to think in a given situation is determined by what’s going on in her mental life, and not by what’s going on out there in the world. For example: suppose I walk into a room and the wall appears to be red. Under normal circumstances, it makes sense for me to believe that the wall is red. But suppose that, in fact, the wall is white, and some trick of lighting, or some malfunction of my visual system, is making it appear red (though I have no reason to suspect this is so). The internalist thought is that, even though the wall isn’t red, I’m reasonable in believing that it is, and that this is because what it’s reasonable for me to believe is determined ultimately by facts about my mental life. A traditional way of justifying these claims about what’s reasonable to believe appeals to the idea that we have special access to facts about our own mental life. But I’ve argued that there is a different explanation available that doesn’t rely on the thought that we have special access. Rather, I claim, what’s important about these mental states is that they play an important causal role in determining what beliefs we have.

Here is an example to illustrate: suppose the revolutionaries tell Paul Revere: “Light one lantern if the British are coming by land and two if by sea.” The best we can expect to happen is that Revere will light one lantern if it appears to him that the British are coming by land and two if it appears to him that the British are coming by sea. If the British are coming by land, but they create a decoy to make it appear that they are coming by sea, then what we can expect Revere to do, if he takes the rule seriously, is to light two lanterns – not one. What this demonstrates is that the way we go about following some principle like “one if by land, two if by sea” is that we use our mental representations of how things are to determine what we will do. I argue in the paper that this fact about our mental states, and not whatever privileged access we have to these states, is what ultimately explains why they play an important role in epistemology. Even if we adopt some principle like “I’ll believe the wall is red whenever it is red,” the best we can expect to do is believe the wall is red when it appears to us that it is red.

3:AM: You’ve argued that there’s a problem if we adopt a decision theory and accept what Ruth Chang has called parity. What’s the problem and does that mean that decision theory is in trouble?

MS: Cases of parity are cases in which option A is no better than option B, option B is no better than option A, but A and B are not equally good. How can this be? Imagine a case where you are lucky enough to choose between a vacation in Hawaii or a vacation in South Africa. You might regard both of these options as excellent and neither as better than the other. But suppose you found out that the hotel you would stay at in Hawaii offers free coffee. That does make the Hawaii option seem ever so slightly better than you initially thought, but does it break the tie? Do you now definitively prefer Hawaii over South Africa? For many people, in cases like this, the answer is no. In cases in which small improvements in one option don’t break a tie we say that the two options are on a par. The problem that arises with decision theory is as follows: standard decision theory works by allowing us to represent the value of different options using real numbers. However, numbers have the following property: if number a is not greater than number b and number b is not greater than number a, then number a and number b are equal. But when number a and number b are equal, even the teeniest addition to number a will break a tie. What this shows is that if we regard options as on a par we can’t represent the value of the options by numbers, and if we can’t do that, standard decision theory breaks down. Some alternatives have been proposed to deal with cases in which options are on a par, but I argue that most of the options for dealing with parity that are currently on the table face a serious problem: they sometimes tell us that we ought to choose an option which we are certain is no better than some alternative. I claim that this is an unacceptable feature of a decision theory and so we need a radically different way of thinking about decision theory to account for cases of parity.
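The clash with real-valued representations can be seen in a couple of lines; the utility numbers below are made up purely for illustration.

```python
# Why a totally ordered scale (the reals) can't represent parity.
# The utility numbers here are invented for the example.

hawaii = 7.0
south_africa = 7.0   # "neither option is better than the other"...

assert not hawaii > south_africa
assert not south_africa > hawaii
assert hawaii == south_africa          # ...forces exact equality of the numbers

# But equality means any tiny sweetener (the free coffee) breaks the tie,
# which is just what parity denies.
hawaii_with_coffee = hawaii + 0.001
assert hawaii_with_coffee > south_africa
print("real-valued utilities turn parity into exact equality")
```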

3:AM: You’ve also jumped into the vast literature on vagueness and argued for a surprising version. Usually the literature is all about semantic vagueness but you defend the thesis that if robust moral realism is right then moral vagueness is not semantic but ontic. What’s the reason for this – and do you think this shows that robust moral realism is unlikely, or are you more comfortable than many philosophers with the idea of ontic vagueness?

MS: Vagueness is frequently thought to be a feature of language. The reason there’s not a sharp cutoff between red and orange, the thought goes, is that we, speakers of English, haven’t bothered to delineate exactly when it’s okay to apply the word “red” and when it’s okay to apply the word “orange.” But what about vagueness in, say, the permissibility of an abortion? The main thought of the paper is that if there are real and objective moral facts, then it doesn’t make sense to claim that the reason there is vagueness with regards to the permissibility of abortion is that we, speakers of English, haven’t bothered to delineate exactly when it’s okay to apply the word “permissible” and when it isn’t. If we thought that the source of vagueness in the abortion case was linguistic, we’d be committing ourselves to the view that learning facts about language use could settle complex moral questions. The moral realist is going to find this unacceptable, since she thinks there are moral facts that obtain independently of what we think or how we talk. This means that if moral vagueness exists, and moral realism is true, then the vagueness doesn’t arise because of the way we use words, but because of some feature of morality itself. In other words, moral vagueness is a species of ontic vagueness: vagueness in reality itself, and not just in our descriptions of reality. I suppose I am more comfortable than most philosophers with ontic vagueness. I think some people find the idea that there is vagueness in how things really are kind of spooky, but I don’t. So no, I don’t personally take these considerations to be an argument against moral realism. But the main aim of the paper is to illuminate connections between moral realism and moral vagueness, so I’m happy for the arguments to run in either direction.

3:AM: And finally, are there five books that you could recommend to the readers here at 3:AM that will take us further into your philosophical world?

MS: I must confess that I don’t read many philosophy books. Most of the philosophy that I read these days is in article form.

In the book department, I love to read fiction: especially short fiction and anything by George Eliot (who I regard as a very philosophical writer).

But of the philosophical classics, I love Plato’s Five Dialogues, especially the Euthyphro.

Descartes’ Meditations are also on my favorites list, and indeed, at the moment, I’m working on a philosophical piece that takes a similar strategy to the one taken in the Meditations.

The first philosophy book I ever read was Sophie’s World, which is a kind of philosophical novel. My high school math teacher, Ms. Milton, lent it to me, and I’ll always be grateful to her for opening my eyes to the world of philosophy. Indeed, if I may, I’d like to take this opportunity to express my gratitude to all of the incredible teachers that I have had throughout the years for inspiring and supporting me in my philosophical adventures.

ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.

Buy his book here to keep him biding!