The Epistemology of Rational Candy

'It is useful to think of the theory of rational action as saying that the rationality of an action is a function of two things: what evidence you have and what your preferences are. Thus, if you prefer excitement to boredom, and according to your evidence option A promises more excitement than any of your other options, then it is rational for you to choose A. Psychologism, Factualism and my own Experientialism are different theories of evidence, and so when you plug them into that framework they may well give different answers to what the rational action is in any given case.'

'You are right that my own Experientialism has the consequence that we can have false evidence. And you are also right that it may sound strange initially. But I think that further reflection shows that it is not as strange as it sounds. It is important to be clear on what we want of a theory of evidence. One desideratum would be to stick as close as possible to rescuing every ordinary use of “evidence”. That is not my aim—and it is not the aim, really, of anyone else.'

'My Evidentialist Reliabilism (again, roughly) had it that a belief is justified if and only if it fits the evidence the subject has, and that a body of evidence fits some belief if and only if the process which consists in believing that proposition on the basis of that evidence is a reliable one.'

Juan Comesaña is an expert in epistemology. Here he discusses rational action, psychologism, factualism, Experientialism, Bayesianism, knowledge based decision theory, why some false beliefs are justified, justification and rationality, the distinction between evidence and evidence-providers, Chisholm’s puzzle of contrary-to-duty obligations, conditional obligations, the problem of irrational beliefs, easy rationality, Evidentialism and Reliabilism, why skepticism matters, skepticism about induction, Cartesian skepticism and Pyrrhonian skepticism, and whether we can have the knowledge the skeptic denies.

3:16: What made you become a philosopher?

Juan Comesaña: Well, both of my parents are philosophers themselves. I guess that could have gone either way, but instead of rebelling against it I embraced it. The more internal answer is that I really enjoyed thinking about Zeno’s paradoxes when my dad told me about them, and also doing the exercises in one of Copi’s introductory logic books. All of this was when I was a young boy. Gradually I became aware that, with luck, you could do that for a living, and I thought I would give it a shot. I did my undergraduate degree in Argentina, where you study a single subject for six or more years. There I had excellent teachers in Logic, Philosophy of Science and Philosophy of Language. I also met Carolina Sartorio there, who was to become my wife, and together we thought it would be a good idea to come to the United States to do a PhD somewhere. Argentina’s economy cycles between disastrous and merely very bad, and when we graduated in the early 2000s (my wife from MIT, me from Brown) Argentina was in a disaster stage, so we thought we would try our luck in the American job market. We took jobs at the University of Wisconsin–Madison; after six years we moved to the University of Arizona in Tucson, and we are now moving to Rutgers.

3:16: You’re an epistemologist interested in what it means to be rational and to be right. In your new book you look at a guy who thinks he’s accepting candy being offered to him, but it turns out he’s wrong and that it’s a marble. Two prevalent views judge his action to be irrational: psychologism on the one hand and factualism on the other. So taking these in turn: first, what does psychologism say constitutes rationality, why does it say the guy is being irrational, and why don’t you agree with this position?

JC: It is useful to think of the theory of rational action as saying that the rationality of an action is a function of two things: what evidence you have and what your preferences are. Thus, if you prefer excitement to boredom, and according to your evidence option A promises more excitement than any of your other options, then it is rational for you to choose A. Psychologism, Factualism and my own Experientialism are different theories of evidence, and so when you plug them into that framework they may well give different answers to what the rational action is in any given case. One of the overarching arguments in my book takes certain verdicts about practical cases as given, and argues that only Experientialism can deliver those verdicts. The cases in question involve unknown deception. Thus, Lucas gives Tomás something that looks like candy (but is a marble) under apparently normal circumstances. I take it as a given that it is rational for Tomás to act as if he is being given candy without even considering the possibility that he is not, and I argue that only Experientialism can deliver that verdict.

According to Psychologism, your evidence is constituted by a subset of your mental states. In the case of Lucas and Tomás, Tomás’ relevant evidence is his experience as of being offered candy. This experience is compatible with his not being offered candy—indeed, this is exactly what happens in the case. Therefore, according to Psychologism, Tomás’ evidence does not rule out the possibility that what he is being offered is not candy, and so he should take that possibility into account in deliberating about what to do. But we never really do take those possibilities into account: unless there is something abnormal about the situation we face (unless, in epistemologist’s parlance, we are aware of some defeater for our experience) we take our experience at face value without even considering the possibility that it is misleading. We can also tinker with the case so that Tomás’ preferences are different conditional on his being offered candy and conditional on his not being offered candy, in which case Psychologism will also entail that the rational action is not simply to take what he is being offered. So Psychologism fails in two ways for these cases: it gives the wrong verdict and it offers an unrecognizable picture of deliberation.

3:16: Factualism is the alternative view and again you don’t think it gets rationality right do you? So what does this position claim and why are you unconvinced?

JC: According to Factualism, your evidence is constituted by the propositions you know. Given that what he is being offered is not candy, Tomás does not know that he is being offered candy, and so Factualism agrees with Psychologism that it is compatible with Tomás’ evidence that he is not being offered candy. So Factualism has the same (wrong) answer regarding these types of cases as does Psychologism: it entails that Tomás should take into account the possibility that he is not being offered candy, and depending on the details it also entails that Tomás should not behave as if he is being offered candy.

3:16: You propose an alternative approach to show that the guy is rational. This is a position you call Experientialism. Objective decision theory and factualism are involved in this approach, aren’t they, so could you sketch for us what these are and how you use them?

JC: According to Experientialism, your evidence is constituted by the content of your undefeated experiences. In Tomás’ case, for example, his evidence is that he is being offered candy. That is why, according to Experientialism, what Tomás in fact does (and what we would all do in his shoes), namely, to act as if he is being offered candy without even considering the possibility that he is not, is the rational thing to do. I take this to be a big win for Experientialism. Both Psychologism and Factualism would have us behave as paranoid conspiracy theorists in certain cases, whereas Experientialism would have us behave as we normally do.

Objective Bayesianism, which I think is what you are referring to in your question, is a version of Bayesianism. According to Bayesianism, your degrees of confidence (credence) at any given time should be probabilistically coherent, and when acquiring new evidence you should update your credence by “conditionalizing” on that newly acquired evidence (i.e., your new credence in any proposition P should be equal to your old conditional credence in P given your evidence). Now, how you react to evidence is therefore a function of your prior conditional credences. Are any priors acceptable? This is where subjective and objective versions of Bayesianism differ: according to subjective Bayesianism, any prior is fine as long as it is probabilistically coherent, and according to objective Bayesianism there are additional constraints on acceptable priors.
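To make the updating rule concrete, here is the standard conditionalization schema in probabilistic notation; this is a generic sketch of the Bayesian machinery just described, not a formalism specific to Comesaña’s book. Writing Cr for the agent’s credence function and E for the newly acquired evidence:

```latex
\[
  \mathrm{Cr}_{\mathrm{new}}(P) \;=\; \mathrm{Cr}_{\mathrm{old}}(P \mid E)
  \;=\; \frac{\mathrm{Cr}_{\mathrm{old}}(P \wedge E)}{\mathrm{Cr}_{\mathrm{old}}(E)},
  \qquad \mathrm{Cr}_{\mathrm{old}}(E) > 0.
\]
```

Subjective and objective Bayesians then disagree only over how much freedom there is in choosing the prior credence function.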

I am not a Bayesian, of either kind, but if I were I would be an objective Bayesian. Indeed, I would be an extreme objective Bayesian—I would think that there is exactly one acceptable prior. This sounds nuts, of course, but the reason it sounds nuts is not that it is a form of objective Bayesianism, but that it is a form of Bayesianism (for instance, one of the reasons it sounds nuts is that it holds that there is all the difference in the world between the objectively correct prior and one that differs from it in arbitrarily small ways—but this is just a form of the false precision objection to Bayesianism in general). Subjective Bayesianism is really a form of epistemic relativism, and one of the surprising differences between formal and normal epistemology (“normal” is Jim Van Cleve’s term for the opposite of “formal”) is that whereas relativism is a minority position in normal epistemology, it is (or at least it was until recently) the received position in formal epistemology. I think there is no good reason for this difference, and that epistemic relativism is just as suspect in formal as in normal epistemology. So I frame the dispute between Psychologism, Factualism and Experientialism in terms of objective Bayesianism—for what any form of Bayesianism is missing is a theory of evidence, and Psychologism, Factualism and Experientialism can be slotted in as the missing theory of evidence in an objective Bayesian framework.

3:16: What’s ‘knowledge-based decision theory’ and why doesn’t it work? You argue that not all evidence is knowledge, which is unusual isn’t it – I thought evidence that isn’t true just isn’t evidence – is that wrong, and is this what ‘Fumerton’s thesis’ sets out to show?

JC: Knowledge-based decision theory (the name I take from Julien Dutant) is the standard Bayesian decision theory (where the rational action is the one that maximizes expected utility relative to your credences and your utility function) with two additional constraints: an objective form of Bayesianism, where (as discussed above) not any old prior would count as rational, and Factualism as the theory of evidence. Fumerton’s thesis is the claim that practical rationality requires epistemic rationality: that what it is rational for you to do depends not on your actual doxastic attitudes, but on which doxastic attitudes it would be rational for you to have. Given that what evidence you have is important for deciding which doxastic attitudes you are justified in having, Fumerton’s thesis entails that we should add a theory of evidence to standard Bayesian decision theory. So adding Factualism to standard Bayesian decision theory is just a formal way of seeing what verdicts Factualism would deliver for different cases. And it doesn’t work because, for instance, it delivers the verdict that Tomás has to consider the possibility that he is not being offered candy, and depending on the details it delivers the verdict that the rational action is for Tomás to not take what he is being offered. As I said before, knowledge-based decision theory would make Tomás behave as a paranoid conspiracy theorist in those cases.
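For readers who want the decision-theoretic scaffolding spelled out, the expected-utility rule that all of these views share can be sketched as follows (an illustrative schema in standard notation, not the book’s own formalism); the rival theories of evidence then disagree over what the body of evidence E is:

```latex
\[
  \mathrm{EU}(a) \;=\; \sum_{s} \mathrm{Cr}(s \mid E)\, U(a, s),
  \qquad
  a^{*} \;=\; \operatorname*{arg\,max}_{a} \, \mathrm{EU}(a),
\]
```

where a ranges over the available actions, s over the relevant states, Cr is the agent’s (suitably constrained) credence function, and U the agent’s utility function. Factualism takes E to be what the agent knows; Experientialism takes E to be the content of the agent’s undefeated experiences, which is why only the latter lets Tomás simply take what he is offered.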

You are right that my own Experientialism has the consequence that we can have false evidence. And you are also right that it may sound strange initially. But I think that further reflection shows that it is not as strange as it sounds. It is important to be clear on what we want of a theory of evidence. One desideratum would be to stick as close as possible to rescuing every ordinary use of “evidence”. That is not my aim—and it is not the aim, really, of anyone else. Rather, we all want our theory of evidence to recover certain roles, roles which we think are worthy of the name “evidence”. I focus on the following role: you are rationally required to consider all the possibilities that are compatible with your evidence. Sometimes, we are not rationally required to consider some true possibility. Therefore, sometimes our evidence is incompatible with a truth, which means that sometimes our evidence is false. In joint work with Matt McGrath we further defend the possibility of false evidence, and other philosophers have defended it as well.

3:16: Why do you think that someone who has false beliefs has a justification rather than an excuse for what they do?

JC: My position is that some false beliefs are justified, not that they all are. According to a version of knowledge-first epistemology which identifies justification with knowledge (a position which goes beyond Factualism, which identifies evidence with knowledge but is compatible with false justified beliefs), no false belief is justified, although some of them are excusable. According to my position, some false beliefs are justified, some are unjustified, and some of the unjustified ones are excusable. So it’s not that I do not admit of the category of excusable unjustified belief, it’s that I differ from this kind of knowledge-first epistemologist in the extension of that category. For instance, whereas they would say that Tomás’ belief that he is being offered candy is unjustified but excusable, I would say that it is justified. But if someone has a false and unjustified belief that is due to understandable emotional interference, for instance, then it may well be excusable. There is an important difference between believing a false proposition on the basis of misleading evidence and believing a false proposition out of intense emotional distress. Someone who acts on the latter belief acts unjustifiably but perhaps excusably, but someone who acts on the former belief acts justifiably.

3:16: Doesn’t your position stick together justification and rationality in a way that doesn’t give room for a clear distinction between the two things? Is this because you don’t think there is a distinction of any weight to acknowledge?

JC: It’s true that in the book I do not distinguish between rationality and justification. Distinctions are a basic element of the philosopher’s toolkit, and I am not in principle opposed to distinguishing between justified and rational beliefs—but your question hits the mark in asking whether the distinctions we can make between justified and rational beliefs are sufficiently weighty. Some reserve the label “rational” for something like coherence. For instance, if your credences are probabilistically coherent, then they are rational. If this is what is meant by rationality, then I’m not opposed to circumscribing that class of beliefs and referring to it by some term or another, but I do think that “rational” may not be a very good terminological choice here. A coherent set of doxastic attitudes need not have much going for it, from an epistemic point of view—insane views may well be perfectly coherent. On the other hand, I hold that sometimes incoherence in this sense is perfectly justified. For instance, I do not have a solution to the sorites paradox, so I have very high credence in these three propositions: Yul Brynner was bald, Brian May is not bald, and a single hair does not make the difference between being bald and not being bald. Now, philosophers wielding a theory of vagueness would have you lower your credence in one of these propositions considerably (probably the third one). But, to put it mildly, I’m certainly not required to put more stock in any of these philosophical theories than I am in any of those propositions. So I maintain that my incoherence in that respect is perfectly justified—I want to say it is rationally required. So reserving the word “rational” for something like coherence seems to me to be a waste of a perfectly good word.
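As an illustration of the kind of incoherence at issue (a standard probabilistic fact, not an argument from the interview): if three propositions A, B and C are jointly inconsistent, as the sorites triad is given obvious background facts about numbers of hairs, then any coherent credence function Cr must satisfy

```latex
\[
  \mathrm{Cr}(\neg A) + \mathrm{Cr}(\neg B) + \mathrm{Cr}(\neg C)
  \;\ge\; \mathrm{Cr}(\neg A \vee \neg B \vee \neg C) \;=\; 1
  \quad\Longrightarrow\quad
  \mathrm{Cr}(A) + \mathrm{Cr}(B) + \mathrm{Cr}(C) \;\le\; 2,
\]
```

so having credence above, say, 0.9 in each of the three propositions is probabilistically incoherent. Comesaña’s point is that this incoherence can nevertheless be justified.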

Others appeal to the distinction between rationality and justification (or something like that distinction) as a damage-control measure. For instance, some philosophers who identify justification with knowledge say that nevertheless a belief that falls short of knowledge may be rational. Different philosophers implement this maneuver in different ways, but I am not convinced by any of them. Hence my resistance to distinguish between justification and rationality in my work.

3:16: Why is the distinction you make between how evidence justifies what it is evidence for, on the one hand, and how evidence providers like experience justify belief in the evidence, on the other, important in defending your idea that we can have false evidence and false rational beliefs and in defending Experientialism against the counter positions of psychologism and Factualism?

JC: The distinction between evidence and evidence-providers is most obvious in the case of inferentially justified beliefs, and it is related to the notorious act-content ambiguity of “belief”. What justifies my belief that my cat has allergies? Well, she’s scratching and grooming herself way more than normal. That is a perfectly good way to answer the question, and it appeals to a proposition (that my cat is scratching and grooming herself more than normal) rather than a belief of mine. Some philosophers think this is a mistake, and that what really justifies me in thinking that my cat has allergies is my belief that she is scratching and grooming herself more than normal, but I think they are the ones making the mistake. What I believe has no direct bearing on whether my cat has allergies, so information about what I believe is not by itself a justification for believing anything about my cat. Of course, my beliefs about my cat, together with facts about the reliability of my beliefs about my cat, can constitute an indirect justification for thinking that my cat has allergies. But this is a recherché justification which no normal person would appeal to. Rather, it is the fact that she is scratching, etc., that justifies me in believing that she has allergies. So, my evidence is that she is scratching, etc. Of course, that wouldn’t be my evidence if I were not justified in believing it. So that she is scratching, etc., is my evidence for thinking that she has allergies, and I have that evidence in virtue of having a justified belief that she is scratching, etc.

This distinction between evidence and evidence-providers carries over to the non-inferential case. What justifies me in believing that my cat is scratching herself? Well, this is not an inferential belief—it is not another proposition that I believe that justifies me, but rather my visual experience itself. So my visual experience right now is doing double duty: it provides me with the proposition that my cat is scratching herself as evidence for further downstream beliefs of mine (for instance, the belief that she has allergies), and it justifies my belief that my cat is scratching herself.

This distinction is accepted by Factualists—indeed, Williamson insists on it, for example. But it is blurred by Psychologists. Philosophers like Pollock, for example, insist that evidence is constituted entirely by mental states. This is an unstable position. Pollock himself, when talking about the inferential case, often slips and talks about the content of the belief as being the evidence. Although one could, of course, justify this by saying that appealing to the content is an oblique way of appealing to the belief itself, I think that the slips are revealing of the fact that it is problematic to think of the beliefs themselves as evidence, for the reasons I briefly alluded to above. Analogous reasons apply to experiences—they are evidence-providers rather than evidence. This distinction, then, is important in diagnosing some defects of Psychologism.

3:16: If I have an irrational belief what should we say about the rationality of forming further attitudes from it, and how does this link to Chisholm’s problem of ‘contrary-to-duty obligations’? And why does this lead you to conclude that in deontic contexts Modus Ponens can take you from true premises to a false conclusion, and indeed that the conclusion may be false even when the premises are known?

JC: Chisholm’s puzzle of contrary-to-duty obligations arises from consideration of sets of four propositions like the following:

I ought to finish the paper by the deadline.

If I finish the paper by the deadline, I ought not to ask for an extension.

If I don’t finish the paper by the deadline, I ought to ask for an extension.

I will not finish the paper by the deadline.

The question is, in light of these facts, what ought I to do: ask for an extension or not? On the one hand, given that I ought to finish the paper by the deadline and that I ought not to ask for an extension if I do, there is reason to say that I ought not to ask for an extension. On the other hand, given that I will not finish the paper by the deadline and that I ought to ask for an extension if I don’t, there is reason to say that I ought to ask for an extension. But they cannot both be true: it cannot be both that I ought to ask for an extension and that I ought not to ask for an extension. Hence the puzzle.

I take the best answer to this puzzle to rely on the distinction between unconditional and conditional obligations—although the terminology is not perfect. The problem with the terminology is that conditional obligations are not obligations. Rather, when you have a conditional obligation to do something A given that B obtains, all that means is that A is what you ought to do if we ignore the possibilities where B doesn’t obtain. Thus, to say that if I will not finish the paper by the deadline I ought to ask for an extension is to say that, ignoring possibilities where I finish the paper by the deadline, I ought to ask for an extension. But what we ought to do does not coincide with what we ought to do if we ignore certain possibilities—in particular, what we ought to do does not coincide with what we ought to do if we ignore what we ought to do. So, I ought to finish the paper by the deadline. Given that I won’t, it’s better if I ask for an extension than if I don’t, but asking for an extension is still the wrong thing to do, because I ought to finish the paper by the deadline and not ask for an extension.

That analysis of conditional obligations is, perhaps surprisingly, strikingly analogous to the way in which linguists think of the conditional. Following Kratzer, many linguists think that to say that the picnic will be cancelled if it rains is to say that the picnic will be cancelled (ignoring those possibilities where it doesn’t rain). This vindicates the analysis of conditional obligations given above, for it strongly suggests that the analysis gives the correct meaning of conditionals such as “if I don’t finish the paper by the deadline I ought to ask for an extension”. If this is so, then Modus Ponens fails for the conditional, even when the premises are known. For I know both that I will not finish the paper by the deadline and that if I don’t finish it by the deadline then I ought to ask for an extension, but it is false that I ought to ask for an extension.
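A schematic way to see the failure, on the restrictor reading just sketched (the notation here is illustrative, not the book’s): write F for “I finish the paper by the deadline”, E for “I ask for an extension”, O(·) for the unrestricted ought, and O(· | B) for the ought restricted to the B-possibilities. Then:

```latex
\[
  \neg F, \;\; O(E \mid \neg F) \;\;\not\vdash\;\; O(E),
\]
```

because O(E | ¬F) only says that, among the possibilities where I don’t finish, the best ones are possibilities where I ask for an extension, while O(E) would require that the best possibilities overall are ones where I ask; and the best possibilities overall are ones where I finish and do not ask. So both premises of the Modus Ponens argument can be known while its conclusion is false.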

I think that we should deal with the problem of irrational beliefs in exactly the same way. The problem of irrational beliefs is the following: suppose that I irrationally believe that I am in Paris. Which attitude should I take towards the proposition that I am in France? On the one hand, given that I believe that I am in Paris there is reason to think that I should also believe that I am in France. On the other hand, given that I shouldn’t believe that I am in Paris, the fact that I do gives me no reason to think that I am in France. The solution, as before, is to say that the claim that if I believe that I am in Paris then I ought to think that I am in France simply means that, ignoring possibilities where I do not believe that I am in Paris, I ought to believe that I am in France. But we ought not ignore possibilities where I don’t believe that I am in Paris, and so that conditional is idle.

3:16: What’s the problem of ‘easy rationality’ and how does it help in your dispute with the factualists?

JC: The problem of easy rationality has its origin in Stew Cohen’s problem of easy knowledge. Stew argued that any theory according to which it is possible to know some proposition on the basis of some source K without independently knowing that K is reliable faces a problem. The problem is that you can come to know that P on the basis of K, and then realize that K was the source that delivered P to you—one point for K! You can repeat this process and thus come to be justified in believing, and even knowing, that K is highly reliable—after all, it always gets it right. But of course, you cannot really do this. Hence the problem. I think this problem can be generalized, and that a generalized form of it is what lies behind Bayesian objections to the dogmatism of Jim Pryor and other philosophers (“dogmatism” is Jim’s own term for his position). The generalization goes roughly like this. Suppose that some evidence E justifies you in believing P. Given that P entails that either E is false or P is true, by closure (the principle according to which, if you are justified in believing P and also justified in believing that P entails some other proposition, then you are justified in believing that other proposition) you are justified in believing that either E is false or P is true. But what justifies you in believing this? Maybe the most obvious answer is: P itself (after all, it entails that disjunctive proposition). But remember that P itself is justified by E, and cannot acquire justificatory powers of its own. So, if P justifies you in believing anything, that must be because E justifies you in believing that. So, can we say that E justifies you in believing that either E is false or P is true? Not really. Think of the negation of that proposition: the proposition that it is not the case that either E is false or P is true. That is equivalent to saying that E is true and P is false. So to say that E can justify you in believing that either E is false or P is true is to say that E can justify you in rejecting that E is true and P is false. But, of course, that E is true and P is false entails that E is true. And you cannot reject a hypothesis on the basis of some evidence which is entailed by the hypothesis—after all, if the evidence is true then that means that things are as the hypothesis has them to be (as far as the evidence is concerned). Therefore, you cannot reject the proposition that E is true and P is false on the basis of E, and so E cannot justify you in believing that either E is false or P is true. But if neither P nor E can justify you in believing that either E is false or P is true, what does?
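The point that evidence entailed by a hypothesis cannot count against that hypothesis has a simple probabilistic gloss (an illustration in standard notation, not a passage from the book): if a hypothesis H entails the evidence E, then conditioning on E can never lower the probability of H:

```latex
\[
  H \models E
  \quad\Longrightarrow\quad
  \Pr(H \mid E) \;=\; \frac{\Pr(H \wedge E)}{\Pr(E)} \;=\; \frac{\Pr(H)}{\Pr(E)} \;\ge\; \Pr(H),
  \qquad \Pr(E) > 0.
\]
```

Taking H to be the hypothesis that E is true and P is false, which entails E, this is why E cannot justify rejecting H, and so cannot by itself justify believing that either E is false or P is true.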

You can answer that question by rejecting a bunch of the assumptions on which the problem rests, but I’m not myself inclined to challenge them. Rather, I am inclined to think that we have a priori justification for believing those kinds of propositions—to put it picturesquely, propositions to the effect that our evidence is not misleading. How does this relate to Factualism? Well, I also argue that the Factualist’s identification of evidence with knowledge fails for the case of inductive knowledge. For if I know some proposition P inductively, I should not assign credence 1 to P—which I should if it is part of my evidence. But then, how does this worry about Factualism cohere with my claim that I can have a priori justification for believing that my evidence is not misleading? Doesn’t that mean that I should be certain of the conclusions of my inductive argument? After all, I have a priori justification for believing that the evidence in that case is not misleading. The answer is that although I do have a priori justification for believing that, it is not justification for assigning credence 1 to the proposition that my evidence is not misleading. Indeed, the degree of a priori justification for thinking that my evidence is not misleading with respect to P is exactly the degree to which that evidence would justify me in believing P. Therefore, my view on the easy rationality problem does not after all vindicate Factualism.

3:16: What do Evidentialism and Reliabilism claim, and is your position of Experientialism a hybrid of Evidentialism and Reliabilism?

JC: Roughly put, Evidentialism is the claim that a belief is justified if and to the extent that it is supported by evidence. Also roughly put, Reliabilism is the claim that a belief is justified if and to the extent that it was produced by a reliable belief-forming method. Evidentialism and Reliabilism have counterbalancing virtues and problem areas. Reliabilism promises a reductive analysis of epistemic justification, but must face serious alleged counterexamples. Evidentialism has the ability to deliver many common-sense verdicts about which beliefs are justified, but at the cost of relying on an epistemic primitive (that this bunch of evidence fits this belief, say). My own early interventions in the epistemological literature suggested that we can get a better overall theory by combining aspects of Evidentialism and Reliabilism. My Evidentialist Reliabilism (again, roughly) had it that a belief is justified if and only if it fits the evidence the subject has, and that a body of evidence fits some belief if and only if the process which consists in believing that proposition on the basis of that evidence is a reliable one.

In the book I argue for a view according to which your evidence is constituted by the content of your undefeated experiences, and that evidence justifies you in believing further propositions in accordance with objective evidential relations. Is that a version of Evidentialist Reliabilism? Maybe, if you squint hard. But I also think that it has aspects that make it very different from all three (Evidentialism, Reliabilism, and Evidentialist Reliabilism). Against Evidentialism, it has it that there are propositions which are justified but not on the basis of some evidence. Against Reliabilism, it has it that objective evidential relations cannot be reduced to reliability facts.

3:16: You’ve also written about skepticism which is a very strong tradition within epistemology with a long history and a range of approaches and arguments. Why does skepticism matter and why is it such a strong presence in philosophy?

JC: Skepticism matters because how an epistemological theory deals with the threat of skepticism is an important feature of the theory. Dealing with the skeptical threat does not necessarily mean answering the skeptics in their own terms. It is kind of a logical truth that it is not possible to rationally convince universal, Pyrrhonian skeptics out of their skepticism, but that impossibility does not at all mean that there is something right about Pyrrhonian skepticism. As Hume said about Berkeley, skeptical positions often “admit no answer and produce no conviction”. So the way many epistemological theories deal with the skeptical threat is not by converting skeptics themselves, but rather by diagnosing what goes wrong in the arguments for the different kinds of skepticism. Again, these diagnoses need not be rationally convincing to skeptics themselves, who may well be beyond rational conviction, but they need to be rationally convincing to us.

3:16: You look at three particular strands of skeptical enquiry – skepticism about induction, Cartesian skepticism and Pyrrhonian skepticism – and ask: are skeptics right? Could you sketch for us what you find strong about these positions and then say whether you think there are good reasons for thinking they are right?

JC: Skepticism about induction is the claim that we cannot know or be justified in believing anything on the basis of an inductive inference—i.e., an inference where it is possible for the premises to be true and the conclusion false. Cartesian skepticism is the claim that we cannot know or be justified in believing anything about the external world—we can only know what is going on in our own minds. Pyrrhonian skepticism is universal skepticism: it’s the claim that we cannot know or be justified in believing anything (not even Pyrrhonian skepticism). What I find strong about the different skeptical positions are the arguments for them. In the case of skepticism about induction it’s Hume’s argument, in the case of Cartesian skepticism it’s the argument based on the closure principle, and in the case of Pyrrhonian skepticism it’s the regress argument. These arguments do give us some initial reason to think that each of these skeptical positions is right, but common sense and non-skeptical epistemological theories agree that they must be wrong. They are still interesting, because they force us to say surprising things about knowledge and justification. For instance, I think that each of these forms of skepticism may well force us to admit that there is more a priori knowledge and justification than a thoroughgoing empiricist would be happy with.

3:16: Tim Williamson recently told an anecdote about an economist who said that Gettier’s paper ‘Is Justified True Belief Knowledge?’ would never have been published in an economics journal because in economics (and in many of the sciences such as biology or population studies or studies of hurricanes) they use models which produce fruitful results even though they are assumed to be false. I wonder what you think about the idea that philosophical skepticism embodies a methodological shortcoming that obscures rather than illuminates epistemological insights?

JC: I take it that you are suggesting the following: strictly speaking, skepticism of various sorts may well be true, but that doesn’t matter because the models according to which we have knowledge can produce fruitful results even when they are assumed to be false. If that is what you are getting at, I guess I disagree. I really do think that philosophical skepticism is strictly speaking false, that we do have the sorts of knowledge and justified beliefs that the skeptic would deny us. As I said before, however, I do also think that skeptical arguments force us to re-examine how exactly it is possible for us to have that knowledge and justification, and the answers are often surprising.

3:16: And finally, for the readers here at 3:16, can you recommend five books other than your own that will take us further into your philosophical world?

JC: This is the hardest question for me. As Borges said, the first thing we notice about lists is the omissions. That said, here are five books that I have enjoyed reading.

Timothy Williamson, Knowledge and Its Limits. My own position in epistemology is in large part a reaction to what I think is wrong here, and it is a contemporary classic.

John Pollock, Contemporary Theories of Knowledge. A textbook, but one where Pollock advances his own position, and also the other foil for my own view.

David Christensen, Putting Logic in Its Place. Christensen writes in flowing prose about technical issues in epistemology, an achievement that not many can match.

Mike Titelbaum, Fundamentals of Bayesian Epistemology. A great textbook from which I learned (or relearned) much of what I know about these topics, and written at a very accessible level for those willing to put in the effort.

Ernest Sosa, Epistemic Explanations. Sosa was my graduate advisor, and when I once interviewed him the question of why he didn’t write books came up (the website for which the interview was made is long gone, but you can still find it here). In the intervening years, Ernie has taken to publishing books with a vengeance, and this one is the latest, in which he explores his own virtue epistemology.

ABOUT THE INTERVIEWER

Richard Marshall is biding his time.
