
John Hawthorne, a leading contemporary philosopher, gave an interesting analysis of safety conditions in epistemology, and in doing so provided a brilliant example of how contemporary analytic philosophers work. I try to give an account below.
Timothy Williamson is the major contemporary philosopher associated with the safety conception of knowledge. In Knowledge and Its Limits, Williamson argues that knowledge is not simply justified true belief plus some extra ingredient. Instead, knowledge is basic, and one of its marks is that it excludes a certain kind of epistemic luck. Williamson’s intuitive image comes from ordinary ideas of danger. If someone is walking right near the edge of a cliff, we say they are in danger of falling. If they are walking well away from the edge, we say they are safe. Williamson’s proposal is that knowledge works in a parallel way. The relevant danger is now not falling but false belief. A person knows only if their belief is safe from being false. So a belief is safe, roughly, if there are no sufficiently similar or nearby cases in which one believes the same thing and is wrong.
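Stated schematically, in shorthand of my own rather than Williamson's or Hawthorne's notation (with @ standing for the actual case), the basic condition is:

```latex
% Basic safety, schematically: S's belief that p is safe iff p is true
% in every sufficiently close case in which S believes p.
\mathrm{Safe}(S,p) \;\leftrightarrow\;
  \forall c\,\bigl(\mathrm{Close}(c,@) \wedge \mathrm{Bel}_c(S,p)
  \rightarrow \mathrm{True}_c(p)\bigr)
```

Nearly everything contested in what follows lives inside the unexplained predicate Close.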
John Hawthorne calls this “basic safety”. The thought is easy to grasp through an everyday contrast. If I look at a clock in good light, it is working properly, and I form the belief that it is three o’clock, then that belief seems safe. In nearby situations I would still have looked and still have been right. But if I glance at a stopped clock that just happens to read three at the moment I look, my belief is true by luck. In a very similar situation, one minute later, I would have formed the same belief and been wrong. So the stopped clock case looks unsafe, and that is why it does not look like knowledge. The safety condition is doing the work of explaining why some true beliefs are too lucky to count as knowledge.
Hawthorne shows that even this first formulation is too simple. He introduces the role of method. Suppose I know that someone is sitting because I can see them with my own eyes. There may be a nearby situation in which that person is not sitting, but I am falsely told by a liar that they are sitting, so I form the same belief and it is false. Does that nearby error stop my original perceptual belief from being knowledge? Intuitively it should not. The liar case seems irrelevant because it uses a different way of forming the belief. So the safety theorist introduces a refinement. What matters is not merely whether there are nearby false beliefs, but whether there are nearby false beliefs formed by the same method as the actual belief. That is the work method is meant to do. It stops safety from becoming too demanding, because it screens out irrelevant routes to error.
Hawthorne is showing that safety is not one simple thesis. It is a family of views, depending on how one fills in ideas like “nearby” and “same method”. There are “choice points”: there is not just one safety theory sitting there ready-made, but many possible safety theories. Hawthorne dwells on closeness. The basic safety condition relies on nearby or close cases, but what does “close” mean? Hawthorne’s point is that philosophers who all say they endorse safety often mean very different things by this, so the apparent agreement hides a great deal of disagreement. Some use the language of danger (being too near the edge), some of similarity between cases, some of what could not easily have gone wrong, some of chance, some of normality. Hawthorne makes us feel how much hangs on this one notion. The whole idea turns out to be harder than it first seemed.
He notes that Williamson himself is often extremely cautious, even evasive, about giving a fully independent account of closeness. Williamson does not want to pretend that safety gives a completely non-circular analysis of knowledge. Hawthorne recalls a reply Williamson makes to Alvin Goldman. Goldman, a major reliabilist epistemologist, seems to hope that Williamson is offering an independently specifiable condition that can then be used to test for knowledge. Williamson resists that. He says, in effect, that in many interesting cases you may have to decide whether safety obtains by first deciding whether knowledge obtains. That sounds odd at first, but the work it does is clear. It blocks the expectation that safety is a tidy reductive formula. It says that the notion of closeness may itself be partly guided by our prior grip on knowledge. If that is right, then the usual hunt for “counterexamples to safety” becomes less straightforward, because those counterexamples always rely on some prior way of making closeness concrete. Hawthorne then shows what remains if we keep safety at this very abstract level.
One thing we do get is factivity. Every case is at least close to itself. So if someone’s belief is false, then there is automatically a close case, indeed the actual case itself, in which they believe falsely. That means false belief can never be knowledge. Safety already builds in truth. But Hawthorne also sees the cost. If we say almost nothing about closeness, then safety risks becoming structurally suggestive but not very informative. He links Williamson’s approach to modelling in epistemic logic. In formal epistemology, one often studies the structure of accessibility relations between worlds without yet saying in rich detail what makes one world accessible from another. That can still illuminate the formal behaviour of knowledge. So one thing safety can be is a structural model, not a fully informative analysis. Hawthorne turns to alternative ways philosophers have tried to put more flesh on the bones of safety.
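That structural point, reflexivity of closeness yielding factivity, can be made concrete in a toy model in the spirit of the epistemic-logic modelling Hawthorne mentions. This is an illustrative sketch only; the world names and the closeness relation are invented:

```python
# Toy model: a few "cases" (worlds), a closeness relation over them,
# and a safety check for a belief held at a given case.

worlds = ["actual", "nearby_1", "nearby_2"]

# Reflexive closeness: every case is close to itself.
close = {
    "actual": {"actual", "nearby_1"},
    "nearby_1": {"nearby_1", "actual"},
    "nearby_2": {"nearby_2"},
}

def safe(world, believes, true_at):
    """Belief is safe at `world` iff the believed proposition is true
    in every close case in which the subject believes it."""
    return all(true_at(w) for w in close[world] if believes(w))

# A false belief held at the actual case:
believes = lambda w: w == "actual"
true_at = lambda w: False  # the proposition is false everywhere

# Because closeness is reflexive, the actual (false-belief) case is
# itself a close case, so the belief cannot be safe. Factivity falls
# out of the bare structure, before anything is said about closeness.
print(safe("actual", believes, true_at))  # False
```

Nothing in the model says what makes a case close, which is exactly Hawthorne's point: at this level of abstraction safety is structurally suggestive but not very informative.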
The first is to connect safety with danger or chance. Since Williamson himself began with a danger metaphor, one natural thought is that a belief is unsafe if there is a real chance of its being false. This promises to explain closeness in non epistemic terms. We understand chance and risk independently, so perhaps we can use them to explain safety. Hawthorne shows both the attraction and the difficulty of this. Suppose we tie safety to objective chance as used in physics. Then we quickly get two distortions. Knowledge of the past becomes too easy, because on many ways of thinking about physical chance the past is fixed. If the past has chance one, then dogmatic beliefs about the past may count as safe almost automatically. Conversely, knowledge of the future becomes too hard, because physics allows tiny chances of all sorts of wild events. If every tiny chance generates a close possibility, then we are pushed toward scepticism. There is always some minute chance of a sudden catastrophe, a brain aneurysm in the next five minutes, some bizarre fluctuation. If those possibilities count as close, then future directed belief is almost never safe.
Hawthorne shifts from formal objective chance to a more everyday notion of danger. Perhaps “danger” in the ordinary sense is looser and more humane than physical chance. But this too is unstable. Suppose I’m looking for keys and I believe that the keys will be found today, and unbeknownst to me they have already been found. There is now no danger that they will remain unfound. So my belief seems, in one sense, perfectly safe. But does that really show that I know? Not obviously. The case shows that a belief can pass a danger test too easily if the world happens to cooperate in ways unknown to me. The objective absence of danger is not the same as epistemic security.
Again, suppose I optimistically believe that a lion in front of me will not attack, and unbeknownst to me it is glued to the ground. There is no real danger of its attacking. But my belief still seems epistemically poor. I do not know in the right way. The example shows that the ordinary notion of danger is not fine grained enough to capture the epistemic notion of safety. Objective lack of danger can coexist with poor epistemic position. So danger talk does not simply solve the problem.
Now imagine an alien civilisation decides by lottery whether to destroy the Earth. You are in the middle of making some ordinary judgement, perhaps that you will have pepperoni pizza tomorrow. The aliens do not destroy the Earth, and you do get the pizza. The question is whether this bizarre low probability danger should count against the safety of your belief. After all, the danger seems unrelated to the content of the judgement. Hawthorne compares the alien case to crossing the road while an unseen sniper keeps missing you. The sniper is not part of what you are thinking about, but that does not obviously make the danger irrelevant. Safety, as Hawthorne is thinking of it, is not simply a matter of what risks are salient to the thinker. External threats may still matter to epistemic evaluation. That pushes the view away from a purely internalist picture of knowledge.
He then turns to another common way of explaining safety, the phrase “could not easily have been false.” This is a canonical safety formulation in contemporary epistemology. Hawthorne distinguishes two meanings of “easily”. One concerns effort, as in “I could not easily have made that mistake” because it took a lot of work even to get into the position to make it. The safety theorist does not mean that. They mean modal ease, roughly that in nearby situations the belief would not have gone wrong. Once that is clear, Hawthorne argues that this formulation inherits the same sorts of problems as the danger account. It can make some things too easy and other things too hard.
So, suppose someone dogmatically believes that they are going to die soon, and it turns out they have a symptomless terminal illness. Then they could not easily have been wrong. Their belief passes the safety test. But that does not mean it is a good belief. Dogmatism can accidentally satisfy a “not easily false” condition if the world lines up with it. On the other side, if we take ordinary low level risks seriously, the view again tends toward scepticism. In a large enough group, someone could easily choke, be struck by lightning, or die in an accident. Once one starts to think this way, one risks saying that very many ordinary beliefs are unsafe.
Hawthorne brings in Duncan Pritchard as a philosopher who tends to think of closeness in terms of direct similarity between possible worlds. A close world is just a very similar world. If in very similar worlds the belief goes false, then it is unsafe. Hawthorne pushes back by invoking statistical mechanics. His point is to remind philosophers that the world is more complicated than ordinary similarity judgements suggest. Very tiny microphysical changes can produce strange macro level differences. He uses a case discussed by Ernest Sosa, where someone drops a bag down a chute in a high rise building and believes it will soon be in the basement. Pritchard thinks there is no very similar world in which the bag fails to arrive. Hawthorne says that once you appreciate physical theory, you may think there are extremely similar worlds with strange outcomes. If those count, then similarity based safety gives sceptical results. This is to show that direct similarity is not as innocent as it looks. It can carry hidden metaphysical assumptions about how the world behaves.
What about necessary truths? If you believe a necessary truth, then there is no possible world in which that proposition is false. Does safety become trivial in that domain? Hawthorne’s answer is that there is another refinement one can make. Instead of holding the exact proposition fixed, one can look at epistemic counterparts, beliefs formed in relevantly similar ways under relevantly similar circumstances. Suppose I correctly answer a sum, but in a close counterpart case I make a slightly different but related mathematical error. Then safety is not trivial after all. This exchange shows that safety can be refined not only by methods but by relaxing what counts as the same belief content. The philosophical work here is to indicate how safety might still say something about mathematics or necessary truths, though at the cost of added complexity.
Hawthorne’s next major topic concerns whether safety is merely necessary for knowledge or whether it is also close to sufficient. Williamson sometimes speaks as though knowledge is basically safe belief. Hawthorne thinks if safety is only necessary, then theorists will want to add another condition. Hawthorne then offers a very instructive warning about this strategy. His warning grows out of the Gettier problem. Gettier showed that justified true belief is not enough for knowledge because truth, belief, and justification can come together in the wrong way. Hawthorne says a similar thing may happen with any conjunctive theory where safety is only one ingredient. Suppose we add some further property, call it F, such that safety and F are each necessary, and together supposedly sufficient. If safety and F can come apart, then we can build a disjunctive case where one disjunct is safe but lacks F, and the other has F but is unsafe. Then the disjunction may inherit both properties, safety from one side and F from the other, but still fail to look like knowledge. The point is not that every analysis must fail, but that conjunctive recipes are fragile. Hawthorne is showing why it is hard to just say “knowledge is safety plus one more thing”.
This is one reason he dislikes Pritchard’s anti-luck virtue epistemology. Pritchard’s idea is that knowledge requires two things: an anti-luck condition, which safety captures, and a significant manifestation of the subject’s own cognitive abilities. The slogan is attractive. Knowledge is not just safe true belief, but safe true belief creditable to your skill. This view is related to themes in Sosa’s virtue epistemology, where epistemic success is understood on the model of achievement due to ability. Hawthorne thinks this whole strategy is unpromising, and he explains why through several examples. One is about testimony. Suppose Y does all the mathematical work, tells me the answer, and I trust her. I may do almost nothing beyond noticing that she is not joking or lying. Still, it seems I can know the answer. So the amount of my own cognitive ability involved can be very small. But then, Hawthorne says, if the role of ability is allowed to be that small, it becomes hard to use ability to distinguish knowledge from non-knowledge in a principled way.
He then introduces a case from Pritchard’s own discussions, the “temp” case. Imagine a subject whose true safe belief about the temperature depends too much on outside help and not enough on their own ability, so Pritchard says they lack knowledge. Hawthorne then makes the subject infer that if it is roughly three degrees, then it is roughly the square root of nine degrees, or the cube root of twenty seven degrees. These are mathematically equivalent descriptions. But now the subject is contributing more cognitive work, because they are doing some maths. Hawthorne finds it absurd that one should say the subject does not know it is three degrees but does know it is cube root twenty seven degrees. The example reveals that adding “credit to ability” can make knowledge depend on irrelevant mathematical fancy footwork. Perhaps the relevant skill must be the right kind of skill. Hawthorne’s reply is that this still does not solve the structural problem. The math is relevant to the cube root description, but that does not make it the right explanation of the original knowledge. The broader point is that our epistemic life often involves a division of labour between the world, other people, and our own limited checking skills. If the contribution demanded from us is set too high, testimony stops being knowledge. If it is set too low, the ability condition becomes toothless.
So, in fake barn country, there are many barn façades, and if you happen to look at the one real barn, many philosophers think you do not know it is a barn, because you could easily have been looking at a fake. Hawthorne modifies the case by adding angelic protection. If you were about to look at a fake barn, the angel Gabriel would whisk it away so that you never falsely believe there is a barn. Now your belief is safe. Pritchard wants to say the safety is due entirely to the angel, not to your own skill. Hawthorne rejects this. He points out that your perceptual abilities are still doing a lot of work. If you looked at the sky, or your shoes, you would not think you were seeing a barn. You are discriminating barns from many other things. So it is wrong to say the environment alone supplies the safety. Hawthorne is trying to break the intuitive grip of “all the credit goes to the protector” arguments. The work of the case is to reveal how hard it is to allocate epistemic credit cleanly.
Suppose instead of an ordinary barn you are Len identifying a red kite among fake red kites, and Len is an expert ornithologist. Your skill is now manifestly sophisticated. Yet Hawthorne says it would be bizarre if this change in the level of expertise altered the knowledge verdict in a principled way. The example presses the virtue theorist by showing that increased skill does not clearly line up with the pattern of knowledge judgements we want.
Hawthorne discusses closeness in terms of symmetry. The term “closeness” naturally suggests a symmetric relation. If A is close to B, then B is close to A. That is just what close usually means. But some of the epistemic phenomena philosophers want to capture look asymmetric. Ofra Magidor and other safety theorists often want to say that ordinary embodied people know they are not brains in vats, because the brain in a vat world is not a close enough possibility to threaten their knowledge. But if closeness is symmetric, then if the brain in a vat world is not close to ours, ours is not close to it either. That seems to mean that a brain in a vat who dogmatically thinks it is not envatted may also count as safe, because there is no close world where it is embodied. Hawthorne finds that strange, though he notes that Magidor is willing to accept something like that result. But once you adopt symmetric closeness, some unintuitive consequences are forced on you. The issue is about the formal shape of the relation you use to model knowledge.
Suppose appearance and reality come apart. It looks like seventy degrees, but is really eighty. If closeness depends on both appearance and reality, and if one assumes symmetry, one can get bizarre outcomes where a creature that systematically overestimates temperatures by ten degrees passes the safety test in some actual cases, even though its general epistemic policy seems poor. Symmetry plus certain modelling assumptions can make weird belief forming habits come out as safe. That suggests the formal apparatus is misaligned with what we want to understand.
Among non-sceptical philosophers, it is common to think there can be good and bad cases such that the bad case takes the good case to be possible, but the good case does not take the bad case to be possible. A brain in a vat cannot rule out being normal, but a normal person can rule out being a brain in a vat. This is an asymmetry. Hawthorne credits Jeremy Goodman for pushing him into this line of thought and notes that Goodman and Salow use a different modelling tool: not symmetric closeness, but a “more normal than” relation.
Normality is naturally asymmetric. One world can be much more normal than another without the reverse holding. Hawthorne’s toy example is temperature. If appearance seventy and reality seventy is much more normal than appearance seventy and reality eighty, then from the strange eighty seeming seventy case the normal seventy seventy case may remain epistemically accessible, while the reverse does not hold. The work being done here is to suggest a different philosophical territory for modelling knowledge, one built not out of distance but out of normality rankings.
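The contrast between symmetric closeness and an asymmetric normality ordering can be sketched in the same toy style. The world names and the numerical ranking here are invented for illustration; this is a sketch of the structural contrast, not Goodman and Salow's actual model:

```python
# Toy contrast: symmetric closeness vs an asymmetric normality ordering.

# Symmetric closeness, stored as unordered pairs. The vat world is not
# listed as close to the normal world, reflecting the non-sceptic's
# judgement that envatment is remote.
close_pairs = set()

def close(a, b):
    """Symmetric by construction: order of arguments cannot matter."""
    return a == b or frozenset({a, b}) in close_pairs

# Symmetry forces: if the vat world is far from ours, ours is far from it.
print(close("normal", "vat"), close("vat", "normal"))  # False False

# Normality ranking instead: lower number = more normal. Accessibility
# runs from a world to worlds at least as normal, so it can be
# asymmetric without any further stipulation.
normality = {"normal": 0, "vat": 5}

def accessible(frm, to):
    return normality[to] <= normality[frm]

# From the vat world the normal world remains accessible (the envatted
# subject cannot rule out normality), but not the reverse.
print(accessible("vat", "normal"), accessible("normal", "vat"))  # True False
```

With symmetry, the remoteness of the vat world from ours forces the remoteness of ours from it, which is what generates Magidor's concession. With a ranking, accessibility toward more normal worlds is free to run one way only.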
We might want to ask why asymmetry of epistemic accessibility should be taken so seriously. Hawthorne does not pretend there is a knock-down proof. He says that, sociologically, most non-sceptics want to say they know they are not brains in vats, while not wanting to say brains in vats know they are not embodied. That ordinary pattern of judgment is what generates pressure for asymmetry. We might then ask why we should not drop symmetric closeness altogether and use something like a counterpart or relevance relation. Hawthorne says that is one path one could try.
Is Williamson analysing or modelling safety? In science, a model is a simplification used to make computations or predictions manageable. But in epistemology, are we not trying to answer foundational questions like “what is knowledge”? If so, how can mere modelling help? Hawthorne thinks that not every epistemological project has to be foundational in that direct sense. Suppose we build a simple model of perceptual knowledge where what matters are just two things, the appearance and the reality. Such a model is obviously false in detail. It ignores the complex path by which light reaches the eyes, the psychology of attention, all sorts of things. But it may still illuminate how inexact appearances can yield more or less exact knowledge. In that role, the model is not pretending to tell us the final nature of knowledge. It is giving us a manageable structure for understanding how some aspect of knowledge works. The work done by this distinction is substantial. It softens the demand that safety be a perfect analysis and instead allows it to function as an explanatory framework. This also helps explain why Hawthorne treats counterexamples differently depending on what the theorist is trying to do. If the goal is reductive analysis, a counterexample is devastating. If the goal is modelling, a counterexample may just show the limits of the idealisation.
Closure is the principle that if you know p, and competently deduce q from p, then you know q. Hawthorne notes that basic safety can threaten closure. If you safely believe p, but there are close cases where you would falsely believe q and hence infer p or q from q, then safety may fail to extend to the disjunction. That would mean safe belief is not closed under competent deduction. But once methods are refined finely enough, the method of deducing p or q from p may be distinguished from the method of believing p or q from q. Then closure may be preserved. This shows that safety’s compatibility with closure depends heavily on how methods are typed. Again, the theory is a landscape of parameters, not a simple rule.
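In the same schematic shorthand as before (my notation, not Hawthorne's), the closure principle and the coarse-method worry can be put as follows:

```latex
% Single-premise closure under competent deduction:
\bigl(K_S\,p \;\wedge\; S \text{ competently deduces } q \text{ from } p\bigr)
  \rightarrow K_S\,q
%
% The threat to safety-based closure: a close case in which S believes
% p \vee q by inferring it from a false belief in q can render belief in
% p \vee q unsafe, unless "deducing p \vee q from p" and "inferring
% p \vee q from q" are typed as distinct methods.
```

Whether closure survives thus depends entirely on how finely methods are individuated, which is itself one of the parameters in the safety landscape.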
We might wonder what role safety can play against the sceptic. Hawthorne distinguishes two projects. One is trying to convince someone who is genuinely worried they might be a brain in a vat, a Boltzmann brain, or trapped in a demon world. The other is trying to explain how, if one is already convinced of ordinary things, that conviction can amount to knowledge. Hawthorne thinks contemporary epistemology is much better at the second than the first. Reading Sosa or Williamson will not make a deeply anxious person stop worrying, just as a philosophical theory will not cure a person who keeps worrying they left the coffee pot on. The work of this distinction is to resist an over ambitious conception of philosophy. Epistemology is not therapy. It is conceptual explanation. That is why Hawthorne is not impressed by the thought that safety must directly defeat scepticism. If closeness itself cannot be specified independently of what we count as knowledge, then safety may not explain why the brain in a vat world is remote. It may simply reflect our antecedent judgment that it is. Still, safety can offer structural insight into why knowledge excludes certain patterns of luck or error.
Could one combine safety with contextualism, the view that whether “knows” is true can vary with conversational standards? One might let standards of closeness shift with context. But there are limits. If there is a very high objective chance that someone in a large group will be struck by lightning, can a context simply ignore that? If each individual can be said to know they will not be struck, what happens when we combine all those judgments under closure and conclude that no one in the group will be struck? Hawthorne uses this to show that contextualism is not magic. It interacts with objective chance, closure, and safety in complicated ways.
Safety began as a very simple idea: knowledge is belief that could not easily have been false. Hawthorne shows that every term in that sentence opens a field of dispute. What counts as “easily”? What makes a case “close”? Do we look at danger, chance, similarity, normality, or relevance? Do we hold the content fixed, or allow epistemic counterparts? Do we build method into the condition, and if so how? Is safety enough for knowledge, or must it be paired with ability, rationality, or something else? Is closeness symmetric, or should it be replaced by an asymmetric accessibility or normality relation? Are we trying to analyse knowledge once and for all, or only to model some of its structure?
Hawthorne has given a careful mapping of a philosophical territory. Williamson supplies the initial safety picture and the danger metaphor. Goldman stands in the background as someone wanting a more independently specifiable condition. Pritchard represents attempts to combine safety with ability and to use direct world similarity. Sosa appears both in connection with virtue themes and with cases meant to sharpen our intuitions about modal safety. Magidor presses the surprising consequences of symmetry in sceptical cases. Goodman and Salow point toward asymmetric normality models. Contextualists hover as a further resource. Boltzmann brains and statistical mechanics widen the modal landscape and show how cosmology or physics can bear on what we count as close possibilities. What Hawthorne gives is a way of understanding why contemporary epistemology can look so technical. The technical terms mark pressure points in a very ordinary problem.
We want to explain why some true beliefs are knowledge and some are merely lucky. Safety promises to do that by requiring robustness across nearby possibilities. But as soon as we ask which possibilities are nearby, why they are nearby, and how they relate to the believer’s method, environment, and reasoning, we find ourselves in a much broader and more intricate philosophical map. Hawthorne is much less interested in defending a final theory than in showing how a theory gets built. That is philosophically important. He is not simply asking whether safety is true. He is asking what happens when one starts with a plausible slogan, knowledge is safe true belief, and then has to make it workable. This means he’s thinking about the engineering of theories. He shows how philosophical views acquire refinements, auxiliary notions, and hidden costs.
That tells us that in epistemology one rarely evaluates a slogan in isolation. One evaluates the whole machinery needed to keep it standing. Hawthorne has a very strong sensitivity to the difference between formal elegance and epistemic plausibility. Symmetric closeness is elegant. It fits the ordinary word “close.” But it generates strange epistemic results. Ability based virtue epistemology is elegant. It says knowledge is success due to ability. But it behaves badly in testimony and fake barn style cases. Chance based accounts are elegant. But they veer toward scepticism. Again and again Hawthorne is showing that a formally tidy principle can be epistemologically untidy. So neatness is not enough. A theory must track the uneven grain of actual epistemic life.
Another thing that emerges is how central testimony is to Hawthorne’s picture. He keeps returning to cases where one person tells another something, or where much of the work is done by others. This blocks any picture of knowledge that is too individualistic or too skill based. A lot of epistemology, especially virtue epistemology, can sound as though the knower must personally produce the success in a strong way. Hawthorne keeps reminding us that much knowledge is socially distributed. We know because others did the work, because institutions are functioning, because environments are friendly, because informational routes are stable. So the talk pushes toward a more socially embedded externalism about knowledge.
Another thing to note is Hawthorne’s hostility to what might be called cheap anti-scepticism. He does not want an easy move that simply declares sceptical possibilities “far away” and leaves it at that. His pressure on symmetry, Boltzmann brains, and bizarre appearance-reality cases is really pressure on any too easy reassurance that bad worlds can just be ignored. He keeps asking what earns that ignoring. What makes a possibility genuinely irrelevant rather than merely inconvenient? This means he’s not just talking about safety but about the standards for excluding possibilities in epistemology. That is a much larger issue. It links the talk to broader debates about relevant alternatives, contextualism, and anti-sceptical strategy.
Another theme is the relation between knowledge and exactness. This comes out especially in the appearance-reality toy models. Hawthorne’s examples suggest that knowledge often has a kind of coarse-grained fit to the world rather than perfect precision. If an object visually appears to be around seventeen centimetres long, that does not put you in a position to know that it is exactly seventeen centimetres. The talk therefore points toward a more graded picture of perceptual knowledge, where what matters is not only whether you know but how precise the content can safely be. This opens onto issues about measurement, approximation, perceptual discrimination, and even vagueness.
Another thing is the importance of content individuation. The question about necessary truths opens a very large door. If safety is tested by holding the proposition fixed, then necessary truths look automatically safe. If instead we look at epistemic counterpart beliefs, then safety starts to depend on more than truth conditions. That suggests that epistemology may need a finer grained notion of content than ordinary propositional identity. One could push from this toward hyperintensionality, modes of presentation, and Finean distinctions between contents that are necessarily equivalent but epistemically different. Hawthorne’s recurring concern is where the epistemic work is being done. He keeps asking: what exactly explains the subject’s security? Is it the world? The subject? Their method? Their environment? A hidden protector? A general normality structure? This is really a question about epistemic explanation itself. When we say someone knows, what are we citing as the explanatory ground of that success?
He doesn't use the language of grounding, but the issue is clearly there. Different theories of knowledge distribute the explanatory burden differently. Throughout, Hawthorne is implicitly mapping the relation between local and global epistemic evaluation. Some cases are very local, like looking at a barn. Others are global, like Boltzmann brains populating the universe. Hawthorne is sensitive to the fact that a theory might work locally and fail globally, or vice versa. For example, a local perceptual story may seem fine until cosmology changes the background space of possibilities. This is philosophically rich because it suggests that epistemology cannot always remain at the scale of the individual encounter with the world. Sometimes the larger structure of reality matters. That opens the talk out toward philosophy of physics, anthropic reasoning, and the role of cosmology in epistemology.
Hawthorne is also reluctant to let ordinary language dictate theory too quickly, even while he starts from ordinary language. He begins with danger, closeness, and safety, all familiar words. But then he repeatedly refuses to assume that those everyday words map neatly onto philosophical needs. Ordinary danger may be too weak or too strong. Ordinary closeness suggests symmetry, but perhaps symmetry is the wrong structure. Ordinary ideas about ability may not fit testimony. This is a good lesson in philosophical method. Start with ordinary language, but do not become captive to it. The distinction between epistemic evaluation and psychological reassurance is also important. He says, in effect, philosophy is not very good at making anxious people stop worrying. Reading Williamson will not cure obsessive worry.
That is actually a deep point. It separates the normative question, under what conditions does a belief count as knowledge, from the psychological question, what will calm a subject down or persuade them. Those are different enterprises. Much confusion in epistemology comes from running them together. Hawthorne is unusually clear that the aim is not therapy.
His discussion helps reveal safety as part of a broader family of anti-luck approaches. Even when Hawthorne criticises Pritchard, he is not denying the importance of luck. He is probing different ways of understanding how luck is excluded. Basic safety excludes nearby error. Ability based theories exclude luck by requiring credit to the subject. Normality theories exclude luck by ranking worlds. Relevant alternatives theories exclude luck by narrowing the field of serious alternatives. So one can use the talk to teach the broader landscape of anti-luck epistemology. Safety is one version, but it sits alongside several neighbouring approaches, and Hawthorne’s comparisons help show why people move between them.
He also shows the possibility of a more explicit pluralism about epistemic models. Hawthorne edges toward this when he contrasts modelling and analysing. But one can push it further. Perhaps there is no single safety relation fit for all purposes. Perhaps perceptual knowledge, testimonial knowledge, mathematical knowledge, and future directed practical judgement need somewhat different modal treatments. Hawthorne does not say this outright, but he strongly points in that direction because each attempt at a universal relation seems to break somewhere. That would be a major conclusion: not that safety fails, but that safety may need to be pluralised.
Hawthorne suggests a fruitful connection with vagueness and margin for error. Williamson’s work on vagueness famously uses safety-like ideas, especially margins for error, to explain why we cannot know sharp boundaries. Hawthorne’s discussion of appearance, reality, normality, and close cases naturally connects to that territory. So maybe he’s offering tools for rethinking not only knowledge but also borderline cases, discriminative limits, and higher order uncertainty. He teaches us to separate structural claims from substantive metaphysical claims. Sometimes Hawthorne is making a purely structural point, for example that reflexivity of closeness yields factivity. Other times he is making a substantive point about physics, danger, or normality. This helps us see that some criticisms are formal and some are world involving. That distinction is useful across philosophy.
Finally, he can be read as giving a miniature lesson in how contemporary analytic philosophy often works. It begins with a compelling intuition, formulates it in a crisp condition, subjects that condition to examples, refines it by adding parameters like method, compares neighbouring theories, and then steps back to ask whether the whole project should be understood as analysis or modelling. That shows philosophy as a process of pressure testing conceptual structures.