10 Apr
A Landscape of Tensions: 6 Decision Points For Tim Williamson's Safety Theory Identified by John Hawthorne

John Hawthorne, a leading philosopher of metaphysics and epistemology, gives a dense but very revealing tour of one of the most influential contemporary approaches to knowledge: the safety condition, developed most prominently by Timothy Williamson. He maps the decision points inside safety theory, showing that what looks like a single unified view is actually a cluster of competing ways of filling in key ideas. The result is not a tidy theory but a landscape of tensions.

Safety From Error

Williamson’s basic intuition is that knowledge involves safety from error. The intuitive model is physical: if you walk close to a cliff, you are in danger of falling; if you are far away, you are safe. Translating this into epistemology, a belief counts as knowledge only if it is not in danger of being false. Formally, this becomes something like: a belief that p is safe if there are no nearby cases in which you believe p but p is false. This is the basic safety condition. It introduces two central notions, closeness and error, and immediately shifts epistemology into a modal space of possible cases.

However, almost immediately, the simple version breaks. The classic problem is that not all nearby errors are relevant. If I know something by seeing it, the fact that in some nearby scenario I could have been lied to by someone else should not undermine my knowledge. So the theory is refined: safety must be method-relative. What matters is whether, using the same method, I could easily have been wrong. This refinement already shows that safety is not a single principle but a framework that must be adjusted at multiple points. Hawthorne identifies six such points.
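The two formulations above can be sketched in modal notation (my rendering for illustration, not Williamson's own symbolism; B stands for belief, M for the belief-forming method):

```latex
% Basic safety: in every close case w in which S believes p, p is true.
\mathrm{Safe}(S,p) \;\iff\; \forall w\, \bigl[\, \mathrm{Close}(w) \wedge B_S(p,w) \;\rightarrow\; p \text{ is true in } w \,\bigr]

% Method-relative safety: quantify only over close cases in which S
% believes p via the same method M used in the actual case.
\mathrm{Safe}_M(S,p) \;\iff\; \forall w\, \bigl[\, \mathrm{Close}(w) \wedge B^{M}_S(p,w) \;\rightarrow\; p \text{ is true in } w \,\bigr]
```

The second schema makes the refinement visible: the nearby case in which someone lies to you falls outside the quantifier's range, because there you would not be using the same (perceptual) method M.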

1. What Does Closeness Mean?

Hawthorne emphasises that there is no consensus here. Philosophers who all claim to endorse safety actually rely on very different notions of closeness. This is the first instability in the theory. At one extreme, Williamson himself sometimes refuses to define closeness independently of knowledge. He suggests that we may not be able to determine whether a case is “close” without already knowing whether it is a case of knowledge. This blocks counterexamples: if closeness is not independently specified, you cannot straightforwardly refute safety by producing a case where it fails. The theory becomes structurally illuminating rather than directly testable. On this reading, safety is not an analysis of knowledge but a model of its structure.

At the other extreme, philosophers try to give substantive accounts of closeness. Hawthorne surveys several.

a. Closeness as objective chance. A nearby case is one with non-negligible probability. But this leads to distortion. The past becomes trivially safe because its chance is fixed, while the future becomes too risky because even tiny probabilities generate nearby error cases. The result is either triviality or scepticism.

b. Closeness as ordinary danger. This avoids some technical problems but introduces others. You can have cases where there is no objective danger, yet the belief still seems epistemically defective. Consider someone who believes she is safe from a lion, not knowing that the lion is in fact glued to the floor: the subject is in no real danger, yet her belief seems unsafe in an epistemic sense.

c. Closeness as “could easily have been false”. This is perhaps the most common formulation. But it inherits similar problems. It allows dogmatic beliefs to pass the safety test if the world happens to align with them, and it again risks sceptical collapse when small risks are treated as nearby possibilities.

d. Closeness as similarity between worlds. This is attractive but becomes unstable when combined with physical theory. Tiny microphysical changes can produce wildly different outcomes, so “similarity” can generate too many nearby error cases, again pushing toward scepticism.

Hawthorne's cumulative point is not that any one of these is wrong, but that each choice reshapes the theory dramatically. There is no neutral notion of closeness.

2. Is Safety Necessary or Sufficient?

Initially, Williamson presents safety as a necessary condition. Later, he sometimes treats it as close to sufficient. This distinction matters because once safety is only necessary, one must add further conditions. Hawthorne argues that this strategy is unstable. Suppose we add a second condition, such as rationality or justification, and treat both as necessary but jointly sufficient. Then we can construct Gettier-style disjunction cases: you have a belief in p that is safe but not justified, and a belief in q that is justified but not safe. From these you infer p or q, a disjunction that is both safe and justified. Yet it still does not seem like knowledge.

This shows a structural problem. When conditions are distributed across different parts of a disjunction, their combination does not yield knowledge. So the “two-factor” strategy is fragile.
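The disjunction argument can be laid out schematically (my notation, not Hawthorne's; K, Safe, and Just abbreviate knowledge, safety, and justification):

```latex
% Assume Safe and Just are each necessary and jointly sufficient for K.
& \mathrm{Safe}(p) \wedge \neg\mathrm{Just}(p)            && \text{(premise one)} \\
& \mathrm{Just}(q) \wedge \neg\mathrm{Safe}(q)            && \text{(premise two)} \\
% The inferred disjunction plausibly inherits safety from p and
% justification from q:
& \mathrm{Safe}(p \lor q) \wedge \mathrm{Just}(p \lor q)  && \text{(by inference)} \\
% Yet intuitively the subject does not know the disjunction:
& \neg K(p \lor q)                                        && \text{(intuition)}
```

The pressure point is the third line: each condition is satisfied, but by a different disjunct, so the "two-factor" account wrongly predicts knowledge.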

Duncan Pritchard proposes that knowledge requires both safety and the manifestation of cognitive ability. Hawthorne is highly critical of this. The core problem is that ability does not track knowledge cleanly. In testimony cases, we often know things despite contributing very little cognitive effort. If ability is required, it must be minimal. But once it is minimal, it cannot do the work needed to distinguish knowledge from non-knowledge.

Hawthorne uses several cases to show the instability. You can have safe belief without enough personal contribution, yet still want to count it as knowledge. Increasing the subject’s intellectual contribution in irrelevant ways can artificially increase “credit” without improving epistemic standing. In “fake barn” scenarios where an angel ensures the viewer only sees the real barn amidst the fake ones, the subject’s perceptual abilities still contribute significantly, even though the environment is doing much of the work. The division between ability and environment becomes blurred. The conclusion is that tying knowledge to the degree of cognitive ability leads to arbitrary thresholds and implausible distinctions.

3. Symmetry

If closeness is really “closeness”, it should be symmetric: if A is close to B, then B is close to A. But this creates striking problems. Consider sceptical scenarios. Many philosophers want to say we know we are not brains in vats but brains in vats do not know they are not embodied. This creates an asymmetry in epistemic accessibility. Our situation rules out theirs, but theirs does not rule out ours. However, if closeness is symmetric, this asymmetry becomes difficult to explain. If the brain-in-vat world is not close to ours, then our world is not close to it. This can generate bizarre results, such as brains in vats having safe beliefs about not being embodied. The same issue arises in perceptual models. If closeness is symmetric, certain systematically distorted belief systems can pass safety tests in ways that seem completely wrong. This motivates abandoning symmetry and replacing closeness with a more flexible, directional relation, such as a “relevant to” or “counterpart” relation. Once again, the theory fragments into multiple variants.

4. Closure

Does knowledge transmit through valid reasoning? If you know p, and you competently infer p or q, do you know p or q? Basic safety struggles with closure. You may safely believe p but not safely believe p or q, because there are nearby cases where q is false and you would still believe it. However, Hawthorne argues that once safety is refined by methods, closure can potentially be restored: if the method of forming the disjunction is appropriately tracked, the problematic cases can be excluded. This shows, again, that the behaviour of safety depends heavily on how its parameters are set.

5. Scepticism

Safety theorists often hope to respond to scepticism by saying that sceptical scenarios are not “close”. But this only works if closeness is independently specified. If, as Williamson sometimes suggests, closeness depends on prior judgments about knowledge, then safety cannot explain why we are not brains in vats. It can only systematise judgments we already accept. Hawthorne makes an important psychological point. Epistemology is not in the business of convincing sceptics. It is in the business of explaining how knowledge is possible, given that we take ourselves to have it. This reframes the role of safety as explanatory rather than dialectical.

6. Analysis or Modelling

The final and perhaps most important distinction is between analysis and modelling. Traditional epistemology aimed to analyse knowledge, to give necessary and sufficient conditions that are strictly true. Williamson increasingly treats safety differently. It is a model, not an analysis. Like models in economics or science, it simplifies, idealises, and captures structural features without aiming for exact truth. This explains several features of Hawthorne's discussion: why counterexamples are less decisive (models are expected to be imperfect), why precision is valued even at the cost of realism, and why different versions of safety can coexist (they are different modelling choices rather than competing definitions). Hawthorne suggests that much of the debate about safety becomes more intelligible once we see it as a modelling framework rather than a strict definition of knowledge.

What emerges overall from Hawthorne is a picture of safety theory not as a single doctrine but as a field of tensions:

1. Closeness can be understood in multiple ways, each with different consequences. 

2. Safety can be necessary, sufficient, or part of a hybrid account.

3. Symmetry can be retained or abandoned. 

4. Closure can be preserved or sacrificed depending on refinements. 

5. Scepticism can be addressed or bypassed depending on how independence is handled. 

6. And the entire project can be understood either as analysis or as modelling.

Hawthorne's philosophical lesson is that the concept of knowledge resists simple decomposition. The safety framework reveals important structural features, especially the role of modal robustness, but it does not settle the question of what knowledge is. Instead, it shows that any attempt to do so must navigate a complex space of competing theoretical pressures.