1. Sellars and Consequentialism
Consequentialism says that right action is action that brings about the best overall consequences, given the aims we care about. On Sellars’ way of working, we do not start by postulating a mysterious property of "goodness". We begin with the public practices that already guide decision and justification. We then make explicit the roles that consequentialist terms play within those practices. Finally, we ask what we are committed to when we use those terms correctly. After that, so the "finally" is not quite final, I throw in Tim Williamson's knowledge-first and vagueness arguments to stir the pot a bit.
Start with the Sellarsian entry, inference, exit pattern. Entry links connect facts to provisional directives. If a school has fifty Special Educational Needs (SEN) students and only one member of SEN staff, entry links take measured facts and license a provisional priority ranking. Language to language links connect one directive to another by general principles, for example if an action is expected to prevent many serious harms at low cost then prefer it to an action that prevents few harms at high cost. Exit links connect the ranked directives to what teams actually do, for example give extra targeted provision to the prioritised students first and expand to other students only if conditions change. In this way the content of the consequentialist vocabulary is fixed by the place it occupies in a rule governed practice that uses facts to guide action.
Sellars ties meaning to roles, so we state the central consequentialist role plainly. A judgement that one option is better than another licenses choices that raise expected value relative to an accepted aim. In the school case the aim is student care under a duty of fairness. In road safety the aim is fewer injuries and deaths. Here's another example. A driving limit of twenty miles per hour near a school has content because we can cite data on stopping distances, sight lines, and injury rates. The rule is not a private preference. It is a public directive that connects measurements, predictions, and actions, and that can be revised if new evidence shows a different speed delivers better outcomes without new harms.
Consequentialism needs an account of value and of how to compare outcomes. Sellars’ method says we should look to the practices that already make such comparisons. In health policy, quality adjusted life years are used to combine length and quality of life into one scale for planning. This is not a metaphysical essence of value. It is a calibrated instrument. Its content comes from the procedures that fix how to score a life year lived with pain or disability, how to combine scores across people, and how to check whether the scoring tracks what patients and clinicians judge to matter. When those procedures fail, for example by understating chronic pain or undervaluing care burdens on families, consequentialists do not say value has changed in some metaphysical way. Rather, they say the instrument needs revision, and we give reasons tied to the aim of the practice.
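As a rough illustration of how such an instrument scores outcomes, the standard form is a weighted sum of time spent in each health state, where the quality weights are the calibrated and revisable part. The numbers below are illustrative, not taken from any published tariff.

```latex
\mathrm{QALYs} \;=\; \sum_{i} q_i \, t_i , \qquad 0 \le q_i \le 1
```

Here t_i is the time lived in health state i and q_i is its quality weight. Two years in full health (q = 1) plus three years with chronic pain weighted at q = 0.7 score 2 + 2.1 = 4.1 QALYs. If audit shows the weight understates chronic pain, the weight, and with it the score, is revised, which is exactly the sense in which the instrument rather than value itself changes.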
Aggregation raises the worry of interpersonal comparison. The method again goes through practice. Elections, budgets, and public consultations already combine claims. They do so under rules about representation, weighting, and review. A consequentialist plan becomes publicly contentful when it states the aggregation rule it uses and shows how that rule is justified by the aims of the institution. For example, a council may adopt a rule that gives priority to the worst off neighbourhoods when allocating playground repairs, because that rule better serves the aim of fair opportunity for children across the city. In education, a school may adopt a rule that gives priority to the worst off students when allocating support resources, because that rule better serves the aim of fair opportunity for students in the school. The priority is a higher order directive. It changes how we add up benefits without appealing to a mysterious extra property. It is grounded by reasons that the practice recognises and can test against outcomes.
Examples track very familiar scenarios. In vaccination campaigns the consequentialist directive is to vaccinate groups in an order that reduces mortality and serious illness most. Entry links bring in infection rates, contact networks, and vaccine effectiveness. Language to language links add priority rules, for example give extra weight to clinical staff and carers because protecting them sustains the system that protects others. Exit links are schedules and clinics. When new data show that a booster for an older group prevents more severe illness than a first dose for a younger group with low risk, the ranking can change. The correctness of the updated plan is fixed by the practice’s aim and by the new facts that bear on expected outcomes.
Schools supply a clear case. A head teacher must choose between buying new devices for every pupil or funding smaller reading groups for those who are behind. Consequentialist reasoning asks which option is expected to increase reading levels and life chances more per pound spent, once we account for teacher time, training, and the stability of gains. Entry links bring in evidence from past interventions. Language to language links allow known modifiers, for example, that device access without structured literacy teaching seldom changes outcomes. Exit links set budgets and timetables. If later data show the devices help once a reading programme is in place, the ranking can be revised. Again, content and correctness live in the public web of evidence, aims, and action.
The worry about naïve maximising can be handled with Sellars’ tools. Good practices carry tests for perverse incentives and for gaming of measures. Economists call this Goodhart’s law. If a hospital is rewarded only for speed of discharge, wards will discharge patients too early. If a school is rewarded only for high grades, it will leave low-scoring students out of the count or game the system so that every student is recorded as scoring highly.
The consequentialist lesson is that a measure is part of a rule governed practice, not a target to be chased in isolation. Good practice uses a basket of indicators, for example, progression rates, student reported wellbeing outcomes, and school leaver destination data (e.g. percentage of students in employment, apprenticeship or further education), and places them within a priority scheme that is justified by the aim. Here the language to language links do the work. They say which indicators count and how to weigh them. Because they are public, they can be criticised and improved.
Risk and uncertainty require a similar treatment. Consequentialism typically uses expected value. Sellars’ method asks for the rule that fixes how to estimate probabilities and how to handle fat tailed risks. Weather planning offers a familiar model. A council does not require certainty of ice before gritting roads. It uses forecast probabilities and the costs of false alarms and misses to set thresholds. The rule is public, can be audited, and can be tuned after winters with unusual patterns. In ethics, the same structure supports decisions about invasive procedures in medicine or about evacuation orders during fires. The content of the risk rule lies in its tested ability to connect evidence to action in a way that serves the warranted aim.
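A minimal sketch of that threshold rule, with invented costs and the simplifying assumption that gritting removes the ice risk entirely, might look like this:

```python
# Illustrative expected-cost rule for a grit / do-not-grit decision.
# Costs are invented, and the sketch assumes gritting removes the ice risk entirely.

COST_GRIT = 2_000            # cost of a gritting run (the price of a false alarm)
COST_ICE_UNGRITTED = 50_000  # expected cost of injuries and delays if ice forms on ungritted roads

def should_grit(p_ice: float) -> bool:
    """Grit when the expected cost of doing nothing exceeds the cost of gritting."""
    return p_ice * COST_ICE_UNGRITTED > COST_GRIT

# The implied public threshold is COST_GRIT / COST_ICE_UNGRITTED = 0.04,
# so this rule says grit whenever the forecast probability of ice exceeds 4 percent.
print(should_grit(0.03))  # False
print(should_grit(0.10))  # True
```

The point is only that the threshold falls out of stated, auditable costs, so it can be retuned after a winter with unusual patterns.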
The difference between act and rule consequentialism can be restated in this framework. Act consequentialism tells us to pick the option with the best expected outcome case by case. Rule consequentialism tells us to select and maintain the set of rules whose common acceptance yields the best outcomes overall. Sellars’ focus on practices favours a disciplined form of the rule view. Institutions function by stable, teachable rules with known entry conditions and known exit actions. For example, the highway code is not rewritten for every junction. It is fixed and only revised when review shows that a change will improve safety and flow. The code’s content and its claim to correctness come from what it does in practice, not from a private calculus at each traffic light. At the same time, the framework allows exceptions under higher order rules, for example, emergency vehicles may pass a red light when safe because that exception improves outcomes relative to the aim of saving life.
Consequentialists are often pressed about constraints and rights. In Sellars’ terms, these are higher order rules inside a justified practice. They do not float above outcomes. They express the insight that certain patterns of action, such as torture, coercion, or deception in research, systematically damage the aims we have reason to endorse, such as trust, dignity, and fairness, even when a short term tally seems favourable. The rules that forbid them are part of the best package in the long run when we consider stability, predictability, and the needs of vulnerable agents.
Two examples show this. Informed consent in medicine is a constraint that protects persons and keeps trust. In corrupt settings where consent is a sham, outcomes worsen. In research ethics, a ban on data fabrication and plagiarism is a constraint that preserves the aim of knowledge. Where either is tolerated, even for causes that are said to be noble, the practice degrades and good outcomes fall. This is why standards gatekeeping is justified by Sellarsian consequentialist thinking.
Measurement of value is the hard engineering of consequentialism. Sellars’ method requires that we build, calibrate, and audit our measures openly. Consider criminal justice. If the only measure is reconviction within one year, programmes that simply delay offending will look good. A better practice uses longer windows, measures of severity, and independent audits. It also takes account of side effects on families and neighbourhoods. The value function is then not a single lever. It is a rule that tells us how to combine parts in a way that serves the accepted aim. That rule is open to inspection and revision.
Finally, the harmony of images matters. Sellars makes a distinction between the manifest and the scientific image. The manifest image contains persons who give and ask for reasons. The scientific image contains facts about causes and capacities. Consequentialism sits where they meet. When officials say "You ought to vaccinate", the ought belongs to the manifest image of shared rules and responsibilities. The entry conditions that ground the directive, such as transmission rates and risk by age, belong to the scientific image. A good practice shows the bridge clearly so that citizens can see why the directive holds, and when it changes, why it changes.
The result is a practical consequentialism that fits Sellars’ standards. It fixes the meanings of better, worse, and ought by their roles in a public practice that connects measurement (entry) to action (exit) through rules of inference (language to language). It states its aggregation and priority rules and justifies them by the aims of the institution. It treats value instruments as accountable and revisable. It handles risk through explicit thresholds tied to costs. It locates constraints as higher order rules that protect the long run aims of justified practices. And it keeps the harmony of images by using science to set entry conditions while keeping the norm governed structure of reasons in view.
Three takeaways sum this up. First, consequentialist talk becomes clear and teachable when it is tied to explicit rules that say what counts as benefit, how we add up, and what priorities apply. Second, those rules are not guesses. They are the parts of institutions that can be checked by outcomes and revised when they fail. Third, constraints and rights are not enemies of outcomes. They are rules that make the practice work better over time for persons who live together under common standards. That is consequentialism done in Sellars’ way.
2. How This Applies In Schools
Suppose in a public education setting the overall aim is to expand life chances fairly for all students. Life chances means the real options a person will have over the long run, such as literacy that supports employment, numeracy that supports further study, and the habits that make learning and good living continue after schooling is over.
On Sellars’ way of working we fix the meaning of better and worse by the role these words play inside the practice that links evidence to action. Consequentialism then tells us to choose the options that raise expected life chances most under rules that respect fairness and are teachable, checkable, and revisable.
Entry links tie facts to provisional directives. A teacher and a department have measures that matter for life chances, for example reading age, attendance, and the presence of barriers at home. The data show that some pupils arrive with lower prior attainment, poorer attendance, and fewer supports. Entry rules license the claim that these pupils are at higher risk of poor life outcomes unless the school intervenes. The rules also say which indicators are noisy and which are stable. A single missed homework is weak evidence. A run of missed homework together with low reading age and low attendance is strong evidence.
Language to language links supply priorities. A school can adopt higher order rules that give extra weight to gains for the worst off, because a gain there shifts life chances more. This is the rule that justifies targeted small group tutoring in early literacy even when the time could be spread thinly across all pupils. The rule is not a result of private taste. It is justified by evidence that early reading is a gateway skill that supports all later learning. The rule is public and can be challenged if later evidence shows a different pattern of benefits.
Exit links fix what is done. The timetable allocates staff for reading groups that serve those with the lowest reading age. The budget buys books with controlled levels that match the decoding stage of each pupil. Parents are invited in for training sessions on shared reading. Attendance officers focus early attention on families where attendance is drifting. These actions are the outputs of the practice. They make the consequentialist content real.
This approach makes the value function explicit. A value function is a rule that says how to score gains for planning. In this setting it should include immediate attainment, persistence of gains, and spillovers into other subjects. It should also include estimates of how early gains change later options. It must also be open to public audit. One simple way is to score an intervention by literacy months gained at six months, at one year, and at two years, with larger weights for the later scores, since durable gains move life chances. A second part of the function is a fairness weight. A gain for a pupil in the lowest quartile of prior attainment counts more for planning than the same short term gain for a pupil in the top quartile, because the long run effect on life chances is larger.
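A minimal sketch of such a value function, with placeholder weights (the 0.2 / 0.3 / 0.5 durability split and the 1.5 fairness multiplier are illustrative assumptions, not recommendations):

```python
# Illustrative value function: literacy gains weighted toward the later, more durable checks,
# with a fairness multiplier for pupils in the lowest prior-attainment quartile.
# All weights are placeholder assumptions for the sketch.

DURABILITY_WEIGHTS = {"6_months": 0.2, "1_year": 0.3, "2_years": 0.5}
FAIRNESS_MULTIPLIER = 1.5  # extra weight for pupils in the lowest quartile of prior attainment

def planning_score(months_gained: dict[str, float], lowest_quartile: bool) -> float:
    """Score one pupil's literacy gains (reading-age months) for planning purposes."""
    durable_gain = sum(DURABILITY_WEIGHTS[k] * months_gained.get(k, 0.0)
                       for k in DURABILITY_WEIGHTS)
    return durable_gain * (FAIRNESS_MULTIPLIER if lowest_quartile else 1.0)

# A four-month gain that holds at every check, for a pupil in the lowest quartile:
print(planning_score({"6_months": 4, "1_year": 4, "2_years": 4}, lowest_quartile=True))   # 6.0
# The same short-term gain that fades by two years, for a pupil in the top quartile:
print(planning_score({"6_months": 4, "1_year": 2, "2_years": 0}, lowest_quartile=False))  # 1.4
```

The weights are public objects: a governor can ask why the two year check carries half the weight, and the school can answer or revise.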
A common worry says that this favours some pupils over others in a way that feels unfair and violates equality. Sellars’ method answers by returning to the practice and the aim. If the aim is fair expansion of life chances, a rule that simply spreads resources evenly can be less fair when starting points are uneven. The school states a priority rule that makes the case plain. Raise the floor strongly in the early years, keep core entitlements for all, and allow small additions for very high performers when an addition there has strong spillovers, for example when a high attainer can support peers in a mentoring role. The rule is public and its fit to the aim is tested over time.
Concrete choices give the rule content. Suppose a year group has funds for five extra teaching hours a week. Option A funds a small extension seminar for already high attainers in mathematics. Option B funds daily thirty minute reading groups for the lowest readers. The entry facts show that the reading groups produce three to six months of reading age gain over a term and that the gains persist if followed by structured practice. The extension seminar produces modest gains on a competition that has little bearing on later mathematics for most pupils. The priority rule and the value function together license Option B. If later evidence shows that enrichment for high attainers has large positive spillovers on peer learning or on teacher development, the rule can incorporate that. The point is that the rule explains the choice now and sets the conditions for revision later.
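As a rough sketch of how that comparison might be scored, with invented figures for pupils served, expected gains, chances of success, and fairness weights:

```python
# Illustrative expected-value comparison of the two options above.
# Pupil counts, gains, success probabilities, and fairness weights are invented for the sketch.

def expected_weighted_gain(pupils: int, months_gain: float, p_success: float,
                           fairness_weight: float) -> float:
    """Expected fairness-weighted reading-age months gained across the pupils served."""
    return pupils * months_gain * p_success * fairness_weight

option_a = expected_weighted_gain(pupils=12, months_gain=1.0, p_success=0.6, fairness_weight=1.0)  # extension seminar
option_b = expected_weighted_gain(pupils=12, months_gain=4.5, p_success=0.8, fairness_weight=1.5)  # daily reading groups

print(option_a, option_b)  # 7.2 versus 64.8, so the rules license Option B
```

If later evidence on spillovers changes the figures for Option A, the same calculation shows when the ranking should flip.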
Life chances are about long time horizons. So the practice must handle duration, uncertainty, and path dependence. Duration means measuring not only immediate progress but also whether the gain lasts. Uncertainty means using expected value, which is the gain weighted by the chance of success. Path dependence means recognising that early crossings of thresholds change all later steps. For example, a pupil who moves from decoding to fluent reading by the end of primary school is far more likely to enter secondary with confidence and to access the curriculum well than a pupil who has not made that move. The planning rule should therefore favour early reading and early attendance interventions, since these affect the whole future path, not just what happens in the next test.
Attendance shows the same structure clearly. The entry measures will include days missed, reasons, and patterns by week. The priority rule gives greater weight to improving attendance for pupils who are at risk of falling below the point where learning routines are formed. Actions include first day calls, home visits that remove practical barriers, breakfast club access, and timetable adjustments that place high value subjects early in the day. The value function tracks not only attendance change but also the knock on change in reading age and behaviour. Over a year the practice can show whether the attendance work is buying life chance gains or merely surface compliance. The rule is then revised in light of those gains.
Aggregation across pupils must be handled with care. A school can use a simple fairness rule for planning. For any two plans, prefer the one with the larger weighted sum of durable gains, where the weights give priority to the worst off and to changes that unlock later learning. This avoids a slide into a crude average that hides serious loss for a minority. Importantly, it also avoids a crude focus on only the lowest performers that ignores the need to maintain broad quality. The rule can be taught to staff and to governors and checked in budget meetings.
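A minimal sketch of that comparison rule, with invented per-pupil gains, shows why the weighting matters: the plan with the better crude average can still lose once losses for the worst off are weighted properly. The 2.0 priority weight is a placeholder assumption.

```python
# Compare two plans by a weighted sum of durable per-pupil gains,
# giving extra weight to pupils in the lowest prior-attainment quartile.
# Gains (durable reading-age months) and the priority weight are invented for the sketch.

def plan_score(gains_by_pupil: list[tuple[float, bool]], worst_off_weight: float = 2.0) -> float:
    """Sum durable gains, weighting gains and losses for the worst off more heavily."""
    return sum(gain * (worst_off_weight if lowest_quartile else 1.0)
               for gain, lowest_quartile in gains_by_pupil)

# Plan X: strong average, but the two lowest-attaining pupils go backwards.
plan_x = [(4.0, False), (4.0, False), (4.0, False), (-1.0, True), (-1.0, True)]
# Plan Y: smaller gains for most, solid gains for the worst off.
plan_y = [(1.5, False), (1.5, False), (1.5, False), (2.0, True), (2.0, True)]

# Crude averages: 2.0 for Plan X against 1.7 for Plan Y, yet the weighted rule prefers Plan Y.
print(plan_score(plan_x), plan_score(plan_y))  # 8.0 versus 12.5
```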
Constraints and rights enter as higher order rules that make the practice work in the long run. A rule that respects dignity bans public shaming as a tool to raise attendance, even if a short term calculation says it moves a number. A rule that protects trust bans manipulation of test entries to inflate results. A rule that protects equity sets a floor of provision for all, such as time with a qualified teacher and access to the library, that cannot be traded away even for a tempting gain on one metric. These rules are not outside consequentialism. They are inside it as the parts that deliver better outcomes over time for a community that must continue to learn together.
Teachers sometimes face the immediate sense that one pupil deserves more because of effort, behaviour, or background. Sellars’ approach places that feeling inside the practice. Deserving more becomes a claim that the priority rule should grant more weight to a certain case, given the aim. The case must then be made with reasons that the practice recognises. For background disadvantage, the case is that more resource raises life chances more and reduces unfair gaps. For admirable effort, the case is that recognising effort sustains a culture that improves outcomes across the group. The rule can allow a small budget for recognition that keeps the culture strong while keeping the main flow toward the aim of lifting the floor.
Planning over a long period can be understood as a simple cycle. Set the aim as fair life chance expansion. State the value function with durability and fairness weights. State priority rules that favour early gateways and the worst off, while preserving core entitlements. Choose actions that fit the rules. Measure short term and medium term results. Audit for perverse incentives. Revise the rules openly. This is the same entry, inference, exit structure we've been using, now run over years. It allows a teacher to explain a choice today to a parent, to a student, and to a governor, and to explain a revision next year when new evidence warrants it.
Three practical tools make this concrete. First, a gateway tracker that marks whether each pupil has crossed key thresholds. For reading, for example, decoding by the end of year two, fluency by the end of year four, and comprehension at grade level by the end of primary. The tracker sets entry conditions for extra support. Second, a durability log that checks gains at six months and one year. This prevents the practice from being captured by short term boosts that fade. Third, a fairness dashboard that shows the distribution of provision and gains by prior attainment and by background. This guards against quiet drift back to even spread when even spread would fail the aim. Much of this happens in schools today.
Finally, the harmony of images matters inside a school. The manifest image is the world of reasons, meanings, and norms. It is where a teacher explains to a pupil why a rule applies and why a timetable shifted. The scientific image is the world of causes and measurements. It is where the reading scheme, the timetable for practice, and the attendance interventions are justified by evidence. Good leadership brings them together. The directive "you ought to attend the decoding group" is grounded by the measured fact that this group raises the chance of fluent reading and by the public aim of fair life chances. When later the pupil no longer needs the group, the directive changes, and the practice shows why.
On this approach, prioritisation in education stops being a private feeling that some deserve more. It becomes a public and teachable rule that uses evidence to move life chances where they can be moved most, that protects core entitlements, that respects constraints that keep the community healthy, and that looks far enough ahead to see how early crossings of thresholds carry through a life.
3. Williamson's Knowledge First Challenge
Sellars says that to understand a word is to know how to use it in a rule governed practice. In school terms, to understand the word fraction is to have been trained in the moves that link classroom situations to talk, for example when to say this is a half, what follows from that, and what to do with it in work. Meaning lives in the entry moves from the world to words, the language to language inferences, and the exit moves from words to action.

Timothy Williamson’s challenge is that this picture, by itself, does not secure meaning or good reasoning. He centres truth and knowledge. The core idea is simple. Teaching, planning, and assessment should be guided by what we know to be true, not only by which inferences a practice currently permits.
Here's an example. A student says the shape is a triangle because it has three sides. That rule of use usually works. But a concave three sided figure breaks it. If meaning were fixed only by the inferences students are disposed to make, a mistaken classroom habit could fix the meaning of triangle. On Williamson’s view, the meaning is fixed by truth conditions, in this case, whatever would make the sentence "it is a triangle" true, in all the relevant cases. Good teaching aims to bring pupils to know those conditions, not merely to follow a local set of inferences that mostly succeed.
Here's another example. Two teachers both use the word fluency. One takes it to mean fast decoding. The other takes it to mean accurate, paced reading with expression. If Sellarsian inferential habits fixed meaning, the word would split. In fact, the school treats one use as correct because it matches the truth about what fluency is, namely accuracy and pace that support comprehension. On Williamson’s picture, reference to how things are in the world, and to our knowledge of that, has priority over a free standing web of internal rules.
A further worry about an inferential picture is that many different meanings can fit the same limited set of classroom inferences. If we only look at a few consequences that teachers happen to use, we can match more than one truth condition to them. To fix content we need contact with truth about application conditions, for example, what really counts as a fluent read of a soliloquy in Macbeth, not which lines on a mark scheme we usually tick.
Williamson also says that the right norm for assertion and action is knowledge. So, you should tell a pupil that a claim is true only if you know it is true. You should base a plan on claims you know, and when you do not yet know, you should say so and design the next steps to find out.
Consequentialism tells you to choose the option with the best expected results for the aim you care about, such as fair expansion of life chances. The Williamsonian addition is the knowledge constraint. It requires that the expected results driving a plan be supported by knowledge grade evidence where it matters, and that where knowledge is missing we name the gap honestly and run a design that will close it at an acceptable cost.
Return to the earlier example where a school must choose between structured decoding groups and buying tablets for all. The knowledge question asks whether you know that decoding raises durable reading outcomes in your setting. If yes, and you do not know that tablets improve reading in your setting, the knowledge norm says you should fund decoding first. If you suspect your context differs from the studies, you should do something like run a simple randomised rollout across classes for one term, measure with reliable probes, and then act on what you come to know. The announcement to staff should match the status of your evidence. We know structured decoding works here, so we are doing it. We do not yet know the value of tablets for reading, so we are testing them this term and will decide after we know.
Another familiar example concerns placing the right students into support. Suppose a reading score of 90 is the cut for extra help. Scores are noisy. A pupil on 89 may truly be above the threshold. The knowledge norm introduces a margin for error discipline, which requires the school to take further steps before placing the student: widen the band around the cut, take a confirmatory probe, and look at a second indicator such as fluency or accuracy rate. The aim is to make "this student needs decoding support" a claim you know, not a guess balanced on noise. The same rule governs exit from support. Ask for durable evidence, for example a cold read after six weeks, so that you know the gain will last.
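A sketch of that placement discipline, assuming a cut of 90, a band of plus or minus 5, and fluency in words correct per minute as the second indicator (all illustrative choices, not fixed recommendations):

```python
# Illustrative placement rule with a margin-for-error band around a cut score of 90.
# The band width, the probe count, and the fluency check are assumptions for the sketch.

CUT = 90
BAND = 5  # scores within CUT plus or minus BAND are treated as borderline, not as known

def placement_decision(reading_scores: list[float], fluency_wcpm: float) -> str:
    """Decide support placement; inside the band, demand converging evidence."""
    mean_score = sum(reading_scores) / len(reading_scores)
    if mean_score <= CUT - BAND:
        return "place in decoding support"          # clearly below the cut: knowledge grade
    if mean_score >= CUT + BAND:
        return "no extra support needed"            # clearly above the cut: knowledge grade
    # Borderline: one noisy probe is not knowledge. Require a confirmatory probe
    # and a second indicator (here, fluency) before the placement claim is asserted.
    if len(reading_scores) < 2:
        return "borderline: take a confirmatory probe before deciding"
    return ("place in decoding support" if fluency_wcpm < 90
            else "monitor and retest in six weeks")

print(placement_decision([89], fluency_wcpm=92))      # borderline: take a confirmatory probe before deciding
print(placement_decision([89, 88], fluency_wcpm=84))  # place in decoding support
```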
Attendance claims are similarly affected. A class has rising attendance after a breakfast club starts. Before telling governors that the club caused the rise, ask whether you know this, or only believe it. If you do not know, set up a quick comparison next half term, for example a phased start across classes, so you can come to know whether the club made the difference or whether the rise was seasonal.
For Williamson, your evidence is only what you know. A progress graph counts as evidence only if you know that the test is valid in your context, that scoring is sound, and that the data handling is clean. This pushes you to calibrate running records, to train scorers, to keep an audit trail, and to separate practice probes from decision probes.
From Williamson it follows that a head teacher should model knowledge-aligned speech. Say a programme improves something only when you know that it does. Say it is likely to help if certain conditions hold when you have suggestive but not decisive evidence, and say exactly what will turn that likelihood into knowledge, for example two months of probes with independent scoring.
Consequentialist planning under Williamson's knowledge first approach becomes straightforward. Choose the option with the highest known expected value. If no option has a known value, choose a safe to fail design that will most quickly deliver knowledge without undue risk. When the stakes are high and errors are costly, raise the knowledge bar. When the stakes are low and the action is reversible, you may try more while learning. Always label which parts of the plan are known and which are under test.
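A rough sketch of that decision rule, with options labelled by whether their expected value is knowledge grade in this setting and by whether the action is reversible (the structure and labels are assumptions made for illustration):

```python
# Illustrative knowledge-first choice rule: adopt the best option whose expected value
# is knowledge grade; otherwise run a safe-to-fail trial that will create the knowledge.
# The fields and the stakes switch are assumptions for the sketch.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_value: float   # expected durable gain on the school's planning scale
    knowledge_grade: bool   # is the estimate supported by evidence we know applies here?
    reversible: bool        # can the action be backed out cheaply if it fails?

def choose(options: list[Option], high_stakes: bool) -> str:
    known = [o for o in options if o.knowledge_grade]
    if known:
        best = max(known, key=lambda o: o.expected_value)
        return f"adopt {best.name} (known value {best.expected_value})"
    # No option is knowledge grade: with high stakes, insist on knowledge before acting;
    # with low stakes and a reversible action, trial while learning.
    trialable = [o for o in options if o.reversible or not high_stakes]
    if trialable:
        return f"run a safe-to-fail trial of {trialable[0].name} and decide once we know"
    return "defer: gather knowledge before acting"

print(choose([Option("structured decoding", 6.0, True, True),
              Option("tablets for all", 4.0, False, False)], high_stakes=True))
```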
This also cleans up the yearly cycle introduced in the previous section. Aim at fair life chance expansion. State a value rule that rewards durability and gives extra weight to the worst off. Set priority rules that favour early gateways while keeping core entitlements for all. Choose actions. Measure with instruments you know are valid. Audit by asking which assertions were knowledge grade, where margins for error were too tight, and where gaming crept in. Revise rules openly. The same entry, inference, exit structure is still there, only now each assertion that drives a decision should meet the knowledge norm or be explicitly marked as a live uncertainty with a plan to resolve it.
Williamson's knowledge first norm helps by making clearer what we are doing and why. Two short vignettes show the gain. Imagine a parents’ evening. A parent asks whether her child still needs the decoding group. The teacher says, "Last term we thought so. Now we know, because the last three cold reads met the target and a six week follow up held steady. We will step down for a term and check again." The explanation uses known facts and clear thresholds. It is not a vague appeal to practice.
Or again, imagine a budget meeting where governors ask why enrichment for very high attainers in mathematics was delayed. The head replies, "because we did not yet know that our chosen model would raise attainment beyond what pupils already get. We are running it in half the classes this term with an external assessor. If we come to know that it works here, we will scale it next year. In the meantime, we put funds into structured decoding where we already know the benefits are large and durable for those most at risk."
In sum, Williamson’s challenge pushes a school to treat truth and knowledge as the final arbiters of meaning, assertion, and planning. Sellars remains valuable in showing how training, rules, and practices make language and learning possible. The knowledge norm then sits over the top as the standard for when to assert and when to act. Consequentialism still chooses by expected results, but now the numbers that drive choices must be either known or openly tested. The result is a system where you can say what you know, run designs to learn what you do not yet know, protect pupils from knife edge decisions by using margins for error, and spend most on what you know changes life chances, while learning quickly and safely about the rest.
4. Williamson's Vagueness
Tim Williamson's work also draws attention to another challenge that a Sellarsian inflected consequentialism needs to address: vagueness. Williamson’s theory of vagueness says that many ordinary terms have sharp boundaries in the world even when we cannot know exactly where the boundary lies. A vague term is one that admits borderline cases such as "fluent" or "disadvantaged". On his view there is a fact of the matter about whether a given pupil falls on one side or the other, but near the boundary no amount of reflection will let us know the truth with certainty. This imports an immovable constraint into educational decision making. Some decisions, by their nature, force yes or no classifications using vague predicates at their boundary. For many students close to a cut, we cannot know which side is true. Educationalists tend to underestimate this because school culture encourages tidy thresholds, fine grained rubrics, and confident claims about small score differences.
We can begin to see this if we consider a simple reading case. Imagine a school sets a fluency target of ninety correct words per minute on a cold passage. A student reads eighty nine, another reads ninety one. We often speak as if we know that the second is fluent and the first is not. Williamson’s margin for error principle says that knowledge must be safe from small changes. If a difference of one or two words would make the statement false, then claiming to know that the pupil is fluent is unsafe at the boundary. The point is not scepticism about everything. The point is that near a vague boundary, small perturbations in text, fatigue, or scoring can flip the truth. In that region, knowledge grade assertion is out of reach. The constraint is built into vagueness, not into teacher skill.
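Williamson's margin for error principle can be put schematically, and then applied to the cut; the size of the noise margin m is an assumption about measurement, not something the principle supplies.

```latex
% Margin for error, schematically: knowledge requires truth in all sufficiently close cases.
K\varphi \ \text{in case } \alpha \;\Longrightarrow\; \varphi \ \text{true in every case within margin } m \text{ of } \alpha
```

Applied to a fluency cut of ninety words per minute: knowing that a pupil is over the cut requires an observed score of at least ninety plus m, where m covers the noise from passage, fatigue, and scoring. A reading of ninety one, with a margin of two or three words, does not clear that bar.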
The same applies to "disadvantaged". Imagine a school uses free school meal status as a marker. A family loses eligibility by a small change in income. In most respects their need has not changed. The term "disadvantaged" has borderline cases where we cannot know that the family is on one side rather than the other in the sense that matters for life chances. If a plan treats the line as if we could know the truth at the borderline, it will create cliff edges and perverse incentives. That is the cost of underestimating vagueness.
Assessment rubrics do not remove this. Suppose a rubric for writing has five descriptors for structure. Two scripts sit on the boundary between levels three and four. Moderation helps, training helps, exemplars help, but at the line there will remain cases where we cannot know which level is truly correct if the descriptors are vague. Just to ensure we understand the stubborn difficulty of vagueness, Williamson’s claim is that vagueness comes with higher order vagueness, which is the idea that the notion of "borderline" has borderline cases. So the idea that more descriptors will eventually remove all grey zones is false. The grey moves with the descriptors.

Target setting shows the same constraint. Imagine a school sets a target that ninety five percent of students will be fluent by July. If fluency is inherently vague near the cut, and if measurement noise is real, then there is a hard limit on how precisely the school can know its true rate near the target. This does not forbid targets. It forbids pretending that the last few percentage points are knowable in the way we know that a pupil with forty words per minute is not fluent or that a pupil with one hundred and twenty is fluent.
These facts change how consequentialism should be used. Consequentialism asks us to choose the option with the best expected results for the aim we care about. A knowledge first version asks that the claims driving the plan be known where the costs of error are high. Vagueness now says that for some decisions, knowledge at the cut is not available. The plan must avoid designs that rely on knife edge classifications. Instead it must be robust to category uncertainty, especially where incentives and sanctions attach to the cut.
Practically, schools should not decide on a single probe at a sharp cut. Close to the line, a school should require repeated measures and a second indicator so that the claim that this student needs decoding support becomes safer. It could treat a band as the decision region, for example eighty five to ninety five words per minute, within which the default is to give the extra help, then retest after six weeks. That rule spends a little more resource but avoids harm at the boundary that vagueness makes inevitable.
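Written out, the band rule is only a few lines. The 85 to 95 band and the six week retest come from the paragraph above; the rest is illustrative.

```python
# Decision-band rule for entry to decoding support.
# The band and the retest interval follow the text above; everything else is a sketch.

LOWER, UPPER = 85, 95  # words correct per minute

def entry_decision(wcpm: float) -> str:
    if wcpm < LOWER:
        return "enter decoding support"
    if wcpm > UPPER:
        return "no extra support; routine monitoring"
    # Inside the band the truth is not knowable from this probe alone:
    # default to giving the help, then retest after six weeks.
    return "enter decoding support by default; retest in six weeks"

for score in (80, 88, 91, 97):
    print(score, "->", entry_decision(score))
```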
Exit from support is the same. The approach should not be to remove help on the first pass over a threshold. Require a cold read after a gap to show durability. Require that the teacher knows that the gain will hold. This applies the margin for error idea in the same way as the familiar practice of not promoting a swimmer from one group to the next on the strength of a single good day at the pool.
Grading policy should avoid using borderline case results to steer life changing outcomes. If university reference bands or internal awards depend on a boundary, state a buffer. Within two marks of the boundary, allow review by a second reader who is blind to the first score, or allow a short additional sample. Announce that these cases are inherently borderline and will be treated with a tie breaker rule. This is an institutional way to respect that we cannot know the truth in the grey zone while still reaching a fair decision.
School wide metrics must avoid perverse incentives at vague borderlines. Attendance rewards that flip on ninety five percent create gaming around things like excuse/explanation notes and early registers. Use stepwise rewards at broader bands and weight improvement rather than only attainment. Publish how the bands are set and keep them stable. The constraint is that any attempt to pin a crisp line in a vague field will produce pressure and injustice for cases near the line.
Teacher evaluation is another familiar trap. A colour code that classifies teachers as green amber red on small year to year shifts in test scores ignores vagueness and noise. Replace it with wide bands, multiple years, and multiple measures, and move decisions out of the boundary region. That is how a school makes standards usable in the face of vagueness.
Communication with parents should follow the knowledge norm and the vagueness constraint. Do not claim certainty near a borderline. Say what is known and what is close. "Your child’s fluency is close to the threshold, so we are keeping support in place for six more weeks and will retest on a fresh passage." That sentence honours the norm on assertion and teaches families that grey zones are real and managed, not denied.
Planning needs one more adjustment. Because vague predicates have sharp but unknowable boundaries, there will always be students whose true status we cannot know this term. A fair plan spreads risk. Keep core entitlements for all so that students wrongly excluded from a programme at the line are not left with nothing. Prioritise resources to those far from the line where knowledge is available and gains are large, while using tie breaker rules that favour the worst off inside the boundary band. That is a consequentialist response that pays the vagueness cost openly.
Finally, be careful with new labels. Mastery, fluency, deep understanding, disadvantaged, and at risk are all vague. They are useful kinds, but their usefulness depends on how the practice handles the boundary. If the boundary is linked to sanctions, widen the band and add safeguards. If the boundary is used for information only, the band can stay as it is, since the cost of error is lower. Train staff to hear the unspoken rider near the borderline. "This is true so far as we know, and near the line it may be unknowable for now." The aim is honest speech that matches the limits of the case.
Williamson’s theory of vagueness fixes a ceiling on what can be known at boundaries and it requires designs that work under that ceiling. Educationalists underestimate this when they trust that more descriptors, finer scales, and stricter moderation can eliminate the vagueness. The right response is to build margins for error into entry and exit, to avoid cliff edge incentives, to use wider bands and repeated measures for high stakes choices, and to say plainly when a case is close and how the school will proceed. That is how a knowledge first consequentialism respects vagueness and still raises life chances fairly.
5. Conclusion
Sellarsian consequentialism filtered by Williamson keeps three working parts. First, meanings and policies live inside public practices with entry links from facts, language to language links among reasons, and exit links into action. Second, choices are made by expected results for the general aim, such as fair expansion of life chances. Third, assertion and decision are constrained by knowledge and by vagueness. You assert only what you know. Where a predicate is vague near a cut, knowledge at the cut may be out of reach, so designs must be safe there.
In class this looks like a simple rule set. Spend most on what you know works for the worst off. Label unknowns and run quick trials to learn. Use margins for error near cuts. For placement and exit, build a decision band and take repeat measures so that the claim that support is needed or no longer needed is robust. Keep core entitlements for all so that students near a boundary are never left with nothing. Prefer thresholds that change slowly and publish them so that families and staff can see the rule.
Communication follows the knowledge norm. Say it is true only when you know it is true. Say it is likely when the design is still learning, together with what will settle the matter. Planning then repeats in a visible cycle. Set the aim. State a value rule that rewards durable gains and weights the worst off. Set priority rules that favour early gateways while keeping core entitlements. Choose actions. Measure with instruments you know are valid. Audit for gaming and knife edge effects. Revise openly. The result is a school that raises life chances with clear rules, honest claims, and designs that work under the hard limits that vagueness imposes.
Long Run Planning Cycle for Fair Life Chances — With Examples
This cycle sets the aim, states value and priority rules, chooses actions, measures results, audits incentives, and revises the rules openly. It uses the same Sellarsian entry, inference, exit structure extended over years. A minimal code sketch of one pass through the cycle, with illustrative numbers, follows the list.
1. Set AIM: Fair life chance expansion. Example: For Year 3, raise decoding fluency to 90 percent within 12 months and close the disadvantage gap by 40 percent while maintaining core provision for all.
2. Value function: Durability and fairness weights. Example: Count reading gains at 6 and 12 months, weighting the 12‑month check twice as much as the 6‑month check, and weight gains for pupils in the lowest prior‑attainment quartile by 1.5.
3. Priority rules: Favour early gateways and the worst off, while preserving core entitlements. Example: Allocate the first 5 support hours each week to early decoding groups for pupils below benchmark; protect daily whole‑class reading and library time for every pupil.
4. Choose actions: Timetables, staffing, budgets. Example: Schedule four 30‑minute decoding groups each day; assign a trained assistant; purchase decodable readers; set first‑day attendance calls for pupils in the groups.
5. Measure: Short term and medium term results. Example: Use weekly curriculum‑based measures, half‑term reading age checks, and termly fluency probes; record attendance and homework adherence for the same pupils.
6. Audit: Check for perverse incentives and gaming. Example: Verify that pupils with low attendance are tested, that results are not inflated by rehearsal, and that no class time is pulled from protected core entitlements.
7. Revise rules: Open review and update. Example: If gains fade at 6 months, increase home reading coaching; if attendance is the bottleneck, move groups to early slots and add breakfast club access; publish the change to staff and governors.
8. Structural reminder: Same entry, inference, exit structure, run over years. Example: Entry = data capture and thresholds; Inference = apply value and priority rules; Exit = timetable, staffing, budget changes. Repeat each year with public review.
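As a closing illustration, here is a minimal sketch of one pass through the cycle as a checkable scoring and audit loop, using the step 2 weights above. The pupil records, the fade test, and the attendance threshold are invented for the sketch.

```python
# One pass of the planning cycle: score durable, fairness-weighted gains (step 2)
# and run simple audits for fading gains and attendance bottlenecks (steps 6 and 7).
# Pupil records and thresholds are invented for the sketch.

WEIGHT_6M, WEIGHT_12M = 1.0, 2.0       # 12-month gains count twice the 6-month gains
LOWEST_QUARTILE_WEIGHT = 1.5           # fairness weight from step 2

pupils = [
    {"name": "A", "gain_6m": 3.0, "gain_12m": 2.5, "lowest_quartile": True,  "attendance": 0.96},
    {"name": "B", "gain_6m": 4.0, "gain_12m": 1.0, "lowest_quartile": False, "attendance": 0.91},
]

def pupil_score(p: dict) -> float:
    """Step 2: durable, fairness-weighted reading gain for one pupil."""
    base = WEIGHT_6M * p["gain_6m"] + WEIGHT_12M * p["gain_12m"]
    return base * (LOWEST_QUARTILE_WEIGHT if p["lowest_quartile"] else 1.0)

def audit(pupils: list[dict]) -> list[str]:
    """Steps 6 and 7: flag fading gains and likely attendance bottlenecks for open review."""
    flags = []
    for p in pupils:
        if p["gain_12m"] < 0.5 * p["gain_6m"]:
            flags.append(f"{p['name']}: gains fading by 12 months, review home reading coaching")
        if p["attendance"] < 0.92:
            flags.append(f"{p['name']}: attendance may be the bottleneck, check early-slot grouping")
    return flags

print({p["name"]: pupil_score(p) for p in pupils})  # {'A': 12.0, 'B': 6.0}
print(audit(pupils))
```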