Litland and Definition: A Note on Metaphysics, Logic and Epistemology in Philosophy of Education (5)

Jon Erling Litland’s paper 'Real, Immediate, Multiple: Towards a Theory of Definition' does something at once technical and intuitive. The intuitive thought is: instead of starting with essence and then asking what a real definition is, why not start with real definition and then explain essence in terms of it?

In ordinary philosophical language, essence is usually the question of what something is, what belongs to its nature. Real definition is the question of how something is defined in reality, not just how a word is defined in a dictionary, but what makes the thing the thing it is. Litland’s proposal is that this second notion, real definition, may be more basic than the first. Essence would then be what follows from a thing’s real definition or, in an important refinement, from all its real definitions.

That is the central idea of the paper. A thing’s essence is not taken as primitive. Rather, the basic explanatory work is done by real definition. This changes the direction of explanation. Standard Finean essentialism often starts from claims like “it is true in virtue of the nature of x that p.” Litland wants to ask instead, what are the defining facts or inferential roles in virtue of which x is what it is, and how can essence then be recovered from those. The paper is therefore both a proposal in metaphysics and a sketch of a formal framework. It is trying to show how one could build a logic around this alternative starting point.

One reason Litland thinks this route is promising is that some things appear to have more than one equally good definition. This is where the notion of an essential manifold comes in. Suppose we ask how the number 2 is defined. One natural thought is that 2 can be defined as the cardinal number abstracted from any pair of objects. It could be abstracted from Batman and Robin, or from two apples, or from two chairs, or from any two distinct items. There is no unique privileged pair from which the number 2 must be obtained. So 2 seems to admit many different real definitions, not just one. That plurality of acceptable definitions is what Fine had called an essential manifold.

This gives a more flexible and arguably more realistic account of derivative entities than an account that insists on a single definition. If an object or property can be defined in several equally good ways, then we should not force the theory to pretend that there is only one canonical definition. That would distort the structure of the case. Litland therefore argues that if one starts from real definition, one can better accommodate such manifold cases. The number 2 is his main example, but he quickly widens the point. A geometric type can be defined from any suitable token. A colour like red can be defined by pointing to any red thing. A direction can be defined by any line that has that direction. A class can be defined by abstraction from any coextensive property. A complex property such as being a wise philosopher can be defined by abstracting from different suitable propositions involving different individuals. In all these cases, the thought is not that there is one hidden uniquely correct definitional route and all the others are somehow derivative or defective. Rather, the object or property seems to admit a family of genuine definitions.

What belongs to the essence of an item is not what follows from one selected definition, but what follows from each of its genuine definitions. That makes the resulting essence more stable and less hostage to arbitrary choice. If 2 can be defined from Batman and Robin, and also from two apples, then it is not essential to 2 that Batman and Robin are involved. What is essential is only something more abstract, such as that 2 is the number abstracted from some two-membered plurality. The manifold of definitions thus filters away accidental detail and leaves what is definition-invariant. So if you want what is truly essential, you should not build it out of what happens to appear in one merely convenient route of definition.
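The filtering idea can be put schematically. In notation of my own, not Litland's: the essence of an item is what lies in the consequences of every one of its genuine definitions.

```latex
% Schematic gloss, not Litland's own notation:
% Def(x) = the manifold of genuine definitions of x
% Cn(D)  = what follows from definition D
\mathrm{Ess}(x) \;=\; \bigcap_{D \,\in\, \mathrm{Def}(x)} \mathrm{Cn}(D)
```

On this gloss, "involving Batman and Robin" drops out of the essence of 2 because it fails to follow from the apple-based definition, while "being the number of some two-membered plurality" survives in the consequences of every definition in the manifold.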

Litland then argues that the basic notion of definition should be both immediate and full. An immediate definition is one that defines something directly, not by going through a chain of other definitions. His example is the set {{Socrates}}. It may be immediately defined as the result of applying set formation to {Socrates}. It may only be mediately defined as the result of applying set formation twice to Socrates. Litland’s thought is that immediate definition is the more basic notion because mediate definition can be built out of it, while the reverse is much less clear.

A full definition is one that gives the whole defining basis at once, not just some fragment or part of it. Litland argues that this too should be basic. A partial definition can be understood as something that is part of a fuller one. But starting with partial definition and trying to recover full definition is much harder. He gives the example of defining a real number by increasingly rich constraints from above and below. If every candidate full definition can always be extended further, then a notion of full definition based on “not being properly extendable” may fail. So again, immediate full definition is proposed as the primitive building block.
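A toy version of the real-number worry, again in schematic notation of my own. Suppose we try to define $\sqrt{2}$ by rational bounds from above and below:

```latex
% Each candidate "full" definition is a set of rational constraints:
D_1 : \quad 1.4 < r < 1.5
% but every such constraint set can be properly extended by tighter bounds:
D_2 : \quad 1.41 < r < 1.42, \qquad
D_3 : \quad 1.414 < r < 1.415, \;\ldots
```

If every candidate can be tightened like this, then no constraint set is "not properly extendable", and a notion of full definition characterised that way never gets off the ground. Hence the move to take full definition as primitive rather than defining it as an unextendable partial definition.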

Litland is trying to construct a formal theory that can represent real metaphysical definition without circularity or confusion. To do that, he needs a way of saying clearly what is in the defining position and what is in the defined position. Why? Because he wants to allow self-definition in a certain limited sense. That sounds paradoxical at first. How could something define itself? Well, the number 2 can be abstracted from any two-membered plurality, and one such plurality might even contain 2 itself. So there can be cases where the object being defined appears in the definiens. The theory should be able to register that without collapsing into vicious circularity.

To handle this, Litland develops a formal notation in which the variable-binding structure of definitions is made explicit. The notation is not the main thing, thank god, because it is very hard. What matters is the reason for it. He wants to distinguish the occurrence of an item as what is being defined from occurrences of that same item inside the content of the definition. Ordinary sentential notation cannot do this well enough. So the formalism is designed to mark those roles sharply. Once this framework is in place, Litland attempts to define logical operations themselves by real definition. This brings together metaphysics and proof theory, which is a bit daunting. But the general idea is this: if essence is to be defined in terms of real definition, we need to know what “follows from” a definition. To explain what follows from what, we need some account of logical consequence. And if we want a self-contained theory, we should not simply help ourselves to the logical constants as already understood. We should show how they themselves can be defined.

Litland’s strategy is inferentialist, so in the same ballpark as Brandom. Instead of defining logical operations like disjunction by truth-conditions alone, he defines them by their role in inference. Disjunction, for example, is characterised by the familiar introduction and elimination rules. From p one may infer p or q. From q one may infer p or q. And if from p one can derive r, and from q one can derive r, then from p or q one may derive r. This is standard proof-theoretic material, but Litland’s point is to bring it inside the object language of a theory of definition. The operation of disjunction is itself defined as that operation whose values are governed by these inferential roles.
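The three rules just stated can be written out directly. A minimal sketch in Lean 4, using its built-in disjunction; this illustrates the rules themselves, not Litland's own formalism:

```lean
-- Introduction: from p, infer p ∨ q; from q, infer p ∨ q.
example (p q : Prop) (hp : p) : p ∨ q := Or.inl hp
example (p q : Prop) (hq : q) : p ∨ q := Or.inr hq

-- Elimination: if r follows from p, and r follows from q,
-- then r follows from p ∨ q.
example (p q r : Prop) (h : p ∨ q) (hpr : p → r) (hqr : q → r) : r :=
  Or.elim h hpr hqr
```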

This lets him claim that a logical operation can be defined independently of the others, at least in principle, if one works inferentially in this way. That is attractive because many philosophers dislike views on which, say, disjunction can only be understood in terms of quantifiers and conditionals that are already taken for granted. Litland wants to show that one can build definitions of logical operations more locally, by the rules that govern them. There is, however, a cost. The notation becomes elaborate because rules and arguments themselves have to be treated as items the theory can talk about. Litland openly acknowledges this. But he thinks the complexity is worth it because it makes explicit what was already implicit in inferentialist views of logic. 

He also notes an important danger here: the old “tonk” problem. If one defines a logical operation by introduction and elimination rules that are not harmonious, one can get disaster. What disaster, you cry? Well, when philosophers say that a logical operation, such as “and” or “or”, can be defined by rules, they usually mean two kinds of rules. One kind tells you when you are allowed to introduce the operation. The other tells you what you are allowed to infer once it is there. Take “and”. If I know that “the pupil arrived” and I know that “the pupil handed in the essay”, I am allowed to introduce “and” and say “the pupil arrived and handed in the essay”. Then, once I have that combined statement, I am allowed to eliminate the “and” and go back to either part on its own. From “the pupil arrived and handed in the essay” I can infer “the pupil arrived”. I can also infer “the pupil handed in the essay”. Those rules fit together neatly. They are well matched.
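The conjunction rules in the pupil example have the same well-matched shape. In Lean 4, again only as an illustration of harmony:

```lean
-- Introduction: from both parts, infer the conjunction.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := And.intro hp hq

-- Elimination: from the conjunction, recover either part on its own.
example (p q : Prop) (h : p ∧ q) : p := h.left
example (p q : Prop) (h : p ∧ q) : q := h.right
```

The elimination rules give back exactly what the introduction rule put in, and no more. That balance is what “tonk” breaks.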

The “tonk” problem is the name philosophers use for what happens when the rules are not well matched. Imagine I invent a fake new logical word, call it “tonk”. Now suppose I give it these two rules. First, an introduction rule that is wildly generous. From any statement at all, I can infer a “tonk” statement. So if I know “the window is open”, I can immediately infer “the window is open tonk the moon is made of cheese”. Second, an elimination rule that is also wildly generous. From a “tonk” statement, I can infer the second part. So from “the window is open tonk the moon is made of cheese”, I can infer “the moon is made of cheese”.

Now look what has happened. Starting from any true or ordinary statement whatever, I can use “tonk” to prove absolutely anything. From “the window is open” I can get “the moon is made of cheese”. From “today is Tuesday” I can get “2 + 2 = 5”. The whole system collapses, because the invented word lets nonsense flood in.

That is the disaster. So when philosophers say there is an old “tonk” problem, they mean this. You cannot just make up a logical operator by giving it any introduction rule you like and any elimination rule you like. The two sets of rules have to be in balance. They have to be harmonious. The elimination rules should not let you get out more than the introduction rules legitimately put in. Think about your bank account. If the rules for depositing money and withdrawing money are not matched, the system breaks. If I can deposit £1 but withdraw £1,000 just because the account has a special label on it, then the account system is worthless. “Tonk” is like that. It lets you “withdraw” more than you ever “deposited”. So not every package of rules really defines a genuine logical operation. Some rule packages only look like definitions. In fact they create chaos. That is why philosophers are careful here. They want rules that explain a real operation, not a fake one that allows anything to follow from anything.
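The collapse can be made vivid by postulating “tonk” and its two mismatched rules as axioms, here in Lean 4. The connective is of course fabricated; that is the point:

```lean
-- A made-up binary connective.
axiom Tonk : Prop → Prop → Prop

-- Over-generous introduction: any p yields "p tonk q", for any q whatever.
axiom tonk_intro {p q : Prop} : p → Tonk p q

-- Over-generous elimination: "p tonk q" yields q.
axiom tonk_elim {p q : Prop} : Tonk p q → q

-- Chaining the two rules proves any q from any p: the system is trivialised.
theorem anything_goes (p q : Prop) (hp : p) : q :=
  tonk_elim (tonk_intro hp)
```

Lean accepts the derivation because the axioms force it to; the lesson is that such a rule package does not define a genuine operation but a machine for proving anything from anything.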

So not every putative definitional package really succeeds in defining a genuine operation. The theory of real definition must therefore be selective. Not everything that looks like a definition really defines. That thought leads into the next problem. If essence is what follows from all definitions of an item, how do we talk about all definitions? Litland admits that there seems to be no neat internal way of defining a general relation that captures this in full type-theoretic generality. So he introduces a primitive relation, in effect a formal device saying that a certain property applies to all and only the definitions of an item. This is one of several places where the paper is exploratory rather than finished. Litland is candid about this. He is sketching a path, not pretending to have completed the system. (It seems a feature of many inferentialists that what they offer are promissory notes - Williamson accuses Brandom of doing this too, and Brandom's an inferentialist.)

Still, the philosophical point is clear enough. If an item has multiple definitions, then there should be some unity across them. They are not just an arbitrary heap. They belong together as definitions of the same thing. Litland calls this definitional unity. A given definition should allow us to recover a property that characterises the whole family of definitions to which it belongs. His examples make the idea vivid. From one definition of 2 as the cardinality of Batman and Robin, one should be able to recover the broader pattern, being the cardinality of some two-membered plurality. From one definition of red via one red object, one should be able to recover the broader pattern, being the colour of some chromatically equivalent object. From one definition of a set like {{2}}, one should be able to recover the fact that any genuine definition of that set must exhibit it as the result of applying set-formation to {2}. This gives the manifold internal shape.

There is also a delicate issue here. Some candidate definitions seem to depend on contingent features of the objects used. If red is defined via a particular red object, then that object might not have been red. How can such a contingent item serve in a real definition of a colour? Similarly, if a shape-type is defined by a token, the token might have had a different shape. Litland explores ways of repairing this, for example by distinguishing an object from the object-as-it-actually-intrinsically-is. The aim is to preserve the intuitive appeal of token-based or ostensive definitions without letting contingent accidents infect the essence. This part of the paper is exploratory, but it shows the sensitivity of the approach. Real definition cannot simply mimic ostension or abstraction naively. It must respect constraints of internality and essence.

Only after all this machinery is built does Litland return to essence proper. He now tries to define essentialist operators in terms of real definitions. The core idea is straightforward in spirit. It is essential to s that φ if φ can be derived from an arbitrary definition of s, with only certain carefully controlled resources. Litland realises that if one is careless, one will smuggle accidental material into essence. For instance, if one starts from the definition of 2 via Batman and Robin, one should not be able to conclude that it is essential to 2 that it is abstracted from Batman and Robin. That would make the caped crusaders part of the essence of 2, which is absurd. So he imposes restrictions. One may reason from an arbitrary definition, but only using certain rules, such as factivity and what he calls P-restriction, and any constants in the resulting essential claim must appear inside definitional contexts in the derivation. 
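Schematically, and in notation of my own rather than Litland's, the proposed definition of the essentialist operator has this shape:

```latex
% phi is essential to s iff phi is derivable from an arbitrary
% definition D of s using only the restricted resources R
% (factivity, P-restriction, constants confined to definitional contexts).
\Box_s\,\varphi
\quad\text{iff}\quad
\text{for arbitrary } D \in \mathrm{Def}(s):\; D \vdash_R \varphi
```

The "arbitrary" matters: because $D$ is not one particular chosen definition, nothing peculiar to a single route, such as Batman and Robin, can survive into the essential conclusion.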

Let me explain. Often we define things in different ways depending on context. For example, a teacher might define a “successful lesson” as one where all pupils complete the task. Another might define it as one where pupils show genuine understanding. A third might define it as one where pupils are engaged and thinking. These are different definitions of the “same” thing.

Now suppose we want to ask a deeper question: what is essential to a successful lesson? What really belongs to its nature, not just what happens to be true in one situation or under one definition? The danger is that we might take one particular definition and treat whatever follows from it as essential, even if it only reflects a local choice or a temporary policy. The rule is designed to stop that mistake. It says, in effect, you are allowed to reason from a definition, but only in a controlled way.

First, “factivity.” This just means you are only allowed to rely on definitions that are actually correct or accepted within the inquiry. You cannot just invent a definition and treat it as if it reveals the nature of the thing. If someone defines “a good student” as “someone who never questions the teacher,” that definition might exist, but we would not treat it as factive, as revealing anything genuine about the nature of being a good student.

Second, “P-restriction”, which just means that when you draw conclusions about what is essential, you must stay within the definitional setting you started from. You cannot smuggle in new elements that were never part of the definition. Suppose I define a triangle as a three-sided shape. From that, I can reasonably say that having three sides is essential to a triangle. That is fine. But suppose I also note that the triangle on my desk is drawn in blue ink. I cannot then say that being blue is essential to triangles. Why not? Because “blue ink” was never part of the definition. It is just an accidental feature of this particular case. The rule about constants makes this more precise. It says that any specific items you mention in your conclusion must already appear within the definitional reasoning. You cannot suddenly introduce something new and treat it as if it were part of the essence.

So the overall idea is this. We often have multiple ways of defining or describing the same thing, what Litland calls a “manifold” of definitions. The goal is to identify what is stable across these, what keeps showing up no matter which legitimate definition we use. The restrictions are there to make sure we only count those stable features as essential. Without these rules, it would be very easy to mistake local or accidental features for essential ones. We would end up saying things like “it is essential to a successful lesson that all pupils sit in rows,” just because one particular definition or practice happened to include that. The rules force us to ask: does this feature come from the definition itself, and does it survive across different legitimate definitions, or is it just a contingent detail? So it ensures that when we say something is essential, we are tracking what really belongs to the thing across its different legitimate ways of being defined, rather than what happens to be true in one narrow or accidental setting.

This yields a distinction between immediate and mediate essence. Immediate essence concerns what follows directly from the definitions of the item itself. Mediate essence would concern what follows when one also brings in the definitions of the items occurring in those definitions, and then the definitions of the items occurring there, and so on, across a well-founded system of definitions. Litland only sketches this extension, but the idea is important. It mirrors the earlier distinction between immediate and mediate definition, and it gives the resulting essentialism a layered structure.

One of the reassuring features of the paper is that Litland checks whether the resulting essentialist operator behaves as one might expect. He argues informally that it satisfies a T-principle, roughly that if something is essential then it is true. He also argues that a version of a 4-principle is plausible, roughly that if it is essential that p, then it is essential that it is essential that p. He is less confident about a 5-principle and leaves that open. (The 5-principle would say, roughly, that if something is not essential, then it is essential that it is not essential.) Checking these principles is a way of making essence do real explanatory work, rather than just describing things loosely.
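Writing $\Box_x\varphi$ for "it is essential to $x$ that $\varphi$", the three principles have the familiar modal shapes:

```latex
\text{(T)} \quad \Box_x\,\varphi \rightarrow \varphi
\qquad
\text{(4)} \quad \Box_x\,\varphi \rightarrow \Box_x\,\Box_x\,\varphi
\qquad
\text{(5)} \quad \neg\Box_x\,\varphi \rightarrow \Box_x\,\neg\Box_x\,\varphi
```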

Litland is trying to show that a real-definition-first account can support recognisable logical behaviour. In doing so he revives and extends a marginal suggestion from Fine. Instead of taking essence as the basic metaphysical notion and using it to define real definition, he takes real definition, more precisely immediate full real definition, as basic and then tries to reconstruct essence from it.

This has several attractive consequences. It handles cases of multiple equally good definitions more naturally. It allows one to think of essence as what survives across a manifold of definitions rather than what is simply read off one privileged definitional route. It provides a way of defining logical operations inferentially within the same general framework. And it encourages a picture in which metaphysical structure is not just a static set of essential truths but a web of real definitions, inferential roles, and well-founded systems of dependence.

At the same time Litland is clear that much more technical work remains. The proof theory must be fully developed, the comparison with existing logics of essence must be carried out, and ideally a model theory should be given. But as a philosophical proposal it is already quite powerful. It suggests that the route from definition to essence may be more illuminating than the route from essence to definition. And it shows, through the ideas of multiplicity, immediacy, fullness, inferential role, and definitional unity, how much richer the metaphysical landscape becomes when one allows that things may be really definable in more than one way, yet still have a stable essence given by what all those ways have in common.