Interview by Richard Marshall.
Joëlle Proust is a Janus-faced mutant naturalistic philosopher of metacognition. She broods on what metacognition is, on two different ways 'meta' could be used, on the problem with attributivism, on the mistake of thinking mental and ordinary action have the same normative structure, on how 'acceptance' can help bring out the relevant contrast, on where she parts company with Cohen and Stalnaker, on whether non-human animals can mind-read, on why Peter Carruthers is wrong to conflate mind-reading with metacognition, on the evidence that non-human animals do metacognise, on internalism and the limits of transparency, and on how her approach impacts on puzzles like Moore's paradox. Just one more reason to love Paris...
3:AM: What made you become a philosopher?
Joëlle Proust: In the seventies in France, continental philosophy and the history of philosophy were central academic topics. I was recruited by the CNRS (the National Research organization) to write a book, based on my habilitation thesis, on a question in the history of logic, more precisely, on the history of the concept of an analytic proposition. Using the methods of structuralist historians such as Martial Guéroult, I offered evidence that the concept of the analytic was constrained, among other things, by the conceptions of the role of logic, of meaning, and of validity that authors such as Kant, Frege, Bolzano and Carnap had favored as a result of their own specific dominant theoretical insights and argumentative goals. I coined the term "comparative topics" to characterize the level of abstraction at which these authors could be interpreted as responding to their own hierarchized constraints and epistemic goals.
Independently of this work, which was published in French under the title "Questions de forme" (Questions of Form), my prior studies in experimental psychology in domains such as signal detection theory and learning, and my research stay at UCB in the late seventies, had prepared me to fully appreciate, as did many others at that time, the huge potential impact that the "cognitive revolution" could have on the philosophy of mind. Once my doctoral thesis was completed, I engaged in the fascinating philosophical project of "naturalizing" intentionality, i.e. subjecting the philosophical concept of mental representation to forms of empirical validation. It took this historian of logic several years of hard work, however, to become this strange, Janus-faced mutant: a naturalist philosopher.
3:AM: Metacognition is roughly where an agent thinks about their own thinking, isn’t it? So before we plunge into your work in this area, could you sketch out the philosophical issues that generally arise from this notion?
JP: Metacognition, as you say, refers to the cognitive activities meant to control and regulate one's own cognitive performances and dispositions: for example, deciding to memorize a text or to solve a math problem, and determining whether, and when, the action is successful. So metacognition has to do with predicting one's own ability to perform a given mental action, with deciding how to act, with evaluating the outcome, and so on. From a philosophical viewpoint, it requires the clarification of epistemological issues, such as the nature of epistemic norms (e.g. truth, intelligibility, plausibility, coherence, and so on) and their relation to instrumental norms (such as utility) and social norms (such as conformity). The existence of metacognition also raises the philosophical question of the very nature of epistemic self-evaluation: is it primarily a matter of experience? For example, are the feelings elicited by one's own thinking processes (such as the sense of effort) essential to engaging in metacognition? Or is metacognition rather a matter of having the relevant repertoire of mental concepts (such as knowledge, doubt, illusion) needed for judging one's own performance? This type of question also extends to the nature of knowledge: is metacognition a form of "knowing that", or does it rather consist in "knowing how"? Finally, metacognition raises ethical questions, for instance: are all epistemic agents equally equipped to think correctly, and, hence, responsible for their judgments, as Descartes claimed? Or rather, is their social environment responsible for the existence and appropriate use of their critical abilities? This kind of question can now be posed in much more exact terms, thanks to the empirical science of metacognition.
3:AM: There seem to be two different ways in which the ‘meta’ part of metacognition is used. Could you tell us about this and why it’s important to understand that there are these two different ways of using the term?
JP: The meaning of "meta" in the word "metacognition" has been at the center of a controversy. On a classical view, "meta" implies "aboutness". Just as a metarepresentation is about a first-order representation, "metacognition" refers to knowledge about one's own knowledge, or "thinking about one's own thinking". On less classical views, "meta" can apply to the mere procedural relation between a control level and the activity being monitored. What prima facie looks like a merely terminological issue has, strangely enough, turned into a justification of what metacognition "actually is". Let’s consider, first, the view that "meta" means "being about". From this viewpoint, thinking about one's own thinking should involve the ability to attribute mental states to oneself, that is, to metarepresent one's own beliefs. A metarepresentation is usually understood as offering the ability to decouple the beliefs one has (or others have) from alternative beliefs; for instance, realizing that one was wrong involves forming different metarepresentations: I thought that my belief that it was raining was correct; in fact, I now realize that my belief was false: it was not raining.
Note that, from this viewpoint, the function of metacognition is centrally associated with identifying one's own belief contents. If "meta" is rather taken to apply to the relation between a controller and the activity controlled, then metacognition has much broader scope: it applies, in principle, to any control system able to predict one's own epistemic success (or failure), by whatever means. A major task for philosophers of mind is to determine the nature of these means. There is considerable evidence, now, that metacognitive evaluation is rarely formed on the basis of a semantic analysis of the contents of the thoughts subjected to evaluation. It often consists, rather, in predictive or retrospective "noetic feelings", a form of subjective experience that many non-mindreaders, such as animals and human infants, may well enjoy, even though they are unable to form metarepresentations about their thoughts. For example, feelings of familiarity, entertained by infants very early in life, and by most vertebrates, clearly do not require representing one's own perceptual beliefs.
3:AM: What’s the problem with attributivism?
JP: Self-evaluation, on the attributive view, is taken to be a matter of discriminating the attitude one is having, of attributing this attitude to oneself, and of evaluating it in the same way one would when attributing an attitude to another person. A definition of metacognition in these terms, however, captures neither every aspect of metacognition, nor even its essential aspects. Not every aspect, because it ignores the whole domain that Asher Koriat has called "experience-based" metacognition. For example, attributivism does not acknowledge the role of noetic feelings and of non-conceptual heuristics in determining when to stop learning, or when discrimination of an item would require taking another look.
Nor does it account for central aspects of metacognition. First, it does not explain the feature of "activity-dependence": why should we need to engage in a first-order task to predict whether we will be able to complete it successfully? Metarepresentational attribution does not require such an engagement. Second, it cannot account for the existence of specific metacognitive illusions, where, for example, a sense of greater ease in processing sentences with larger fonts disposes agents to judge their contents more likely to be true. It cannot explain, either, why correcting illusions has only a temporary influence on performance. Finally, attributivism is at pains to explain the normativity inherent in metacognition. Attributing an epistemic state is a concept-based description. How are such concepts recognized to have a normative meaning? How can they motivate agents to act mentally? Attributivism is silent about these crucial issues; it has no theory of truth-sensitivity, nor can it account for normative guidance – the ability of agents to select the cognitive actions that they feel able to perform.
3:AM: How would you define metacognition then?
JP: I would define metacognition through its evaluative function, while also recognizing that epistemic self-evaluations can be descriptively conveyed to others through self-attributions. Hence, I propose using an inclusive definition: Metacognition is the set of capacities through which an operative cognitive subsystem is epistemically evaluated or described by another subsystem in a context-sensitive way. Attributing competence in a domain to oneself generally depends on prior self-evaluations of performance on tasks involving that domain. In normal cases, then, having a metacognitive experience allows agents to report on it. It may happen, however, that judgments of competence are formed independently of any engaged form of self-evaluation. For example, gender-based or racial stereotypes can mislead agents into misattributing (or wrongly denying) epistemic competences to themselves and to others.
3:AM: In your book you think we make a mistake if we think that mental and ordinary action have the same normative structure. So what do you mean by actions having normative structures? Are we talking about epistemic norms here?
JP: Every action can be evaluated with respect to a number of norms applying to the various characteristics of the action. In the case of ordinary action, for example, the action can be instrumentally adequate, be performed swiftly and economically, even have some aesthetic value, while also violating a moral or a social norm (and conversely). Mental actions have a specific set of norms – epistemic norms – that must (at least implicitly) guide selection and self-monitoring. These norms may guide action implicitly: for example, when one tries to remember a certain name, one needs to sense that what is retrieved should be accurate for the cognitive action to be successful. Agents unable to sense a norm of accuracy for memory cannot control their own memory. Sensing that a norm of accuracy applies, however, does not amount to representing that this norm applies, just as sensing that you ought to select the shortest trajectory does not require mathematical knowledge.
3:AM: How do you see mental action working using these norms then?
JP: Mental actions are used to respond to situations where agents need to obtain information of a given kind: for example, to acquire knowledge, check their perception, solve a problem, and so on. Agents learn the various norms that can apply, and hence, the various mental actions that can be performed, either in a practical way, by exercising control of their perception, their memory, and their reasoning, or via formal training. As claimed by the epistemologist William P. Alston, there are various ways of evaluating a proposition other than for accuracy. For instance, you can favor a norm of exhaustivity in trying to reconstruct, from memory, the shopping list you forgot at home. Propositions can also be evaluated as coherent, relevant, plausible, informative, and so on (or not). Each type of mental action has its own correction conditions. Each has its own utility. But once the normative conditions are selected, performing and evaluating the action is no longer a matter of utility, but only of satisfying the epistemic requirements that constitute this action.
3:AM: How does metacognition for ordinary action contrast then with its use for mental action?
JP: Both ordinary actions and mental actions need some form of anticipation and comparison between expected and observed outcomes. In both cases, agents need to anticipate the crucial steps of the developing action, and to compare sensory and expected feedback at each of them. However, the two types of action do not engage the same types of norms, and hence do not require the same type of norm-sensitivity. This difference is reflected in the predictive cues that correlate with action success in each case. A physical action is mainly monitored through perceptual imagery. Spotting a discrepancy between what they observe and what they expect allows agents to revise an ongoing physical action. A mental action, however, cannot be monitored in this way, because there is very little feedback from the environment available. What can be compared in this case is rather the predicted and observed "dynamic signature" of the activity. For example, being slower than usual to retrieve a word or to solve a problem predicts failure and may elicit the motivation to stop trying. Another source of information might consist in the facial feedback induced by the task, with zygomatic major activity (using the muscle involved in smiling) predicting success, and contraction in the corrugator supercilii (the frowning muscle) expressing a negative evaluation.
3:AM: How does the issue of ‘acceptance’ help bring out the contrast between ordinary action and mental action?
JP: Because it responds to epistemic norms, a mental act of acceptance has no equivalent in ordinary action. Expressing one's acceptance in behavior, like nodding to indicate one's agreement with a proposal, is, on the present view, part of a cognitive action. Similarly, speech acts express mental actions. Jonathan Cohen and Robert Stalnaker, however, have proposed that acceptance relaxes truth requirements as a function of context, and may be correctly performed in cases where the truth is only approximate, or even when one is merely pretending to have a true belief. On this analysis, acceptance uses a double standard by subjecting epistemic correction to utility.
The Cohen and Stalnaker approach to acceptance, however, can be resisted on the basis of two important distinctions. First, epistemic acceptance and strategic acceptance are two different types of decision. One is epistemic, and based on one's best judgment; the other is practical, and has to do with the anticipated risks and benefits for guiding one's actions on a given task. This distinction, documented in behavioral and neural evidence, is of major philosophical importance. Second, as mentioned above, there are several types of epistemic acceptance. Depending on the type of action that is relevant to a situation, you can accept a memo as accurately true (it exclusively contains true propositions, with uncertain facts being omitted), or as exhaustively true (the memo must report all the true facts, with uncertain facts being included). You can accept a syllogism under a norm of coherence, its truth being irrelevant. These various epistemic norms allow us to use information in different ways according to the needs of our embedding ordinary actions: shopping, conducting a war, or doing science involve different epistemic norms. The important point, however, is that the embedded epistemic acceptances are only sensitive to their own norms. Cohen and Stalnaker may have thought that acceptance was goal-sensitive because they correctly saw that there are various ways of endorsing a set of propositions, and that selecting a given type of endorsement is context-dependent. They were wrong, however, to infer that epistemic standards as such can be contextually biased.
3:AM: Philosophers tend to approach this as something unique to humans, and tied in to our language ability and mindreading ability. You think there’s evidence suggesting that this is wrong, don’t you? Can you first say a little about which philosophers or philosophical traditions have held that humans are special in this way?
JP: A majority of traditional philosophers have tended to deny animals access to thought and rationality. Descartes and Locke considered that language is a necessary condition for thinking and acting rationally. David Hume, in contrast, observed in his Treatise that animals are able to perform actions similar to those of humans, and hence, should have minds similar to ours. Cognitive science has offered evidence favoring Hume's argument: most of the basic abilities of humans, from perception to learning and reasoning, are present in non-humans. Among those philosophers who hold that a mind requires representational abilities, there is no consensus, at present, on the type of representation which is used by other animals. While it is more and more accepted that vertebrates have concepts and form beliefs based on them, belief possession is no longer seen as a precondition for thinking. Non-conceptual attitudes, such as registrations or affordance-sensings, are seen as alternative ways of guiding behavior. Such attitudes might have the function of evaluating the valence and intensity of affordances, and triggering the corresponding modes of acting. They satisfy the time and resource constraints usually identified with system 1 processes, and may work either independently, or in conjunction with system 2 propositional attitudes.
Granting that some non-humans may form both concept-based and non-conceptual attitudes, are they able to attribute false thoughts to another, i.e. to read others' minds? There is behavioral evidence that chimpanzees and dogs can understand that others have, depending on their position in space, limited perceptual access to events of interest. There is no evidence, however, that they can represent in a more general way that others have wrong beliefs. A plausible explanation for this is that the latter ability is closely connected to the need to expose the non-reliability of testimony, a need which crucially involves linguistic communication.
Once the existence of two representational systems in primates has been sufficiently documented, one cannot argue any longer, as does Peter Carruthers, that “It is the same system that underlies our mindreading capacity that gets turned upon ourselves to issue in metacognition”. Indeed, metacognition is not meant to expose non-reliability in others, but to detect one's own epistemic errors soon enough to make the correct decisions or revisions.
3:AM: Can you sketch out the evidence showing that non-humans can metacognise?
JP: Various non-human species that are not adapted to read minds, such as bottlenose dolphins (Tursiops truncatus) and rhesus macaques (Macaca mulatta), are able to evaluate whether they can discriminate two visual stimuli. Monkeys can make a prospective judgment of memory in a serial probe recognition task. In such tasks, the animals are offered the opportunity to "opt out" from a perceptual or memory task when they feel unable to perform it. The animals’ response patterns strikingly resemble those of human subjects. Granting the validity of these experiments, they are compatible with the view that metacognition is a specific adaptation, whose phylogenetic distribution overlaps, but does not coincide, with the ability to read minds.
Important methodological concerns have been raised against a metacognitive interpretation of these findings: first, is it not reward, rather than the animals’ judgments of confidence, that guides decisions? Second, are not so-called “metacognitive judgments” actually prompted by associations between environmental cues? Third, are not difficult trials merely aversive ones? These various worries have been successfully addressed by new tests. Reward has been delivered to animals after ten trials rather than after each one, with no change in responses. Generalization tests, where the animals need to predict performance in unrelated tasks, have shown that a disposition to opt out is not dependent on the associative strength of the stimuli involved.
Finally, capuchin monkeys have been shown to be able to sort stimuli into three categories, A, B, and middle, while failing to use uncertainty as a motivation to decline difficult trials, as rhesus monkeys do, although they thereby incur the cost of long timeouts. A threshold task, which does not allow a “middle” category to emerge, has elicited adaptive uncertainty responses in rhesus monkeys. Convergent evidence in favor of animal metacognition has come from the neuroscience of metaperception, where animals have to decide to perform a discrimination task, or to opt out when it is too difficult. The neural activity recorded in the orbitofrontal cortex of rats was found to correlate with anticipated difficulty, i.e. with the predicted success in categorizing a stimulus (with some populations firing for a predicted near-chance performance, and others firing for a high-confidence outcome). Furthermore, it was shown that this activity did not depend on recent reinforcement history, and could not be explained by reward expectancy. Similar results were found in monkeys. The dynamics of activation in certain neural populations allow animals as well as humans to predict – much earlier and more reliably than overt behaviour – how likely it is that a given cognitive decision will be successful.
3:AM: Does this mean then that non-humans can mind-read like humans – and perhaps you could sketch out what we mean by ‘mind-read’ in this context, as it’s a term of art, isn’t it – or is there a different mechanism, such as ‘feature-placing’, that non-humans are using? If non-humans metacognise using this other way of doing it, couldn’t humans also be doing it that way, at least in some circumstances?
JP: The evidence sketched above does not allow us to infer that non-humans can read other minds (or their own). My definition of metacognition for humans is inclusive, in the sense that it includes both procedural and concept-based metacognition. Animals, however, can only form procedural metacognitive evaluations. An animal can feel hungry without knowing anything else about hunger. Similarly, an animal can feel uncertain without knowing that it has a mind. Noetic feelings include, in humans, the sense of effort, the "feeling of knowing", the "feeling of being right", and the "tip of the tongue" experience. They are part of nonconceptual evaluations, which I now call "affordance-sensings", and which are very similar to what Peter Strawson called "feature-placing representations" in the spatial domain. Humans, in contrast with non-humans, can enrich their noetic feelings through "mental" concepts and theories (about belief, perception, memory, and so on). This "analytic metacognition" allows them to form reliable evaluations in contexts where noetic feelings are either misleading or absent. It also allows them to verbally justify their evaluations. Granting the importance of analytic metacognition in humans, and its tight relations with self-justification, it is quite plausible that metacognitive competence plays a significant role in children's ability to read other minds. After all, sensitivity to error plays a dominant role in false belief tasks; such sensitivity must first have emerged as part of the child's own cognitive activity. This is an important question that we are currently investigating.
3:AM: Is your view of metacognition an argument supporting internalism of some sort?
JP: Epistemic internalism is the view that determining what one knows, and how we can be certain that we know, are typically questions that a responsible thinker should raise. From an internalist viewpoint, these questions can be answered on the basis of the thinker's own epistemic abilities and cognitive resources. Metacognitive abilities do indeed seem to provide ammunition for epistemic internalism, in that they offer both non-inferential introspective access to one's mental agency and a way of evaluating its adequacy relative to criteria such as truth, efficiency and rationality. A metacognitive agent, furthermore, seems to enjoy immediate, privileged, and transparent access to her own mental abilities and contents.
But studying how a metacognitive system works shows the limits of transparency. First, noetic feelings are not generated by the contents of a task, but merely by the dynamic properties of the processes involved, which are themselves unrelated to the semantic content. Second, the ability of noetic feelings to reliably track error depends on factors that the agent is generally not in a position to appreciate, such as the quality and quantity of feedback collected in his/her prior engagements with the same task. Worlds where the feedback is positively or negatively biased respectively generate over- or under-confident metacognitive evaluations of task success. In worlds providing no feedback or no incentive to perform mental actions, noetic feelings fail to develop. This suggests that epistemic justification needs to interweave internalist and externalist requirements. Agents are responsible for making appropriate epistemic decisions based on their noetic feelings and relevant self-knowledge; but they are also part of a culture that may or may not provide them with the type of critical feedback that allows metacognition to be appropriately calibrated.
3:AM: Does your approach add understanding to debates about epistemic entitlement regarding one’s own true beliefs? Would something like Moore’s paradox, (1) ‘It’s raining but I don’t believe it is’, be explained by your approach?
JP: On the so-called "Presentational view", the verb "believe" (in the sentence "I don't believe it is [raining]") is taken to be about the weather condition, rather than about the speaker's state of mind. In this case, there is indeed a direct contradiction between the conjuncts. This view is related to the recognition of the transparency of beliefs, according to which determining what one believes only requires attending to how things are. Other interpretations of (1), however, have rejected a view in which the conjuncts contradict each other. A speaker can express P as true in the context of perception, testimony, or group epistemic agency, and report his/her own individual state of disbelief. There is a mediation between the conjuncts, on this view, which is that the first conjunct, as a speech act, should be reflected in the second, which reports the intentional state that normally triggers the speech act. On this interpretation, however, the conjuncts do not have the same content: the first expresses how the world is, the second reports one's own intentional states about the world. Hence, there is no direct contradiction between the conjuncts. These are two independent facts. There is rather a form of pragmatic incoherence, because asserting P normally requires believing that P.
Metacognitive studies also offer an account in which there is no real contradiction between the first and second conjuncts. They do not focus on the relation between a speech act and its mental precondition, but rather on the transparent process of believing. Why do we look at the world to know whether P? Not only to gain information about P, but also to know how certain our P-belief is: when P is being accessed, or identified, an additional type of information is gained, having to do with the quality of the information that is being used in forming our belief.
This information, as discussed above, consists in the dynamic signature of the belief-forming process. We now have an additional difference between the conjuncts. Forming a belief about P generates a feeling of confidence in one's acquired belief content. The feeling has a valence and an intensity gradient, which are partly modulated by one's anticipations relative to P. Reporting a belief verbally requires dichotomizing one's belief: either P or not P. Granting that an uncertain belief has just been formed, if the agent is not allowed to report his/her level of confidence, it may be rational for him/her to resist a dichotomic report. His/her refusal to be committed to either P or not P is omissive: (s)he may not go as far as claiming that (s)he believes not-P. Considering that verbal reporting has, in addition to its epistemic dimension, an additional strategic dimension (whereby, for example, one's own reputation may depend on one's decision), it may be appropriate, in some contexts where a belief is formed at a threshold level of reliability, to offer an omissive public report.
3:AM: And for the readers here at 3:AM, are there five books you could recommend to take us further into your world?
• Beran, M. J., Brandl, J., Perner, J. & Proust, J. (eds.), (2012), The Foundations of Metacognition, Oxford: Oxford University Press.
• Bermúdez, J. L. (2003). Thinking Without Words. New York: Oxford University Press.
• Chaiken, S. & Trope, Y. (Eds.). (1999). Dual-Process Theories in Social Psychology. London: The Guilford Press.
• Recanati, F. (2007). Perspectival Thought: A Plea for (Moderate) Relativism. Oxford: Oxford University Press.
• Sosa, E. (2007). A Virtue Epistemology: Apt Belief and Reflective Knowledge. Oxford: Oxford University Press.
ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.
Buy his book here to keep him biding!