Peter Carruthers interviewed by Richard Marshall.


Peter Carruthers is a philosopher with intriguing theories about our minds. His new book is the latest in a series of books about human nature, the philosophy of psychology, and consciousness. He lives and works in Washington DC, and in this interview he tells us that many of our views about our own minds are just wrong.

3:AM: You're a philosopher whose work interfaces with cognitive science. Can you introduce yourself and how you've ended up thinking that philosophy and psychology complement each other? Were you a philosophical child, was it something you grew into, or did something happen to turn you into one?

Peter Carruthers: I got into philosophy as a teenager hoping that it would make me think and reason more clearly, and because I wanted to know the meaning of life. I guess it did help with the former. But I simply stopped thinking about the latter once I met my wife in my second year in college. I was also always interested in figuring out how the mind works, and applied to university initially to pursue a dual degree in philosophy and psychology. But this was back in the 70s, and the psychology classes were all about rats and behavioral conditioning, which I found deathly dull. So I switched to pure philosophy. It was only a dozen or more years later that I started reading work in cognitive science again, initially through the writings of philosophers like Dan Dennett and Thomas Nagel, which draw on psychological results. Their stuff was so much more exciting and interesting than the Wittgenstein texts that had formed the basis of my philosophical training that I gradually found myself switching direction. I have since come to think of myself as a sort of theoretical psychologist (by analogy with theoretical physics, which takes other people's data and tries to make sense of them). I also believe very strongly that good philosophy needs to be empirically informed, at least.

3:AM: Could you give an overview of where you think the connections are between the disciplines? Many folk will think that psychology and philosophy are not asking the same kind of questions, but you seem to believe that they do overlap. How do you see this situation?

PC: I think the difference is largely one of scope. Thus many of the questions that occupy cognitive scientists are quite fine-grained, about this or that mental phenomenon. (But not always: many cognitive scientists, too, are troubled by the issue of how consciousness can exist in a physical world.) In contrast, many of the questions that occupy philosophers are quite general: do mental states form part of the furniture of the universe, or are they a sort of useful fiction? Are mental states causes of physical phenomena? And so on. From this perspective, philosophy and cognitive science can flow into one another: with philosophy taking data and fine-grained theories from cognitive science and attempting to integrate them into a wider theoretical framework, and with cognitive science sometimes looking to philosophy for help in providing such frameworks, or perhaps for some useful distinctions.

3:AM: Linked to this, are you then part of the experimental philosophy movement that gets linked with Josh Knobe and others?

PC: No. It requires a long apprenticeship to design and implement experiments effectively. While there has been some good work done by people in the experimental philosophy movement, whenever I talk to psychologists about this stuff I find myself embarrassed for my discipline by their reactions. I'm not averse to philosophers being involved in experiments, of course, but it is probably wise always to do it in collaboration with someone who has had the necessary training.

3:AM: One of the interesting things you argue concerns introspection. You take a rather counterintuitive view that what we introspect is not infallible. Can you say what this position is and why it's controversial? There have been some cool experiments about this that you write about; I think readers would be interested in some of these.

PC: What I actually claim is something much stronger than this. For many philosophers today allow that introspection is fallible, and is subject to errors resulting either from pathology or inattention. What I claim is that we make systematic errors about our own thoughts, and that the pattern of errors reveals something about the mechanisms that normally give us access to those thoughts. (Compare the way in which visual illusions are used by cognitive scientists to give us insight into the mechanisms involved in visual perception.) In particular, I claim that people make errors whenever they are provided with cues that would lead them to make a similar error about the thoughts of a third party. This suggests, I think, that they are using the same mental faculty for both (often now called the 'mindreading' faculty), relying upon the same sorts of cues.

For example, people who are induced to nod their heads while listening to a message (ostensibly to test the headphones they are wearing for comfort and staying-power) express greater confidence in the message thereafter than those who have been induced to shake their heads while listening. This is just what we would think when observing other people: if they nod while they listen we assume they agree, and if they shake their heads while they listen we assume they disagree. Likewise, right-handed people who are induced to write statements with their left hands express lower confidence afterwards in the statements that they have written than people who write with their right hands. This is because the shaky writing makes the thoughts look hesitant. (And people who look at the written statements of others will make just the same judgments about the writers' states of confidence.)

People are completely unaware that they are always interpreting themselves in just the same way that they interpret others, however. Indeed, they think that they are directly introspecting their own thoughts. (I argue in my book, The Opacity of Mind, that there are reasons why the mindreading faculty should have been designed in such a way as to produce this illusion.) As a result, people will smoothly and unhesitatingly confabulate about their thoughts, telling of thoughts that we know they didn't really have.

For instance, in one study people were presented with pairs of pictures of female faces, and asked to pick the most attractive one. When they did so, the pictures were laid face down on the table for a moment, before the chosen picture was handed to subjects and they were asked to say why they had chosen it. However, in some trials, through the experimenter's sleight of hand, the picture that they were then looking at was the one they had rejected, not the one they had chosen. The results were quite remarkable. First of all, hardly anyone noticed! Moreover, they went on to tell why they had chosen that picture, often citing factors that we can be quite sure were no part of the reason for their choice. (For example, saying, 'I like her earrings', when the woman in the chosen picture hadn't been wearing earrings.) When people's answers in the actual-choice and sleight-of-hand conditions were analyzed, the experimenters could discover no differences between the two. People's reports had the same degree of emotional engagement, specificity, and so on, and were expressed with the same degree of confidence. I take this study, and many others like it, to show that people have no direct access to the factors that determine their liking for things.

3:AM: In your new book you put forward a theory - is it the idea that there aren't separate faculties for introspection and for perception of the world, but a single thing that does both?

PC: Not quite. I do argue that there is no introspection of our own thoughts (our judgments, beliefs, intentions, decisions, and so on). But what I argue is that there is a single 'mindreading' faculty that enables us to perceive our own thoughts as well as the thoughts of other people. This faculty evolved initially for social purposes, enabling us to anticipate (and sometimes to manipulate) the behavior of other people, as well as to better coordinate cooperative activities. But it can likewise be turned on the self, relying on the same channels of information that are used when interpreting the behavior of others. Sometimes we attribute thoughts to ourselves by literally perceiving our overt behavior. But often we rely on sensory cues that utilize the same perceptual channels, such as our own visual imagery, or our own inner speech.

3:AM: Another big subject you've looked at is creativity. You have a theory that creativity isn't a uniquely human thing but is due to a relatively simple mechanism that even moths have. Can you say what your theory is and what evidence you've found supporting it?

PC: I should say that my work on this is much more tentative and exploratory than my work on self-knowledge. And it is a theory of just one component of creativity, namely, the 'generative' component. Thus it is common for theorists to distinguish between two phases in creative activity. One is generative, when new ideas are thrown up for consideration. The second is evaluative, when these ideas are considered, explored, developed, and (if they are judged worthy) expressed or implemented.

There is quite a bit of work suggesting that the generative process is stochastic, or semi-random, in character. For example, the most creative individuals also tend to be the most productive individuals, and such people have more 'duds' or failures than others, just as they have more successes. What I have done is to suggest that this process may co-opt and re-use much more ancient mechanisms for the stochastic generation of actions. For it is known that many species of animal can engage in so-called 'protean' behavior (especially when fleeing from a predator). A fleeing gazelle, for example, will execute an apparently random sequence of twists and turns and leaps in the course of its flight. There is a good reason for this: the best way to make your actions unpredictable to a predator is to make them as close to genuinely random as possible. (It is for this reason that submarine commanders in the Second World War would throw dice to determine the pattern of their zig-zag patrols, to make themselves unpredictable to the submarine-hunting vessels up above.)

So the paradigm example of creativity, from this perspective, is fast online improvisation in jazz. Those who study such performances report that the players seem to be stochastically selecting among well-rehearsed notes and phrases, while operating within a set of local constraints (such as permissible keys). And notice that jazz improvisers will often report that they are surprised by their products, suggesting that these were not planned in advance but rather proceeded directly from stochastic selections among actions.

3:AM: What are the consequences of your theory for, say, our views about the creative artist or scientist? Do you think it diminishes the significance of creativity by making it kind of mechanical and simple? Should we appreciate music less, or differently than before, if your theory is understood and accepted?

PC: No, I don't think the theory should have any of these consequences. Creativity doesn't have to be deeply mysterious in order to be valuable. And much of the real work of the creative artist occurs downstream of the initial generative phase, when the ideas are evaluated and implemented, or upstream when knowledge is being acquired or skills are being developed and rehearsed.


3:AM: Your new book looks at a whole range of issues about the opacity of mind. One of these issues is cognitive dissonance and its interpretation. Can you explain why this is an important subject, and why you come to a fairly surprising theory about it?

PC: These data count powerfully against the existence of direct introspective access to our judgments and beliefs, in my view. But this will take a little while to explain. Bear with me. The basic finding is a long-standing one: people who have been induced to write what are called 'counter-attitudinal' essays (arguing against something they are known to believe) will thereafter shift their reported attitude in the direction that they have argued if (but only if) their freedom of choice in writing the essays is made salient to them. In the 'low choice' condition, for example, subjects might be told something like this: 'Thank you for agreeing to participate in this exercise. A university committee is considering a rise in fees next term, and needs examples of arguments on both sides of the issue. We would like you to write an essay laying out the arguments in support of a fee rise. Thank you for your cooperation.' In the 'high choice' condition, in contrast, the experimenter might say, 'Of course it is entirely up to you whether to write this essay' (most still comply; if they don't do so immediately, the experimenter might say, 'We would be very pleased if you would; it is important to have examples of arguments on both sides of the issue, and we don't have enough on the side of raising fees; but of course it is entirely up to you'). Alternatively, subjects might be asked to sign a consent form on top of the essay sheet that reads, 'I hereby participate in this activity of my own free will', or something of the kind.

The effects in experiments of this kind tend to be highly significant and quite robust, even about matters (such as fee levels for university students!) that subjects regard as of high importance. In a typical experiment 'high choice' subjects might shift their reported attitudes from 'strongly opposed' to the fee increase to only 'slightly opposed' or even 'slightly in favor' (whereas 'low choice' subjects shift their reported attitudes not at all). We know that this has nothing to do with the quality of the arguments produced by the two groups, because there are no such differences. We also know that the 'high choice', but not the 'low choice', subjects are put in a bad mood by the end of the essay writing, and that once they have reported their change in attitude they are no longer in a bad mood.

The traditional explanation of the finding is in terms of 'cognitive dissonance'. The idea is that people sense the inconsistency between their freely undertaken advocacy of a fee increase and their underlying attitude, and this makes them feel uncomfortable. Since they cannot change what they have done, they thereafter change their attitude, thus removing the feeling of discomfort.

But we now know that this explanation isn't correct. For 'high choice' subjects will shift their reported attitude just as much even if they write a pro-attitude essay (arguing against a fee increase, for example), provided that they believe that their action will have bad effects. This was beautifully demonstrated in a study in which subjects were told of the recent [fictional] discovery of a so-called 'boomerang effect'. They were told that the committee making the decision would be reading a significant number of essays before deciding. Essays read late in the sequence would persuade in the normal way. But essays read early in the sequence would boomerang: an essay arguing for a rise in fees would be apt to convince the readers not to raise fees, whereas an essay arguing against a rise in fees would be apt to persuade the readers to raise them. The subjects were only told about the order in which their essay would be read after writing their essays. The experimenter, seemingly drawing a number out of a hat, then told subjects that their essay would be read either second or second-to-last.

In this experiment, 'high choice' subjects who wrote counter-attitudinal essays showed no change in attitude in the boomerang condition (whereas they showed the usual degree of change in the no-boomerang condition). In contrast, 'high choice' subjects who wrote pro-attitude essays in the boomerang condition shifted significantly. Although they had written essays arguing that fees should not be raised (which is what they believed), they thereafter reported thinking that it wouldn't be bad if they were. The real cause of the phenomenon, then, is the sense that one has freely done something bad (since what one has done seems likely to cause fees to rise), not that one has freely done something inconsistent.
Moreover, we also now know that subjects don't change their underlying attitude in advance of being given the questionnaire on which to express it. For subjects will also use denials of responsibility to reduce dissonance, or they will deny that the issue is an important one. And if they are given a number of such options, they will use whichever one is offered to them first, without using any of the others.

So the true explanation of the phenomenon, in my view, is this. Subjects are feeling bad because they see themselves as having freely done something bad (not necessarily on a conscious level, of course). When presented with the attitude questionnaire, they imagine responding in various ways: 'Should I circle the 2 [on a 9-point scale, meaning 'strongly oppose'], or the 3, or the 4, or the 5?' Imagining themselves circling the 5 (the neutral point) presents their essay-writing action to them as being not bad (because the fee rise that they might have helped to cause would not then be thought to be bad). So they experience a little flash of pleasure at the thought of taking that action rather than the others, and so they go ahead and do it. Seeing themselves say that they aren't opposed to a fee increase, they believe that this is what they think, and hence their negative mood disappears. This is because they are no longer appraising what they have done as bad.

Note that this explanation only works if subjects don't have introspective access to their real antecedent belief about the matter. For if they did, then at the same time that they circle the 5 they would be aware that they are lying, and this would make them feel worse, not better. Note, too, that a question about one's attitude is precisely the sort of thing that ought to bring it to consciousness, if such a thing can ever happen. But plainly it doesn't, since otherwise the effect wouldn't occur. Hence these findings provide powerful evidence, in my view, that beliefs can never be directly introspected.

3:AM: What other issues do you think are central to your approach to understanding the mind and ourselves?

PC: Perhaps the main issue concerns the architecture of the mind as a whole, especially its 'central' portion that deals with abstract thoughts (non-perceptual judgments, decisions, and the rest). Philosophers are virtually united in believing that there is a sort of central arena in which these thoughts can become activated and interact directly with one another, and I think most people tacitly accept something similar. But there is a lot of work in cognitive science to suggest that this picture is radically mistaken. Granted, there is a central arena of sorts, but it is a sensory-based one, realized in the so-called 'global broadcast' of attended sensory information to many different areas of the brain. This attention-based global broadcasting mechanism has been co-opted in humans and some other animals to form the basis of a working memory system. Hence we can call up, sustain, and manipulate visual images in this workspace. And likewise we can generate items of inner speech that become globally accessible in the same sort of way. These sensory-based representations can carry conceptual content. So one can hear oneself as saying [to oneself] that one should make a trip to the supermarket, or whatever, just as we hear meaning in the words of other people. But an item in inner speech is not itself a judgment, or decision, or any other form of thought. Rather, at best, it expresses and is caused by such a thought (although in fact we know that the relationship between speech and the underlying attitudes is complex and pretty unreliable).

Of course we hear ourselves as entertaining specific sorts of attitude, too, through the interpretive work of the mindreading faculty, just as we perceive other people as judging that it is about to rain (as they fumble with an umbrella while looking at the clouds), or as deciding that it is time to leave, or whatever. But on reflection, we should no more think that we have direct non-interpretive access to our own attitudes than we have such access to the attitudes of other people. What we really have access to is sensory-involving events of various sorts. And the only 'arena' in which all our attitudes can interact in a global way is indirect, through their contributions to the contents of sensory-based working memory.

I think we intuitively identify ourselves with the conscious events that we experience as occurring in working memory, and we tend to believe that these events include such things as judgments and decisions. But in my view, they don't, and these sensory-involving events are merely the effects of the activity of the self, rather than constituting the self. This occasions a radical change in perspective on ourselves. For the self, together with its attitudes, is something completely submerged from view, directing and orchestrating the show of sensory events that parade before us in working memory.

3:AM: So in terms of our image of ourselves, how do your theories change what might be the folk-belief about ourselves, and how far do they preserve this image? I guess one big area is that of human values, which some see as being threatened by this approach to human ethics. So would it be fair to place you squarely in the Hume, Nietzsche, naturalism camp?

PC: I haven't really begun to explore the implications of my recent work on self-knowledge for human ethics. But the theory does suggest that our folk conception of ourselves is radically in error. This is because, outside of the broadly sensory domain (perception, imagery, inner speech, emotional and bodily feelings, and so on) none of our mental states is ever conscious. In particular, there are no such things as conscious non-perceptual judgments, no such things as conscious intentions, and no such things as conscious decisions. (And this holds, I argue, irrespective of what sort of theory of consciousness one endorses.) So our conception of ourselves as conscious agents is radically wrong. Rather, although there are many conscious events that contribute to agency, there is no such thing as conscious choice or conscious decision.

3:AM: How much non-professional reading helps you with your thinking, eg science fiction, novels, art, music – are there cultural things that you find helpful in formulating your thoughts and ideas?

PC: I have barely read a novel since my wife and I had kids! (Although I used to read a lot.) All my reading time now is devoted to philosophy and cognitive science. I visit the Smithsonian museums fairly often (we live just outside Washington DC), but I don't think my interests in art and music have the slightest connection with my work. Nor, come to that, do my interests in sport. (I am an American Football fan, and regularly attend my college's games.)

3:AM: Finally, it seems that we're living in complex and tough times. Philosophers like yourself seem to be working in areas of real importance, working out why we think what we do and how we do it etc. So how might the stuff you are doing help us understand better the challenges of the complex world we're in?

PC: There is a great deal of good and potentially useful work that has been done by studying areas of human weakness, as well as policies or techniques that might enable better decisions to be made. One has to do with institutional defaults. In some areas of Europe, for example, there is no shortage of organs for those needing transplants, whereas here in the United States the shortage is chronic. The difference? It turns on whether the law requires people to opt out of being an organ donor (as in some countries in Europe) or to opt in (as in the US). Yet the two rules are equally consistent with the principle of respect for people’s freedoms and religious beliefs. There are similar findings regarding the effects of standard plate sizes on the amounts that people eat, and so on. In fact I am teaching a new course at the moment entitled, 'Know Thyself: wisdom through cognitive science', which looks at a range of findings from the cognitive sciences and attempts to extract practical morals (or rather, it gets the students to try to extract those morals).

But in the most general terms I think it is crucial that people should realize that they don't know themselves nearly as well as they believe they do. This is what The Opacity of Mind is about. We should be much more humble in our attitudes to our own powers of reasoning and decision making, and much more open to learning about the factors that really have an impact on the outcomes, for the most part outside of our awareness.


Richard Marshall is still biding his time.