Interview by Richard Marshall.


Thomas K. Metzinger is full professor and director of the theoretical philosophy group and the research group on neuroethics/neurophilosophy at the department of philosophy, Johannes Gutenberg University of Mainz, Germany. From 2014 to 2019 he is a Fellow at the Gutenberg Research College. He is the founder and director of the MIND Group and Adjunct Fellow at the Frankfurt Institute for Advanced Studies, Germany. His research centers on analytic philosophy of mind, applied ethics, and philosophy of cognitive science. The MIT Press is about to publish a two-volume set, edited by him and Jennifer M. Windt, that is the most comprehensive collection of papers on consciousness, brain, and mind available. It gathers 39 original papers by leaders in the field, followed by commentaries written by emerging scholars and replies by the original papers’ authors. Taken together, the papers, commentaries, and replies provide a cross-section of cutting-edge research in philosophy and cognitive science. Open MIND is an experiment in both interdisciplinary and intergenerational scholarship, a Robin Hood style project in which he got the money from somewhere else and released the whole material for free first, for all the poor countries in the world, demonstrating that this could be done faster and better than by any electronic journal or publisher.
In this interview he thinks aloud about his long-standing interest in consciousness, the epistemic agent model of the self, the ego tunnel as a metaphor of conscious experience, the problem with the idea of a 'first-person' point of view, introspective Superman and Superwoman as advanced practitioners of classical mindfulness meditation, why nothing lives in the ego tunnel, what the rubber hand illusion shows, why we're unconscious and mind-wander most of the time, what the narrative default-mode does, the impact of culture on the ego tunnel, why trendy 'illusion talk' annoys him, what dreaming shows us, why AI is ethically dangerous, why meditation and spirituality need the cold bath of good analytic philosophy, and the challenges facing young philosophers of cognitive science and what he is trying to do to help them. Take your time with this one: after all, that ego tunnel, it's you...

3:AM: What made you become a philosopher?

Thomas K. Metzinger: At 19, I could never understand how someone would embark on their life without having first confronted and clarified the truly fundamental questions. But then, I was quite disappointed in the Frankfurt philosophy department. Of course, I strongly sympathized with Habermas and the philosophers representing the Frankfurt School, but I also saw the lack of conceptual clarity, and perceived the not-so-revolutionary self-importance in the epigones of Horkheimer, Adorno, and Habermas. Of course, a lot of the debates were completely beyond me in the beginning, but I did sometimes sense some complacency and pretentiousness as well - young people are quite sensitive to this. But most of all, the whole thing was not politically radical enough for me. At 19, I basically held the position that if you were intellectually honest and really wanted to get in touch with political reality then you had to smell tear-gas. (It was the time of big protests and violent demonstrations about the expansion of the Frankfurt airport and squats in the Westend. To be fair, I should also admit that most of the time – unlike our future Foreign Minister and Vice Chancellor Joschka Fischer – I belonged to those guys who started running first.) I almost dropped out of philosophy when, in a seminar on Descartes’ Passions de l’Ame, a young lecturer by the name of Gerhard Gamm first made me see that if the mind has no spatial properties, then there could never be a spatial locus of causal interaction with the world, neither in the pineal gland, nor anywhere else in the brain or the physical world. That point got me hooked. I ended up writing a thesis on recent contributions to the mind-body problem, from U.T. Place to Jaegwon Kim, and got drawn deeply into Anglo-Saxon analytical philosophy of mind.

But of course, there is an earlier personal history too. I guess I was always a slightly critical spirit, and I clearly remember how, at the age of 8, I cried when I first understood that everybody had to die. And how disappointed I was by those adults’ absolutely silly attempts at comforting me – they seemed strange to me, and just didn’t get the point. At around 12 years of age, a little scholar awakened in me as I bought and immediately devoured my first two books ever. By today’s standards, these were really bad books – popular science books reporting on purported parapsychological results, titled “Radar of the Psyche” and “PSI: Psychic Discoveries behind the Iron Curtain”. But the fledgling investigator in me immediately came to the firm conclusion that all of this was not only highly relevant, but also that quite obviously there was a truly scandalous lack of rigorous research! So while all the other boys wanted to become locomotive drivers, space explorers or sky divers, I decided I had to become a professor of parapsychology. And, as I soon found out, there was only one single professorial chair in all of Germany! At around 15, in high school, we read quite a bit of Sartre and Camus. Today I think it is pedagogically dubious to give this type of philosophy to young people in the age of puberty. The next serious pedagogical mistake was then made by my father, who actually had a copy of Aldous Huxley’s “Doors of Perception” lying around. My father died last year, and only recently have I come to notice how big his influence on my early intellectual development actually was. At 16 (the parapsychologist long having turned into a staunch Trotskyist) I was ill, had to stay in bed for three weeks, and was terribly bored. On his nightstand I found a copy of Georg Grimm’s Die Lehre des Buddho. Another turning point in my intellectual life, and in more than one way.
Since then I have always read some Indian philosophy on and off, and have also travelled in India and Asia quite a bit, but I have, perhaps wisely, perhaps unfortunately, always kept this on the private, purely amateur level. I have never systematically integrated it into my teaching or academic publications, as an increasing number of Western philosophers of mind are now doing, and in very interesting and innovative ways. Just after finishing high school I flew to Montreal, hitch-hiked down to New York, back up via the Trans-Canada Highway all the way through to Vancouver, down to Berkeley and back, 21,800 km all in all. I clearly remember that during those 5 months there were two books in my backpack the whole time: Theodore Roszak’s “The Making of a Counter Culture” and Patanjali’s “Yoga Sutras”. Then I hit the Frankfurt philosophy department.


3:AM: You’re interested in the philosophy of consciousness and the self.

TM: Yes, it is true that I have had a long-standing interest in consciousness. In 1994 I founded the Association for the Scientific Study of Consciousness together with Bernard Baars, the late William Banks, George Buckner, David Chalmers, Stanley Klein, Bruce Mangan, David Rosenthal, and Patrick Wilken. I hung around in the Executive Committee and various committees of the ASSC for much too long, even acted as president in 2010, but a couple of thousand e-mails and 19 conferences later it is satisfying to see how a bunch of idealists actually managed to create an established research community out of the blue. The consciousness community is now running perfectly, it has a new journal, brilliant young people are joining it all the time, and my personal prediction is that we will have isolated the global neural correlate of consciousness by 2050. I also tried to support the overall process by editing two collections, one for philosophers and one of an interdisciplinary kind: Conscious Experience (1995, Imprint Academic) and Neural Correlates of Consciousness (2000, MIT Press).


In the beginning of the ASSC, foundational conceptual work by philosophers was very important, followed by a phase in which the neuroscientists moved in with their own research programs and contributions. Observing the field for more than a quarter century now, my impression is that what we increasingly need is not so much gifted young philosophers who are empirically informed in neuroscience or psychology, but more junior researchers who can combine philosophical metatheory with a solid training in mathematics. The first formal models of conscious experience have already appeared on the horizon, and as we incrementally move forward towards the first unified model of subjective experience there are challenges on the conceptual frontier that can only be met by researchers who understand the mathematics. We now need open-minded young philosophers of mind who can see through the competing formal models in order to extract what is conceptually problematic - and what is really relevant from a philosophical perspective.

In the early Sixties it was Hilary Putnam who, in a short series of seminal papers, took Turing’s inspiration and transposed concepts from the mathematical theory of automata – for example the idea that a system’s “psychology” would be given by the machine table it physically realizes – into analytical philosophy of mind, laying the foundations for the explosive development of early functionalism and classical cognitive science. One major criticism was that some mysterious and simple “intrinsic” qualities of phenomenal experience exist (people at the time called them “qualia”) and that they couldn’t be dissolved into a network of relational properties. The idea was that there are irreducible and context-invariant phenomenal atoms, subjective universals in the sense of Clarence Irving Lewis (1929:121-131), that is, maximally simple forms of conscious content, for which introspectively we possess transtemporal identity criteria. But the claim was shown to be empirically false, and nobody could really say what “intrinsicality” actually was. Then we got the interdisciplinary turn: neural networks, dynamical systems theory, embodiment, emotions, evolutionary psychology and cognitive robotics became directly relevant, providing valuable “bottom-up constraints” for philosophy of mind. Simultaneously we saw the great renaissance of consciousness research – many of the best minds began to share the intuition that it was now becoming empirically tractable, ripe for a combined interdisciplinary attack. Today, the old philosophical project of developing a “universal psychology” - for example, a general theory of consciousness that is hardware- and species-independent – may return in new guise.
Perhaps Giulio Tononi’s Integrated Information Theory or the Predictive Processing Approach following Karl Friston’s 2010 proposal for a unified theory of brain dynamics already supply us with the first analytical building blocks for a truly general and abstract theory of what consciousness and an individual first-person perspective really are, in all systems that have them. It may also be that in the end we arrive at a new and deeper understanding of why there just cannot be a universal theory of what “phenomenality” or “subjectivity” are. But it seems clear that armchair metaphysics won’t help. Today we need people who can penetrate the mathematics of theoretical neurobiology.

3:AM: Your claim is that no one is or ever has been a self, that the self is a myth. Rather you say that we are transparent self-models. Can you unpack the general claim for us?

TM: Oh no, that would be a serious misunderstanding. I certainly don’t say that you are the phenomenal self-model active in the brain right now! You are the whole person, abstract, social, psychological properties and all, the person that now asks this question. That person as a whole is the epistemic subject; it wants to know things, it is a cognitive agent. In this person’s head, however, there is a complex subpersonal state, namely the ongoing neurocomputational dynamics generating a phenomenal self. Often, but not always, the conscious representational content of this model is one of a person and of an epistemic agent.

This epistemic agent model, or EAM for short, is a highly specific and philosophically interesting content-layer in phenomenal self-consciousness – or at least I think so. For example, the notion of an EAM might pick out more precisely what we really mean when we discuss the “origin of the first-person perspective”. One central point is that the transition from subpersonal to personal-level cognition is enabled by this specific form of conscious self-representation, namely, a global generative model of the cognitive system as an entity that actively constructs, sustains and controls knowledge relations to the world and itself. In recent publications I have shown that we only possess an EAM for about one third of our conscious life-time.

However, under SMT (the “self-model theory of subjectivity”) the crucial and more basic point is that, due to the phenomenal transparency of the self-model in our heads, we – the whole organism – identify with its content. If it is a model of a person, then the organism begins to behave like a person. I have always been interested in this dynamic relationship between the epistemic subject (the person or biological organism as a whole) and the phenomenal self (constituted by a subpersonal state in the brain): How exactly do we first become epistemic subjects by functionally and phenomenologically identifying with the conscious model of a knowing self in our brains and operating under it? My second German monograph was called Subjekt und Selbstmodell (plus an obscenely long subtitle: “The Perspectivalness of Phenomenal Consciousness against the Background of a Naturalistic Theory of Mental Representation”) – and it was perhaps a mistake to call the expanded English version Being No One. But at the time I thought Americans just needed something like that.


I should perhaps also add that SMT, for me, is not a ready-made theory, but a self-defined research program that I have been pursuing for three decades now. A lot of my work can be seen in this context: For example, given the background of the failure of German idealist models of self-consciousness (e.g. in Fichte), it was always clear to me that the bulk of human self-consciousness really is a “pre-reflexive” affair, as many agree today. So it was natural to search for the minimal form of selfhood, and the interim result – coming out of our virtual reality experiments and interaction with dream researchers – is that it is what I call “transparent spatiotemporal self-location”, and in principle independent of any form of bodily or mental agency, as well as of emotional or explicit spatial content.

For example, Jennifer Windt has interestingly shown that there can be a robust form of phenomenal self-consciousness in so-called “bodiless dreams”, in which the self is consciously represented as an extensionless point in space. Another example of how I have tried to incrementally fill in the holes was to go directly into what has recently been called a “reputation trap”, namely, by reading up on the (thin) empirical literature on out-of-body experiences and discussing it in Being No One. I can recommend trying to actively ruin your reputation. It generates innovative research: Olaf Blanke had caused an OBE by direct electrical brain stimulation in 2002 and was in search of a theoretical framework; we got together, and all of the experimental work on full-body illusions and on virtual embodiment in avatars and robots came out of it. Josh Weisberg has dubbed this the “method of interdisciplinary constraint satisfaction”. As a philosopher, you define constraints for any good theory explaining what you are interested in, then you go out and search for help in other disciplines. You find out that these people are much smarter than you and that you were completely wrong about almost everything you had thought about the issue so far. It is a really rewarding strategy, because it not only ruins your reputation and minimizes your chances on the job market, it also leaves you in a state of complete confusion.

Speaking of which, the last time I did this was by looking into empirical work on “mind wandering” and “spontaneous task-unrelated thought”. Having moved upwards through the different levels of content constituting the experience of “embodiment” in the human self-model, it is now time to look at the cognitive self-model for some time. As it turns out, cognitive agency (as opposed to what many philosophers intuitively think) is a very rare phenomenon, as is mental autonomy. This discovery may help to get us closer to an empirically grounded and much more differentiated conceptual understanding of what we were actually asking for when, in the past, we talked about “a” or “the” first-person perspective. I think there is phenomenal self-consciousness without perspectivalness, and I am interested in the transition.

3:AM: Why do you use the metaphor of conscious experience as an ego tunnel? Why a tunnel?

TM: Because phenomenal properties supervene locally. Conscious experience as such is an exclusively internal affair: Once all functional properties of your brain are fixed, the character of subjective experience is determined as well. If there were no unidirectional flow of time on the level of inner experience, then we would live our conscious lives in a bubble, perhaps like some simple animals or certain mystics – locked into an eternal Now. However, our phenomenal model of reality is not only 3D, but 4D: Subjective time flows forward, the phenomenal self is embedded into this flow, an inner history unfolds. That is why it is not a bubble, but a tunnel: There is movement in time. But of course one of the interesting characteristics of the Ego Tunnel is that it creates (as Finnish philosopher Antti Revonsuo called it) a robust “out-of-the-brain experience”, a highly realistic experience of not operating on internal models, but of effortlessly being in direct and immediate contact with the external world – and oneself.


Please note that The Ego Tunnel is the only non-academic book I have ever written. It was aimed at the interested lay person, an experiment in the public understanding of philosophy. The very first sentence of this book reads: “This book has not been written for philosophers or scientists.” And the Ego Tunnel metaphor is of course inspired by VR technology. If what I just said is correct, then experiential externality is virtual, as is the “prereflexive” phenomenology of being infinitely close to oneself - phenomenal consciousness truly is appearance. Virtual reality is the representation of possible worlds and possible selves, with the aim of making them appear as real as possible – ideally, by creating a subjective sense of “presence” and full immersion in the user. Interestingly, some of our best theories of the human mind describe conscious experience itself in a very similar way: Leading current theorists of brain dynamics like Karl Friston, Jakob Hohwy or Andy Clark describe it as the constant creation of internal models of the world, predictively generating hypotheses – virtual neural representations - about the hidden causes of sensory input through probabilistic inference. Slightly earlier, philosophers like Revonsuo and I have pointed out at length how conscious experience is exactly a virtual model of the world, a dynamic internal simulation, which in standard situations cannot be experienced as a virtual model because it is phenomenally transparent, diaphanous in the sense of G. E. Moore – we “look through it” as if we were in direct and immediate contact with reality.
What is historically new, and what creates not only novel psychological risks, but also entirely new ethical and legal dimensions, is that one virtual reality now gets ever more deeply embedded into another virtual reality: As VR technology hits the mass consumer market in 2016, the conscious mind of human beings, which has evolved under very specific conditions and over millions of years, now gets causally coupled to and informationally woven into technical systems for representing possible realities. Increasingly, it is not only culturally and socially embedded, but also shaped by a technological niche that over time itself quickly acquires a rapid, autonomous dynamics and ever new properties. This creates a complex convolution, a nested form of information flow in which the biological mind and its technological niche influence each other in ways we are just beginning to understand. Michael Madary and I have just published the first Code of Ethical Conduct for Virtual Reality ever. In doing this, our main goal was to provide a first set of ethical recommendations as a platform for future discussions, a set of normative starting points that can be continuously refined and expanded as we go along.

3:AM: There seems to be a problem with linking third-person descriptions of the mind to first-person ones. Is this because we can examine the ontology of the ego tunnel from the objective, third-person, scientific point of view but can only experience it from the first-person point of view? Is this why we can’t experience the models as models?

TM: Not quite, it is a bit more complicated. First, it is completely unclear what “from the first-person point of view” really means – all we have is a visuo-grammatical metaphor. Sometimes, in my darker moments, I think that there is an ongoing conspiracy in the philosophical community, an organized form of self-deception, as in a cult, to simply all together pretend that we knew what “first-person perspective” (or “quale” or “consciousness”) means, so that we can keep our traditional debates running on forever. Second, a conscious model is only transparent if our introspective attention has no access to its underlying neurodynamic construction process, to earlier processing stages in the brain (to its “non-intentional” or “vehicle” properties, if you prefer more traditional terminology). Unconscious models – the large majority of representational processes unfolding in the brain – are neither transparent nor opaque. What many people don’t see is that there are abundant examples of phenomenal opacity: It is one of the most interesting features of the human conscious model of reality that, first, it can contain elements that are not experienced as mind-independent, as unequivocally real, as immediately given, and second, that there is a “gradient of realness” in which one and the same content can be experienced transparently or in an opaque fashion.

Examples of phenomenally opaque states are sensory illusions, pseudo-hallucinations, two-dimensional images floating in the space in front of you (as in synaesthesia or in hypnagogic imagery), and most importantly, the phenomenology of conscious thought. In these cases, it is part of the phenomenology itself that you are confronted with or operating on representations which might be true or false. Often there is a subjective character of misrepresentationality, like “This isn’t real!” or “This certainly is not mind-independent!” And about the phenomenology of “realness” - I believe we should really take our own phenomenology more seriously. What a good theory of consciousness must explain is the variance in this subjective sense of realness: There clearly is a phenomenology of “hyperrealness”, for example during religious experiences or under the influence of certain psychoactive substances. But, as recent research shows, there can also be extremely high degrees of “existential certainty” during certain types of epileptic seizures. On the other hand, during traumatic experiences or, for example, in patients suffering from depersonalization/derealisation disorder we find a dreamlike, dissociated quality – the world and one’s own body may be experienced as misrepresentational and “unreal”.

In ordinary life, the phenomenology of embodied emotions is an excellent example of dynamic changes between transparency and opacity: You can “directly perceive” that your wife is cheating on you, or you can become aware of the possibility that maybe it is you who has a problem, that your “immediate” emotional representation of social reality might actually be a misrepresentation.

In short, I believe that if we carefully applied the distinction between transparency and opacity to the different layers of the human self-model, looking at self-consciousness in a much more careful and fine-grained manner, then we might also arrive at a new answer to your original question: what a “first-person perspective” really is.

3:AM: Superman, with his enhanced speed, would presumably not be able to remain conscious on your view, would he, because he’d be able to see the models?

TM: False. Introspective Superman would enjoy a non-centred phenomenology of “global opacity”: Everything would appear to him as what – at least under certain currently popular descriptions - it actually is from a third-person perspective, namely the content of one big representation, like one big thought or one big pseudo-hallucination. He would lose the phenomenology of naïve realism, and most importantly, what I have termed “the phenomenology of identification”. Introspective Superman’s biological body would no longer experientially identify with the content of his current conscious self-model.

“Introspective Superman” - I like your thought experiment! It has great potential, because it brings out a nice logical possibility that has been explored in Buddhist philosophy and in Advaita for many centuries: Imagine Introspective Superwoman, an advanced practitioner of classical mindfulness meditation, plus a global, opaque state of consciousness that is like “lucid waking” (like a dream in which you have become aware that you are dreaming, but during the wake state). Her attentional capacities would be so strong that there would be a continuous phenomenology of representationality, and no naïve realism – neither about the world-component nor about the self-component of the model. That is, there would be no identification with an epistemic agent or an “introspecting self” – one would rather predict a phenomenology of “the whole world effortlessly looking into itself”.

Your “Introspective Superman” scenario also allows us to ask new questions: Would there be a possibility to keep even the phenomenology of identification, but to tie it to phenomenality per se, namely, by turning the process of conscious experience itself into the unit of identification? In this phenomenal configuration, which is clearly possible from a logical point of view, experiential states would still arise, but they would not be subjective ones, because the underlying subject-object structure of consciousness had been dissolved. In other words, what, today, we vaguely call the individual “first-person perspective” would have disappeared, because its origin (I call it the “unit of identification”) was now not eliminated, but maximized. Perhaps this is how we should imagine your Introspective Superman? Phenomenologically, such an aperspectival form of consciousness would not be a subjective form of experience any more – rather a globalized conscious experience of “the world looking”. One interesting, and remaining, philosophical question would be whether, for this class of states, we would still want to say that they constitute a form of conscious self-representation.

3:AM: Why isn’t what you call the Ego what we might think of as being the self? Doesn’t anything live in the ego tunnel?

TM: Nothing lives in the Ego Tunnel, just as nothing lives in a 3D movie – even if the audience is completely immersed and fully identifies with the hero. You know, I am not bound to terminological conventions – we could always say “What we have called the self in the past really is X” (where I have tried to offer a theory on what that X is). We could call all systems currently operating under a transparent phenomenal self-model, or all those having the “ability”, the functional potential, “selves”. If that gives things a more politically correct ring from your preferred ideological perspective, or if, psychologically, it helps you with mortality-denial – fine. There is just no entity there, no individual substance, and scientifically we can predict and explain everything we want to predict and explain in a much more parsimonious way. If you are interested in a short sketch of some metaphysical options, there is a chapter called “The No-Self-Alternative” in Shaun Gallagher’s Oxford Handbook of the Self.

3:AM: What makes consciousness a subjective phenomenon, and how do you think experiments such as the rubber hand experiment help show that the self is purely experiential?

TM: Consciousness is phenomenologically subjective whenever there is a stable, consciously experienced first-person perspective. To have a first-person perspective in this sense, I have argued, a cognitive system needs a model of the intentionality relation itself: It needs an internal model of itself as currently directed at an intentional object, for example a set of satisfaction conditions (a representation of an action goal, as in practical intentionality) or a set of truth conditions (an object of knowledge, as in theoretical intentionality). Such a flexible, continuously changing model of being dynamically directed at various intentional objects allows a system to consciously experience itself as being not only a part of the world, but of being fully immersed in it through a dense network of causal, perceptual, cognitive, attentional, and agentive relations. The core idea behind this notion of a “phenomenal model of the intentionality relation” is that the decisive feature characterizing the representational architecture of human consciousness lies in continuously co-representing the representational relation itself. This PMIR, however, has nothing to do with possessing a concept of “intentionality”, and it also is not something static, abstract, or timeless – I rather think of an embodied, dynamic, circular flow of causality underlying our phenomenal experience of being directed at the world, our inner image of what G. E. M. Anscombe once called the “arrow of intentionality”. Subjectivity means to catch yourself in the act.

And this might be what it means for consciousness to be subjective in an epistemological sense: In our own case it is the ability to represent knowledge under a highly specific, neurally realized data format. Subjectivity is an ability, the capacity to use a new inner mode of presenting the fact that you currently know something to yourself. For a human being, to possess a consciously experienced first-person perspective means to have acquired a very specific functional profile and distinctive level of representational content in one’s currently active phenomenal self-model: It has, episodically, become a dynamic inner model of a knowing self. Recently, I have begun to call this an “epistemic agent model”. The point then is that representing facts under such a model creates a new epistemic modality. All knowledge is now accessed under a new internal mode of presentation, namely, as knowledge possessed by a self-conscious entity intentionally directed at the world. Therefore, it is subjective knowledge. This notion of a conscious model of oneself as an individual entity actively trying to establish epistemic relations to the world and to oneself, I think, comes very close to what we traditionally mean by notions like “subjectivity” or “possession of a 1PP”.


It is the rubber hand illusion that got me into all of this virtual reality research, the now famous VERE project, and the attempt to create robust full-body illusions in Olaf Blanke’s lab in Lausanne. I went to these neuroscientists and basically said: “For philosophical reasons having to do with pre-reflexive self-consciousness and the theory of embodiment, I urgently want reproducible out-of-body experiences in healthy subjects and a whole-body variant of the rubber-hand illusion!” They said: “We don’t really understand what you mean, and besides, the brain never sees the whole body from the outside – this is impossible!” What the rubber hand illusion demonstrates is how our Bayesian brains are very sensitive to statistical correlations in the environment, and how the phenomenology of ownership just follows suit if the underlying model of reality changes. Here is a figure from our 2007 paper in Science:


Creating a whole-body analog of the rubber-hand illusion. (A) Participant (dark blue trousers) sees through a head-mounted display (HMD) his own virtual body (light blue trousers) in 3D, standing 2 m in front of him and being stroked synchronously or asynchronously at the participant’s back. In other conditions, the participant sees either (B) a virtual fake body (light red trousers) or (C) a virtual noncorporeal object (light gray) being stroked synchronously or asynchronously at the back. Dark colors indicate the actual location of the physical body or object, whereas light colors represent the virtual body or object seen on the HMD. (Image used with kind permission from M. Boyer.)

The self-model theory is not simply one philosophical model among others. It has been laid out as an interdisciplinary research program right from the beginning, as firmly anchored in scientific data as possible. If the basic idea of the self-model theory is on the right track, it yields a whole range of empirical predictions that would have to be experimentally testable. One of these predictions is that it must in principle be possible to directly connect the conscious self-model in the human brain to external systems – for instance to computers, robots, or artificial body images on the Internet or in virtual realities. This prediction has recently been corroborated. Under the conceptual assumptions of the self-model theory it must in principle be possible to couple the human self-model in a causally direct way with artificial organs for acting and sensing, while bypassing the non-neural, biological body. Through this, we could not only experientially, but also functionally situate ourselves in technologically generated environments in completely novel ways. For the last five years I have been working in a research project funded by the European Union, the VERE project, in cooperation with scientists and philosophers from nine countries. One of the research goals of this ambitious project was to go beyond the classical experiments from the year 2007 and stably transfer our sense of selfhood to avatars or robots that can perceive for us, move, and interact with other self-aware agents (“VERE” is an acronym for Virtual Embodiment and Robotic Re-Embodiment). But my official philosophical position still says that we will never really succeed in this.

In an ambitious pilot study, our Israeli colleagues Ori Cohen, Doron Friedman, and their collaborators in France demonstrated that it is possible to read out the action intentions of a test subject using real-time functional magnetic resonance imaging. These can then be directly transferred as high-level motor commands to a humanoid robot, which transforms them into bodily actions, while the test subject simultaneously witnesses the whole experiment visually through the eyes of the robot. This process is based on generated motor imagery, allowing test subjects to “directly act with their PSM” by remote-controlling a humanoid robot in France from a scanner in Israel.

This technical development is philosophically interesting for a number of reasons: not only does it enable us to act in the world while, to a large extent, “bypassing the biological body”, but it also allows us to test theories about the emergence of the sense of selfhood more precisely than ever before. Many of these developments are historically new. I still believe that gut feelings, the sense of balance, and spatial self-perception are so firmly coupled to our biological body that we will never be able to leave it experientially on a permanent basis. The human self-model is anchored in interoception; it cannot simply be “copied out” of the brain. All that can be done is that new kinds of tools become functionally integrated with the self-model in our brain – not only rakes or sticks, but also avatars or robots, for example. But I must confess that I am starting to have doubts. For firstly, it could be that simply different and newly extended forms of self-consciousness will in the future be generated by ever denser couplings between self-model and avatars or robots – and secondly, technological progress in this area happens surprisingly fast.

For philosophers, this type of technological development – the development of what I call “self-model interfaces” – is interesting for several reasons: firstly because of its ethical and cultural consequences, but also because it constitutes a historically new form of acting. I have introduced the notion of a “PSM-action” to be able to describe this new element more precisely. PSM-actions are all those actions in which a human being exclusively uses the conscious self-model in his brain to initiate an action. Of course, there will have to be feedback loops for complex actions, for instance, when seeing through the camera eyes of a robot, perhaps adjusting a grasping movement in real-time (which is still far from possible today). But the relevant causal starting point of the entire action is now no longer the body made of flesh and bones, but only the conscious self-model in our brain. We simulate an action in the self-model, in the inner image of our body, and a machine performs it. For philosophical theories of self-consciousness this is interesting, because it allows us to investigate the “prereflexive” mechanism of identification with a body more closely.

We are systems that continuously extract the causal structure of the world in an attempt to predict what our next sensory input will be, and we use our bodies in active inference and our attentional mechanisms to constantly optimise the precision of our predictions. I think it is a great merit of philosophers like Jakob Hohwy and Andy Clark that they have made the seminal and ground-breaking work of the British mathematician and theoretical neurobiologist Karl Friston accessible to the philosophical community. I also predict that in the empirically informed quarters of philosophy of mind we will soon see a whole new round of the classical internalism/externalism debate. Under this new approach, self-consciousness is an ongoing process of predicting global properties of ourselves, using a unified model – the self-model. Most of this is not introspectively accessible; most self-knowledge we have is unconscious self-knowledge. It is only the conscious partition of this dynamic process that we can direct our attention to – and it certainly is not all “purely experiential”, as you say. Of course, there is systematic self-deception, there are cognitive biases like male overconfidence bias and unrealistic optimism plus an individual bias blind spot, there is also an illusion of transtemporal identity, and there is a lot of philosophically relevant empirical evidence for functionally adequate forms of misrepresentation. But none of this makes self-consciousness “purely experiential”: We simply would not be here if the self-models in the brains of our ancestors had not extracted the relevant causal structure of our bodies, of peripersonal space and our physical environment, and that of other minds and our group sufficiently well.
Our internal models condense millions of years of interacting with this world, in many domains model evidence and statistical reliability are extremely robust – that is why we have even come to explicitly model ourselves as “knowing selves”, Homo sapiens sapiens.

3:AM: Typically, conscious experience presents itself as the experience of a subject that is the centre of a world. A couple of questions arise from this. The first is: what does it mean to say that conscious experience is not just related to the world but is related to it as knowing selves, and can there be mental states that are not conscious, or are all mental states conscious at some level, even in those patients who deny they exist? How does examining patients who deny they exist help this philosophical question?

TM: First, a lot of evidence shows that most of our cognitive processing is unconscious – phenomenal experience is just a very small slice or partition of a much larger space in which mental processing takes place. As a first-order approximation, I would say that phenomenality is “availability for introspective attention”: Consciousness is a property of all those mental contents to which you can in principle direct your attention. That is a good working concept to start with: It is not necessary to form a concept or inner judgment; just the availability for subsymbolic metarepresentation and the optimization of precision expectations is enough. That is why many animals are phenomenally conscious, even though they may not have “thoughts” in a more narrow, philosophically charged sense. Also note how the “intro” in “introspective attention” does not necessarily imply that this attention is directed to the organism’s self-model: Directing attention to some sensory content that is internally represented as an aspect of an external, mind-independent object – the blueness of the sky, the redness of an apple – still is “intro”spective in the sense that, from a third-person perspective, it only operates on an exclusively internal model in your brain. At any point in time, there will be a Markov blanket separating your currently active conscious model of reality from extra-organismic reality or other parts of the brain; all behaviour that is based on your conscious experience alone can be predicted from events inside this statistical boundary. This means that, epistemologically and methodologically speaking, nothing outside of it adds any information in terms of predictability. All attention is introspection. Of course, subjectively, all this may be experienced as the “direct” and unmediated perception of an outside world.

One interesting question, closely related to the one you posed above, could be whether the same principle holds for unconscious mental states and processes, for things that are outside of the conscious model of reality but which we would still term “mental” or “intentional” states. Another interesting question would be how much of our behaviour is really based “on your conscious experience alone”. This may depend on our epistemic interests and the temporal scale – if you will, the “time window” – which we choose in looking at the human mind. Perhaps almost all mental states have a conscious and an unconscious part?

My PhD student Iuliia Pliushch and I have recently invented the “dolphin model of cognition”. Dolphins frequently leap above the water surface. One reason for this behaviour could be that, when travelling longer distances, jumping can save dolphins energy, as there is less friction while in the air. Typically, the animals will display long, ballistic jumps, alternated with periods of swimming below, but close to, the surface. “Porpoising” is one name for this high-speed surface-piercing motion of dolphins and other species, in which leaps are interspersed with relatively long swimming bouts, often about twice the length of the leap. “Porpoising” may also be the energetically cheapest way to swim rapidly and continuously and to breathe at the same time.

Pliushch and I think that, just as dolphins cross the surface, thought processes often cross the border between conscious and unconscious processing as well, and in both directions. For example, chains of cognitive states may have their origin in unconscious goal-commitments triggered by external stimuli, then transiently become integrated into the conscious self-model for introspective availability and selective control, only to disappear into another unconscious “swimming bout” below the surface. Conversely, information available in the conscious self-model may become “repressed” into an unconscious, modularized form of self-representation where it does not endanger self-esteem or overall integrity. However, in the human mind, the time windows in which leaps into consciousness and subsequent “underwater” processing unfold may be of a variable size – and there may actually be more than one dolphin. In fact, there may be a whole race going on! Just like your “Introspective Superman”, the “dolphin model of cognition” has the advantage that it can be gradually enriched by additional assumptions. For example, we can imagine a situation where only one dolphin at a time can actually jump out of the water, briefly leaping out of a larger, continuously competing group. We can imagine the process of “becoming conscious” as a process of transient, dynamic integration of lower-level cognitive contents into extended chains, as a process of “cognitive binding” with the new and integrated contents becoming available for introspection. But we might also point out that individual dolphins are often so close to the surface that they are actually half in the water and half in the air.

What exactly is this process we call “conscious thinking” in the first place? Conscious thinking also exists, for instance, during the night, in states of dreaming. During dreams, we possess no control whatsoever over our thoughts and we are not able to control our attention volitionally. Sometimes there is the possibility to “awaken” within a state of dreaming and regain mental autonomy. Such dreams are called “lucid dreams”, for in them the dreamer realizes that he is currently dreaming, and hence also regains control over thinking processes and the ability for volitional control of attention.

But what about conscious thinking during the day? Depending on the scientific study, our mind wanders during 30–50% of our conscious waking phases. At night, during our non-lucid dreams and those sleep stages in which we have complex conscious thought but no pictorial hallucinations, we also lack the ability to suspend or terminate the thinking process – an ability of central importance for mental self-control. You cannot be a rational subject without veto-control on the level of mental action. Then there are also various types of intoxication or light anesthesia, of illness (e.g., fever dreams or depressive rumination), or of insomnia, in which we are in a sort of helpless twilight state, plagued by constantly recurring thoughts we cannot stop. In all these phases our mind wanders and we have no control over our thinking processes or our attention. According to a conservative estimate, the part of our self-model that endows us with real mental autonomy only exists during around one third of our entire conscious life. We do not exactly know when and how children first develop the necessary capacities and layers of their self-model. But it is a plausible assumption that many of us gradually lose them towards the end of our lives. If we consider all empirical findings regarding mind wandering together, we arrive at a surprising result whose philosophical significance can hardly be overestimated: Mental autonomy is the exception; loss and absence of cognitive control is the rule.

As far as inner action is concerned, we are only rarely truly self-determined persons, for the major part of our conscious mental activity is rather an automatic, unintentional form of behavior on the subpersonal level. Cognitive agency and attentional agency are not the standard case, but rather an exception; what we used to call “conscious thinking” is actually, most of the time, an automatically unfolding subpersonal process. One interesting aspect is that we do not notice this fact – it is highly counterintuitive, at least it seems “a bit exaggerated” to most of us. Not only does there seem to be a widespread form of “introspective neglect”, resembling a form of anosognosia or anosodiaphoria related to the frequent losses of cognitive self-control characterizing our inner life; the phenomenon of mind wandering is also clearly related to denial, confabulation, and self-deception. I once gave a talk about mind wandering to a group of truly excellent philosophers, pointing out the frequent, brief discontinuities in our mental model of ourselves as epistemic agents, and one participant interestingly remarked: “I think only ordinary people have this. As philosophers, we just don’t have this because we are intellectual athletes!” The introspective experience and the corresponding verbal reports of one’s own mind wandering seem to be strongly distorted by overconfidence bias, by illusions of superiority and introspection illusion (in which we falsely assume direct insight into the origins of our mental states, while treating others’ introspections as unreliable). Not only for philosophers of mind, it is probably also influenced by confirmation bias related to one’s own theoretical preconceptions and culturally entrenched notions of “autonomous subjectivity”, by self-serving bias, and possibly by frequent illusions of control on the mental level.

When you are simply observing your breath, you are perceiving an automatically unfolding process in your body. By contrast, when you are observing your wandering mind, you are also experiencing the spontaneous activity of a process in your body. What physical process is that, exactly? A multitude of empirical studies show that areas of our brains responsible for the wandering mind overlap to a large extent with the so-called “default-mode network”. The default-mode network typically becomes active during periods of rest, and as a result, attention is directed to the inside. This is what happens, for instance, during daydreams, unbidden memories, or when we are thinking about ourselves and the future. As soon as a concrete task needs to be done, this part of our brain is deactivated and we concentrate immediately on the solution to the currently pending problem.

My own hypothesis is that the default-mode network mainly serves to keep our autobiographical self-model stable and in good shape: Like an automatic maintenance program, it generates ever new stories, all of which have the function of making us believe that we are actually the same person over time. The default-mode network has a high metabolic price – it costs the organism a lot of energy – and it has been shown that mind wandering diminishes your general quality of life: as a whole person, you pay a psychological price too. What is it that is so precious that we pay such a high price for it? I believe it is the creation of a robust illusion of transtemporal identity. Only as long as we believe in our own identity over time does it make sense for us to make future plans, avoid risks, and treat our fellow human beings fairly – for the consequences of our actions will, in the end, always concern ourselves. My hypothesis is that exactly this was one of the central conditions in the evolution of social cooperation and the emergence of large human societies: It is yourself who will be punished or rewarded in the future, it is yourself who will either enjoy a good reputation in the future or be subjected to retaliation. What we need for that is an intact “narrative self-model”, an illusion of sameness. Then the “stabs of conscience” can make us even more self-conscious, integrating individual preferences with group preferences.

But on closer inspection, the narrative default-mode does not, I believe, actually produce thoughts. It continuously generates an inner environment, something I would describe as “cognitive affordances”, because they afford an opportunity for inner action. They actually are only precursors of thoughts, spontaneously occurring mental contents that, as it were, are constantly calling out “Think me!” to us. Interestingly, such proto-thoughts also possess something like the “affordance character” just mentioned, because they reveal a possibility. That possibility is not a property of the conscious self, and not a property of the little proto-thought currently arising – it is the possibility of establishing a relation by identifying with it.

Imagine you are trying to lose weight and attempting to concentrate on writing an article, but there is a bowl with your favorite chocolate cookies in your field of vision, a permanent immoral offer. If we are capable of rejecting such offers or postponing them into the future, then we can also concentrate on what we currently want to do. Now exactly the same principle holds for our inner actions: If we lose the ability in question for a single moment only, we are immediately hijacked by an aggressive little “Think me!” and our mind begins to wander. Often our wandering mind then automatically follows an inner emotional landscape. Speaking as a phenomenologist, it seems to me that a considerable portion of mind wandering actually is “mental avoidance behaviour”, an attempt to cope with adverse internal stimuli or to protect oneself from a deeper processing of information that threatens self-esteem. It will try, for instance, to flee from unpleasant bodily perceptions and feelings and somehow reach a state that feels better, like a monkey brachiating from branch to branch. Not acting, it seems, is one of the most important human capacities of all, for it is the basic requirement of all higher forms of autonomy. There is outer non-acting, for instance in successful impulse control (“I will not grasp for this bowl of chocolate cookies now!”). And there is inner non-acting, exemplified by the letting go of a train of thought and the resting in an open, effortless state of awareness which can sometimes follow. There is thus an outer and an inner silence. Someone who cannot stop his outer flow of words will soon be unable to communicate with other human beings at all. Whoever loses the capability for inner silence loses contact with himself and soon won’t be able to think clearly any more.


3:AM: And then how do we get to present ourselves as being the centre of this reality?

TM: I said many years ago that the self-model differs from all other models in that it is functionally anchored in the body through a persistent causal link. There is a continuous flow of information from the upper brain stem and hypothalamus, which presents, for example, the current state of affairs in terms of homeostatic stability – as Antonio Damasio once put it, emotions present “the logic of survival” to us. It is important to understand that there is not only “embodiment” in the broad sense of the term, but that there also is an interoceptive self-model: There are gut feelings, internal receptor systems in blood vessels, in tendons, muscles and joints, and there is constant gravitational input through the vestibular organ creating a frame of reference. Interoceptive self-consciousness provides us with a much higher degree of invariance than the environment. The body model is grounded in these internal sources of information, from which the brain can never run away, which generally possess a high degree of reliability, and which, as long as all goes well, provide us with a phenomenology of self-location in space, of permanence and stability over time. My speculative hypothesis is that what I call the “phenomenology of substantiality” is actually created by this low-level awareness of the very life process itself, the ongoing representation of successful self-sustainment. This then leads to theoretical intuitions that make it tempting to describe “the” self as ontologically self-subsistent, as a substance in the metaphysical sense.

3:AM: How does culture change the ego tunnel?

TM: An important question. Consciousness clearly is a culturally embedded phenomenon. But there are many readings of this claim. One concept about which I do not know enough, but which may be of great future relevance, is that of a “cognitive niche”: Human beings construct cognitive niches, they are born into them, and they adapt to them. One beautiful new way of looking at the philosophical history of ideas, perhaps, is that it can be seen as the construction of the more abstract levels of humanity’s cognitive niche. Such self-constructed environments contain not only physical artefacts, not only representational systems that embody knowledge, like writing systems, number systems, or the specific skills and methods for training and teaching young human beings – I think they also contain conceptual tools with which we can ascribe properties to ourselves, eventually even changing the phenomenological profile of our subsymbolic self-model.

For example, it makes a difference if the concept of “consciousness” is available in your culture at all, if it is a pre-existing tool for cognitive self-reference, a semantic instrument that is offered to you by your linguistic community, as it were, a potential scaffolding for social practices in which we can then all look at each other as “conscious” or “unconscious” beings. In 1988 the late Kathleen Wilkes published a paper in which she showed that in the very large majority of languages on this planet we do not even find an adequate counterpart for the English term “consciousness” (which only acquired its present meaning after what she called the “Cartesian catastrophe”). Why did all these linguistic communities obviously not see the need for developing a unitary concept of their own? Such questions are one example of what “cultural embedding” can mean. If a culture develops the notion of a “person”, say, as a “rational individual” and a “moral agent sensitive to ethical issues” – how does the availability and widespread use of such a term change the phenomenal self-model? How does it change its internal microfunctional profile, and what does it do to the overt behavioural profile of its members?

Then there are other readings of “cultural embedding”. I have written a bit about how a subset of neurotechnology turns into phenotechnology, a “consciousness technology” that directly and primarily aims at changing the user’s phenomenology – think of virtual reality, robotic re-embodiment, or even molecular-level technologies like new psychoactive drugs. New scientific knowledge creates new technologies and new potentials for action. Market pressure and newly emerging cultural practices then begin to change the contents of consciousness itself.

That was one reason I founded a Neuroethics Research Group in Mainz a number of years ago: Neuroscience definitely needs a critical eye and a professional ethical assessment by philosophers. In Mainz we have mostly focussed on “cognitive enhancement”, new pills that purportedly make you smarter and more alert. I will not go into details here, but just add that after a decade or more in neuroethics my general conclusion is that many more proper analytical ethicists should move from philosophy into the new discipline of neuroethics. The issues are highly relevant (just think of pharmaceutical moral enhancement or new military applications), but from a philosopher’s perspective the level of debates is often a bit shallow and certainly has some room for improvement. It needs people who are not bound to some ideology or other and know proper analytical ethics well. This is a job for philosophers, and if we do not support policy makers and society as a whole by analysing and communicating the relevant options, then we may well get run over by new technological developments following on the heels of neuroscience.

How rapidly such developments can unfold has now become evident in the sphere of illegal psychoactive substances: In the English edition of “The Ego Tunnel” (which appeared in 2009) I cautiously predicted that the number of accessible illegal drugs could soon soar dramatically. In the expanded 2014 version of the book (not available in English) I already reported that in the three years following my prognosis, first 41, then 49, and in 2012 even 73 completely new synthetic drugs were seized in Europe alone; substances that had been completely unknown before. Now we see that the general trend underlying my prediction is unbroken: in the following year 81 novel psychoactive substances were seized for the first time; in 2014 the number was 101. If one looks at the respective annual reports of Europol and the European Monitoring Centre for Drugs and Drug Addiction, one is certainly justified in saying that the situation is completely out of control by now. However, this observation now also applies to “cognitive enhancement”, off-label “brain-doping” with prescription drugs: As soon as a truly effective substance for boosting brainpower actually exists, even the most rigorous and strict forms of control aimed at its legal application will not work anymore. By now there are hundreds of illegal drug labs that would immediately copy the respective molecule and push it onto the illegal market. This is what you get for decades of denial, repression, and systematic disinformation.

3:AM: Because your approach is representationalist and functionalist, wouldn’t it be possible for something like the population of China, for example, or Google, to be conscious? If all conscious systems have to have phenomenal selves, would we be committed to saying China had one, and Google?

TM: Phenomenal properties supervene on functional properties. If China or Google realized the relevant microfunctional properties that, for example, are instantiated by the minimally sufficient, global correlate of consciousness in our brains, then they would be conscious. We do not yet know what these properties are, and there is a possibility of principled mathematical intractability. But it is very clear that China or Google could never realize the time constants in neural algorithms, nor the complex, hierarchically nested dynamical profile of the conscious human mind, or even the statistical physics making it possible. In 2016, asking such questions distracts from the really relevant issues – for example, the dramatic shifts in our notion of “representational content”.

3:AM: Dennett writes about the self as an illusion – but you deny this, don’t you?

TM: First, the notion of an “illusion” is an established technical term: As opposed to a misrepresentation on the sensory level in the absence of a stimulus – which would be a “hallucination” – it is a sensory distortion that actually has an external causal source, one that is internally modelled in an unusual way. As a matter of fact, many illusions may actually be “optimal percepts” from the perspective of an ideal Bayesian observer.

So “the self” could never be an illusion in that sense. Second, misrepresentations presuppose an epistemic subject, some entity that is wrong about something – but could in principle be right. But does this entity have to be “a” self? I am saying it is not a thing, but a process. The epistemic subject could be the person or biological organism as a whole, and it is well conceivable that such a system dynamically and fluidly operates under a conscious self-model without being phenomenally aware of this very fact. Many aspects of this self-model could be misrepresentational, shaped by evolution and society – unrealistic optimism, overconfidence bias, a robust misrepresentation of substantiality or personal identity across time, and so on. But this also wouldn’t support the trendy, omnipresent “illusion talk”. I must admit that I have occasionally been guilty of it myself, but it has haunted me for years, just like the label “neurophilosopher” that journalists like to stick on me. Philosophy of mind is so much more than “neurophilosophy”!

As for the omnipresent “illusion talk”, I believe it keeps reappearing because people think: “Ah, I don’t really understand what all these present-day philosophers and neuroscientists are saying, but this stuff reminds me of something that I read in the New Age bookstore! Cool! Something that sounds politically correct and gives me a warm glow, something that sounds romantic, because it has something to do with all those popular books on Buddhism and Hinduism. Ego-death! Liberation! Giga-Bingo! Something pleasantly obscure and exciting that leaves a door open and makes all this scientific stuff a potential tool for death denial, something that confirms what I have always wanted to believe in!” Then people suddenly think you are a good person, and one of their own tribe.

But just begin talking about potential reductive explanations and the issue of intellectual honesty for a bit, and then they suddenly say you are a bad guy – a “neuro-nihilist”, boo! – what a negative and materialist version of reificationism this is! You can see Evan Thompson’s new book for a recent version of this strategic innuendo. But I never believed the self was a thing. On page 1, the fourth sentence of Being No One already says: “The phenomenal self is not a thing, but a process…” I must, however, say that I really like Evan’s new book for many reasons, one of them being that he really endorses this basic idea and repeats the point over and over again.

3:AM: How do dreams help to show what is happening in the Ego Tunnel? Once we start thinking about dreams and wakefulness in your terms, doesn’t it make sense to ask whether we can have lucid wakefulness as well as lucid dreams?

TM: Dream research has always been a hobby of mine. My official position for many years has always been that the conscious wake state is a controlled hallucination, a form of “online dreaming”. Consequently, nocturnal dreams had to be offline states, and their relevance consisted exactly in the fact that, in humans, they seemed to be the only global states of conscious experience in which one could investigate what a complete subtraction of sensory input would lead to. In particular, I was always interested in what happens to the conscious body-model in such a functionally disembodied state, and I have looked quite a bit into the self-model in the dream state, unnoticed rationality deficits, etc. But Jennifer Windt showed that this was wrong: dreams are weakly embodied states; perceptual information actually does influence the dream state, and the dream body in particular. I greatly recommend her magnum opus “Dreaming” to anybody interested in these issues – everything one needs to know in one single book, the best entry point into the debate in existence and probably for many years to come.


In 2007 we wrote a chapter on what actually happens to the self-model when, in a dream, we realize that we are dreaming. From the perspective of my own theory this is an important question: What exactly happens in the conscious self-model during the transition from an ordinary dream to a lucid dream? Is there a philosophically interesting form of “insight” taking place, or are we just dreaming that we are dreaming? Everybody has heard about the phenomenon of “false awakening” (you dream that you have just woken up, then suddenly realize that you are still asleep), but could there be “false lucidity” as well? As a philosopher, I have written that it would be an important contribution on the empirical side of things if the neural correlate for dream lucidity could be isolated. German researchers Martin Dresler and Ursula Voss have actually done this, and much earlier than I thought it could even happen. Another issue that I have always found important is what I call “minimal phenomenal selfhood”, the question of what the simplest form of self-consciousness is. I think dream research is highly relevant here. Why? New considerations indicate that a spatially extended body image and also an interoceptive self-model are not necessary conditions for self-consciousness in the strong sense. An extensionless point in space as a unit of identification seems to be sufficient for a stable form of self-consciousness. This point comes out clearly in Windt’s immersive spatiotemporal hallucination model of dreaming and in a recent open access paper I wrote in which I have tried to give some answers to the question of why dreams are interesting for philosophers. The debate now shifts to the question of whether perhaps even pure temporal self-location could be sufficient.

3:AM: What are the implications of your approach for work on AI? There are already examples of machines having representational models – once we start making them with models that represent models, that can learn and so on, aren’t we building the rudiments of your ego tunnel – aren’t there real dangers in this?

TM: Absolutely. I have just published a position paper on the issue with a group of young Swiss researchers, unfortunately only in German so far. I have also said for many years that AI and robotics will not get very far without grounded self-models – and that exactly this is also very close to the point at which this research becomes ethically critical. You may know that another side of me is that I have always tried to connect philosophy of mind with applied ethics, and that I have looked into cognitive enhancement, ethical rules for virtual reality, or robot ethics too. I have been a vegetarian on ethical grounds for 39 years now, and one interest behind the self-model theory has always been to be able to understand what suffering is, to isolate criteria for what counts as an object of ethical consideration on a more abstract, hardware-independent level. As a consciousness researcher I do not believe that we will have artificial consciousness tomorrow or even the day after tomorrow – but I may be underestimating synergies, and there are smart people already aiming at it today. That is why I think it is important to think about the ethics of synthetic phenomenology right now, in the absence of time pressure and an agitated public, simply because the potential risks are so high.

I have quite a bit to say about this, but, just to put one central point very simply, I would argue for a principle of negative synthetic phenomenology (NSP), an ethical norm demanding that, in artificial systems, we should not aim at the creation – or even risk the unexpected emergence – of conscious states falling into the phenomenological category of “suffering”. We should avoid increasing the overall amount of involuntary suffering in the universe and not recklessly trigger a second-order evolution before we have understood the deeper structure of our own suffering in much greater depth and detail. We should therefore not deliberately create, or even risk the emergence of, conscious suffering in artificial or postbiotic agents, unless we have very good reasons to do so. It is interesting to note how the interests of artificial subjects of experience are not effectively represented in any ethics committee or political process today, just as the preferences of future human beings and the very large number of sentient beings that will likely follow us on this planet are only barely taken into account. This has to change. In a nutshell, what we do not want is postbiotic systems that are forced to consciously identify with thwarted or frustrated preferences via a transparent self-model from which they cannot effectively distance themselves.

3:AM: Does meditation help us understand the workings of the ego tunnel? You are known to practice meditation. Has this been important to your philosophical practice?

TM: Actually this is a very private affair, something that really does not belong in the media, because it concerns my personal life. I would rather share the ten most dramatic failures and all the comical tragedies of my sexual life with you than my meditation experiences! But let me perhaps just point to a few very general issues.
First, if, for example, you just look at my most recent open-access paper on “M-Autonomy”, which claims that conscious thought actually is a subpersonal process, or my long-standing interest in applied ethics, ideas about the phenomenology of suffering and how it can be minimized, or the whole ongoing research project of understanding the mechanisms by which a self-conscious creature identifies with the content of its self-model, then the influence of meditation practice may be quite obvious. But all that is quite trivial: every philosopher has his or her own relevance criteria, plus their fundamental theoretical intuitions – and of course they are shaped and conditioned by the phenomenology they have undergone or systematically cultivated in their personal life. A rather uninteresting, contingent fact about myself, isn’t it?

More interesting is the issue of what the notion of “understanding” in your question could actually mean. Can there be a philosophically interesting kind of non-discursive knowledge that stands the test of rigorous, analytical epistemology? I have my doubts, because I have been influenced by a more hard-headed context of “There is no knowledge outside of true sentences!” Self-deception is one of the new hot topics in interdisciplinary philosophy of mind, and if there is one prime example of theory-contaminated autophenomenological reports, it might well be first-person reports by meditators. The vast majority of serious practitioners of meditation I have met in my life implicitly adhere to some bizarre belief system or other; many of them are intellectually dishonest in a very fundamental way, which often makes rational communication impossible.

Another question I have been interested in is whether there could be what I sometimes call a “fully secularized spirituality” – or whether this is not even a coherent thought. I have never approached the issue on a technical level or in any of my academic publications, but a while ago I wrote a popular essay aimed at an interested, general readership and just put it up on my website. The title is “Spirituality and Intellectual Honesty”, and judging by the reactions I keep getting from readers outside academia, it has had a strong and enduring impact. It seems that many do share my intuition that, in the age of cognitive neuroscience and evolutionary psychology, this has become an absolutely central question in the background: Can a spiritual practice and intellectual honesty be reconciled in a new way?

One can certainly have doubts. Perhaps this is one reason for the interesting fact that there seem to be many more neuroscientists, cognitive scientists, or psychologists who seriously meditate than academic philosophers.
Just as with Indian philosophy – a personal hobby that didn’t influence my academic work much – I have of course monitored empirical research into meditation and, over the years, met a number of the leading scientists themselves. I found that many of them try to appear as “secular and rigorous” as possible, but are actually on a mission of some sort. It is certainly true that there is a second wave of fantastic empirical research into meditation going on, and that it is yielding interesting results. But still, my personal impression is that many of the empirical researchers are “crypto-Buddhists” (or something comparable) in their private lives – there often seems to be a hidden agenda. But then again it is only human and natural that people promote and research what they have found to be the most intense and meaningful experiences in their lives. I am not at all interested in doing this, but my sense is that this community could profit from an interaction with philosophers working in theory of science and epistemology – for instance in systematically designing their catalogue of explananda. Meditation research urgently needs the input of good analytic philosophers!

One of the things I find strangest about myself is that I have personally gone quite deeply into what is very dubiously called “first-person methods”, and that I have always found it bizarre and quite suspect how someone could actually claim to do proper philosophy of mind, or pretend to have a serious interest in the problem of consciousness, without it being understood that they will of course be intensively complementing their theoretical work with exactly such activities. I do not know if you have had the same experience, Richard, but recently I have heard more and more often that something has gone fundamentally wrong with academic philosophy, and an increasing number of people have silently begun to ask themselves the question: How does one get philosophy back into philosophy? I think that there may be no royal road and no single silver bullet here, but that exactly a much stronger integration of those dubious “first-person methods” into research and training – by not talking about them, but formally practicing them – would solve a lot of our current problems.

At the same time, I have never really believed in “first-person data” myself, let alone in irreducible “first-person facts”. As a long-term meditator, I am highly critical of philosophical conclusions drawn from contemplative practice itself. There is nothing “given” there. To me, all the fancy, romantic talk about “first-person data” rests on an extended usage of a concept that is only well-defined in another (namely, scientific) context of application, thereby rhetorically exploiting a fallacy of equivocation. “Data” are typically (though not always) extracted from the physical world by technical measuring devices (and not by individual brains), in a public procedure that is well-defined and well-understood, replicable, and improvable, and by groups of people who mutually control and criticize each other’s methods of data-gathering – namely, by large scientific communities; the process is necessarily intersubjective. In particular, data are gathered in the context of rational theories aiming at ever better predictions, theories that – as opposed to phenomenological reports – are open to falsification. But in introspecting our own minds we never have any truly direct or immediate access to a mysterious class of “subjective facts” – all we have are neural correlates and publicly observable reports (which need not be verbal).

To be sure, autophenomenological reports, theory-contaminated as they may be, are themselves highly valuable and can certainly be treated as data. But the experience “itself” cannot. However, even if one presupposes this rather straightforward view, it would at the same time be unphilosophical to play dumb and pretend that all of this is the whole story. Of course there is something relevant there. But before we can take the step from those dubious “first-person methods” to the philosophically much more interesting “zero-person methods”, we had better develop an empirically grounded and highly differentiated theory of what that vague metaphor of a “first-person perspective” really refers to in the first place – and of what can seriously count as a “method” and what cannot. So, coming back to your original question about meditation and philosophy, perhaps the three most interesting issues are these: Can there be a fully secularized form of spirituality, or is this not even a coherent thought? Can we develop an epistemology of contemplative practice (i.e., the philosophically motivated development of non-cognitive and non-intellectual epistemic abilities), or is there really no such thing as a relevant, non-discursive form of knowledge? Thirdly, how would one train a new generation of philosophers of mind and cognitive science who are not only analytically sharp and empirically well-informed, but also “well-traveled”, in that they carry out their academic research against the background of a rich experience of unusual states of consciousness and a systematic practice of cultivating their introspective experience?

3:AM: You’re a philosopher who does experiments and works closely with neuroscientists, cognitive psychologists, and other scientists. Is it important that scientists and philosophers learn to work together?

TM: No, I longed to do something unimportant and irrelevant! More seriously, the “learning” you refer to has actually taken place during the last three decades, and now there already is a whole new generation of excellent young researchers who pursue an interdisciplinary approach, worldwide. But it is just so much harder for them! It is much easier to be a really good historian of philosophy and specialize in a specific epoch – if you work hard, you can build up your expertise and basically be done by the age of 35. The same goes for people who withdraw into analytical metaphysics, or who are very good at formal methods generally: this type of philosophical research is certainly just as respectable and important, but it is much easier to keep at least a general overview. A young philosopher of cognitive science today not only faces the hostility of those remaining parts of the academic establishment that have slept through the whole development, but she is also confronted with a never-ending flood of new insights and potentially relevant bottom-up constraints from the empirical frontier. If, say, your speciality is “bodily self-consciousness”, “predictive processing”, or “mind-wandering”, then it is hard enough just to stay on top of the philosophical literature. But at the same time you are trying to understand difficult publications from fast-moving neighbouring fields, perhaps with a lot of mathematics, or based on experimental methodologies you do not fully comprehend, simply because you never studied these subjects. The relevant empirical literature is exploding, and many busy neuroscientists may not be interested in more substantial cooperation, because they have acquired a false view of philosophers as those intuition-mongering and slightly unprofessional colleagues who in the end really have nothing more to contribute than Zombie-style thought experiments.

I was always proud to live in a country where all higher education is free and open to everyone. When, however, I spent a year at UC San Diego in 2000, it became quite obvious to me that the philosophy students there – as long as they are able to pay those horrendous fees – not only have a much more attractive environment, much better libraries and green spaces, better air conditioning, food courts and cafeterias, but also get a much better academic training than I ever did in Germany. Consequently, I have invested a lot of my personal energy in improving the conditions for German students, for example by helping to initiate the European Platform for Life Sciences, Mind Sciences, and the Humanities, by founding the MIND Group, and by realizing the Open MIND project together with Jennifer Windt. It took me six years to put together a three-volume textbook like the one I would have liked to have had when I embarked on this journey in the early 1980s, when almost all there was was armchair analytical philosophy of mind.

It remains difficult for early stage researchers today, but in a different way. For a serious young philosopher in this field the current situation is demanding, and highly unsettling. There are very few good and developed models of “empirically informed philosophy”, so many ambitious junior researchers will have to try to develop their very own brand of interdisciplinary philosophy. At the same time there is a shortage of established career paths, there are financial cuts and overheated competition, plus the fact that the increasing marketization of academic philosophy has also led to new types of competitors on the philosophical job market: the merciless point-scorer, the shamelessly self-promoting entrepreneur, or the phoney little politician. In a recent poll conducted by the German newspaper DIE ZEIT, 81% of junior researchers and more than half of the assistant professors said they are considering leaving academia altogether. Of all the young people I have trained in my career who have been granted tenure by now, none is in Germany; they all went away – they are in Japan or Holland, in Canada, Taiwan, or Australia. We should all keep our fingers crossed for those young philosophers of mind and cognitive science who are driven by a serious interest – it is by no means an easy world out there.

3:AM: And finally, are there five books other than your own that you could recommend to the readers here at 3:AM to help them delve further into your philosophical world?

Brentano, F. (1973)[1874]. Psychologie vom empirischen Standpunkt. Erster Band. Hamburg: Meiner.
Churchland, P.M. (1989). A Neurocomputational Perspective. Cambridge, MA: MIT Press.
Clark, A. (2015). Surfing Uncertainty. Oxford: Oxford University Press.
Hohwy, J. (2013). The Predictive Mind. Oxford: Oxford University Press.
Krishnamurti, J. (1976). Krishnamurti’s Notebook. New York: Harper & Row.


Richard Marshall is still biding his time.

Buy his book here to keep him biding!