Susan Schneider interviewed by Richard Marshall.

Susan Schneider is the Sarah Connor of philosophy as she ponders the role of science fiction and thought experiments in helping us understand uploading, time travel, superintelligence, the singularity, a new approach to the computational theory of mind, consciousness, Jerry Fodor and physicalism. Hasta la vista, baby.

3:AM: What made you become a philosopher?

Susan Schneider: Living behind the Iron Curtain. I spent my junior year abroad in Budapest, Hungary, which was then communist, or more accurately, an authoritarian dictatorship. I read Michel Foucault’s work on disciplinary institutions, and I was mesmerized by the way Foucault’s work mirrored the political situation that I was living under. Amazingly, that year I saw the Russians leave Hungary. When I returned to UC Berkeley – another communist hub – I took a course with Donald Davidson. His systematic theorizing drew me in. I was hooked.

3:AM: You make strong claims about thought experiments and find them valuable. Some philosophers like Paul Horwich disagree and find them misleading and useless. How can something imaginary lead to knowledge and enlightenment?

SS: Philosophers face a dilemma. On the one hand, philosophers often theorize about the nature of things, so it is useful to think of what might be the case, as opposed to what happens to be the case. For instance, metaphysicians theorizing about the nature of the self or person commonly consider cases like teleportation and brain transplants, to see if one’s theory of the self yields a viable result concerning whether one would survive such things. On the other hand, thought experiments can be misused. For instance, it strikes some as extreme to discard an otherwise plausible theory because it runs contrary to our intuitions about a thought experiment, especially if the example is far-fetched and not even compatible with our laws of nature. And there has been a movement in philosophy called “experimental philosophy” which claims that people of different ethnicities, genders, and socioeconomic backgrounds can come to different conclusions about certain thought experiments because of their different backgrounds.
I still employ thought experiments in my work, but I try to bear in mind three things: first, the presence of a thought experiment that pumps intuitions contrary to a theory should not automatically render the theory false. But a thought experiment can speak against a theory in an all-things-considered judgment; this is an approach I’ve employed in debates over laws of nature.

Second, I find it safer to appeal to more concrete thought experiments that verge on becoming science fact, or at least seem compatible with the laws of nature. The sorts of cases discussed in my book Science Fiction and Philosophy are generally compatible with our laws of nature: teleportation, for instance, is under development, as are brain uploading, humanoid robots, and, believe it or not, brains in vats.

Finally, we should carefully consider data suggesting that our intuitions about a thought experiment are influenced by our backgrounds. But background influences should not immediately lead us to discard our judgments about thought experiments. I assume that our philosophical training gives us insight that those not trained in philosophy do not have. (Of course, if trained philosophers, not novices, still appear to disagree on a particular thought experiment because of differences in their backgrounds, that is more serious.)

3:AM: In your book Science Fiction and Philosophy you’ve written about some very intriguing philosophical issues and used science fiction to help your thinking. Before looking at some of the philosophical issues, can you say why you have used science fiction rather than the more honed philosophical thought experiments such as Descartes’ demon or the brain in the vat? Isn’t there a danger that science fiction such as a story by Philip K. Dick includes too much noise and so may hinder rather than help?

SS: Descartes is a philosophical staple, but I love using science fiction because many students are passionate about it, and it leads them to carefully work through the details of even the most dense philosophical works. And nowadays students have an intuitive understanding of technology that lends itself to thinking philosophically about films like The Matrix and I, Robot. It’s not difficult to steer class discussions away from the “noise” in a storyline and focus on the thought experiment. (Showing a film clip or short story, rather than assigning an entire book, can help.)

3:AM: One of the issues you’ve analysed revolves around the question of whether we might really be computer simulations, as in the film ‘The Matrix’. So how do you answer that puzzle, and who are you agreeing with and disagreeing with? I think Dave Chalmers, for instance, argued that it’s probably a good way of analysing the truth about external reality, didn’t he?

SS: I don’t believe there is a good answer to the external world skeptic. I regard skeptical thought experiments as a means of staying humble – science and technology can do amazing things, but still, there are intrinsic limits to what we can know.

I do, however, regard one skeptical scenario as unique – that depicted in Nick Bostrom’s simulation argument. This is not merely a thought experiment that places us in a simulation or matrix and suggests that we can’t be certain we aren’t in one. Instead, it argues that if civilizations throughout the universe are in general capable of surviving to technological maturity, at which point they can run sophisticated simulations, and if some of the multitudes of planets with intelligent life run simulations of us, then it is more likely than not that we are in a simulation. (I have just provided a very simple version of the argument, like the one in my book.)

Bostrom’s argument does more than traditional skeptical thought experiments in two ways. First, unlike traditional thought experiments that rely upon remote possibilities, Bostrom uses empirical premises and assigns a concrete probability to being in a simulation. Second, it does not rely on the idea that knowledge is certainty, as skeptics tend to do. Given any reasonable conception of knowledge, it turns out to be more likely than not that we are in a simulation.
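To make the probabilistic flavor of the argument concrete, here is a minimal Python sketch of the underlying arithmetic. All parameter names and values are invented for illustration; this is a cartoon, not Bostrom’s formal derivation. The point is only that the conclusion follows from empirical-looking premises rather than from a bare logical possibility: once mature civilizations run enough simulations, simulated minds swamp unsimulated ones.

```python
def fraction_simulated(f_mature, f_run_sims, sim_minds_per_real):
    """Toy Bostrom-style estimate: of all observers with human-type
    experiences, what fraction live inside simulations?

    f_mature           -- fraction of civilizations surviving to
                          technological maturity (assumed value)
    f_run_sims         -- fraction of mature civilizations that run
                          ancestor simulations (assumed value)
    sim_minds_per_real -- simulated minds created per real mind by a
                          simulating civilization (assumed value)
    """
    simulated = f_mature * f_run_sims * sim_minds_per_real
    return simulated / (simulated + 1)  # the "+1" is the one real population

# Even pessimistic fractions yield a high probability of being simulated
# once the number of simulated minds per real mind grows large.
for n in (0, 10, 1_000, 1_000_000):
    print(n, round(fraction_simulated(0.1, 0.1, n), 6))
```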

My answer to the puzzle of whether we are in a simulation is only partial: while I suspect we can’t disprove the hypothesis that we are in a dream, a brain in a vat, etc., these possibilities seem remote, because we suspect that it is more likely than not that the world is as it seems. But the simulation argument is a special case: if it is correct, it is likely we are in a simulation. While we can rule out a simulation scenario if we find out that reality cannot be computational, or that beings in a simulation cannot be conscious, I do not think there is currently good reason to deny either of these things.

Of course, we may locate a reason to deny one or both of these claims. But maybe an evil demon is deceiving us and we are simulated after all, although we thought, wrongly, that we were not inside of a simulation. We will always be stuck with the traditional skeptic. When it comes to the simulation argument, while we may learn that it is likely that we are deceived, we can never be certain that we are not.

David Chalmers thinks the simulation argument, and even the more general thought experiment that we are in a matrix, is not a genuine skeptical hypothesis, but a metaphysical claim. Among other things, it is akin to discovering who our creator is – our creator is the architect of the simulation – and that the universe is made of something surprising: bits, rather than fundamental physical entities like strings and forces.

I disagree, although I can appreciate Chalmers’ observation that a completed physics could turn out to be no less bizarre than this! Just look at holographic theories that can situate us in only two spatial dimensions. Indeed, we could turn out to be cut off from the rest of the universe even if we aren’t simulated: we could be trapped in an isolated part of the multiverse, never gaining access to the rest, and the laws of nature could be different elsewhere in the multiverse. This is a situation in which external world skepticism would be false but our epistemic predicament would still be akin to being a prisoner in Plato’s Cave. This would stink.

There are certainly situations in which our reality would be much like living inside of a simulation. Still, being in a simulation strikes me as a bona fide skeptical scenario, for the creation story would be radically unlike our ordinary conception of a creator. Consider the traditional Judeo-Christian conception of God, in which the creator of our universe is a benevolent, omnipotent, omniscient, incorporeal being. If we are simulated, the architect could be an alien adolescent playing a computer game, bent on deceiving us, and ready to turn us off when he grows bored. The upshot: Be entertaining.

3:AM: Another issue that comes out of questions about the nature of persons is the possibility of uploading. Various people have proposed that as we become more technologically advanced, the possibility of transcending our biological limits is coming close to realisation. What are the philosophers saying about this, and where do you place yourself in this debate?

SS: Science fiction has long depicted scenarios in which a character uploads his or her brain in a desperate attempt to avoid death. The idea is that the person’s brain is scanned, and a software model of it is constructed that is so precise that, when run on ultra-efficient hardware, it thinks and behaves in exactly the same way as the original brain. The process of scanning will likely destroy the original brain, although non-destructive uploading has also been discussed.

You might think that if uploading could be developed, day-to-day life would be drastically improved. For instance, on Monday at 6PM, you could have an early dinner in Rome; by 7:30 PM, you could be sipping wine nestled in the hills of the Napa Valley; you need only rent a suitable android body in each locale. Airports could be a thing of the past for you. Bodily harm matters little – you just pay a fee to the rental company when your android is injured or destroyed. Formerly averse to risk, you find yourself skydiving for the first time in your life. You climb Everest. You think: if I continue to diligently back up, I can live forever. What a surprising route to immortality.

Oxford University’s Future of Humanity Institute has a brain emulation project that is taking the first steps toward developing uploading. Uploading could be perfected during a technological singularity, a point at which ever-more-rapid technological change leads to unpredictable consequences.

Suppose that the technology could be developed at some point in the future. Would the upload really be you? Philosophers, such as Nick Bostrom and David Chalmers, tend to respond with guarded optimism. But let’s consider an example to see if even guarded optimism is well-founded. In Robert Sawyer’s Mindscan, the protagonist, Jake Sullivan, tries to upload to avoid dying of a brain tumor. He undergoes a non-destructive uploading procedure, and while the contents of his brain are copied precisely, he wakes up after the procedure, still on the operating table, and is astonished to find that he is still stuck in his original body. His consciousness did not “transfer”! Sullivan should have read the personal identity literature in metaphysics, which asks: in virtue of what do you survive over time? Having a soul? Being a material being? Having the same memories and thought patterns as your earlier self? I’ve urged that in deciding whether you could survive uploading, it is important to consider the metaphysical credentials behind each of these views, and also to consider a related literature on the nature of objects, or substances, in contemporary metaphysics. Here, as I love to tell my students, doing metaphysics may one day be a matter of life and death.

One reason Jake should have been suspicious is that objects generally follow a continuous trajectory through space over time – but here, for Jake to “transfer” to his upload, his brain would not even move, and his consciousness would somehow travel inside a computer and then, at a later point, be downloaded into an android. And the stuff that makes up the new Jake would be entirely different. As Joseph Corabi and I have urged, when one reflects on what metaphysicians have said about the nature of substances, or objects, uploading becomes dicey. Further, an upload can be downloaded to multiple places at once. But at most one of these creatures would really be Jake. Which one? Finally, notice that Jake survived the scan. So why believe that any of the uploads is him, rather than the original Jake? In the macroscopic world around us, single objects do not reside in multiple locations at once.

3:AM: News stories abound about body enhancements and genetic perfectionism. Is it possible, in theory at least, to enhance our brains and change our natures, and should we? How do Terminator and Blade Runner help here?

SS: Strictly speaking, I do not think we can really “change” our nature – if something is truly our nature, then how would we still exist if it changed? But speaking loosely, it may be quite possible to change our nature in the following sense. If you believe we are essentially biological animals, and if we upload or become mostly cyborgs, the resulting creatures would no longer be primarily biological entities. If every human did this, our whole species could go extinct, and humans would have descendants with different natures – their nature would be computational.

Should we enhance in these radical ways? These new creatures may very well be persons or selves, and they will likely be far more intelligent than we are. Maybe they would even have a greater capacity to enjoy their lives. Now, I’ve claimed that they wouldn’t be us, but other factors besides your own survival could militate in favor of radical enhancements. For instance, perhaps you believe that the self is an illusion; the Buddhist James Hughes occupies this sort of position. Or maybe you would regard the chance to upload as being even more valuable than your own survival, for an uploaded being would bear a special relation to you, having all your memories, and would have a life free of biological senescence, eventually integrating with superintelligent AI – AI that can radically outperform the best human brains in practically every field, including scientific creativity, wisdom, and social skills. Maybe this matters more to you than your own life does. Finally, perhaps some of us would value uploading because it could be the only way for the human way of life and manner of thinking to survive in the event of a global catastrophe. I suspect this latter issue is important to Nick Bostrom, who is greatly concerned with global risks surrounding the singularity.

3:AM: I suppose with all these questions, and others such as the question about whether time travel is possible, there’s a suspicion that the philosophers don’t have anything over and above what the scientists and technologists discover. How would you describe the role of philosophy in these questions? Why not just leave it to the scientists and technologists?

SS: Science tells us what the physical world is like; philosophy does a different job entirely. It asks ultimate questions about the nature of reality, whether God exists, whether we have free will, and so on. For example, cognitive science may show that the brain is a computational engine, but this alone does not tell us that the mind is not something more than the brain, or that we don’t survive the death of the brain.

Time travel is a great example of thought-provoking dialogue between science and philosophy. On the one hand, physicists like Kip Thorne have argued that time travel is compatible with the laws of physics. On the other hand, philosophers have pointed out that if this is correct, then you should be able to go back and kill your grandfather before you were conceived – yet if you succeeded, you would never have existed to do the killing, a contradiction. (This is called “the Grandfather Paradox.”) How can something be physically possible yet violate the laws of logic? Some philosophers say that something simply stops you from killing Grandfather every time you try. This seems arbitrary.

3:AM: Ethical issues emerge out of this, not just regarding the specific issues but also regarding how things that may happen in the future ought to be handled. For example, if the notion of the singularity is a real possibility then aren’t there ethical questions regarding technological advancement that we need to be publicly discussing now, rather than just drifting into it? I guess I’m asking what you think is at stake in all this philosophy?

SS: If there is a singularity, then the intelligence explosion will be monumental, ending poverty and disease, giving rise to a new level of intelligence, and more. But it carries with it tremendous potential dangers, such as the end of the human race.

Even if there is only a small chance that there will be a singularity, because the stakes are so high, the singularity needs to be planned for, inasmuch as this is possible. One problem with planning for the singularity is that, by definition, a technological singularity involves scientific and technological advances that happen quickly and on a massive scale; because humans can’t even follow the discoveries of superintelligent AI, they lose control over the advances. Dangers range from renegade AI to bioterrorism; journalist Joel Garreau provides an excellent overview of the dangers in Radical Evolution.

If AI becomes superintelligent it will think differently than we do, and it may not act in the interest of our species. Philosophers and scientists are currently working on principles for creating benevolent AI, and for keeping superintelligent AI “in the box” so it would be unable to act contrary to our interests. The challenge with benevolence principles is that a sophisticated system may override them, or the principles could create a programming conflict. Discussions of the AI Box Problem voice the worry that superintelligent AI would be so clever that it could convince even the most cautious human to let it out. Frankly, I wouldn’t be surprised if the creators of the first forms of superintelligent AI did not even try to keep it in the box. For instance, it could be seamlessly integrated into the internet, being developed by Google, whose director of engineering is currently Ray Kurzweil.

Philosophers, especially philosophers of mind, metaphysicians and ethicists, could do a great deal here: they could help develop principles for identifying and defining superintelligence, develop principles for determining whether intelligent computers can be conscious, determine under what conditions one can survive radical brain enhancements, and determine under what conditions AI, including humanoid robots, should have rights. This is just the tip of the iceberg.

3:AM: You’ve been bold in asserting that the Fodorian Language of Thought program and the related computational theory of mind have three major problems that, unless solved, render them obsolete. Before saying what these problems are, can you sketch out the theories and what they’re supposed to be explaining?

SS: The computational paradigm in cognitive science aims to provide a complete scientific account of our mental lives, from the mechanisms underlying our memory and attention to the computations of individual neurons. The Language of Thought program (LOT) is one of two leading positions on the computational nature of thought, the other being a neural-network-based approach advanced by (inter alia) the philosophers Paul and Patricia Churchland.

According to LOT, humans and even non-human animals think in a lingua mentis, an inner mental language that is not equivalent to any natural language. This mental language is computational in the sense that thinking is regarded as the algorithmic manipulation of mental symbols, where the ultimate algorithm is to be specified by research in the different fields of cognitive science. The “Computational Theory of Mind” holds that part or all of the brain is computational in this algorithmic sense. In my book on LOT, I urged that both approaches are insightful; the brain is probably a hybrid system – both a symbol-processing engine and a system of neural networks. In particular, deliberative, conscious thought is symbolic, but it is implemented by neural networks.
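To give a rough feel for what “algorithmic manipulation of mental symbols” means, here is a cartoon Python sketch. Everything in it – the symbols, the tuple encoding, the single inference rule – is invented for illustration; it is not a model anyone has proposed. A “thought” is a structured arrangement of primitive symbols, and inference operates purely on syntactic shape.

```python
# Cartoon LOT: a "thought" is a nested tuple of primitive mental
# symbols, and inference is shape-sensitive symbol manipulation.
# All symbols and the rule below are invented for illustration.

def apply_modus_ponens(beliefs):
    """From ("IF", p, q) together with p, derive q -- by form alone."""
    derived = set(beliefs)
    for thought in beliefs:
        if thought[0] == "IF" and thought[1] in beliefs:
            derived.add(thought[2])  # add the consequent, q
    return derived

beliefs = {
    ("IF", ("RAINING",), ("WET", "STREET")),
    ("RAINING",),
}
print(apply_modus_ponens(beliefs))  # now includes ("WET", "STREET")
```

Note that the rule never consults what RAINING means; it is sensitive only to the symbols’ form, which is the sense in which LOT treats thinking as computation.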

3:AM: The problems are about computationality, symbols and Frege, aren’t they? Can you say what’s wrong?

SS: Sure. Several problems have plagued the LOT approach for years. First, LOT’s chief philosophical architect, Jerry Fodor, has argued that the cognitive mind is likely non-computational. Fodor calls the system responsible for our ability to integrate material across sensory divides and generate complex, creative thoughts “the central system.” Believe it or not, Fodor holds that the brain’s central system will likely defy computational explanation. One of his longstanding worries is that the computations in the central system are not feasibly computed in real time: if the mind truly is computational in a classical sense, then when one makes a decision, the central system would need to walk through every belief in its database, asking whether each item is relevant, and one would never be able to determine what is relevant to what. Fodor concludes from this that the central system is likely non-computational. Shockingly, he recommends that cognitive science stop working on cognition.

Fodor’s pessimism has been influential within philosophical circles, situating LOT in a tenuous dialectical position: since LOT itself was originally conceived of as a computational theory of deliberative, language-like thought (herein, “conceptual thought”), and the central system is supposed to be the domain in which conceptual thought occurs, it is unclear how, assuming that Fodor is correct, LOT could even be true.

Second, LOT says that to have a thought is to have a certain mental symbol in one’s head. But what is a mental symbol? Unfortunately, proponents of LOT have not satisfactorily explained what symbols are, and the different approaches to symbols have been regarded as deeply problematic. Clearly, for LOT to be interesting and informative, it should explain what the nature of thought is and, relatedly, what concepts are.

Third, the problem of Frege cases arises from a particular approach to how thoughts get their meaning – on this view, thoughts get their meaning through causal or lawful connections to the items in the world they tend to refer to. This gives rise to certain problems with scientific explanations in cognitive science: the generalizations do not seem to be precise enough to capture the behavior of classes of similar thinkers. (Consider a thinker who does not realize that Hesperus is Phosphorus: since both names refer to Venus, purely referential generalizations predict the same behavior toward both, yet the thinker may behave quite differently.)

3:AM: So you go about providing a solution to each, don’t you?

SS: Yes, though the solutions are lengthy. I can sketch a solution to the first one, which is particularly interesting because there are related puzzles in AI, and because it questions whether cognition is computational, it connects to our discussion of the singularity.

First, Kirk Ludwig and I examine in detail Fodor’s armchair arguments that cognition is non-computational, and we illustrate how they go awry. Second, I urge that a central system can compute what is relevant. I draw from the work of Murray Shanahan and Bernard Baars, who sketch the beginnings of a solution to the Relevance Problem that is based upon the Global Workspace (GW) theory. According to the GW theory, a pan-cortical system (a ‘GW’) facilitates information exchange among multiple parallel, specialized unconscious processes in the brain. When information is conscious, there is a state of global activation in which information in the workspace is “broadcast” back to the rest of the system. At any given moment, there are multiple parallel processes going on in the brain that receive the broadcast. Access to the GW is granted by an attentional mechanism, and the material in the workspace is then under the “spotlight” of attention. When in the GW, the material is processed in a serial manner, but this is the result of the contributions of parallel processes that compete for access to the workspace. (This view is intuitively appealing, as our conscious, deliberative thoughts introspectively appear to be serial.)
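As an illustration of the architecture just described, here is a toy Python sketch. The class names, module names, and random salience scores are all invented; this is a cartoon of GW theory, not Baars’s or Shanahan’s actual model. Specialist processes bid in parallel, an attentional mechanism admits the most salient content to the workspace, and the workspace broadcasts it back to every specialist.

```python
import random

class Specialist:
    """A parallel, unconscious process that bids for the workspace."""
    def __init__(self, name):
        self.name = name
        self.heard = []  # broadcasts received back from the workspace

    def propose(self):
        # Bid = (content, salience); salience is random in this cartoon.
        return (f"{self.name}-content", random.random())

    def receive(self, content):
        self.heard.append(content)

def workspace_cycle(specialists):
    bids = [s.propose() for s in specialists]   # parallel competition
    content, _ = max(bids, key=lambda b: b[1])  # attention selects a winner
    for s in specialists:                       # global broadcast
        s.receive(content)
    return content

modules = [Specialist(n) for n in ("vision", "audition", "memory")]
conscious_stream = [workspace_cycle(modules) for _ in range(3)]
print(conscious_stream)  # one item per cycle: a serial stream
```

The output is a serial stream of winning contents even though all the bidding underneath is parallel, which mirrors the introspective seriality mentioned above.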

3:AM: Fodor has criticised your approach, hasn’t he? Doesn’t he think you’re rehashing approaches that LOT was intended to replace, and so can’t make peace with either computationalism or pragmatism?

SS: Yes.

3:AM: How do you respond?

SS: I’ve just outlined my response to his view that cognition is non-computational. The matter of pragmatism connects to the second problem I set out to solve, the one concerning the nature of mental symbols. Fodor has long argued against views that take concepts’ natures to be determined by the role they play in thought. He calls such views “pragmatist” because concepts’ natures are determined by what they do – by their role in inference, categorization, and the like.

After considering the different approaches to mental symbols, I urge that a pragmatist account is the only position that LOT can occupy, and further, this view of symbols leads to a more plausible version of LOT (Chapter Six). In fact, Fodor himself covertly appeals to a pragmatist theory of symbols (Chapter Seven).

Some intriguing results grow out of this account of symbols. First, Chapter Seven generates a new theory of concepts that holds that concepts’ natures are determined by both the role they play in computation and their meanings, where meaning depends upon how symbols “lock on” to items in the world. This approach is advantageous, because it is compatible with different kinds of concepts having different sorts of computational roles – some concepts can be definitions, some can be prototypes, etc. And with slight modification it is even available to those who reject LOT for a neural-network-based approach.

Second, I develop a view that does not require an appeal to concept or symbol nativism, the view that all, or at least some, of our concepts or mental words are innate. Although I am sympathetic to the possibility that some of our concepts are innate, LOT has historically been associated with an ultra-extreme form of nativism in which even concepts like photon and carburetor are innate.

3:AM: How do you think cognitive and computational neuroscience are going to make the theory of mind based on your version of LOT succeed? If it does, could you say what that would make the mind to be? Would it be the final word? And which theories would have to be abandoned, with consequences for some of the theories and ideas you look at in your thought experiments about the mind?

SS: My development of the central system requires that LOT turn its attention to cognitive and computational neuroscience to succeed, despite Fodor’s well-known repudiation of these fields. First, I urge that integration with neuroscience will enable a richer explanation of the structure of the central system, and I illustrate how neuroscience can provide a deeper understanding of the central system by appealing to certain recent work in neuroscience and psychology, especially the aforementioned GW theory.

Second, on my view, cognitive and computational neuroscience are key to determining what kinds of mental symbols a given individual has (e.g., whether one has a symbol for jazz, Chianti, or bulldogs), for these fields detail the algorithms that the brain computes, and on my view, the nature of a mental symbol is determined by the role it plays in the algorithms that the brain computes. Third, I stress that given that LOT is a naturalistic theory, that is, one that seeks to explain mental phenomena within the domain of science, it depends on integration with neuroscience for its own success.

If LOT is correct, the brain is a computational engine. More specifically, deliberative, conscious thought – the kind of thinking that would be activated in a GW – involves the algorithmic processing of symbols. Such symbols are likely implemented by neural networks, and other aspects of thought, such as sensory processes like facial recognition, may be characterized by the connectionist approach. Contra Fodor, who sees connectionism as deeply mistaken, on my view the proponent of LOT should look to connectionism with great interest, as it seeks to uncover the neurocomputational basis of thought. But if LOT is correct, a more radical form of connectionism – eliminative connectionism, which envisions no place for LOT or mental representations – must be rejected.

However, we are nowhere near the final word on the brain. Who knows, perhaps a singularity will lead us to reconceptualize everything. And even setting aside the singularity, there seem to be intrinsic limits to any scientific approach to the brain. For instance, science cannot tell us whether consciousness is entirely a physical or even computational phenomenon, or whether it is something more. This is a philosophical issue.

And notice that the success of computationalism does not entail that physicalism is correct. Physicalism holds that everything is made up of the features and kinds of matter that a final physics says are fundamental, such as fields and strings. But one way to quickly see that computationalism and physicalism could come apart is to consider the possibility of being in a simulation: if this is the case, reality is computational but not physical in the way we thought. The entities in the simulation are ultimately made up of bits, not particles, fields or strings. In a series of recent papers I urge that if one considers physicalism in light of debates over the nature of mathematical entities and substances, physicalism turns out to be deeply problematic.

3:AM: And finally, for the readers here at 3:AM, are there five books that you could recommend that would take us further into your philosophical world?

SS: Earl Conee and Ted Sider, Riddles of Existence: A Guided Tour of Metaphysics.
Jaegwon Kim, Philosophy of Mind.
John Heil, The Universe as We Find It.
Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology.
David Chalmers, The Conscious Mind: In Search of a Fundamental Theory.


ABOUT THE INTERVIEWER
Richard Marshall is still biding his time.