Vernor Vinge interviewed by Richard Marshall.


3:AM: I'd like to start by asking you about some contemporary ideas in philosophy that are really relevant to your work. You talk about the idea of a "technological singularity." Before we proceed, perhaps you could give us the various definitions associated with this idea.

Vernor Vinge: For the sake of discussion, I see the Technological Singularity (that is, the rise of superhuman intelligence via technology) as separable into five trajectories:
The Artificial Intelligence trajectory: We create superhuman intelligence in computers.
The Intelligence Amplification trajectory: We enhance human intelligence through human/computer interfaces - that is, we achieve Intelligence Amplification (IA).
The Biomedical trajectory: We increase human intelligence by improving our own neurological capabilities.
The Internet trajectory: Humanity plus the nonhuman resources of the Internet together become a superhuman being.
The Digital Gaia trajectory: Reality "wakes up" as the network of embedded microprocessors becomes a superhuman being.

3:AM: So one thing being discussed at the moment is consciousness and cognition. Eric Schwitzgebel and others have thought about hooking up computer technologies to biological systems like our own minds in order to boost intelligence. You've written about this in your work. Can you tell us how realistic this is as a possibility in the near future and what we should value in this?

VV: Much of human-computer interface work is in support of this trajectory - though there could be dispute as to how intimate a connection must be before it qualifies as creating a new category of being. Intelligence Amplification is one of the three trajectories (see above) that involve the ongoing participation of humans. For many, IA is especially attractive because we ourselves can remain active players.

3:AM: I guess I'm thinking about whether the enhanced intelligence and other capabilities that this kind of hook-up might bring would have both positive and negative results. Could you point to the books, authors, and films, alongside your own, that give answers to this question, and say what your own thoughts are at the moment about this?

VV: My own stories 'Bookworm, Run!' and 'Marooned in Realtime' are about this possibility. The earliest story I know about the idea is Poul Anderson's 'Kings Who Die'. Charles Stross' superb novel Accelerando takes on IA and a number of other Singularity themes. I've heard that the web series H+ is to look at a very dark scenario involving IA. Occasionally I run into folks who are negative about Intelligence Amplification because they believe that our evolutionary heritage ("bloody in tooth and claw") would make us less trustable than pure machines. Personally, I think humans have a lot of potential for moral improvement; both the IA and the Internet trajectories look very attractive to me.

3:AM: One thing connected with the idea of the technological singularity is the idea that some time in the future we're going to be outmatched by technology. Computers are going to be smarter than us and the consequences of this are kind of hard to predict.

VV: But note that pure machine scenarios are just one or two of the possibilities (see list above) for the Singularity.

3:AM: Yes, there are a range of ideas connected to this. There's the hard takeoff idea where you think it might happen in the space of 100 hours. And there's the soft takeoff where it happens over a period of years, maybe even decades. And all this is linked to the mathematics of exponential growth. Have I got this right, and can you tell us about this view, how plausible it is, and which of the two views strikes you as the one most likely to happen?

VV: I think all the Singularity trajectories will mix together. (Separating them is helpful for sensible discussion though, especially if some have strong results before the others.) The human-oriented trajectories give me reason to think that we can guide or at least influence the outcomes. For example, the Internet trajectory seems like a plausible soft takeoff. The others might be hard or soft - perhaps depending on whether the context is a military arms race.
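To make the hard/soft distinction concrete, here is a toy back-of-the-envelope model (an illustrative sketch only; the constants C_0, \tau, and r are assumptions for the example, not figures from the interview):

```latex
% Toy takeoff model (illustrative, not Vinge's own formulation).
% With a fixed doubling time \tau, capability grows exponentially but gradually:
C(t) = C_0 \, 2^{t/\tau}
% Soft takeoff: \tau stays roughly constant (years to decades per doubling).
% Hard takeoff: each generation of intelligence shortens the next design
% cycle, \tau_n = r\,\tau_{n-1} with r < 1, so the time to run through
% arbitrarily many generations is finite:
\sum_{n=0}^{\infty} \tau_0 \, r^n = \frac{\tau_0}{1 - r}
```

On this reading, the "100 hours" scenario corresponds to a small r; the decades-long scenario corresponds to r near 1.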

3:AM: And again, this is something you have written about in your books. Can you say how you treated the ideas and whether, in the course of writing over the last three decades, you've changed your mind at all?

VV: My overall opinion on these issues hasn't changed very much over the years, but there are some changes in emphasis I would make: In the 1993 paper, I did not have Digital Gaia on my enumerated list of trajectories (though the possibility was mentioned in passing). Also, in the 1993 paper, I should have made the point that unrelated disasters could derail this tech progress. For instance, a large nuclear war with MAD strategy could be a show stopper, and not just for this discussion. (I have a talk on this, 'What if the Singularity Does NOT Happen.')

3:AM: Connected to the last question is the idea of the manifest image of humans, the way we see ourselves and conceive of ourselves. Now in a way, and this is something you write about in Rainbows End, for example, this is a profoundly disturbing possibility. The emergence of superintelligence that leaves the human race behind makes any idea of human progress pretty redundant. And the idea that we are the dominant species who can grasp more than anything else in the universe is blown away. It raises serious questions about the value of our values. So much seems invested in the self-image we have of ourselves. As Nietzsche and the naturalist philosophers point out, the image was always an illusion. We have no free will, they say, and our values and thoughts are really just post hoc rationalisations. So what's the problem? Do you think the singularity you discuss is a more profound challenge to the illusions of the human manifest image than any before?

VV: Yes, I think it is a more profound challenge than any before in human history. But it isn't deeper than events on the scale of the history of life on Earth.

3:AM: I guess the idea of AI is also linked with the singularity. We've seen this idea treated brilliantly all over the place in films and books. Can you say which of the treatments you really find satisfying and stimulating, and also how you treat the idea and its implications in your own work? How far are your books and stories entangled with and responding to others, and vice versa?

VV: There is great entanglement! For us writers, this has been a conversation extending across the decades, where novels and short stories have played the role of sentences in normal conversation. For ideas about mind, my great inspirations were Poul Anderson, Arthur C. Clarke, and Isaac Asimov, but there were also individual stories by many, many authors that together were a great influence. Walter M. Miller, Jr., had a short story about a warbot ('I Made You' (1954)), almost an operations log, that illustrated programmed behavior finally exceeding its design spec (plausible machine creativity). There was Murray Leinster's 'A Logic Named Joe' (1946) that described some of the most important features of the internet and search engines. (Actually, 'A Logic Named Joe' illustrates the problem of being too far ahead of the game. It should have influenced me, but instead it went right over my head.) And I'll bet that almost everyone has encountered the vision of Olaf Stapledon.


3:AM: Some people argue that the singularity you discuss might be something that makes no difference to us. We just won't notice it. So these people point to all the billions of ants in the world and note that they haven't noticed us at all. So, is all this stuff about the end of the human and the downgrading of our image etc. beside the point? For all we know, the singularity happened last Tuesday and nothing happened from our point of view. Does this idea grab you? After all, if the singularity is like a black hole in physics, where over the rim there lies we know not what and cannot ever know, then how could we know?

VV: The notion that the Singularity would be invisible or secret is intriguing. (As a science-fiction trope, the "Invisible Singularity" is especially nice - another tool we writers use to make human-scale hard SF stories. Other tools are: big disasters that slow down progress and allow normal SF; magical assumptions that the Singularity can't happen in certain physical zones of the universe.) In the event, I think the Singularity will be strikingly evident (even if, say, the ants analogy turns out to be a good fit). One way of considering this question is to look at it in terms of each of the five trajectories I list above. For instance, if the Singularity grew out of IA, then you might wake up one morning with a proof of the Riemann Conjecture that is as obvious to you as balancing your cheque book had seemed a few weeks earlier.

A pure AI Singularity might be harder to perceive - especially if the AIs wanted to keep us in the dark! But I can see plausible motives for such Minds to have a very visible human affairs department (see below). In any case, I don't see any strong reason for the Minds to be secretive. Very likely, spectacular physical effects and tech breakthroughs that no ordinary human understood would abound. In fact, a long-term increase in claims of an "Invisible Singularity" would probably be a defensive measure on the part of Singularity enthusiasts - in the face of no Singularity!

3:AM: There's an idea that the super smart intelligences are pretty dumb even though they have much vaster processing powers than we do. So the argument is that even though we can build chess-playing computers that can beat a human at chess, playing chess is a task that doesn't require the kind of thinking humans use. They have brute memory power far in excess of the human, but they can't draw on a Capablanca heuristic (because they'll never need to evolve tactics to overcome computing limits). So even though the machines will be able to do loads of things faster and better than we can, there will be blanks. There might be no consciousness, for example, and without it they might end up not being so far ahead as we might have imagined, say. Is this idea a possibility, and what would you say to those who think there are reasons for thinking that AI is in principle not going to be possible?

VV: Time will tell. For human-equivalent intelligence, I think the Turing test is still the best (at least if we take it in the general form that Penrose describes in his book The Emperor's New Mind, at the end of his (generally skeptical) discussion of the Turing Test). It's interesting to track progress in pure AI. As goals are met, arguments such as that about chess-playing are raised. For instance, Watson's success on Jeopardy! is also clearly short of humans' general abilities. What is the ultimate residuum, if any, of this distillation process? That is an extraordinarily interesting question, whether one is a skeptic or not, and there seems to be a good chance that we are within years of having some kind of answer.

3:AM: Another thing you're interested in is the idea of conscious organisations and the way that, say, a corporation or a whole system of government might become conscious. What are the implications of this? In some studies of what ordinary folk think, there are suggestions that people think that an organisation like Google, for example, can plan but not feel emotions.

VV: My intuition is that many emotions arise naturally when a real-time program must entertain multiple, prioritized goals (often with very different deadlines). In fact, pathologies of human psychology (OCD, bipolar, phobic lockups, obsessions) may reveal things about methods for handling multiple prioritized goals.
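A toy sketch of that intuition (illustrative only; the goal names, the hyperbolic urgency rule, and the "anxiety"/"obsession" readouts are invented for this example, not Vinge's model): a real-time loop that keeps re-ranking prioritized, deadlined goals naturally yields global scalar signals that behave a little like emotions.

```python
# Toy sketch: emotion-like signals from a scheduler juggling
# multiple prioritized goals with very different deadlines.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str        # hypothetical label, invented for illustration
    priority: float  # intrinsic importance of the goal
    deadline: float  # absolute time by which it must be done

def urgency(goal, now):
    """Urgency rises hyperbolically as the deadline approaches."""
    remaining = max(goal.deadline - now, 1e-6)
    return goal.priority / remaining

def step(goals, now):
    """One scheduling step: choose what to attend to, summarize 'affect'.

    Total pressure across all goals resembles anxiety; a single goal
    dominating the ranking resembles obsession.
    """
    ranked = sorted(goals, key=lambda g: urgency(g, now), reverse=True)
    focus = ranked[0]
    anxiety = sum(urgency(g, now) for g in ranked)
    obsession = urgency(focus, now) / anxiety  # share of attention, 0..1
    return focus, anxiety, obsession

# As the near deadline approaches, 'anxiety' spikes and the scheduler
# locks onto that goal, crowding out the long-horizon one.
goals = [Goal("meet near deadline", 1.0, 10.0),
         Goal("long-term project", 5.0, 1000.0)]
for now in (0.0, 9.0, 9.9):
    focus, anxiety, obsession = step(goals, now)
    print(f"t={now}: focus={focus.name!r}, "
          f"anxiety={anxiety:.2f}, obsession={obsession:.2f}")
```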

3:AM: What about the idea that if we develop biologically using our knowledge of the genome and so on, we can keep ahead of the machines? Indeed, one idea is that we can keep in step with the soft takeoff and even the hard takeoff of the singularity if we become superintelligent. And if that is possible, why would the superintelligent future human not want to stop the machines from rising? If they could know what was happening, why not just close it down every time it starts to look likely? Why wouldn't vigilance and power stop the singularity?

VV: This might be possible, though I think it would be very difficult, ultimately for some of the same reasons that ordinary humans couldn't stop this progress. The long-term issue is that intelligence can probably run on a variety of substrates (bio, silicon, ...), and there are probably non-biological substrates for mind that allow much faster and larger minds. Thus the great virtue of the IA, Internet, and Biomedical trajectories is that they provide us humans a means of guiding the transition and ensuring the survival of both physical humans and minds of a human kind.

3:AM: These scenarios and the ones in your books are pretty depressing, aren't they?

VV: No. Disasters related to this progress are possible, but they are not the most likely bad things that could happen to the human race. If the Singularity were something we thought was going to happen 100,000 years from now, in a remote future when human striving finally led to our becoming or creating something that transcended us - then I think it would be a vision that most people would have warm and happy feelings about. It is the possibility that it may happen before you reach retirement age - that is what's unsettling.

3:AM: Is it because trying to stop this future would require repression on a scale worse than the predicted future that you don't think we should try to stop it happening? Or do the good things coming from it outweigh the bad? Or is it because you just think it's going to happen no matter what we do?

VV: Assuming the technology of the Singularity is possible, I don't think that any tyranny can stop it. Basically, continued incremental improvement in automatic computation is such an immense win for almost all human endeavors (artistic, intellectual, economic, military, scientific, ...), that only nuclear war or equivalent end-of-civilization events could stop or slow this progress. I think the Singularity would probably be a very good thing for humanity, so the main goal is to avoid existential threats (like nuclear war) and make progress as safe as possible.

3:AM: Your new book - what's it about, and how does it take forward your extraordinary visions of our futures? You invented the term 'cyberspace'; are there new concepts coming out of this new work?

VV: Alas, I didn't invent the term "cyberspace". Back in 1981 I had a story, 'True Names', that took place in cyberspace - but which I called "the Other Plane" (thus illustrating Mark Twain's distinction between lightning and a lightning bug).

I think that my second-from-most-recent novel, Rainbows End, does a good job portraying a plausible 2020s. My most recent novel, The Children of the Sky, is an adventure set thousands of years from now, and thousands of light years from here. It has some things to say about issues of mind, but it's looking at our present situation from a great and fantastical remove.

3:AM: In your books, politics, social issues, and moralities are inevitably part of the futures you imagine, even if they are broadly drawn. How important is morality in these future worlds, and do you think the technological worlds you consider require new thinking about what it is to do right and wrong? If the new technologies are just far, far superior to us, won't our moral systems and value systems generally seem otiose and pathetic in relation to these supercreatures? And won't that threaten how we behave and shatter our sustaining belief systems? Will we survive?

VV: I'm very attached to some fairly conventional notions of right and wrong. I hope that they apply, perhaps in some generalized form, on larger fields of play. I imagine two areas affected by oncoming events:
For the last few hundred million years, we (metazoan life on Earth) have depended on relatively high boundaries between individuals. When humans showed up to think about this situation, "self" issues were a large part of the resulting ethics. For much of the last 150 years, the general perception of evolution has been one of bloody confrontation. I think we're entering an era where self, identity, and mortality will be reexamined. (The simple-minded, reductionist view of mind as a computational process leads to all sorts of questions and alternative views on these issues.) Recasting ethics in a world of labile, variable minds is a project that extends beyond our normal human horizon, but we stand at the beginning of the process.

As a species, we humans are a very homogeneous lot. There is the suspicion that modern humans are the surviving first movers, having eliminated - except for limited interbreeding - the alternative models. On the other hand, in the last couple of thousand years technology and markets have immensely enriched humankind. These developments rely on very high levels of cooperation that exploit what different abilities and advantages there are. I listed five trajectories to the Singularity above. I think they can all be successful, but they may result in very different styles of thought. This time, I don't think the first mover will wipe out the rest. The costs of living are low and the profit from cooperation is very high. (Karl Schroeder's novels do a great job with this notion of species of mind.)

3:AM: It's refreshing to have a top maths guy and a top computer guy also being a top literary guy. I'm interested in how the imagination works in all these fields. There are some people who want to say that there's a hierarchy of the imagination, with maths at the top and, if you've got time, arts way down near the bottom. What's your take on this as someone pretty much at the top of both these domains?

VV: The hierarchy argument could as easily be made in the other direction. For different classes of problems there are different ways of thinking. I guess it's a version of the answer to the preceding question. A wonderful thing about large human populations with a good standard of living is that even weird combinations of thought styles (fatal in earlier times) can still be of use. Now, with the Internet and computers, much more spectacular improvements are possible.


ABOUT THE AUTHOR
Richard Marshall is still biding his time.