Economics, Metaphysics and Cognition

Interview by Richard Marshall

'The atomistic understanding of mind is not, in my view, even generally useful as an idealisation. It profoundly misrepresents what minds are. Following Dennett and many other current philosophers, I’m persuaded that minds are constructs built under social pressure. People are socially required, in order to usefully contribute to joint projects, to be comprehensible and relatively consistent, but also to manifest some identifying ‘signature’ patterns in their self-presentations that distinguish them from others.'

'Biology tends never to yield bright lines between conditions like ‘has language / does not have language’. My co-investigators and I are empirically studying one piece of this tangle of relationships, the prospect that elephants, like humans, manage uncertainty by using joint representations to transform it into risk. We’re doing this by adapting the experimental design we use with humans in our experimental economics lab. Elephants can do enough basic arithmetic to learn how to consistently distinguish uncertain prospects by magnitude differences, and conditional probabilities don’t baffle them.'

'‘Theory’ means different things in different sciences. In some it’s customary to refer to very general hypotheses as theories. But in economics theory means: a structure that can be used to identify data and estimate parameters. By direct implication, then, a theory is a filter, a device to force consistency in interpreting observations so that knowledge can, if we’re lucky and careful, accumulate.'

'... the most fruitful and successful work in economics is social science, not behavioural science. Insofar as economics has a fundamental class of phenomena to study, these have always been, and remain, markets. The concept of a market applies very broadly, to any system of networked information processing about relative marginal values of flows and stocks of resources that agents in the network seek to control. Markets of goods and services that are priced using money are only one, hugely socially important, kind of market. But where the question at hand is concerned they all have something in common: they are social phenomena.'

'Friendship is among the greatest of human goods, because its true manifestation is helping specific others defend and extend what is special about them.'

'I see no reason to regard ... Sellarsian / Dennettian work as metaphysics, since it isn’t trying to discover the general structure of reality. It’s cognitive anthropology, aiming to enrich and empower our knowledge of ourselves. Art does that too, though it’s only one of many enormously valuable things that art does. But enhanced knowledge of the objective structures of the world that are independent of human practical and moral concerns is delivered only by empirical science and mathematics. So thought the positivists, and they were right (about that).'

Don Ross is interested in economics; economic methodology; experimental economics of risk and time preferences, addiction and impulsive consumption; gambling behaviour; addiction policy; cognitive science; game theory; philosophy of public policy; philosophy of science; and scientific metaphysics. Here he discusses the links between sociology and neuroscience and the potential role of economics, evolutionary game theory, ‘mindshaping’ with Conditional Game Theory, full human-style consciousness (FHSC) and elephants, why imposing a single utility model on empirical data, or running horse races between models, is never the best methodology, why economics needs sociology and anthropology, welfare economics, developing better roads in South Africa, economics and the pandemic, development economics and neuroscience, addiction, science and free will, ontic structural realism, and 2008 and macroeconomics.

3:16: What made you become a philosopher?

Don Ross: My first encounter with philosophy was when I got a summer job while I was in high school to research and write up biographical annotations for the first volume of the Collected Papers of Bertrand Russell, which was being produced by Nick Griffin and his team at the Russell Archives at McMaster University. The job lasted over two successive summers, and much of it involved poring over Russell’s early Cambridge essays and letters. My imagination was captured by the richness and depth of the intellectual environment in which the young Russell had lived – I hadn’t imagined that there had ever been, or could be, a world like that. So I read everything I could get my hands on about and by Russell, and I have little doubt that Russell worship in those late teenage years shaped my cast of mind and general stance toward the world at least as much as any other influence I subsequently fell under. So when I went to university – the University of Western Ontario – I naturally included Philosophy among my subjects. But I had no intention of becoming a philosopher; my sights were on going to law school after my Bachelor’s degree. That didn’t happen, because my admission to law school coincided with a period in which I had taken on a strongly “anti-establishment” set of attitudes, a mixture of hippie and punk. In the end I just didn’t show up for the first day of law school, and instead took a front-line social service job helping prisoners reintegrate into society. The work had its rewards, but I quickly felt bored intellectually. So I was secretly relieved when the government pulled the funding for our programme after a year, because that freed me to go to graduate school without having to willfully desert my colleagues, most of whom were ex-convicts. In fact, we hung together on a volunteer basis and managed to get a new programme funded and off the ground within a couple of years – with me in the role of a founding Board member instead of a staffer. This was the Raoul Wallenberg Centre for Young Offenders, which still exists in London, Ontario.

My first intention with respect to grad school was to study ichthyology, and I in fact registered in a Master’s programme in Florida to study fish. But a few months before that was due to start I read Doug Hofstadter’s book Gödel, Escher, Bach, and was captivated by the themes and ideas it pulled together. I had no idea how to follow up on it, knew no one I thought could advise me on how to do so, and was too intimidated to write directly to Doug (who, as I found out when I met him 10 years later, is a lovely and approachable person). But, just because I had studied philosophy in my BA, I felt I was “allowed” to write to his philosopher co-editor on The Mind’s I, Dan Dennett. I explained to Dan that I was about to start on the path to becoming an ichthyologist, but was afraid I would find it boring because I’d become obsessed with AI and the relationships between thought and mathematics. (Russell was also still whispering in my ear.) I asked Dan whether I should instead study Computer Science, and if so where – but also told him that I’d never touched a computer. (This was 1985.) Dan replied by pronouncing one of the most influential single sentences anyone has ever directed my way: “If you become a philosopher you’ll be able to read anything you want and call it work.” That struck me as a sentiment which couldn’t possibly be overcome by anything else anyone could say. So I withdrew from the ichthyology programme. And as Western Ontario was one of the top schools in the world for philosophy of science, and featured a cognitive science track with the Psychology and Computer Science Departments; and as London, Ontario, is where I already lived and was busy creating the Wallenberg Centre, with minimal disruption to the rest of my world I registered there. It was my way of hedging intellectual bets, and that strategy was reflected in my also taking as many courses as I could in economics.

3:16: You’re interested in the philosophy of economics, of neuroscience and philosophy of mind, behavioural science and how we should understand scientific explanations. You’ve recently written about the links between sociology and neuroscience and the potential role of economics in this. You identify a problem with the way this has been done up to now – applying game theory and the economics of networks to the social neuroscience project. So can you first sketch for us the contours of the problem as you see it, particularly its attempt to model mindreading?

DR: The dominant tradition in Western behavioural and social ontology is basically atomistic. That is, we begin with individuals who have properties they carry around with them across contexts, including social contexts. A number of these properties are relevant to constructing their utility functions. These include preference orderings, risk attitudes, and discount rates over future welfare. These in turn determine the roles that the individual will play in determining the equilibria of games with others in which she’s involved. For many analytical purposes, particularly those that most interest economists, this is a useful idealisation. But we should remember that it’s an extreme idealisation, which is to say that it’s false. In fact, the most important properties of people as agents are sculpted by social influences. As Wynn Stirling’s work shows, it’s possible to formally represent that kind of influence without having to sacrifice the power or clarity of standard game theory and its associated solution concepts. So game theory need not be encumbered in its applications by social atomism. However, the tendency of analysts to maintain atomism is strongly encouraged by the further deeply entrenched philosophical idea that preferences are generated by minds, and that a mind is fundamentally private to an individual. On that assumption, others learn about the mind – and about its preferences – by making inferences from behaviour (including, of course, linguistic behaviour and other semantic expression). That is mindreading.
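
To fix ideas, the atomistic idealisation can be written down in a few lines: an agent is just a bundle of parameters, here a CRRA risk-attitude coefficient and an exponential discount factor, assumed to travel with her unchanged across every context. This is a minimal illustrative sketch of the idealisation under discussion, not a model drawn from Ross’s or Stirling’s work, and the names and numbers are invented.

```python
# A toy "atomistic" agent: a fixed risk attitude and discount factor carried
# unchanged across every context. Illustrative names and numbers only.
import math
from dataclasses import dataclass

@dataclass
class AtomisticAgent:
    r: float       # CRRA risk-aversion coefficient (r = 0 is risk neutral)
    delta: float   # per-period exponential discount factor, 0 < delta <= 1

    def utility(self, x: float) -> float:
        """CRRA utility of a monetary outcome x > 0."""
        return math.log(x) if math.isclose(self.r, 1.0) else x ** (1 - self.r) / (1 - self.r)

    def discounted_eu(self, lottery: list[tuple[float, float]], t: int) -> float:
        """Expected utility of a lottery [(prob, outcome), ...] paid t periods ahead."""
        eu = sum(p * self.utility(x) for p, x in lottery)
        return (self.delta ** t) * eu

# The same parameters are assumed to apply whether she is buying a house
# or bargaining in a game:
agent = AtomisticAgent(r=0.5, delta=0.95)
print(agent.discounted_eu([(0.5, 100.0), (0.5, 16.0)], t=2))
```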

The atomistic understanding of mind is not, in my view, even generally useful as an idealisation. It profoundly misrepresents what minds are. Following Dennett and many other current philosophers, I’m persuaded that minds are constructs built under social pressure. People are socially required, in order to usefully contribute to joint projects, to be comprehensible and relatively consistent, but also to manifest some identifying ‘signature’ patterns in their self-presentations that distinguish them from others. This complex pressure obviously creates some tension: a person’s behaviour must be similar to that of others in her reference community, but not quite identical to anyone else’s. This reflects the remarkable cognitive ecology of a hyper-social species that relies on specialisation of labour but lacks a genetically programmed caste structure of the kind we see in ants. How do humans actually manage this tension? The technology we use is what Austin Clark called mindshaping, an idea that has been most thoroughly explored by my former student Tad Zawidzki. We encourage new people – children and adolescents – to construct elaborate self-histories in which past choices can be understood as partial anticipations of current ones, and which the agent can use to produce fluent rationalisations on demand. The construct is built in public, which is its point, in negotiation with the shifting cast for whom it is performed and who help its owner interpret it, and depends on social acknowledgment and reinforcement for its integrity over time. The resulting virtual object is what a mind is. Despite being virtual, minds are vividly real. A person’s mind is her most important ecological adaptation, in defense of which she will therefore risk her life if necessary. But it is not identical to her brain, it does not somehow dwell ‘inside’ her, and it is not hidden and private, waiting to be ‘read’ on the basis of clues it generates.

Putting all this together: if you’re doing social neuroscience then it doesn’t do to model the brain as if it produces mind in isolation from the social processes you’re trying to shed light on. You must model the brain as shaped by the (socially sculpted) mind, not the other way around. And then, if you’re going to apply game theory to model these processes – which is often the best tool for the job – you must drop the idealisation about fixed preferences that comes in so handy if you’re modeling a person’s strategy when she’s trying to buy a house or get her child accepted into college. You’ll need to have recourse to Stirling’s conditional game theory, or some yet-to-be-invented alternative technology that does a similar job.

3:16: Evolutionary game theory is very important to your understanding of economic modeling and you’ve refined early versions to include ‘conditional games.’ Can you tell us something about all this and what in particular conditional games adds?

DR: Conditional games are not, in their basic construction, evolutionary games, though they can of course be modelled dynamically like any game, and as played over fitness payoffs instead of utilities. Evolutionary game theory is the tool we need for understanding the strategic pressures that have driven genetic and cultural evolution at long-run scales. By ‘long-run scales’ I mean scales that abstract away from feedback by individuals, where individual influences are modelled as statistical noise. But a subject in which I have long been interested is the strategic aspect of the dynamics that give rise to distinct individual agents, conceived as normatively central by themselves and, if they’re fortunate, by others. Evolutionary game theory should be used to understand why there should be any such dynamics in the first place. But it isn’t the tool for modelling the dynamics themselves. In Economic Theory and Cognitive Science: Microexplanation (2005), I sketched in general terms what properties an adequate modelling technology would need to have. The key spec was that it would have to allow individual agents’ preferences to dynamically influence one another while allowing for enough stability to make equilibria – including, specifically, Nash, Bayes-Nash, and Quantal Response Equilibria – formally specifiable, and identifiable in data. It was then my intention to build such an apparatus myself. As I saw that that would be a major undertaking, I expected to spend at least a decade preoccupied with the mission. Then I read Wynn Stirling’s Conditional Game Theory (2012) as soon as it was published, and discovered that Wynn had already laid down the core foundations. This was a gratifying moment for me, to say the least – I’d been given ten years back to work on other things!

I say that Wynn had built only ‘core foundations’ because the 2012 version of CGT was still missing some key pieces. It modelled one-way influence of an individual’s preferences on another individual’s preferences beautifully, but its mechanism for modelling reciprocal influence was a clumsy patch. And in Wynn’s initial presentation, CGT was positioned to serve social preference theory, which, for reasons well articulated over the years by Ken Binmore, is an approach that exogenously hand-wires into models what the models ought to endogenously generate, namely, accommodations between normative selfishness and normative groupishness. Finally, CGT 1.0 could handle only preference orderings as utility functions. Any version of game theory that’s intended to be applied to real empirical human choice data must incorporate uncertainty, and therefore subjective expected utility.

In 2016 I co-organised a workshop in Rome to which we brought both Wynn and Tad Zawidzki, who had published Mindshaping three years previously. Wynn saw right away that mindshaping was the general form of application to which his intuitions that drove CGT were naturally adapted. We started working on that adaptation soon after, and we’ve lately made considerable progress, as two recent papers show. Thanks to Wynn’s efforts, reciprocal influence is now directly baked into the foundations, and in a forthcoming paper he and I, along with Luca Tummolini, have expanded reciprocal influence networks to include multiple agents, and have worked in rank-dependent expected utility theory as a general format for representing risk. As soon as COVID-19 restrictions allow us to get back into the lab, with my regular co-authors Glenn Harrison, Andre Hofmeyr, and Brian Monroe, and a PhD student, Cuizhu Wang, I’ll be running experiments that we hope will generate data patterns we can identify using CGT 2.0. The applied goal is to represent norms as social structures that aren’t mere aggregations of individual utility functions with social preferences as arguments. That would represent a major milestone in the agenda I laid down in 2005, because among the key choices people make in constructing themselves as individual persons is: which norms do I support, and which do I resist?

3:16: Your solution involves something you call ‘mindshaping’ with Conditional Game Theory. So can you walk us through this and say why this is a better approach?

DR: Not having looked ahead while composing my previous answers, I see that I’ve already largely answered this above. Using the resources furnished by the biological brain, along with evolving external technologies of representation, human minds reciprocally create, stabilize, disrupt, and sculpt one another. That is mindshaping. It can’t be modeled using game theoretic models that treat preferences as fixed and exogenous to strategic interaction. Conditional game theory extends standard game theory so as to allow for endogenous preference change under strategic pressure, while retaining the solution concepts of standard game theory.
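
As a toy illustration of endogenous preference change, and emphatically not Stirling’s actual formalism, one can let each agent’s effective utilities be a weighted blend of her intrinsic utilities and the other agent’s current effective utilities, iterated to a fixed point before any standard solution concept is applied:

```python
# Toy illustration (not Stirling's formalism) of reciprocal preference
# influence: each agent's effective utilities blend her intrinsic utilities
# with the other's current effective utilities, iterated to a fixed point
# before standard solution concepts are applied. Names are invented.
import numpy as np

def socialize(u_a, u_b, w_ab, w_ba, iters=100):
    """u_a, u_b: intrinsic utilities over a common set of outcomes.
    w_ab: weight A places on B's effective utilities, and vice versa."""
    va, vb = u_a.copy(), u_b.copy()
    for _ in range(iters):
        va_new = (1 - w_ab) * u_a + w_ab * vb
        vb_new = (1 - w_ba) * u_b + w_ba * va
        va, vb = va_new, vb_new
    return va, vb

# Two outcomes; A intrinsically prefers the first, B the second.
u_a = np.array([1.0, 0.0])
u_b = np.array([0.0, 1.0])
# Moderate mutual deference shifts both effective orderings toward each other.
print(socialize(u_a, u_b, w_ab=0.4, w_ba=0.4))
```

With influence weights strictly below one the iteration converges, so the ‘socialised’ utilities are well defined and ordinary equilibrium analysis can proceed on them.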

3:16: Those interested in the potential of non-humans to acquire full human-style consciousness (FHSC) – workers in AI for example, neuroscientists, and philosophers of mind – should therefore be interested in your work on elephants. What is it about the neuroscience of elephants that helps us to get clearer about what FHSC entails – and do economic models get involved in this too? Can elephants acquire FHSC – and could AI robots?

DR: Elephants are among a small group of non-human species that appear, based on years of intensive field observations by scientists such as Cynthia Moss and Joyce Poole, to deal with uncertainty and risk through deliberation. This should intrigue students of human cognition, sociality, and evolution for a number of reasons. First, that capacity in hominins has been central to the rise of H. sapiens as the global apex predator and overwhelmingly ecologically dominant species. It’s not clear that we’re entitled to say we know that we took over the show because we’re the most intelligent animals, if by ‘intelligence’ we mean to be trying to refer narrowly to information processing capacity in brains. But there’s no doubt that humans coordinate representations, and therefore collective planning, on a scale that’s sui generis. So when we encounter another animal that manages collective choice through collective deliberation, that’s an important place to look for better understanding ourselves. Second, collective choice implies some restraint of individual motivation. That is arguably the fundamental wedge into accountability, which is in turn, I’ve argued (along with some others), the basis of personhood as a normative idea. Finally, the complexity of the elephant communication system is being gradually but persistently unlocked.

Application of deep learning algorithms to discover relationships between elephant syntax and elephant behaviour promises to greatly increase leverage into elephant semantics. Elephants clearly do displaced reference – something mainstream linguists once vehemently denied but now generally accept – and consequently their social negotiations can plausibly involve hypothetical and counterfactual states of affairs. I prefer to step around the question of whether we should hypothesise that elephants have ‘genuine’ language, because that quickly tends to degenerate into rival conceptual analyses. That elephants carry on sophisticated communication that regulates their behaviour is the relevant clear fact. Biology tends never to yield bright lines between conditions like ‘has language / does not have language’. My co-investigators and I are empirically studying one piece of this tangle of relationships, the prospect that elephants, like humans, manage uncertainty by using joint representations to transform it into risk. We’re doing this by adapting the experimental design we use with humans in our experimental economics lab. Elephants can do enough basic arithmetic to learn how to consistently distinguish uncertain prospects by magnitude differences, and conditional probabilities don’t baffle them. This work is just getting underway with a small self-foraging but tame herd in South Africa. After calibrating our design and instruments in this setting, which amounts to a lab setting, we aim to take it into the field with wild elephants.

As for AI-guided robots, I’ve never seen the slightest persuasive reason that we couldn’t in principle build them in such a way that they developed personhood, provided they were (i) autonomous pattern discoverers; (ii) connected to the world via affordances and not just self-contained syntax; and (iii) motivated to compare and modify their individual states under pressure from social goals and processes. We might destroy ourselves before we ever actually create such synthetic persons, because it remains an extremely distant challenge and current robots aren’t remotely close to any of the above conditions except (ii) to a crude extent. But if we escape self-destruction over the next couple of centuries, I expect that there will eventually be persons that needn’t be socially and self-constructed by DNA-based organisms.

3:16: You’ve looked at economics and asked whether, as it has developed and become more sophisticated – particularly in its mathematical modeling – there are any deep empirical regularities to which the increasingly refined techniques have allowed improved access. So are there, and if there are, what are they?

DR: It would be an ambitious enterprise to try to compile anything approaching a complete list. What I have in mind here are not epic propositions about causes of growth at the national scale or principles for organising society. Issues at that level are never merely matters of economics. But empirical identification of well-specified micro models has established a good deal of knowledge that couldn’t possibly have been sorted out by a priori reasoning or by mere observation without theory. 

I’ll give some examples. We know a lot about the trade-offs between optimising expected seller’s surplus and optimising buyer participation in choosing amongst auction mechanisms. We know that if we’re trying to estimate the sustainability of an anti-poverty intervention, where by ‘sustainability’ we mean continuation of effect beyond the explicit public investment, then household consumption expenditure, not mean income or the poor’s share of GDP, is the best proxy to track. We know how to figure out, with remarkable precision, how many competing firms are optimal in various markets if we want to minimise incentives to collude away consumer surplus. We know a lot about how to target flows of benefits from new transport infrastructure, depending on which kinds of maintenance are fiscally efficient for different materials. We know a lot about how to design dynamic pricing algorithms to expand the extent of markets for services in which there are varying price elasticities in different segments. A rule of thumb is that wherever you see economists being widely hired as consultants by private companies, there is real knowledge being applied that the companies in question don’t think they can efficiently discover by mere strategic analysis or trial and error. I could go on. The efficiency of modern markets, such as it is, owes a great deal to modelling by economists – and much more to microeconomists than to macroeconomists.
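
One piece of that textbook machinery can stand in for the rest: under standard assumptions, a profit-maximising price in a market segment with own-price elasticity e (|e| > 1) satisfies the inverse-elasticity (Lerner) rule, so segment-specific elasticity estimates translate directly into segment-specific prices. The sketch below illustrates that familiar rule with invented numbers; it is not any particular firm’s pricing algorithm.

```python
# Inverse-elasticity (Lerner) markup: (p - mc) / p = 1 / |e|, so
# p = mc * |e| / (|e| - 1). Illustrative segment names and numbers.
def lerner_price(marginal_cost: float, elasticity: float) -> float:
    e = abs(elasticity)
    if e <= 1:
        raise ValueError("Markup pricing requires elastic demand (|e| > 1).")
    return marginal_cost * e / (e - 1)

segments = {"price-sensitive": 4.0, "convenience": 1.5}  # |elasticity| by segment
for name, e in segments.items():
    print(name, round(lerner_price(marginal_cost=10.0, elasticity=e), 2))
```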

3:16: Why do you think that imposing a single utility model on empirical data, or running horse races between models, is never the best methodology, and does this tell us something about methodological approaches in science generally and economic science in particular?

DR: ‘Theory’ means different things in different sciences. In some it’s customary to refer to very general hypotheses as theories. But in economics theory means: a structure that can be used to identify data and estimate parameters. By direct implication, then, a theory is a filter, a device to force consistency in interpreting observations so that knowledge can, if we’re lucky and careful, accumulate. But this gives rise to a trade-off. If a theory has too little structure it doesn’t constrain data modelling tightly enough to do the job, and the empirical record will likely just be a pile of barely connected observations. If it has too much structure, either successful identification will be too rare or, if the structure has too many parameters, it can retain empirically extraneous elements that free-ride on the part that does the work. (The latter is the most serious problem with cumulative prospect theory.) I don’t see that there could be a general principle for judging this trade-off, which is worrying because then each way of making it tends to give rise to its own lineage of studies that evolves in splendid isolation from the others. A discipline can become a collection of methodological sects. Modelling at the individual level, or mixture modelling at the pooled level, the usual approaches our group uses, minimise the stakes on these subjective judgments.

Of course a mixture can only be a finite mixture, or at the individual level one must specify a model in order to estimate it, so the domain of judgment shifts to: which theories should be included in a mixture? That judgment, though, can readily be put under empirical pressure. For several years we usually included cumulative prospect theory (CPT) in our mixtures for estimating risky choice data, just because the literature had obviously selected that as a theory of interest. This was a nuisance, because it required us to always run procedures to endow subjects with money that they could then lose. But we have lately dropped CPT from our mixtures because in study after study it consistently failed to empirically perform, even when my colleagues Glenn Harrison and Todd Swarthout spent the heaps of cash needed to give it its best chance to shine. One could never reach that point by running horse-races between CPT and Expected Utility Theory (EUT), because CPT will always win such races thanks to its richer parametric structure. But CPT stops delivering the goods when it competes in an application with a sufficiently flexible specification of rank-dependent utility (RDU) theory. On the other hand, you hardly want to leave EUT out of your estimation set, because some subset of your subjects will actually be trying to conform to it. And we use multiple specifications of RDU so as to preserve parity of generality across mixtures. So in this area it seems that using fewer than three models simply can’t be defended as sound method. I see no reason why this kind of logic shouldn’t be persuasive across all sciences that deal with noisy data. (It would be stupid in physics.) I’d love to see it catch on in psychology, which has suffered from a general default practice of testing one narrow hypothesis at a time against the standard of arbitrary conventions of statistical significance.
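
A stripped-down sketch of that kind of finite-mixture likelihood for binary lottery choices might look as follows; the Prelec weighting function, the Fechner noise term, and all parameter names are illustrative assumptions rather than the group’s actual specification, and in a fuller model each component would carry its own parameters, estimated jointly by maximum likelihood.

```python
# Sketch of a two-component finite-mixture likelihood over EUT and RDU for
# binary lottery choices. The Prelec weighting function and all parameter
# names are illustrative, not the authors' actual specification.
import numpy as np

def crra(x, r):
    return np.log(x) if np.isclose(r, 1.0) else x ** (1 - r) / (1 - r)

def rdu_value(probs, outcomes, r, gamma):
    """Rank-dependent utility: rank outcomes best-first, apply Prelec weights
    to the decumulative distribution, take differences as decision weights.
    gamma = 1 recovers EUT's linear weighting."""
    order = np.argsort(outcomes)[::-1]
    p, x = np.asarray(probs)[order], np.asarray(outcomes)[order]
    w = lambda q: np.exp(-(-np.log(np.clip(q, 1e-12, 1.0))) ** gamma)
    dw = np.diff(np.concatenate(([0.0], w(np.cumsum(p)))))  # decision weights
    return np.sum(dw * crra(x, r))

def choice_prob(lott_a, lott_b, r, gamma, mu):
    """Logistic (Fechner) probability of choosing lottery A over B."""
    va, vb = rdu_value(*lott_a, r, gamma), rdu_value(*lott_b, r, gamma)
    return 1.0 / (1.0 + np.exp(-(va - vb) / mu))

def mixture_loglik(data, pi_eut, r, gamma, mu):
    """data: list of (lottery_a, lottery_b, chose_a) observations, where a
    lottery is a (probs, outcomes) pair."""
    ll = 0.0
    for la, lb, chose_a in data:
        p_eut = choice_prob(la, lb, r, gamma=1.0, mu=mu)   # EUT component
        p_rdu = choice_prob(la, lb, r, gamma, mu)          # RDU component
        p = pi_eut * p_eut + (1 - pi_eut) * p_rdu
        ll += np.log(p if chose_a else 1 - p)
    return ll
```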

3:16: Why do you think it vital that if we’re to understand the actual and ideal role of economics as part of a unified spectrum of sciences, one must go beyond understanding the boundaries between economics and psychology and bring sociology and anthropology into the picture?

DR: As I’ve written a 300-page book on this question, I think the answer is complicated! Trying to boil it down, I begin from the observation that there is a cluster of confusions around the role of individual people in economic theory and in the history of economics. First, economists’ widespread endorsement of normative individualism for purposes of welfare analysis has frequently led to assumptions that economic theory is descriptively individualist (i.e., essentially about individual choices and valuations). Second, economists have for many decades used a standard pedagogy that begins with a lone individual optimising investments of time, and then adds new agents one by one so as to build up a market. This happened because beginning economics students are typically learning applied calculus and economics at the same time, and the ‘Robinson Crusoe’ story has the benefit of getting calculus brought to bear where the point of it is most intuitively obvious. 

Now, then, if economics is fundamentally about individual optimisation under a budget constraint, then it looks as if economics is a branch of decision theory. Normative decision theory is ‘owned’ by philosophers, and descriptive decision theory is part of psychology. So on the social atomist understanding of economic theory, it seems that economics and psychology are addressing aspects of the same phenomena. In that case they ought to be brought into alignment with one another. That’s what the best-known strands of behavioural economics aim to do. What tends to happen then is that economic theory looks empirically inferior to psychology, so economics simply collapses into psychology. The work of Dan Ariely is an ideal illustration of this. The huge take-up and celebration of cumulative prospect theory, a theory that is inelegant, difficult to empirically apply, and very poorly supported by direct tests against choice data, is a leading symptom of what I’m talking about.

In fact, the most fruitful and successful work in economics is social science, not behavioural science. Insofar as economics has a fundamental class of phenomena to study, these have always been, and remain, markets. The concept of a market applies very broadly, to any system of networked information processing about relative marginal values of flows and stocks of resources that agents in the network seek to control. Markets of goods and services that are priced using money are only one, hugely socially important, kind of market. But where the question at hand is concerned they all have something in common: they are social phenomena. There is no market involving humans on Robinson Crusoe’s island until Friday shows up. Like other social phenomena, markets are mediated by much more than psychological differences between agents. Indeed, markets tend to put pressure on agents to partly transcend their psychological idiosyncrasies. Markets are mediated by exogenous material scarcities, of course, the main focus of standard economic models. But they are also mediated by power asymmetries, status competition, symbolic valuation, solidarity, social hatred, and collective narratives – the topics of other social sciences. So that is where alignment is truly in order.

The history of efforts at such alignment has shown only grudging progress, partly because of economists’ recurrent confusions about the role of individuals and individualism in their own models, but at least as much because of resistance to mathematical modelling in ‘humanistic’ wings of social science. Fortunately, there has recently been steady rapprochement along the boundaries between economics and other social sciences. Economists increasingly accept that power asymmetries etc are crucially important elements of the processes they aim to model. Ever more sociologists and anthropologists recognise that the most powerful form of social evidence is statistical, and that when it comes to handling statistical data economists are immensely far ahead of them. I look forward to a time when beginning sociologists will puzzle over the fact that the basic analytic toolkit they have to learn is called ‘econometrics’, and when history of economic thought courses (to the extent that these avoid extinction from being crowded out by the time needed to learn Stata) include Weber as a canonical figure and don’t present Marx as merely a ‘minor post-Ricardian’ (as Samuelson called him) with a bad theory of value.

3:16: You’re particularly interested in development economics – what are the key philosophically interesting issues raised by this branch of economics, in particular the fusion of epistemology with normative ethical concerns, and also questions about how people try to understand their behaviours and to integrate normative and factual judgments?

DR: I’ll assume that the intended referent of this question is welfare economics, because then the second part makes more sense. Of course methodologists of development economics have argued extensively over the relevant welfare measure in that field. There, I’m unmoved by most of the philosophical argumentation, because I’m persuaded on the basis of empirical work that the best performing proxy is household consumption expenditure. If that is rising, and if it rises faster as you move toward the poorer end of the distribution, your policy, whatever it is, is growing your economy sustainably (though not necessarily environmentally sustainably, which is simply a different, though obviously very important, issue). If it isn’t, I won’t believe that the policy is reducing poverty. And I don’t want you to show me averages without information about distributions.

But welfare measurement is much more complicated where wealthy populations are concerned. There, methodological problems are rife. You can’t define welfare improvement in terms of optimising expected utility, because, as Lara Buchak among others has argued very persuasively, there is no reason why people shouldn’t reasonably prefer to maximise rank-dependent utility. Glenn Harrison and I have also defended that view, for technical rather than philosophical reasons. And then what should we say about people whose choices don’t conform to any consistent welfare standard we can model? Should the government treat them paternalistically, by nudging or regulating them? 

I’m sympathetic to Robert Sugden’s arguments against this. On the other hand, there are a few personal goals that people say they have, such as saving against destitution in retirement, where it seems very strained not to take them at their word even though their choices clearly aren’t serving the goal very well. I think that governments can ethically try to correct such bad choices, not on paternalistic grounds but because insufficient household savings creates negative externalities for everyone in a country. Notice that macroeconomic policy is all about manipulating households’ and companies’ consumption and savings behaviour by fiddling with the incentive landscape, e.g. by changing interest rates or the money supply. I think that something has gone wrong somewhere if it’s argued that merely having a monetary policy represents objectionable nudging. The tension between allowing people to screw up as the price of letting them be responsible agents, and shepherding them to coordinate for the sake of social stability and cohesion, is the oldest and deepest one in political philosophy. And then of course there’s the fact that governments are not very reliable shepherds. If you let Donald Trump influence monetary policy, the monetary policy he’ll encourage is the one that puts the most loot in the asset portfolio of Donald Trump.

I’ll say just one normative thing about the relationship between development and welfare. People concerned about climate sustainability often argue that we need to stop trying to achieve development through economic growth. I of course see the point they’re aiming at. But they sometimes maintain that commitment to growth is simply an expression of a consumerist fetish cultivated by capitalists. In fact, it is impossible, because of the nature of money, to run recurrent public fiscal deficits without growth. I’ve never seen a single credible model that shows us how we might administer public finance without fiscal deficits and at the same time avoid exacerbating class inequality. Fiscal austerity is intrinsically anti-poor. So no one has a clue how you could stop promoting growth and avoid fiscal austerity. I favour a very generous universal basic income (UBI) as the single most important kind of welfare-promoting regime we could build. This couldn’t be financed in a zero-growth economy.

3:16: Can you tell us about your work on developing better roads in South Africa and how it throws light on general philosophical issues that arise with development economics?

DR: Let me say first that this work doesn’t have philosophical motivations. I’m a professor of economics, and my (now) two decades of modelling SA’s roads is what I consider “bread-and-butter” applied development economics. But of course one can mine just about anything for philosophical lessons, and that applies here. Economics as a discipline is more motivated by the goal of improving policy than by disinterested scientific curiosity. In that sense, as economic methodologists such as Ed Leamer and David Colander have emphasised, economics more closely resembles engineering than it does physics. I think that this point has not infrequently been lost sight of by philosophers of economics, and has led some of them to regard highly abstract theory, such as general equilibrium analysis, as being more central to other activity in the discipline than it really is. The majority of person-hours in economics are invested in the kind of enterprise represented by the road network optimisation project I’m conducting with my former PhD student Matthew Townshend.

Our aim is to prioritise road upgrade and maintenance schedules in such a way as to achieve two goals. First, the network must support the government’s constitutional obligation to ensure that every citizen has access to basic education and medical care. This means that no household should be more than 5 sealed kilometers from a primary school, a high school, and a clinic. Second, we seek to discover the network structure and characteristics that would make the largest contribution to South African economic growth given the limited budgets available to the SA Department of Transport and the SA National Roads Agency. Note that we are not asking “What share of South African public finance would optimally be allocated to the road network?”. Trying to answer that question would require us to have first agreed on a single-peaked national welfare function, and building that would in turn require heroic theoretical assumptions. We don’t attempt this because it would be largely pointless. The SA budgets for roads are set by political bargaining processes, as they are in all countries, not on the basis of general welfare analysis. But once this budget is set, it is possible to de-politicise its allocation to a considerable (but certainly far from complete) extent. In most countries pavement engineers exert decisive influence. Their objective functions tend to be driven by the goal of maximising the kilometer magnitude of road surfaces that are in good repair at any given time. In wealthy countries with relatively high marginal labour costs this arguably makes sense. In a country like South Africa, where there is pervasive unemployment and an extremely skewed distribution of skills with high market value, the marginal shadow price of labour in rural areas is effectively zero. This fact should drive the capital-labour mix in maintenance and upgrade budgets. That in turn puts strong constraints on choices of surfacing material, since some surfaces that are optimal from the pavement engineering perspective are highly capital intensive to maintain. As that isn’t an issue pavement engineers are trained to know how to think about, economists should be brought into the decision process. Which, I am happy to report, we are.

Since we don’t operate a general welfare function, in our model basic access isn’t traded off against growth. Instead, we assign lexicographical priority to finding the set of surface upgrades, new road links, and geographically relocated schools and clinics that would achieve the 5 km minimum sealed surface distance for every household (given current settlement configurations). We then calculate the proportion of the budget left over to be optimised with respect to the network’s contribution to growth. In our actual model, given the proportion of roads that simultaneously serve basic access and growth-promotion once sealed, the marginal drag on service to growth generated by the basic access constraint is reassuringly small. We worry, however, about a recurrent tendency in South African policy implementation to treat the constitutional provision as mere guidance. Therefore, the model we circulate to planning authorities, the one that has been vetted and verified on cost attributions by National Treasury, includes an explicit Rawlsian analysis showing application of the difference principle to lock basic access prioritisation into the algorithm. In our experience this formalisation encourages planners and engineers to treat it as a proper part of the technical specification. I wonder how many other countries’ road authorities and pavement contractors read manuals that include a summary of A Theory of Justice?
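
The lexicographic structure just described can be sketched schematically. In this toy allocation (invented names, with a greedy second stage standing in for the real cost-attribution and optimisation machinery) basic-access upgrades are funded first, and only the residual budget is ranked by growth contribution:

```python
# Toy sketch of the lexicographic structure: fund whatever upgrades are
# needed to meet the basic-access constraint first, then spend the remainder
# on the links with the best modelled growth return per unit of cost.
# Names and attributes are illustrative; the actual model is far richer.
from dataclasses import dataclass

@dataclass
class Upgrade:
    name: str
    cost: float              # in currency units, e.g. millions of rand
    growth_effect: float     # modelled marginal contribution to growth
    needed_for_access: bool  # required to meet the basic-access constraint

def allocate(upgrades: list[Upgrade], budget: float) -> list[str]:
    chosen = []
    # Stage 1: basic access has lexicographic priority.
    for u in upgrades:
        if u.needed_for_access and u.cost <= budget:
            chosen.append(u.name)
            budget -= u.cost
    # Stage 2: greedy growth-per-cost ranking on the residual budget.
    rest = [u for u in upgrades if u.name not in chosen]
    for u in sorted(rest, key=lambda u: u.growth_effect / u.cost, reverse=True):
        if u.cost <= budget:
            chosen.append(u.name)
            budget -= u.cost
    return chosen
```

A real implementation would solve the second stage as a constrained programme rather than greedily; the point of the sketch is only the ordering of the two objectives.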

Now, here is a more substantive philosophical issue. If network development optimisation isn’t derived from a general welfare model, then should we regard the economic conception of value as irrelevant to the exercise? Is bread-and-butter applied economics nothing but a mash-up of engineering and accounting? The answer is “no”, for two reasons. First, we employ the general macroeconomic models used by National Treasury as checklists to determine which causal relationships we need to explicitly measure when we build our practical model. Second, we define optimality in terms of marginal growth-promotion effects, not in terms of financial accounting. Thus we use theory to structure our modelling space even though our recommendations are not derived from theory. This situation is the typical one in the day-to-day environment of the “working” economist. Philosophers subjecting economic theory to critical scrutiny should bear this in mind to a much greater extent than most do. It transforms the critical questions that seem most relevant, and significantly affects the relative plausibility of competing answers to those questions. To give an example from a different policy area: economists don’t privilege growth in giving policy advice because they believe, on the basis of grand normative theory, that the best sort of economy is the largest one. They do so because without growth you can’t base your welfare floor on public borrowing. We don’t know how else to build such a floor in a way that wouldn’t largely abandon the poor to their own devices.

3:16: We’re in the middle of a pandemic – are there things in the philosophy of economics that are relevant to understanding what’s happening, both in terms of the response and in how the virus behaves? Would economics be able to help us with understanding the virus?

DR: There is general mathematical modelling of contagion that can be applied to anything that spreads through a population of hosts – from a tune, to a meme, to a virus. But I’m not sure that there’s anything added to this by the specific logic of economics. Where economics is certainly relevant is for modelling how people’s responses to risk interact with their confidence in their beliefs about probabilities associated with the virus to influence the extent to which they comply with public health guidelines on behaviour. 

My group is currently running incentivised (experiment-based) elicitation of risk preferences and beliefs about probabilities of COVID-19 case and mortality rates across 3 temporal waves between May and September 2020, in samples from South Africa and the United States. We also collect self-reported data on social distancing and personal hygiene behaviour, information consumption, and a correlated archive of public information circulating in each country at each elicitation date. The aim is to apply advanced econometric methods to identify and estimate a structural model of the coevolution of subjective beliefs, risk preferences, information exposure, and risk-response across 5 months in the progression of the public health emergency. Key to this is our lab’s special techniques for joint estimation of risk preferences with distributions of bets, made for real money, on virus cases and deaths at specific future dates. Subjects are paid, once the future dates in question roll around, depending on the accuracy not just of their modal forecasts but of the density of their distributions of bets around the point estimates of the public health authorities. This allows us to identify a measure of relative belief confidence. And we’re also gathering information about where subjects are mainly getting their information, which we expect to be a particularly interesting variable in the model of the US sample.
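
To illustrate how a payment rule can reward the shape of a subject’s betting distribution and not just its mode, here is a quadratic (Brier-style) scoring sketch. It is a reconstruction of the general idea with invented numbers, not the lab’s actual payoff formula.

```python
# Illustrative quadratic scoring of a subject's betting distribution over
# forecast bins: the payoff is highest when all tokens sit in the realised
# bin and falls with the squared distance of the reported distribution from
# that outcome. A reconstruction of the idea, not the lab's payoff rule.
import numpy as np

def quadratic_score(bets: np.ndarray, realised_bin: int, max_payout: float) -> float:
    p = bets / bets.sum()                       # normalise tokens to a distribution
    outcome = np.zeros_like(p)
    outcome[realised_bin] = 1.0
    brier = np.sum((p - outcome) ** 2)          # 0 (perfect) ... 2 (worst)
    return max_payout * (1 - brier / 2)

# Ten bins of forecast case counts; the subject spreads 100 tokens.
bets = np.array([0, 0, 5, 20, 50, 20, 5, 0, 0, 0], dtype=float)
print(quadratic_score(bets, realised_bin=4, max_payout=50.0))
```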

There is a major philosophical issue in how we understand what a democratic government is implicitly doing when it tries to represent the risk preferences of its citizens. This is an extraordinarily demanding ask, partly because risk is a multi-dimensional concept but also because it interacts dynamically with information and confidence. We don’t pretend to yet have any original insights into how this challenge might best be solved. But modelling the effects themselves is an essential input to clearer thinking here, and we are the group best placed, through the econometric methods we’ve developed, to do this. Thus when the pandemic was declared, we pushed all of our other research commitments into the back seat and got busy preparing our experimental tools to be administered online instead of in the physical lab. Our first two waves of data are in, as of today (mid-July 2020), with the next one pending later in the Northern-hemisphere summer.

3:16: What do your approaches to development economics and neuroscience tell us about basic and time-worn beliefs about facts, values, the measurement of data, rights, needs and the nature of government? What do we need to rethink, and are there some basic tenets or axioms that can help with this job?

DR: To keep this tractable I’ll focus on just one idea here. It is very often assumed, across various literatures, that descriptive and normative individualism are mutually reinforcing theses. I’ve argued that this assumption is false, and has generated numerous further confusions. Humans are natural conformists and they largely build their personalities from existing social templates and under social pressure. This is why human behavioural science uninflected by social science is largely misleading and unhelpful for policy engineering and reform. One does not change society by influencing individuals; one must reform institutions so as to modify incentive landscapes. But it is precisely because humans are natural conformists and mimics that individuality, each nugget of genuine idiosyncrasy that is successfully maintained, is a precious achievement worth defending. 

Friendship is among the greatest of human goods, because its true manifestation is helping specific others defend and extend what is special about them. Putting these two points together, social reform, because it involves changing incentives in order to change behavioural patterns, can undermine autonomy when its aims and mechanisms aren’t transparent. But in environments in which social change is transparent, and frequently successful, individuals flourish best because they can more easily avoid freezing into unreflective routines that can be maintained without effort. (This is why a very reliable sign of a society with healthy political and ethical dynamics is a surge of artistic innovation.) Thus conservatism is harmful both to social welfare and to the defense and promotion of individualism. The classic alleged tension between social engineering and individual liberty is a false dilemma. It has appeared to be a real problem only because too many people try to effect change while concealing their motivations and intentions, and by manipulating others instead of setting examples for them.

3:16: You’re resistant to the idea that the subpersonal interests modeled in Ainslie’s picoeconomics should be expected to be identified with neurofunctional value-processing modules. Can you say why, and what frameworks of neuroeconomics (decision making in the brain) and picoeconomics (patterns of consumption behaviour) tell us about the explanatory power of economic theory – tested by the phenomenon of irrational consumption, examples of which include such addictive behaviors as disordered and pathological gambling? Do they give us a better understanding of addiction and erode the culturally endorsed picture of the addict as victim and moral failure?

DR: Over the past twenty years the causes and nature of addiction, which had basically been a mystery, have become understood. There are of course still some details to be filled in (for example, about specific genetic vulnerabilities), but the general picture is established. Addiction arises because if a person discovers a set of simple, stereotyped actions – pushing the start button on a slot machine, extracting and lighting a cigarette, pouring whiskey into an ice-filled glass – that generate random dopamine fluctuations, that person has a potentially serious problem. The ventral reward circuit that people share with all other mammals cannot learn that a data-generating process is truly random. This leads it to mistake the circumstances created by the behaviour as a cheap learning opportunity. The system then implements a natural source of efficiency built by natural selection: it sends signals to motor preparation circuits that bypass frontal cortical censorship. These signals are experienced as visceral cravings, which interfere in the same way that severe itches do with the person’s ability to concentrate on other activities. She gains relief from this distressful motivational state by consuming the addictive utility source. The resulting behavioural pattern is self-reinforcing.
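
The learning trap can be made vivid with a toy Rescorla–Wagner simulation, which is offered here only as an illustration of the general point about truly random reward, not as the underlying neuroscience: the value estimate settles, but prediction errors never die away, so the ‘cheap learning opportunity’ signal keeps firing.

```python
# Toy Rescorla–Wagner learner facing a genuinely random 50/50 reward: the
# value estimate converges to about 0.5, but the reward prediction errors
# never shrink toward zero. Illustrative only.
import random

random.seed(1)
value, alpha = 0.0, 0.1          # learned value estimate; learning rate
errors = []
for trial in range(5000):
    reward = 1.0 if random.random() < 0.5 else 0.0   # truly random outcome
    delta = reward - value                            # reward prediction error
    value += alpha * delta
    errors.append(abs(delta))

print(round(value, 2))                                # ~0.5
print(round(sum(errors[-1000:]) / 1000, 2))           # errors persist near 0.5
```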

This might look like a reductive account because it is about neural mechanisms. But note a few things that follow from it. First, it isn’t compatible with regarding addiction as a ‘brain disease’. The relevant brain systems are functioning exactly as natural selection intended (in the Dennettian sense of intention, which is the scientifically important and coherent sense). Natural selection did not weed out vulnerability to addiction because until the rise of agriculture and manufacturing, no organisms could exert enough control over their environments to produce constantly available supplies of addictive targets. (Contrary to popular myth, people cannot become addicted to sex or running or work or particular types of food.) Enormous numbers of modern people become addicted – it is a devastating public health problem – because other people engineer environments to exploit the natural vulnerability for financial profit.

So the most effective and fully defensible intervention against addiction is to try to minimise the manufacture and distribution of goods, such as cigarettes and slot machines, that wouldn’t exist if people didn’t become addicted to them, and carefully regulate addictive goods, such as alcohol, that most people enjoy consuming non-addictively. But then, as there will always be addicts even given optimal social policy, clinical interventions should not involve directly fiddling with brains as if there was something wrong with them. Addicts need to out-smart their confused dopamine systems, which is generally possible because these systems are less cognitively sophisticated than insects. Here is where Ainslie’s picoeconomics comes into its own. The addict needs to identify a feasible personal rule of behaviour that disrupts the patterns of activity and reward-error prediction that have tricked their dopamine system into believing (the intentional stance again!) that playing a slot machine or drinking whiskey is an efficient way to learn. So the strategy is not at all reductive. The addict, aided by friends and family or perhaps a professional therapist who uses talking at least as much as she uses drugs, is helped to take the intentional stance toward her own brain, and from that perspective outwit it.

3:16: Should we be worried that science seems to be eroding the traditional concept of freedom of the will? I like to think many things I do are ‘up to me’ but increasingly it seems science doesn’t support my stance. Are apparent threats to the cogency of the idea of will misleading, or should I stop thinking in these folk theory terms?

DR: On this topic I’m happy to defer to philosophers who’ve put far more concentrated thought into it than I have, starting with Dan Dennett in Elbow Room and Freedom Evolves. I’m particularly impressed by a descendant version of Dan’s general strategy, as articulated by Jenann Ismael in her book How Physics Makes Us Free. She presents her view through dialectical criticism of Dennett, but I’m not convinced that her constructed version of Dennett shares the full commitments of the actual Dennett. In any event, I think she gets the bottom line right, and I see no reason why Dan couldn’t agree with her about it. Minds operate on time scales different from those of brains. In thinking about free will we shouldn’t focus on short-scale events like wiggling your finger. We should focus on things your mind does, like collaborate with other people and respond to social structures and ideas to frame the constraints on actions that we identify with your person. Your mind is massively influenced by other people and their clouds of memes, but the one constant in its complex dynamics is you, as its biological centre of gravity. So your earlier selves are the closest thing to primary causal forces driving the nature of your later selves as science can find.

Are these causal influences determined by “the laws of physics”? Ismael argues that the laws we typically have in mind when we worry about this are the principles of classical physics. Those, she shows, are entirely the wrong kind of generalisation to capture the sense in which your earlier personal nature is a generator of your later personal nature. Then I would add: and of course the “laws” of classical physics aren’t even laws; they’re heuristics for doing mechanical engineering near the surface of the earth. If there are laws of physics (and I think that there are), they’re the quantum field equations and perhaps the main equations of general relativity that we hope will someday be reformulated in quantum field-theoretic terms. These laws are unlike the classical laws in many ways, one of which is that they don’t state sequences of influence over time. They state statistics that no physically possible processes (so, no mental processes) can violate. So they indicate some things you cannot do. They say nothing whatsoever, even in principle, about what you will do among the zillion prospects in your ex ante feasible set. So there’s simply no reason to view physics as implying a problem for your relative autonomy to decide to do any of those things. (The “relative” here mainly acknowledges social constraints. If you’re 45 and didn’t take math in high school you won’t be able to earn an economics PhD at Stanford, period. If you’re a 50-year-old person with asthma you can’t decide to become a US Marine. Almost no one can do anything to try to become the monarch of Sweden.)

The bottom line here is that fearing that “science” undermines free will is a hold-over of our once having thought that natural metaphysics reduces to mechanics. It doesn’t. Ismael shows that it never could have been successfully reduced to classical mechanics, and quantum theory itself blocks reduction to quantum mechanics. Quantum theory in general doesn’t look even superficially like the sort of theory that could say anything about intentionality. A person is a kind of entity identified from the intentional stance. The intentional stance identifies real patterns, persons among them. Persons don’t violate the laws of physics, but there’s no meaningful content to the claim that “persons are physical”. Like all patterns identified from the intentional stance, they are, from the perspective of the physical stance, virtual. What it is to be a person (this is Ismael again) is to have the ability to control your brain to the extent of forcing it to do many things because there are socially coherent reasons to do them. That is what we must mean by free will if we want to mean anything coherent by it. So let’s privilege coherence, and enjoy being free – and, its essential complement, responsible.

3:16:  You don’t think that so-called special sciences, such as economics, psychology and sociology, reduce to physics or stand in potentially reducing relationships with one another but you do agree with the working universality and primacy of fundamental physics. On the face of it this seems to be an unbearable tension, or set of tensions. So how does this metaphysics of ontic structural realism work? Are there no microfoundations for macroeconomics, or for anything (e.g. reducing sociology to psychology), and doesn’t this position render analytic metaphysics redundant? In fact, doesn’t your whole approach to naturalized philosophy render philosophy redundant or, at least, radically change the subject?

DR: Perhaps the most basic commitment underlying the version of naturalism I’ve developed with James Ladyman is that there is no good reason to expect the conceptual distinctions that humans find natural to map at all systematically onto the objective causal and structural dynamics of the world, except arguably in local domains where human manipulation has been of pressing importance to us, and relatively shielded from politics and normative identity-construction. So, there might be decent isomorphism between observable meso-scale morphological and geological concepts and real structures that science recognises. However, our natural physical ontology is completely misleading because the space from which observations are drawn is tiny and unrepresentative. And our natural social ontologies are also deeply misleading, because social perception and manipulation are mainly driven by normative considerations that aren’t closely aligned with truth-seeking.

Most of Western metaphysics since the scientific revolution has consisted in trying to mutually accommodate our natural ontologies with the structures discovered empirically by our sciences. This Sellarsian project has been stoutly defended, and indeed extended, by Dan Dennett, and I don’t at all deny its value. Indeed, we’re being reminded of how important it is by watching through our fingers as anti-scientific populists bungle their way around management of a pandemic. But I see no reason to regard this Sellarsian / Dennettian work as metaphysics, since it isn’t trying to discover the general structure of reality. It’s cognitive anthropology, aiming to enrich and empower our knowledge of ourselves. Art does that too, though it’s only one of many enormously valuable things that art does. But enhanced knowledge of the objective structures of the world that are independent of human practical and moral concerns is delivered only by empirical science and mathematics. So thought the positivists, and they were right (about that).

Now let’s get physicalistic reductionism onto the table. In the possible world where the history of science had delivered on a reductionistic programme to convince us that we could understand all real patterns using the structural generalisations of fundamental physics, we would be eliminativists about metaphysics. Some philosophers in that world would quibble by arguing that we still needed to do metaphysics to theorise about natural modality. I would respond by saying that that’s really epistemology. But in any event, we would be eliminativists about ontology. The objectively true generalisations about the structure of the world would be directly stated by the fundamental physical equations.

However, this possible world is not the actual one. Genuine ontological reductions across the sciences have been vanishingly rare, and as our collective knowledge increasingly specialises they become ever rarer. Put another way: plausible reductionist proposals circa 1880 rested on the relative imprecision of most scientific measurement then. As our capacity to elicit and process information has expanded in reach and precision, it has become steadily clearer that such reductions are not forthcoming. From naturalism it follows that there is only one sound kind of basis for a view on ontological reductionism: is science tending to generate reductions or not? The answer is overwhelmingly that it isn’t. Indeed, there is no scientific reason to believe that the world is structured into ‘levels’ of relative ontological generality at all. The world as the sciences in fact describe it has scale-relative ontologies. So that is the ontological structure we should believe to be actual, until such time as the scientific picture changes. Ontic structural realism provides the basis for an inductive argument to the effect that the picture will not change in that respect.

One possible philosophical stance to take in response to these facts is to say, with Nancy Cartwright and John Dupré, that the world is disunified or ‘dappled’. James and I have no purely philosophical argument against that view. A problem with it is that scientists don’t appear to accept it, since they regard outright contradictions between sciences as problems to be solved. Furthermore, facts are routinely exported across disciplinary boundaries and fruitfully put to work in conceptual fields other than those in which they were discovered. Scientific practice thus suggests commitment to a single world with complex structure but no ontological hierarchy. This holds even within very closely related disciplines. For example, I join David Colander and Kevin Hoover in arguing that macroeconomics constrains microeconomics more than the other way around. In general, where humans are concerned, as Harold Kincaid has argued for decades and on the basis of many examples, social structures are often more fundamental, in the sense of applying with wider scope and lower variance, than psychological structures.

But then scientists do respect one major asymmetry with respect to the authority of disciplines: none are allowed to violate the generalisations of fundamental physics, and fundamental physicists are not obliged to worry about whether their generalisations conflict with those of other sciences. (So, quantum physicists: go ahead and breach principles of Boolean logic. Computer scientists will have to adjust, as they are now very excitedly hoping to successfully do.) This asymmetry opens the door to naturalised metaphysics. The physics we call ‘fundamental’ (which excludes most of physics, e.g. condensed matter physics, thermodynamics, etc.) is the only science that claims the universal reach of traditional metaphysics, with its generalisations treated as constraining (not reducing) measurements of real patterns at every scale and in every measurable space. Thus it provides the reference framework against which to try to identify the principles that unify the sciences. This is not in the job description of the practitioners of any special science. It is not regarded by physicists as part of their technical mission. But it invites well-motivated curiosity. And it very closely matches the metaphysical project as Aristotle conceived it. So there is still work for metaphysicians. But every sound metaphysical argument must derive its premises from empirical science. Your question asks whether ‘analytic’ metaphysics is ‘redundant’. It’s worse than redundant. In trying to discover general truths about reality that are independent of science, it implements a counter-Enlightenment project. James and I argue that it is a barrier to knowledge.

One dangling point: many philosophers view efforts to generalise the ontologies of particular special sciences, or pairs of special sciences, as counting as ‘metaphysics’. Sometimes this is because they try to do it analytically, in which case it’s just more nonsense of that kind. But it’s frequently empirically motivated and subject to empirical measurement – think of neuroeconomics, evolutionary psychology, cognitive ethology, economic anthropology. Special scientists clearly do take that kind of local unification as part of their business. So I see no particular reason for calling it ‘metaphysics’. Metaphysics is best used to label the activity with which Aristotle associated it: describing the most general structure of reality. Whereas Aristotle thought that could be done using natural language, I think it can only be done using mathematical machinery, since that is the core representational and reasoning technology of fundamental physics. (James and I don’t, by the way, think that all or most mathematics is metaphysics. But I’ll arbitrarily cut off this answer here, since no trip into the philosophy of mathematics can ever be brief.)

3:16:  What effect did the 2008 economic crisis have on macroeconomics? I remember chairing a panel of economists in Bristol at the behest of Diane Coyle and macroeconomics seemed to be in crisis. And is the new economic crisis caused by this pandemic flushing out anything new or consolidating the changes the earlier one brought about, in particular the importance of experimental economics?

DR: Let me start with the second part of the question, because it admits of an easier and shorter answer. I see no reason why the economic crisis engendered by the pandemic should provoke any significant shifts in economic theory or methodology. Yes, it offers remarkable empirical material for economists, including experimenters. But the proportion of experimental papers in top journals was rising anyway, very quickly, before anyone had heard of COVID-19. I hesitate even to say that this trend will be accelerated, because it was already moving at what I speculate are the limits of system capacity, a function of the number of graduate schools that offer rigorous training in the necessary techniques and knowledge. Then, separately from that, macroeconomists are about to benefit from a wealth of new evidence about the effects of a system-scale, unexpected collapse of demand across a range of sectors. Because of this, and because governments and central banks stepped in quickly with interventions of very large magnitude, economically the global shock of March 2020 most closely resembles the onsets of the world wars. But those occurred in very different economies – composed mainly of manufacturing and agriculture sectors, with comparatively tiny service sectors – so their relevance to contemporary conditions is limited.

Thus the pandemic crisis is a major opportunity for expanding economic knowledge. But my impression from unsystematic reading is that its effects have unfolded largely as standard models already predicted. I haven’t yet been significantly surprised by anything, in the way that many economists were by the course of events in and after 2008. Popular economic commentary expresses amazement that financial markets have been very resilient through at least the early phases of the present rupture. But actual economists are less surprised. Stocks in companies that were directly exposed to choked-off sectoral demand – airlines, hotels, cruise lines, car rental companies – got clobbered immediately and have remained flat on the ground. The rest of the equities market experienced a deep shudder and almost immediately shook it off. Where else was all that money supposed to go? Real interest rates on cash are negative. Bond yields are too low to allow institutional investors like pension funds to touch them. Crucially, financial services companies had deep reserves to cover loans at risk in the clobbered sectors, and that was generally known. In this vital respect, bank regulation that followed the 2008 crisis prepared the system for the 2020 crisis. Even more importantly, the unconventional monetary strategies that central banks invented in 2009-2010 – take a bow, Ben Bernanke – were applied without hesitation, including by the European Central Bank this time.

The blast of cold water has even induced the German Government to be willing to allow substantial collective responsibility for sovereign debt within the EU, which is something to loudly celebrate. This might even incentivise the Germans to tolerate a healthy increase in inflation targets in a few years’ time, to erode the debt burdens being taken on during the emergency. The crisis could even lead to adoption of universal basic income (UBI) in enough countries to demonstrate to under-imaginative naysayers that it can be sustainably financed and won’t turn people into slackers. If these things happen, we might in 10 years be saying that where economic management is concerned, this event was the prod that restored the world to Keynesian good sense. Of course, that will be no consolation to the millions of people who will have lost loved ones or be suffering from the serious long-term health impairments that this very nasty disease too frequently causes.

And, well, except … climate change and biodiversity collapse. These are mind-boggling economic threats, far beyond the scale of severity represented by the pandemic. But people will be far better able to avoid the collapse of civilisation if, thanks to COVID-19, we have UBI and widespread acceptance of vigorous coordinating action by activist public sectors when cities and islands start to go under water and large parts of the earth become uninhabitable due to heat, drought, and fires.

Now, on the more complicated issue of the post-2008 financial crisis and macroeconomic theory. The pillar of orthodox macroeconomics as of the end of the twentieth century, so-called ‘new classical / new Keynesian macro’ (which is in no interesting way classical, and Keynesian in only a very minor sense), has been heavily criticised as having, allegedly, contributed to policy blindness that helped give rise to tolerance of too much correlated risk in lending markets, which blew up with disastrous consequences in late 2008. In fact, serious tremors began to be felt in 2006 and 2007, and I had many anxious discussions about these with other economists during the 18 months preceding the collapse of Lehman Brothers that triggered the freeze-up of overnight lending. Most economists of my acquaintance were thus not surprised when the roof fell in. I was at a workshop on critical problems in economic theory the day of the Lehman bankruptcy, news of which trickled through to South Africa overnight. The first thing almost everyone said at breakfast was some variant on ‘here it is!’. Economists, like other people, consider it rude to talk about personal finances with non-intimates, but I’m also willing to add that my friends and I had by then taken our own money out of equities. Yet a few months later, there was Alan Greenspan ruefully telling a Congressional hearing that he had been astonished by what happened. How is all of this to be explained?

There are two interacting stories here that need pulling apart. One concerns purely intellectual developments. The other is about the relationship between macroeconomic theory and monetary policy governance. I will draw in this part of my answer on a commentary I published a couple of years ago on the popular science blog platform Edge.

Paul Krugman is among those who publicly held a sub-group of macroeconomic theorists, whom he calls ‘freshwater macroeconomists’ (because many of them worked at universities located around the Great Lakes), partly to blame for Federal Reserve policy, specifically leaving interest rates too low for too long, that facilitated overly risky lending. By ‘freshwater macro’ Krugman refers to a particular policy interpretation of New Classical / New Keynesian theory according to which monetary policy cannot have any significant effects on the macroeconomy at all. In my opinion, as in Krugman’s, freshwater macroeconomics has been clearly refuted by empirical facts, and not merely those observed after 2006. Statistically literate readers can find the basis of this refutation summarized, in an engaging and accessible way, in Paul Romer’s 2016 article here. Romer explains the continuing survival of this theoretical movement in terms of social psychology and deference to revered academic leaders. I have argued in some of my work that we can identify its core economic mistake as ignoring the fact that what matters to macroeconomic effects is not just what will happen sooner or later, but what will happen sooner versus what will happen later. I have no quarrel with equilibrium modelling in economics. Indeed, it’s indispensable. But equilibrium dynamics can play out at any speed, and freshwater theory says little or nothing about this in application to a real economy.

Macroeconomic forecasts rely on simulations. The modelling technology inspired by freshwater macroeconomic theory, known as ‘DSGE’ (for Dynamic Stochastic General Equilibrium), was – and remains today – the core of the technology used by practical policy makers at the Fed and other central banks, and in private financial institutions. If freshwater macroeconomic theory is empirically misguided, must the modelling and simulation approach that it historically inspired not be equally suspect?

The answer, as so often where theory meets practice at the sharp end, is ‘yes and no’. If the economists at the Fed didn’t believe that monetary policy can affect the real economy, they would have no reason to bother doing their jobs. Neither the Fed nor any other politically independent national central bank is full of cynics who secretly regard their mission as pointless. The DSGE models they use are evolved from the so-called ‘Real Business Cycle’ foundations that Krugman and Romer rightly attack, but this evolution has occurred under the intense pressure of and feedback from real-time policy experience. The Fed’s working models don’t just reflect abstract, general economic theory. They incorporate a richly empirically informed theory of the relationships in the actual economy that it is the Fed’s job to try to manage.
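
For readers who have never seen even a miniature version of what ‘simulation’ means here, the toy Python sketch below runs a backward-looking three-equation model (an IS curve, a Phillips curve, and a Taylor-type policy rule) through a sudden collapse in demand. Everything in it is invented for illustration – the parameter values, the adaptive-expectations shortcut, the size of the shock – so it is a caricature of the genre, not the Fed’s model or any estimated DSGE model, which are forward-looking, fitted to data, and vastly richer.

```python
# A toy, backward-looking "three-equation" macro simulation.
# Purely illustrative: invented parameters, adaptive rather than rational
# expectations, and no financial sector -- not the Fed's model or any
# estimated DSGE model.
import numpy as np

rng = np.random.default_rng(0)

# Invented parameter values (illustrative only)
sigma  = 1.0   # sensitivity of the output gap to the real interest rate
kappa  = 0.3   # slope of the Phillips curve
phi_pi = 1.5   # policy response to inflation (Taylor principle: > 1)
phi_x  = 0.5   # policy response to the output gap
rho_x  = 0.6   # persistence of the output gap
rho_pi = 0.7   # persistence of inflation

T = 60                          # quarters to simulate
x, pi, i = np.zeros(T), np.zeros(T), np.zeros(T)  # output gap, inflation, policy rate (deviations)

demand_shock = rng.normal(0.0, 0.2, T)
supply_shock = rng.normal(0.0, 0.2, T)
demand_shock[20] = -5.0         # a large, sudden demand collapse in quarter 20

for t in range(1, T):
    # "IS curve": demand responds to its own past and to the lagged real rate
    x[t] = rho_x * x[t-1] - sigma * (i[t-1] - pi[t-1]) + demand_shock[t]
    # "Phillips curve": inflation responds to its own past and to current slack
    pi[t] = rho_pi * pi[t-1] + kappa * x[t] + supply_shock[t]
    # Taylor-type rule: the central bank leans against both deviations
    i[t] = phi_pi * pi[t] + phi_x * x[t]

print("Output gap around the shock:", np.round(x[18:26], 2))
print("Inflation around the shock: ", np.round(pi[18:26], 2))
print("Policy rate around the shock:", np.round(i[18:26], 2))
```

The point is only to make concrete what ‘simulating a stochastic equilibrium model’ involves: specify structural relationships, feed them random shocks, and trace out paths for the variables you care about. Whether the variables that matter are in the model at all is, as the rest of this answer explains, the crucial question.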

Greenspan’s expression of surprise was about two things. First, he was amazed that major investment banks under-responded, to put it mildly, to what he assumed was their pressing incentive to avoid their own bankruptcy. Second, the core model the Fed used for forecasting before 2008 failed to predict the major impact that the 2007-2008 fall in house prices would have on the general economy. It quickly came to be generally agreed that this resulted from the fact that the DSGE models the Fed used in 2006 didn’t include financial sector variables. The absence of financial sector data in the Fed’s model was indeed consistent with the theoretical biases of freshwater macroeconomics. But ‘consistency with’ is the weakest kind of basis for causal inference. Mainstream economists before 2008 were well aware of the specific causal channels by which a mortgage market crisis could blow up financial markets and, by transmission, the real economy. This is why we were glad to be out of the stock market on the day of Lehman’s implosion. The transmission channels had been rigorously modelled, in very well-known papers going back several years, by Jean Tirole and co-authors. The crucial variables weren’t missing from the Fed’s model due to theoretical ignorance by Fed economists, nor to faith by these economists in a theory according to which they were professionally useless. The problem was institutional and political.

The Fed’s models emphasised the variables over which the Fed actually had leverage. Fed directors didn't imagine that they controlled the whole economy. Their mission was mainly to keep price inflation within boundaries that have been identified as desirable by decades of empirical policy experience. What they needed their models to predict to do this job were rates of medium-term growth and ratios of aggregate investment to aggregate savings and aggregate consumption. In 2006, when house prices in the US started to decline (slowly, at first, and only on average in a regionally diverse network of local real estate markets) the models predicted gently rising total productivity and gently declining rates of output to investment. As of thirteen years later, just before the COVID-19 shock, those forecasts had been broadly confirmed. Of course, in between there was a catastrophic short-term growth collapse and then (in the rich world as a whole, though not in each country) a sluggish but steady return to the predicted trend lines.

The Fed’s core model didn't predict the short-run disaster because it wasn't built to. It was intended to be a tool for use by asset market regulators, the people who should have acted to forestall the processes that produced the crash. We can now see, as Greenspan did, that there are also measures the Fed itself could have taken in advance that would probably have mitigated the effects of households and banks taking on unsustainable mortgage debt. Theory would have suggested this, but we know it empirically because, when the crisis hit, by far the most effective institutional fire brigade turned out to be the Fed. Through his creative and courageous policy intervention known as ‘quantitative easing’, former Fed Chair Ben Bernanke prevented the Great Recession from metastasising into a second Great Depression. His innovation was later copied by the European Central Bank, which used it to prevent the sovereign debt crisis in Greece and other EU countries from destroying the viability of the Euro. Bernanke’s policy leadership was based on his knowledge of economic theory, and quantitative easing was exhaustively structurally modelled before it was implemented. The theory Bernanke applied wasn’t freshwater orthodoxy, but it did not come from outside mainstream economic thought.

If the Fed could have made use of financial-sector variables that their models didn’t incorporate, wasn’t this obviously a failure? Well, yes; but we need to locate the failure in question accurately, and ask to what extent it was a failure that could and should have been prevented by better economics.

Most macroeconomists in 2006 believed that private financial institutions were properly incentivised to manage the risks they voluntarily assumed in issuing and trading mortgages and other loans. They furthermore believed that the government regulators of these institutions would control such misalignments of incentives as were likely to arise. These beliefs were, alas, false. Due to pathologies in corporate governance culture, capture of the legislative agenda by self-interested political donors, and intimidation by private banks of politically under-powered and under-compensated regulators, financial-sector risk management failed systemically and spectacularly.

All of the economic theory needed to understand the effects of this institutional failure, and in principle to predict it, was in place in 2006. In 2008, most economists thus understood immediately what had happened and why. Nor was there a failure to collect or even to model the necessary data. During the run-up, such highly visible bellwethers as The Economist continuously sounded the alarm about unsustainable mortgage risks at both household and institutional levels. Tirole et al.’s modelling knowledge wasn’t trapped in esoteric cloisters: in 2004 The Economist put a cartoon on its cover of houses falling from the sky onto the retail high street. The reason the failure was not prevented is that there was no institution tasked with monitoring and controlling both financial-sector risk management and the stability of macroeconomic policy targets.

It is unfortunately true that before the crisis, some lazy economists, mostly not economic theorists but salaried shills for investment banks, publicly pronounced that no one had to worry much about mortgage debt levels because DSGE models weren't forecasting instability. Most economists who weren’t being paid to peddle such soothing balm made no such pronouncements. Most economists were never in the grip of the freshwater fallacy.

This is history now. We won’t again see macroeconomic models without financial variables being used to guide policy. Such models are still useful for some other purposes, as the success of those models in forecasting the limited relationships for which they were actually designed illustrates. Macroeconomic theorists certainly learned a lot from the crisis, and even more from the subsequent efforts to emerge from it. Policy-makers, as I said earlier, enacted measures that have so far prevented the COVID-19 demand shock from triggering a financial melt-down. But it would wildly over-state the facts to say that the 2008 crisis caused a revolution or paradigm shift in macroeconomic theory. My own take, no doubt reflecting my long-standing priors, is that the various tumultuous events of the past two decades have led economists to better appreciate styles of thinking they had relatively neglected, particularly those associated with Keynes (though Keynes’s own not very systematic work certainly can’t be used as a policy manual). Perhaps, especially in the wake of the COVID-19 responses, I can say “We’re all Keynesians again”, without this being misunderstood as implying anything like “throw away what we’ve learned from monetarism and new classical theory”.

3:16: And finally, are there five books other than your own that you can recommend to the curious readers here at 3:16 that will take us further into your philosophical world?

DR: Just five! That concentrates the mind. Setting aside classics that created the foundations of my whole world – Hume, Darwin, Peirce, Marshall, Russell, Savage, Samuelson, Quine, Dennett (and Bill James!) – and then making my job more tractable by sticking only to 21st-century books, here are those that most structure my current thinking:

Ken Binmore, Rational Decisions (Princeton U.P. 2009)

George Ainslie, Breakdown of Will (Cambridge U.P. 2001)

Mark Wilson, Wandering Significance (Oxford U.P. 2006)

Edward Leamer, Macroeconomic Patterns and Stories (Springer 2009)

Haim Ofek, Second Nature (Cambridge U.P. 2001)


Well, and then I’ll cheat by acknowledging five from the 1990s:

Daniel Dennett, Consciousness Explained (Little Brown 1991)

Ken Binmore, Game Theory and the Social Contract (MIT Press 1994, 1998)

Daniel Dennett, Kinds of Minds (Basic Books 1996)

Michael Mandler, Dilemmas of Economic Theory (Oxford U.P. 1999)

Philippe van Parijs, Real Freedom For All (Oxford U.P. 1995)


ABOUT THE INTERVIEWER

Richard Marshall is biding his time.

Buy his second book here or his first book here to keep him biding!
