The Moral Maze is a show with its strengths and weaknesses, a format better suited to debating some issues than others. This week’s programme, on the relationship between science and morality, was somewhat messy, inevitably perhaps given the complexity of the issue, the subtlety of many of the arguments and the depth of knowledge required. Nevertheless, there were, I thought, useful parts of the debate. I was particularly struck by Joshua Greene’s skepticism about the ability of science to settle moral questions, given the general thrust of his academic research.
Greene is perhaps the world’s leading moral psychologist and his work has thrown much light on the character of our moral evaluations. There are, Greene argues, two modes of moral thinking. One is intuitive, the other consciously reasoned. The analogy he often uses is that of the distinction between automatic and manual modes in a digital camera. The automatic mode is quick but inflexible. The manual mode is flexible but slow. Much the same is true, he suggests, of the two modes of moral thinking. He also famously suggests that Kantian notions of rights and duties emerge from our intuitions while conscious, reasoned moral evaluations are driven by utilitarian cost-benefit analyses.
I wanted to explore two issues with Greene. The first was the idea that is becoming increasingly popular, namely that questions of morality ultimately devolve to questions of wellbeing, that wellbeing can be investigated and measured scientifically, primarily in utilitarian terms, and that, therefore, there are in principle no moral questions for which science cannot provide an answer. Or, as Sam Harris put it in The Moral Landscape, morality is in reality an ‘undeveloped branch of science’. It is an argument with which Greene has been sympathetic in the past. And it is an argument with which I profoundly disagree.
Scientific investigations, whether of ourselves or of the world around us, certainly can, and do, illuminate our moral judgments. Yet it is not difficult to imagine situations in which our moral reasoning requires us to reject the answers that scientific data or cost-benefit analyses seem to suggest – nor why such rejection would be rational from a moral point of view. To take the example that I used in the programme, suppose that in the future scientists really were to discover that racial differences are a biological reality and that one race is cognitively inferior to another, and that cost-benefit analysis showed indisputably that the best outcome for humanity was for that race to be enslaved by another. How should we morally respond? Obviously (at least, I hope it would be obvious) by insisting that whatever facts science may discover about racial differences, and whatever may be the outcome of a cost-benefit analysis, there is a rational moral argument for treating all humans equally.
Why should we treat all humans equally despite empirical evidence or cost-benefit analyses that suggest otherwise? Because all humans possess a certain integrity by virtue of being autonomous moral agents. Humans are moral beings living within a web of reciprocal rights and obligations created by our capacity for rational dialogue. We can distinguish between right and wrong, accept responsibility and apportion blame. No other animal – not even the great apes – exists within such a community, and it would be cruel to treat them as if they do. That is why, for instance, we do not hold chimps morally responsible for their actions. All humans are, or potentially are, moral beings in this fashion, and it is with respect to this that all humans can be deemed equal. The presumption of equal treatment derives from a profound insight about what it is to be human, and no amount of empirical data about racial or other similar differences can alter that.
A moral answer may well be contrary to the answer suggested by scientific data or by cost-benefit analysis, and there is nothing irrational about ignoring such data or analysis in making moral evaluations. It is simply that the logic of moral evaluations is different to that which undergirds the assessment of empirical data or utilitarian arithmetic. I have argued with respect to torture, for instance, that it is precisely when torture is proved to be effective that it is most important to oppose it. The real problem arises not in ignoring empirical data but in doing so in a way that closes off rational debate (by insisting, for instance, that something must be so ‘because God says so’).
All this takes me to the second issue that I had wanted to discuss with Joshua Greene – his claim that deontological notions of rights and duties are mere rationalizations of intuitions while conscious, reasoned moral evaluations are necessarily utilitarian. I disagree. Notions of rights and duties are not simply intuitions that have been given a philosophical makeover but, especially in the case of rights, are concepts that have emerged through historical development and through rational assessments of what it is to be human. Rights, in the classic sense, derive precisely from the existence of humans as autonomous moral beings.
Joshua Greene appeared to agree with me with respect to the first issue – that is, that science cannot settle moral questions because we already have to possess certain normative assumptions (about what it is to be human, for instance) before we can interpret the empirical data; such assumptions certainly have to relate to facts (about the kind of creatures that we are, about the kind of world in which we live), but they cannot be reduced to facts. If I was surprised by Greene’s stance, it is because it seemed at odds with the views that he had adopted in the past, at least as I understood them. I never managed to discuss the second issue with him though. There is, unfortunately, only so much you can discuss in three minutes – another of the constraints of The Moral Maze, I’m afraid.
Jerry Coyne, another of the witnesses, has posted his own thoughts about the programme.