THE MORAL CAMERA

The Quest for a Moral Compass is now out in paperback. And, to accompany the new edition, I will be, over the next few weeks, running extracts from the book. This first extract is from Chapter 18, ‘The search for ethical concrete’, and looks at neuroscientist Joshua Greene’s idea of morality as existing in two modes – automatic and manual. You can buy The Quest for a Moral Compass in most bookshops, through Amazon or from my Pandaemonium bookstore.


From The Quest for a Moral Compass, pp. 310-313

Imagine a runaway train. If it carries on down its present course it will kill five people. You cannot stop the train, but you can pull a switch and move the train on to another track, down which it will kill not five people but just one person. Should you pull the switch? This is the famous ‘trolley’ problem, a thought experiment first suggested by Philippa Foot in 1967, and which has since become one of the most important tools in contemporary moral philosophy. (In Foot’s original, the dilemma featured a runaway trolley, hence the common name of the problem.)

When faced with the question of whether or not to switch the runaway train, most people, unsurprisingly, say ‘Yes’. Now imagine that you are standing on a bridge under which the runaway train will pass. You can stop the train – and the certain death of five people – by dropping a heavy weight in front of it. There is, standing next to you, an exceedingly fat man. Would it be moral for you to push him over the bridge and onto the track? Most people now say ‘No’, even though the moral dilemma is the same as before: should you kill one to save the five?

Or consider a dilemma first raised by Peter Singer forty years ago. You are driving along a country road when you hear a plea for help coming from some roadside bushes. You pull over, and see a man seriously injured, covered in blood and writhing in agony. He begs you to take him to a nearby hospital. You want to help, but realize that if you take him the blood will ruin the leather upholstery of your car. So you leave him and drive off. Most people would consider that a monstrous act.

Now suppose you receive a letter that asks for a donation to help save the life of a girl in India. You decide you cannot afford to give to charity, since you are saving up to buy a sofa, and you bin the letter. Few would deem that to be immoral.

Again, there seems to be no objective difference between these two cases. Yet to most people they appear unquestionably morally different. In both cases, Joshua Greene suggests, the difference lies not in the facts of the case but in the brains processing those facts. The perplexing, seemingly contradictory ways in which people approach many moral dilemmas reflect the way in which human brains are wired. Our ancestors, Greene suggests, ‘did not evolve in an environment in which total strangers on opposite sides of the world could save each other’s lives by making relatively modest material sacrifices’. They evolved, rather, ‘in an environment in which individuals standing face-to-face could save each other’s lives, sometimes only through considerable personal sacrifice’. It makes sense, therefore, ‘that we would have evolved altruistic instincts that direct us to help others in dire need, but mostly when the ones in need are presented in an “up-close-and-personal” way.’ We intuitively think it immoral not to help the injured man at the roadside but are less concerned about ignoring the girl in India. We also make a moral distinction between a case in which we personally kill someone and one in which the individual’s death is the result of a more impersonal, mechanical action. We can, however, also step back from the dilemma, and our intuitions, and take a more sober, reasoned view. If we do that, we realize that there is no moral distinction between the two cases in either of the scenarios.

For Greene, then, the intuitionists are right. We arrive at moral answers instinctively, intuitively. But not entirely instinctively, and not always. Human morality, Greene suggests, is a bit like a digital camera. It can work both in auto mode and in manual mode. In automatic, point-and-shoot mode, the camera can take pictures quickly and easily, but often goes awry in difficult conditions – in bright sunlight, for instance, or in scenes with high contrast. Auto mode, in other words, is fast but inflexible. In manual mode, the camera can be fine-tuned to take perfect photos in even the trickiest conditions. But such fine-tuning is fiddly and takes time. It also takes considerable experience. Manual mode is highly flexible, but it is slow and awkward to set up.

The same is true, Greene suggests, of moral thinking. Normally we rely on point-and-shoot moral answers. We respond quickly, instinctively, almost unthinkingly to moral problems. Our fast, instinctive point-and-shoot moral snapshot answers have developed against the background of our evolutionary history. In auto mode, our brains perceive it as moral to switch the trolley to a second track so that only one rather than five people are killed, and immoral to ignore the plight of an injured person to protect the leather in your car. But auto mode is not subtle or powerful enough to detect the moral link between the injured traveller and the poor child on the other side of the world, or between killing one to save the five by pulling a switch or by throwing a man off a bridge. We are, however, able to switch from auto to manual mode, to recalibrate our settings and to look at the dilemmas anew. When we do this, we see the moral links, and the moral answers, that auto mode fails to detect. Unlike in a camera, the two moral modes are often in conflict. We can reason our way to a moral answer, and still feel instinctively that it is the wrong answer.

Greene’s argument is highly sophisticated. It is also highly contentious, and one for which, so far at least, there is no real evidence. There are other ways of understanding these different responses to moral dilemmas. When people distinguish between helping an injured man at the roadside and helping a girl in India, it may not be because they are using two different modes of moral thinking, one instinctive, the other reasoned. It may rather be that the kind of reasoning that yields useful moral answers is different from the kind of abstract reasoning championed by Singer and Greene. The roadside victim requires your help and only you can help; if you refuse to take him to hospital it would directly, and adversely, affect his wellbeing. The causes of poverty in India are myriad; the fact that you don’t donate to a particular charity will not necessarily worsen that girl’s situation, any more than your money will necessarily improve it. In any case, unlike in the case of the roadside victim, there are undoubtedly others who could also offer assistance. The two cases, in other words, are different in terms of what caused them, what may be necessary to improve wellbeing, and in the impact of one’s actions. The two cases may be morally equivalent in an abstract sense. But in the reality of actually-lived human lives, there is a moral gulf between them. In insisting that the two are moral equivalents, Singer and Greene ignore the context of moral reasoning. A similar argument could be made about the trolley dilemmas.

Strikingly, Greene, in perhaps the most contentious of his claims, suggests that the two modes of moral thinking correspond to different kinds of moral philosophies. In auto mode we construct moral answers akin to Kantianism. This is not so much because we have all evolved to be mini-Kants, but rather because Kantianism, and by implication most other forms of ethical thinking, are attempts to rationalize our instinctive moral responses. Not all moral theories are, however, like this. Utilitarianism, Greene suggests, is the product not of rationalization but of reason. It is what happens when we turn on manual mode moral thinking. Greene insists that one mode is not better than the other. Each is useful in different circumstances. Yet Greene clearly believes that utilitarianism is superior to Kantianism and other forms of moral philosophy. The distinction between auto and manual modes appears to be a means of validating the superiority of utilitarianism.

Buy The Quest for a Moral Compass through the Pandaemonium bookstore.

7 comments

  1. dimvisionary

    Thank you for a stimulating post and congratulations on the paperback release of your book! Any reply, for me, would likely run on too long and would be better served by a post of my own. I look forward to more excerpts!

  2. “But auto mode is not subtle or powerful enough to detect the moral link between the injured traveller and the poor child on the other side of the world, or between killing one to save the five by pulling a switch or by throwing a man off a bridge.”

    Should “by pulling a switch or” be included here? I think not.

    • Felix, I know normally I rely on you to proof my articles :-), but in this case I cannot see the problem. The phrase ‘…between killing one to save the five by pulling a switch or by throwing a man off a bridge’ is correct because those are the two alternatives between which Greene sees no objective moral difference, but most people instinctively do.

  3. Having analysed it, I am clearer about what it says and have located the source of my confusion.

    We have four scenarios: T1 & T2 (T = trolley) and S1 & S2 (S = stranger).
    You say:

    “In auto mode, our brains perceive it as moral to [T1] so that only one rather than five people are killed, and immoral to [not S1]. But auto mode is not subtle or powerful enough to detect the moral link between [S1] and [S2], or between killing one to save the five by [T1] or [T2].”

    I think the structure of the ‘between’ clause requires an ‘and’ rather than an ‘or’:

    “auto mode is not subtle or powerful enough to detect the moral link between killing one to save the five by pulling a switch and by throwing a man off a bridge.”

    However, with this change it doesn’t work very well as a sentence and I will not waste more of your time proposing alternatives to an already published text!

  4. Tim

    “Now imagine that you are standing on a bridge under which the runaway train will pass. You can stop the train – and the certain death of five people – by dropping a heavy weight in front of it. There is, standing next to you, an exceedingly fat man. Would it be moral for you to push him over the bridge and onto the track? Most people now say ‘No’, even though the moral dilemma is the same as before…”
    It really isn’t the same. No “exceedingly fat man” would carry sufficient heft to stop a runaway train. Throw him off the bridge and you are just adding to the body count.

  5. Tim

    To put it another way, if he really were that fat, then pushing him off the bridge would be physically impossible.
