By: Daniel Tippens
A heated debate has arisen since the rise of moral psychology as a scientific discipline. Moral psychology examines, descriptively, how we come to make moral judgments and have moral intuitions (these two terms I will henceforth use interchangeably). Some thinkers argue that normative ethics — the study of what we morally ought to do — can be informed by moral psychology. Of course, whether this claim is controversial depends on exactly what one means by “informed.” In some rather trivial sense, moral psychology informs normative ethics insofar as it provides us with the mechanisms that lead to moral judgments.
The more controversial claim is that, in some sense, understanding how we come to make moral judgments can tell us, in a more direct way, what we ought to do. Somehow, learning about how we come to think about morality will inform us about how we should behave.
Opponents of this are quick to cite the famous is-ought gap — one cannot derive how things ought to be from how things are. Since moral psychology is a descriptive enterprise about how our minds work, it would seem to fall into the category of telling us what is, and so cannot tell us what ought to be. We can call people who take this position philosophical ethicists.
Proponents of the view that moral psych can tell us what we ought to do, who we’ll call psychological ethicists, will reply by first asking what it is that we use to determine what we ought to do. The answer that almost anyone would agree on, they say, is moral intuitions — those spontaneous, felt-judgments we have about what is right or wrong in various cases.
Philosophers will put forward a moral theory and then “test” it against our moral intuitions in cases. Take utilitarianism, which loosely states that what one ought to do is whatever promotes the greatest amount of happiness. Well, if utilitarianism is correct, then holding gladiator fights for large audiences is not only morally good, but morally required. Since our intuition is that it is wrong to subject people to violent fighting simply to amuse an audience, we have a reason to doubt the truth of utilitarianism.
Psychological ethicists then claim that some facts about how we come to have moral intuitions can tell us whether those intuitions reliably track what is moral. Take the following case to help illustrate:
Psychologists, in collaboration with pharmacologists, develop a drug whose effects last for five hours. This drug causes you to have only strong utilitarian intuitions in every moral dilemma you are presented with. Should you act on the basis of these intuitions while you’re under the influence of this drug?
Given that your intuitions aren’t being “properly” generated, it would seem that you should not rely on the moral judgments you make while under the influence of the utilitarian drug, either when building a moral theory or making moral decisions. Thus, knowing facts about how your moral judgments are being formed has changed how you ought to behave.
Similarly, psychological ethicists will claim, understanding how our moral intuitions are formed can reveal cases where, perhaps, we shouldn’t trust such intuitions. For example, there is evidence that the order in which one is presented with different moral dilemmas can influence whether one intuits that an action is permissible, and also how strongly that intuition is felt. Clearly, the order in which moral cases are presented is irrelevant to the truth of one’s intuitions about how one ought to behave, and so, like in the case of the utilitarian drug, we should be wary of trusting our intuitions when we are presented with cases in serial fashion.
One way that philosophical ethicists might respond to this argument is to deny that moral psychology has told us anything about what we morally ought to do, or even ought not to do. Rather, they might argue, it has told us what we rationally ought to do. When you are under the influence of the utilitarian drug, such knowledge would seem to inform you that you are in a kind of skeptical scenario — one in which you have strong reason to doubt your intuitions. As such, you should refrain from believing any of those intuitions. But of course, this seems far from a moral intuition. Rather, it closely resembles a rational intuition about what we should believe, based on prior knowledge of defeaters for our beliefs. Perhaps psychology hasn’t told us anything about what we ought to do, morally, after all.
Some psychological ethicists will concede this point, and happily so, for they consider it a Pyrrhic victory for the philosophical ethicist. Numerous studies have been marshaled to show that our moral intuitions are shaped by all sorts of morally irrelevant factors, such as the presence of a noxious odor, the wording and ordering of cases, and whether one has just seen a violent or otherwise negative visual stimulus. The idea is that such causal influences on our moral intuitions are the product of our idiosyncratic and morally irrelevant evolutionary history, and so they are quite ubiquitous. Given the ubiquity of such influences, perhaps we rationally ought not trust our moral intuitions at all.
Walter Sinnott-Armstrong takes a position that is somewhat like this. His claim is that the enormous and frequent impact of irrelevant influences on moral judgments implies that moral intuitions are never non-inferentially justified. In other words, such intuitions are not justified without further supporting reasons — they are not justified by default.
Peter Singer takes an even more radical approach, and, for somewhat similar reasons to those I just mentioned, claims that we shouldn’t use moral intuitions at all in moral theorizing. Instead, we should perhaps only appeal to what he calls rational intuitions.
There is little point in constructing a moral theory designed to match considered moral judgments that themselves stem from our evolved responses to the situations in which we and our ancestors lived during the period of our evolution as social mammals, primates, and finally, human beings. We should, with our current powers of reasoning and our rapidly changing circumstances, be able to do better than that (Singer, 2005).
At this point in the debate, one might be tempted to conclude that moral psychology either has no influence on normative ethics, or that it undermines the whole program, leading to a kind of moral skepticism (at least insofar as we shouldn’t use moral intuitions in moral theorizing). But I think that moral psychology can have a very different role, one that is compatible with the idea that we can trust moral intuitions and should still use them in deciding how we ought to act. On my view, moral psychology should be treated as a small but useful tool in normative ethics.
In showing how our moral judgments are formed, psychology’s power isn’t restricted to undermining our intuitions; it can also clarify our reasons for having those intuitions. Moral psychology can help specify which principles underlie our moral intuitions.
Take the famous Doctrine of Double Effect (DDE), which states that it is impermissible to intend harm as a means to a good end, but permissible to intend to help someone while foreseeing harm as an inevitable side effect. This is classically illustrated by the contrast between two trolley problem cases:
Pull: A runaway trolley is barreling down the tracks. If it continues on its course, it will strike five people who stand unaware on the tracks. However, beside you is a lever which, if pulled, will redirect the trolley onto a different track, where one unaware person sits. If this person is struck, he will surely die. Should you pull the lever?
Push: A runaway trolley is barreling down the tracks. If it continues on its course, it will strike five people who stand unaware on the tracks. However, you are standing on a footbridge above the tracks next to a large man. If you push this man off the bridge and in front of the trolley, he will surely die, but the trolley will come to a halt, and the five people will be saved. Should you push the large man?
Most people will say that it is morally permissible to pull the lever, but impermissible to push the large man. Philosophers have traditionally explained this difference in intuitions with the DDE. It is permissible to pull the lever because you intend to save the five, and the death of the one individual is merely a foreseen side effect. It is impermissible to push the large man because in doing so you must intend his death as a means to saving the five.
Fiery Cushman has provided a useful heuristic for determining whether an agent intends harm as a means to an end, as opposed to intending to help while foreseeing a harmful side effect: ask whether the agent’s plan to bring about the good consequences could be carried out without harming the individual in the scenario. In Push, one cannot stop the trolley and save the five without the presence of the large man, but in Pull one could save the five even if the individual on the side track were not there.
The DDE intuition has been so powerful that it is reflected in many ethical and legal positions. For example, some people justify certain forms of euthanasia with this doctrine. The reason that euthanasia is permissible, they will say, is that the physician intends only to alleviate the patient’s pain, though he foresees the inevitable side effect of death. Chief Justice William Rehnquist appealed to the DDE in his majority opinion in Vacco v. Quill when he said, “Just as a State may prohibit assisting suicide while permitting patients to refuse unwanted lifesaving treatment, it may permit palliative care related to that refusal, which may have the foreseen but unintended ‘double effect’ of hastening the patient’s death.”
I believe that moral psychology can play a role in clarifying whether our moral intuitions actually justify the DDE. First, recall that what grounds the DDE is the intuitive difference between cases like Push and Pull. But while our intuitions are consistent with the DDE, it is not as though people consciously cite the DDE as the principle they believe in and that justifies their intuitions. So it isn’t obvious that the DDE is what our intuitions are actually tracking; it is an inference philosophers have made to explain the intuitive “data.” If we showed that there are cases where our intuitions don’t track the DDE, but rather track other, similar principles, we might want to give up the DDE on the basis of our intuitions.
To be clear, this is something that philosophers do all the time: they try to find thought experiments and cases showing that our moral intuitions don’t track a given principle. But moral psychologists can play a role in this process as well. First, they can help adjudicate what our moral intuitions are actually tracking; second, they can investigate the moral intuitions we have not just in thought experiments, but when we actually perform actions. I’ll elaborate on these in turn.
The intuitions in Push and Pull are consistent with a variety of principles. For example: a prohibition against causally “direct” harms, together with a permission for causally “indirect” ones. In Push, the harm caused is up-close-and-personal, with your push immediately leading to the death of the large man. In Pull, there is a series of steps between your action — pulling the lever — and the one person dying.
The DDE is silent on the moral relevance of causal “directness,” as it makes no mention of causation; what matters in the DDE is the intention the agent has. Of course, we sometimes infer an intention from causal directness. For example, it is sometimes said, following chaos theory, that a person waving his arms in Texas can cause a hurricane in China (through a long chain of causal effects). Given how many causal steps lie between the arm-waving and the hurricane, we readily infer that such a person did not intend to cause one.
So while the principle of causal directness is related to the DDE, it is not identical to it. But both principles are compatible with the intuitive difference between Push and Pull. Maybe our intuitions don’t actually track the DDE, but instead track some other principle(s) that typically co-occur with DDE-style cases, such as causal directness.
In order to determine which of these two principles our intuitions actually track, philosophers would traditionally try to devise thought experiments that differentiate them, like the following (again provided by Fiery Cushman):
You are driving a motorboat and see five people drowning far off in the distance. You know that your boat is too heavy to reach the speeds that would allow you to rescue the five in time. However, if you speed up the boat, one of your passengers will be thrown off and drown; with his weight gone, the boat will be light enough to reach the five in time. Should you speed up the boat?
You are driving a motorboat and see five people drowning far off in the distance. You know that if you don’t speed up the boat drastically, the five will surely die. However, you also know that speeding up the boat will throw off one of your passengers, causing him to drown, but allowing you to save the five. Should you speed up the boat?
These cases are consequentially identical to Push and Pull, but they no longer involve any difference in causal directness. So, if people report strong differences in intuitions, similar to those in the Push and Pull cases, this bodes well for the DDE.
However, suppose we couldn’t think of examples like this, and still wanted to figure out whether causal directness was grounding our intuitions. Psychology can help here by providing neural data. Suppose ascriptions of direct causation are paradigmatically associated with a certain brain region X, and ascriptions of indirect causation with region Y. We could present cases like Push and Pull and see whether X and Y activate differently, which would give us some reason to think that ascriptions of causation are underpinning our intuitions. In this way, moral psychology can assist in adjudicating (though not determining) which principles our moral intuitions track, and therefore justify.
One may deny that there can be any such useful mapping between brain states and mental states. But I have two concerns with this move. First, it would be hard to make it without also claiming that our whole brain-mapping project — whereby we correlate certain areas of the brain with things like emotion and vision — is mistaken, which seems implausible. Second, this would be to reject a broad assumption of neuroscience, rather than to engage the narrower question of whether moral psychology, granting that it can map such states, has any role to play in normative ethics. I, at least, am interested in that narrower question, as are most people in this debate.
There is another way moral psychology can be helpful in normative theorizing: by showing how people’s intuitions differ between thought experiments and practice. In the philosophy classroom, people make moral judgments from behind their desks. What they don’t do is embed themselves in situations that involve the relevant moral choices, where they must act. Plausibly, moral intuitions will differ in practice and in theory, and that raises interesting ethical questions.
Suppose a medical student in Oregon feels that physician-assisted suicide is permissible when he considers the question in a medical ethics class. But when he is on his palliative care rotation, he finds that he is not only hesitant to administer the drug, but has the moral intuition that euthanasia is impermissible. In such a situation, there is a disconnect between his judgment-in-theory and his judgment-in-action.
Moral psychologists can uncover such clashes. Since they have the advantage of a laboratory setting, they can construct situations in which subjects must make moral decisions, and then ask for their moral intuitions shortly thereafter. By contrasting these in-practice intuitions with other subjects’ behind-the-desk intuitions, they can reveal some stark differences. These contrasts should be interesting to philosophers, for they raise questions about which intuitions we should rely on in normative theorizing. And if there turns out to be no difference between in-practice and behind-the-desk intuitions about particular cases, perhaps we can have more confidence in them. But aside from merely raising questions, revealing differences between these kinds of intuitions provides philosophers with more kinds of intuitive “data” to build on, which strikes me as valuable.
While moral psychology certainly doesn’t settle normative questions, I think it can be a useful tool. It can help us determine which principles our moral intuitions actually track, and can reveal differences between in-practice and behind-the-desk intuitions, which provides philosophers with interesting intuitive contrasts that they can use in theory-construction. When it comes to normative ethics, moral psychology need not be seen as either simply irrelevant, or entirely subversive. There is a middle ground it can, and I think should, occupy.