By Daniel Tippens
An abridged version of this essay originally appeared in Quillette Magazine.
In 2016, allegations of sexual harassment against Thomas Pogge, an internationally recognized professor of philosophy, came to public light. Accusations against him had been made by other students in the past, but the most widely publicized claims were brought by Fernanda Lopez Aguilar, who said that Pogge had invited her to be his interpreter at a conference in Chile, and then proceeded to make unwanted advances toward her, including asking her to watch a movie with him in his bed, fondling her body and pressing himself against her, and sleeping on her lap during the plane ride home.
Yale investigated Pogge, and ultimately found that he had failed to uphold standards of ethical behavior, but not that he had engaged in sexual harassment. This caused an uproar within the philosophical community. Angered by Yale’s lenient attitude toward Pogge, some philosophers wrote an open letter condemning Pogge’s actions, and others went a controversial step further, suggesting that since Yale wouldn’t handle matters appropriately, philosophical academia would take morally motivated action — excluding Pogge’s work from their syllabi.
In response, philosophers Brian Leiter and Justin Weinberg argued that this proposal is objectionable. Leiter claimed that “not assigning his work when it is relevant to the topic… [is] educational malpractice.” Weinberg could be read as elaborating on this, arguing that “Philosophers are interested in figuring out what’s true, or, if you don’t like talk about truth, what we have most reason to believe. Apart from some statements that refer to the speaker or his or her situation, the truth of a statement does not vary according to who is saying it. We may loathe the harassing and unprofessional behavior Pogge is alleged to have engaged in; whether he behaved that way or not has no bearing on the truth of his views.”
What is interesting about this debate is that it takes the form of something philosophers have discussed for a long time — a clash between different types of values. Things we value are things we care about, are motivated to act on, and feel compelled to uphold. One type is epistemic value — we care about forming true beliefs, and feel we should act in a way that upholds this, which will influence our behavior. We will demand evidence, provide arguments, and consider objections before endorsing a belief. Another type is moral value, or our felt motivation to do what is right or virtuous.
Conflict between different types of values happens frequently. Nobody wants to think of herself as a person with vices. But what if, as David Enoch suggests, you come across some evidence that you are the smartest person in your class? On the one hand, you probably value adopting beliefs based on evidence, and so you might feel compelled to endorse the belief. On the other hand, adopting this belief seems pretty arrogant, which would be a moral vice. It seems that whichever option you choose, to believe or not, will leave one kind of value unsatisfied. If you believe you are the smartest, your epistemic value may be satiated, but your moral value is arguably left wanting.
Underlying the debate about excluding Pogge’s work seems to be just this type of conflict. On the one hand, there is a moral pull to try to deter future sexual harassment, and perhaps to apply punishment to a putative bad actor that he seems to have avoided. That could (possibly) justify banning this person’s work from classroom syllabi. But there is also an epistemic pull to pursue truth, which speaks in favor of teaching students material that is likely to be true, or at least contain arguments we have good reason to believe on their own terms. So when it comes to the question of whether we should exclude a bad actor’s material from the syllabus, these two types of value have the potential to pull us in opposing directions.
Some strong disagreements are manifestations of a different kind of irreconcilable clash — one within a particular kind of value. Morally, a parent doesn’t want to cause harm to her son. When the son says he wants to join the military, the parent might want to stop him from doing this, in an effort to keep him out of physical harm’s way. But the parent also recognizes that this restriction might stifle her son’s growth as an autonomous individual, instantiating a different kind of harm. The parent, then, has seemingly irreconcilable conflicting moral motivations — either choice will leave one of them unsatisfied.
This kind of clash can be seen in the trigger warning debate, which was recently fueled when John Ellison, Dean of Students at the University of Chicago, wrote a letter to the incoming freshman class in 2016 informing them that the university does not support trigger warnings. Trigger warnings (also referred to as content warnings) are statements made either on syllabi or at the beginning of classes, informing students that some of the material they will read or discuss may be emotionally distressing, such as depictions of sexual assault or murder. Those in favor of such warnings differ on how strong the requirement to implement them should be. Some advocate that instructors be required to give such warnings as a matter of enforced policy, while others simply think that professors should give warnings, but that no policy should force them to.
The reasons in favor of content warnings appear to be both epistemic and moral. In a New York Times article, the philosopher Kate Manne argued that informing students about potentially highly distressing material will help prevent them from experiencing emotional and psychological harm, and in doing so, will also help them to better engage with ideas in the classroom. Morally, such warnings prevent harm; epistemically, they help students engage with the ideas under discussion.
Those against implementing the warnings argue that there are also moral and epistemic reasons against the idea. These reasons are best captured in a now-landmark article in The Atlantic by Greg Lukianoff and Jonathan Haidt. They make two arguments (among many others). First, that trigger/content warnings instill a pattern of thinking that involves epistemically harmful cognitive distortions, such as fortune telling, which takes place when one predicts that future events will be harmful or negative, feels emotionally convinced that the prediction is true, and then “sees danger in an everyday situation.” Issuing warnings, for Haidt and Lukianoff, promotes fortune telling-type thinking, which most likely delivers false beliefs: one clearly doesn’t know what the future holds, and should rely not on an emotional conviction that impending events will be negative but on evidence. Epistemically, then, content warnings are problematic (to be clear, they also argue that such distortions are morally problematic, for example because they cause students to predict harm, and subsequently to experience harm they otherwise wouldn’t have).
Lukianoff and Haidt also contend that preventing students from encountering distressing ideas treats the symptoms without addressing the underlying cause. Exposure therapy involves presenting a patient who has a phobia or trauma with triggering stimuli, in safe situations. They outline how lightly exposing somebody with an elevator phobia to elevators reinforces the idea that elevators are not, in fact, something to be afraid of. Similarly, exposing students to disturbing ideas in the comfort of a classroom might cause them emotional pain, but in the long term it helps rid students of their triggers. This seems like a moral concern (though see here for a critique of this line of reasoning).
In the case of content warnings, then, there is a clash within one’s moral motivations, and a separate epistemic clash. It is tempting to think that the moral part of the dispute is actually an empirical one in disguise — those like Manne think trigger warnings cause harm, and Haidt and Lukianoff think that refraining from issuing such warnings causes harm. Maybe we just need to put on our lab coats and figure out who is right.
This would be a mistake, however, because there are two different notions of “harm” at stake, and deciding between them is a conceptual, not empirical, matter. Suppose a young person is diagnosed with a mental illness and can either undergo two years of grueling therapy and treatment to rid himself of the disease permanently, or take medicine that will keep the symptoms relatively, though never entirely, in check, while leaving the underlying pathology untouched. What should he do? The question is like whether to peel a bandaid off slowly or rip it off quickly and get it over with.
The trigger warning debate involves similar issues. One way to harm somebody would be to allow them to experience unanticipated emotional pain, and another would be to stifle their ability to overcome whatever is eliciting the pain. Morally, professors want to prevent both simultaneously, but they can’t. Whether one implements warnings or not, one kind of moral motivation will be frustrated.
Epistemically, content warnings may facilitate students’ engagement with the material in class, which helps them learn that lesson’s material; on the other hand, they may instill a pattern of thinking (like fortune telling) that is likely to deliver false beliefs in the long run. An irreconcilable conflict once again. Both sides of the debate, then, represent competing underlying motivations: the pro-warning advocates care most about preventing emotional harm and promoting acquisition of material in class, while their critics hold that what matters is curing the cause of students’ emotional harm and instilling good habits of thinking.
The Pogge and trigger warning debates are but two examples of many that we could analyze using this framework of value conflict. The debate over disinviting morally controversial campus speakers is another, with one side suggesting that epistemic value demands that we allow such speakers to give lectures (promoting free and open inquiry), and the other prioritizing sending a moral message to deter others from holding such morally heinous and harmful views (e.g., racist or sexist positions) by banning them from giving lectures. And there are certainly many more contemporary issues that fit this kind of framework.
What is interesting about irreconcilable value conflicts is that they can happen between reasonable, moral, and competent people. Consider moral conflict — almost every ethical issue divides intelligent and empathetic individuals, such as the permissibility of abortion, euthanasia, and torture. Anyone who has witnessed a debate between professional philosophers on such issues, or has observed a graduate ethics course, knows this. Interestingly, the disagreement is by and large respectful in such cases.
Respectful polarization occurs in conflicts between moral and epistemic values as well. Consider the question of what to do with scientific research that has been acquired through morally heinous means. During World War II, Nazi scientists performed countless experiments on Jews, including women and children. In the Auschwitz concentration camp, some 1,500 pairs of twins were experimented on in an attempt to learn about their genetics. Some twins had dyes injected into their eyes to see if it would change their color, and others were literally sewn together. This was basic research, with no clear therapeutic application. But we are still curious about the knowledge the experimenters may have acquired, though it might seem wrong to study their findings — it feels as though we would be vicariously using the subjects of those studies as means to further our epistemic ends. Should we study the results despite the morally heinous way they were acquired?
Given that intelligent, reasonable, and ethical people disagree about what we ought to do in cases of irreconcilable value conflict, it is shocking that so many people, on all sides of a debate, seem to think that anyone who disagrees with their position must lack empathy, or be grossly immoral, power-hungry, stupid, irrational, or a coddled child.
As David Ottlinger has previously discussed — though not exactly in the way I’m about to — in political discourse this name-calling and labelling can amount to viewing a party that disagrees with you in what the philosopher Wilfrid Sellars called the space of causes, as opposed to the space of reasons. The former is the way we view rocks — we explain their behavior by citing laws of nature that an object is completely governed by. The latter is the way we view agents — creatures whose actions are best explained by considering their reasons, values, beliefs, and desires. When we say that the opposing party lacks empathy, is stupid, irrational, or coddled, we are in effect saying that they lack some basic feature that all ordinary agents have, similar to citing a mental illness to explain someone’s behavior — the schizophrenic isn’t in control of his behavior, which is caused by psychological features outside of his control. He can’t really be said to be an agent acting at all. This way of viewing people shuts down any reasonable chance of progress in a debate, leaving both parties feeling unsatisfied and hostile. How do you react when somebody dismissively responds to your claims with “What an idiot”?
That these recriminations don’t happen in most philosophical debates and classrooms is precisely why the disagreement is respectful. Students and professors state their positions, are recognized as reasonable people, and leave conferences and classes feeling pleased with the outcome even if the disagreement isn’t resolved. Of course, when it comes to political discourse, the stakes are higher, as people’s well-being and personal interests are affected by the outcome. So even though respectful disagreement may offer the best hope of making progress in campus debates, it is harder to achieve.
Given the ubiquity of value conflict cases like the trigger warning or Pogge debates, the hostility between opposing factions, and the consequent lack of progress, it seems less important to ask who is right in these particular issues than to ask what we ought to do generally when values conflict — and since people’s tendency to view dissenters in the space of causes is a primary reason that debates are hitting intractable stalemates, we should answer this question in a way that minimizes this as much as possible.
Everybody cares about moral and epistemic aims; opposing parties simply disagree about which value overrides the other, or which value should matter more in a particular issue such that it governs our actions. One answer to our question of what to do when values conflict would be to flat-footedly hold that the moral overrides the epistemic, in all circumstances, period. Indeed, I suspect that many of those advocating trigger warnings and the exclusion of Pogge’s work from syllabi hold precisely this intuition. There are, of course, some clear cases where this is true. If someone were to tell you that you must attempt to believe that the earth is flat or he will set off a nuclear weapon, you’d obviously try to become a flat-earther.
But there are two problems with this proposal. First, the moral doesn’t always clearly override the epistemic. Besides the fact that there are numerous controversial cases, such as that of using Nazi scientific research, there seem to be clear situations where the epistemic overrides the moral. If I had to slightly cut your hand in order to prevent Einstein’s work from being permanently destroyed, you should probably prepare for a tinge of pain. Second, to hold that the moral is simply always overriding, despite knowing that many reasonable people will disagree, dangerously promotes viewing dissenters in the space of causes — irrational individuals who clearly don’t have their values straight. These same concerns apply to conflicts within moral value (e.g., which kind of harm is worse) — reasonable people will disagree about whether peeling a bandaid off slowly is better than giving it a quick pull.
Perhaps, then, what we should do is reserve judgment and refrain from acting, like scientists and philosophers who refrain from believing some proposition when there is insufficient evidence to determine its truth. The problem with this is that it leaves one side of the debate wholly unsatisfied. If we reserve judgment on what to do about Pogge’s work, the default is to carry on using his work, which is objectionable to those who disagree. In On Liberty John Stuart Mill recognizes this concern, saying, “If we were never to act on our opinions, because those opinions may be wrong, we should leave all our interests uncared for, and all our duties unperformed.” Indeed.
These proposals fail because they inevitably neglect one side of the debate, or they promote viewing people in the space of causes. The answer that best overcomes these problems, I believe, is to be found in negotiation: an attempt to recognize the competing values in play and find a solution that serves both parties’ interests as far as possible, even though neither party will be fully satisfied.
Let me illustrate — suppose you come across evidence suggesting that African Americans have a lower IQ than Caucasians, and that there is a genetic basis for this. You present this evidence and one person says they object to holding such a morally dangerous belief despite any amount of evidence, given that it is likely to be wielded to cause harm. A different individual says we should be open to going wherever the evidence takes us, because we seek true beliefs. After recognizing these competing values, the best solution, I think, would be to increase one’s standards of evidence for adopting this morally suspect belief — demand more evidence than you would normally require. In this way you have taken an action to respect one person’s moral concern, by raising standards of evidence, and have also left open the possibility that the IQ claim could be true.
In such a negotiation, neither party will be fully satisfied. In the IQ case, the epistemic defender might feel that they shouldn’t have to raise their standards of evidence, and advocates of moral supremacy would be bitter about the very idea of adopting such a belief, no matter what the evidence says. But what matters is that both will leave feeling that their interests have been heard, respected, and acted upon. The virtue of negotiation is that it requires both parties to listen to the reasons and values motivating each other’s positions, forcing everyone into the space of reasons.
On campuses, negotiation would look like what Emory University recently engaged in with its students. After protests on campus took place where activists assembled to ask for change to improve the racial climate at the university, students sent a list of demands to the administration. Ultimately, Emory decided to host a retreat with 100 people — 50 student activists and 50 administration officials who had the power to implement policy changes. The students spoke to the administration about their demands and timelines. One demand was that students be able to report bias in classrooms. The administration pointed out that a large number of the faculty who teach controversial material, in which bias is most likely to be reported, are faculty of color. At this point the students recognized this concern, which they might not have considered before, and took a more middle-ground approach — discussing how to inform faculty of ways they can instruct students more sensitively.
In this case, the administration’s concerns, about faculty being subjected to intense and stressful scrutiny, and the students’ concerns, about preventing particular patterns of behavior, were both accommodated as far as possible, though neither party got exactly what it may initially have wanted. Strikingly, even though not all of their demands were implemented, the students reported being very pleased with the outcome of the process, feeling that their concerns were heard and their values respected.
To me this suggests a policy that universities should consider: when a significant number of students, say 20% of them, sign a petition or raise a set of concerns, there should be a direct avenue available to them where they can engage in negotiation with the administration. With this option in place for students, we funnel disagreeing parties into a place where they must view each other in the space of reasons, drawing out the heart of their conflict in values, and preventing hostility from festering between them. Additionally, if this route is available to students, they will not take to the streets in protest as their default action, which has only served to heighten tension and anger between the administration and the activists.
Of course, these institutional negotiations may fail, but at least in going through the process, both sides will hear the values and reasons that are important to the other, which can only be a good thing. Until we make negotiation our default way of handling normative conflicts, personally and institutionally, debates are likely to remain static, with both parties attempting to gain ground through a shouting match of name-calling and demonization, and in such a state of dialogue everybody suffers.