by Kevin Currie-Knight
In the past year, I’ve read two books on how people’s minds change. The latest, How Minds Change, is by science writer David McRaney. Previously, I’d read Stop Being Reasonable, by philosopher Eleanor Gordon-Smith. Both attempt to drill into “what we know” about how real people in the real world go about changing their minds. But both have another, less overt theme in common: the significant discrepancy between how philosophers tend to conceptualize human beings and how we tend to work in the real world.
As a philosopher myself, I find this theme interesting and concerning, mostly because I increasingly sense this problem even when I can’t put my finger on it. The short version is this: philosophers tend to conceptualize human cognition in their own image. We encounter and entertain positions and reasons for them, we deliberate on those reasons, and we conclude accordingly. And while philosophers will be all too glad to acknowledge that humans often fall short of this ideal, it remains, for them, the ideal to which we should aspire.
McRaney and Gordon-Smith describe a different situation. Yes, we judge based on reasons, but belief is produced most often not by reason alone, but also by extra-rational factors like how the belief makes us feel, what types of arguments our life history has predisposed us to be more favorable towards, what consequences a position has for our self-image, etc. In some sense, the moral of these two books – implicit in Gordon-Smith’s title – is that if we attempt to change people’s minds, we have to deal not only in reason, but in the extra-rational factors in belief. In other words, do not look to (most) philosophers as your guide to real-world human decision-making.
This should be a problem for any area of philosophy that represents itself as sketching a picture of how things could and should work in the real world. Jurisprudential theory does not have a problem if it sketches judges as ideally thinking in ways that are at odds with how most humans would think, because it is talking about the ideal for judges, not human beings generally. (Same with, say, philosophy of science describing the epistemology of scientists.) But areas like moral and political philosophy would surely have a problem if they – as they often do – sketch theories of morality or justice that depend on deliberative processes many people would fail to see as livable.
Stephen Asma elucidates what I suspect is the difference between the philosopher and the regular person in his book, Why We Need Religion. Where the non-believer faults the religious person for believing something for which there is no good evidence, Asma points out that humans function on two levels: the indicative and the imperative. The indicative level is where humans get at what is true, and the imperative level is where humans get at what will help them navigate the world. Asma points out (as has Donald Hoffman) that these two can but need not overlap. He also notes that by and large, humans aspire to navigate the world more than to get at truth, and when doing the latter, they do it with the aim of using those truths to navigate the world.
Since Plato, it seems like much of philosophy has treated getting at knowledge of truth (sophia) as the overriding goal and well-being (eudaimonia) as subordinate. What Asma – and, indirectly, McRaney and Gordon-Smith – are getting at is that this is the opposite of the world most humans live in, where those goals are decidedly reversed.
In fairness, a number of philosophers have recognized the motivational nature of belief and truth-seeking. Hume did when he famously wrote that reason is and ought to be the slave of the passions. The pragmatists did when they suggested that, for beings whose goal is to act successfully in the world, we call beliefs that help us do this true (often refusing to specify what standard of measurement we should use in judging this). Various others have gotten there in different sub-disciplines, such as care ethicists who argue for the role partiality and sentiment necessarily play in moral behavior.
However, a larger group of philosophers have downplayed Asma’s imperative (navigating) function in favor of the indicative (truth-seeking) function. Moral and political philosophers have placed much emphasis on the importance of impartial reasoning in charting proper courses. We can arrive at the best political outcomes, they tell us, by doing democracy deliberatively, where we all come together and do our best to talk about issues in detached, secular, and non-sophistical ways until we reach (again, impartial) consensus. We are to prize, as Toulmin puts it, the rational over the reasonable: what accords with logic over what seems sensible for our lives (if philosophers even recognize a distinction between those).
I wonder if this is the best way to understand why, to so many people, philosophy – even though it purports to be about helping humans live better – seems such a dull subject. Is it because even when philosophy tackles questions that so many people struggle with, it does so in ways those people find unrecognizable and inapt to the human lives they live in Asma’s imperative realm?
It reminds me of a YouTube discussion between four (nonprofessional but serious) philosophers on the problem of evil and theism. The problem is whether the existence of evil is compatible with belief in a theistic god – surely a problem a lot of people who aren’t philosophers think about. While I found the video interesting, I noticed how quickly the discussion shifted from whether theism makes sense given the existence of evil (and vice versa) to whether it was logically possible to believe in God in the presence of evil. And those seem like different questions. If the former is about whether belief in God is a good belief to have in the presence of evil, the latter turns it into a logic game: “Well, if we conceive of evil in these theoretical terms and God in those theoretical terms, there is no inconsistency.” That latter thing may be interesting for someone like me, but it won’t likely compel someone whose faith in the God of Catholicism is shaken by her brother’s murder. Any philosopher who talks as if it should will likely make themselves even less relevant in that situation.
I think the same way about other areas of philosophy. Works in epistemology are often fascinating, but they quickly exceed any type of dilemma most people will ever experience with regard to knowledge. They depict epistemic questions that few outside the field will seriously raise, and answers that few would take as serious answers to questions like whether I can be sure the plumber will arrive on Wednesday, whether God is as my Church says he is, or whether fish feel enough pain for me to feel bad about eating them. Reasons count in these cases, but (a) so do other factors (how ostracized will I be if I become vegetarian, do I trust my Pastor, etc.); and (b) the regular person wants answers that are good enough to get on with things, not answers that are decisive by the standards of a professional philosopher.
Is this a problem? Could the philosopher’s standard be one that most people do not, but should, attain in their lives? After all, were someone training to be a scientist and told to be less biased in how they test a hypothesis, it wouldn’t be a convincing response to point out that most people are biased in their everyday lives. Maybe we should do better, aspire to be more rational and less reliant on extra-rational factors.
But here is the difference between the aspiring scientist and the philosophy student. The aspiring scientist is learning to enter a specific practice; they are aspiring to be a scientist, and when they are doing science, biases they might permit in their everyday lives would undermine their work. The philosophy student, however – especially in areas like epistemology or moral and political theory – is being told that the philosophical ways they are learning will help them be better people, not just better specialists. At some point, though, being a better human being cannot proceed through techniques that eliminate much of what (people believe) makes them human.
It likely isn’t either/or here. Philosophers have some grounds to tell people that, sometimes, the ways of real-life thinking are sloppy and we know of ways to tighten them up. Partiality and extra-rational considerations might make up a lot of what makes you human, but they are sometimes what leads humans astray, and learning to lessen them on at least some occasions might help us think better. But even then, there is a sweet spot that I think many of us philosophers miss, mostly because we forget how peculiar and foreign our own preferred thought processes are – and how many important things our methods can swallow up: non-rational factors like partial attachments, adherence to tradition, and areas of life that can be cheapened by an overemphasis on the rational.
A final analogy might be helpful. Psychologists help people overcome problems, and some write accounts of what humans are like in hopes that those accounts will be of help. When a client goes to a psychologist, the psychologist surely helps them try to overcome present ways of thinking and acting; going to a psychologist means you want to go beyond the way you presently do things. But the psychologist will not be of aid if she relies on theories of human action that do not sketch a realistic account of the person she’s helping. And she will be less and less helpful the more her advice involves the client shutting off or suppressing parts of himself in ways that would impoverish his life or cause him to feel less human.
I’m not sure I have much of a hard conclusion here, except to say that I suspect philosophy is increasingly guilty of sketching ways of being most people can’t find livable. Philosophers deal with questions that many people wrestle with, but do so in ways that jettison extra-rational commitments far in excess of where most people would wish to go.
For a masterful pragmatic account of belief and its motivated nature, I recommend locating an out-of-print book by the British pragmatist F.C.S. Schiller called Problems of Belief.