When Philosophy Gets Human Beings Wrong

by Kevin Currie-Knight


In the past year, I’ve read two books on how people’s minds change. The latest, How Minds Change, is by science writer David McRaney. Previously, I’d read Stop Being Reasonable, by philosopher Eleanor Gordon-Smith. Both attempt to drill into “what we know” about how real people in the real world go about changing their minds. But both have another theme in common, though a less overt one: the significant discrepancy between how philosophers tend to conceptualize human beings and how we tend to work in the real world.

As a philosopher myself, I find this theme both interesting and concerning, mostly because I increasingly sense this problem even when I can’t put my finger on it. The short version is this: philosophers tend to conceptualize human cognition in their own image. We encounter and entertain positions and the reasons for them, we deliberate on those reasons, and we conclude accordingly. And while philosophers will be all too glad to acknowledge that humans often fall short of this ideal, they insist it is the ideal to which we should aspire.

McRaney and Gordon-Smith describe a different situation. Yes, we judge based on reasons, but belief is produced most often not by reason alone, but also by extra-rational factors like how the belief makes us feel, what types of arguments our life history has predisposed us to be more favorable towards, what consequences a position has for our self-image, etc. In some sense, the moral of these two books – implicit in Gordon-Smith’s title – is that if we attempt to change people’s minds, we have to deal not only in reason, but in the extra-rational factors in belief. In other words, do not look to (most) philosophers as your guide to real-world human decision-making. 

This should be a problem for any area of philosophy that represents itself as sketching a picture of how things could and should work in the real world. Jurisprudential theory does not have a problem if it sketches judges as ideally thinking in ways that are at odds with how most humans would think, because it is talking about the ideal for judges, not human beings generally. (The same goes for, say, philosophy of science describing the epistemology of scientists.) But areas like moral and political philosophy surely do have a problem when they – as they often do – sketch theories of morality or justice that depend on deliberative processes most people would not recognize as part of a livable life.

Stephen Asma elucidates what I suspect is the difference between the philosopher and the regular person in his book, Why We Need Religion. Where the non-believer faults the religious person for believing something for which there is no good evidence, Asma points out that humans function on two levels: the indicative and the imperative. The indicative level is where humans get at what is true, and the imperative level is where humans get at what will help them navigate the world. Asma points out (as has Donald Hoffman) that these two can but need not overlap. He also notes that by and large, humans aspire to navigate the world more than to get at truth, and when doing the latter, they do it with the aim of using those truths to navigate the world.

Since Plato, it seems like much of philosophy has treated getting at knowledge of truth (sophia) as the overriding goal and treating well-being (eudaimonia) as subordinate. What Asma – and indirectly, McRaney and Gordon-Smith – are getting at is that this is the opposite of the world most humans live in, where those goals are decidedly reversed.

In fairness, a number of philosophers have recognized the motivated nature of belief and truth-seeking. Hume did when he famously wrote that reason is and ought to be the slave of the passions. The pragmatists did when they suggested that, for beings whose goal is to act successfully in the world, we call beliefs that help us do this true (often refusing to specify what standard of measurement we should use in judging this). [1] Various others have gotten there in different sub-disciplines, such as care ethicists who argue for the role partiality and sentiment necessarily play in moral behavior.

However, a larger group of philosophers have downplayed Asma’s imperative (navigating) function in favor of the indicative (truth-seeking) function. Moral and political philosophers have placed great emphasis on the importance of impartial reasoning in charting proper courses. We can arrive at the best political outcomes, the story goes, by doing democracy deliberatively, where we all come together and do our best to talk about issues in detached, secular, and non-sophistical ways until we reach (again impartial) consensus. We are to prize, as Toulmin puts it, the rational over the reasonable: what accords with logic over what seems sensible for our lives (if philosophers even recognize a distinction between the two).

I wonder if this is the best way to understand why philosophy – even though it purports to be about helping humans live better – seems such a dull subject to so many people. Is it because even when philosophy tackles questions that so many people struggle with, it does so in ways those people find unrecognizable and inapt to the lives they live in Asma’s imperative realm?

It reminds me of a YouTube discussion between four (nonprofessional but serious) philosophers on the problem of evil and theism. The problem is whether the existence of evil is compatible with belief in a theistic god, surely a problem a lot of people who aren’t philosophers think about. While I found the video interesting, I noticed how quickly the discussion shifted from whether theism makes sense given the existence of evil (and vice versa) to whether it was logically possible to believe in God in the presence of evil. And those seem like different questions. If the former is about whether a belief in God is a good belief to have in the presence of evil, the latter turns it into a logic game: “Well, if we conceive of evil in these theoretical terms and God in those theoretical terms, there is no inconsistency.” That latter thing may be interesting for someone like me, but it won’t likely compel someone whose faith in the God of Catholicism is shaken by her brother’s murder. Any philosopher who says or talks as if it should will likely make themselves even less relevant in that situation.

I think the same way about other areas of philosophy. Works in epistemology are often fascinating, but they quickly exceed any type of dilemma most people will ever experience with regard to knowledge. They depict epistemic questions that few outside the field will seriously raise and answers that few would take as serious answers to questions like whether I am sure the plumber will arrive on Wednesday, God is as my Church says he is, or fish feel enough pain for me to feel bad about eating them. Reasons count in these cases, but (a) so do other factors (how ostracized will I be if I become vegetarian, do I trust my Pastor, etc.); and (b) the regular person wants answers that are good enough to get on with things, not that are decisive to the standards of a professional philosopher.

Is this a problem? Could the philosopher’s standard be one that most people do not, but should, attain in their lives? After all, were someone training to be a scientist and told to be less biased in how they test a hypothesis, it wouldn’t be a convincing response to point out that most people are biased in their everyday lives. Maybe we should do better, aspire to be more rational and less reliant on extra-rational factors.

But here is the difference between the aspiring scientist and the philosophy student. The aspiring scientist is learning to enter a specific practice; they are aspiring to be a scientist, and when they are doing science, biases they might permit in their everyday lives would undermine their work. The philosophy student, however – and especially in areas like epistemology, or moral and political theory – is being told that the philosophical ways they are learning about will help them be better people, not just better specialists. At some point, though, being a better human being cannot proceed through techniques that eliminate much of what (people believe) make them human.

It likely isn’t either/or here. Philosophers have some grounds to tell people that the ways of real-life thinking are sometimes sloppy, and that we know of ways to tighten them up. Partiality and extra-rational considerations might make up a lot of what makes you human, but they are sometimes what lead humans astray, and learning to lessen them on at least some occasions might help us think better. But even then, there is a sweet spot that I think many of us philosophers miss, mostly because we forget how peculiar and foreign our own preferred thought processes are. We also forget the types of important things our methods can swallow up: non-rational factors like partial attachments, adherence to traditions, and areas of life that can be cheapened by overemphasis on the rational.

A final analogy might be helpful. Psychologists help people overcome problems, and some write accounts of what humans are like in hopes that those accounts will be of help. When a client goes to a psychologist, the psychologist surely helps them try to overcome present ways of thinking and acting. Going to a psychologist means you want to go beyond the way you presently do things. But the psychologist will not be of aid if she relies on theories of human action that do not sketch a realistic account of the person she’s helping. And she will be less and less helpful the more her advice involves the client shutting off or suppressing parts of himself in ways that would impoverish his life or cause him to feel less human.

I’m not sure I have much of a hard conclusion here except to say that I suspect philosophy is increasingly guilty of sketching ways of being most people can’t find livable. Philosophers deal with questions that many people wrestle with, but do so in ways that jettison extra-rational commitments far in excess of where most people would wish to go.


[1] For a masterful pragmatic account of belief and its motivated nature, I recommend locating an out-of-print book by the British pragmatist F.C.S. Schiller called Problems of Belief.


7 responses to “When Philosophy Gets Human Beings Wrong”

  1. Great topic and one that I have spent quite a bit of time thinking about. It is also one that I think will provide much new ground for philosophers to think through. My own general view is that recent philosophy often takes too narrow a view of what is “reasonable” or “rational” by excluding pragmatic reasons for belief.

    I think philosophy too often uses scientific propositions as the model of how beliefs work. But often scientific beliefs have very little effect on our day-to-day lives. Let’s say for whatever reason science discovers that it predicted the age of the earth wrong and it is actually a billion years older than originally thought! Sure, a lot of science might have to be reconsidered by such a thing – I don’t know, I just tend to trust whatever the consensus is. So the scientists who work in fields that may be affected might have their day-to-day thinking change. But I would think most people would say “wow, that is really interesting, the earth is a billion years older than I thought,” and this change of belief would not have any other appreciable effect on their life. In fact, many such scientific beliefs don’t really change much in most people’s day-to-day decision making. Even if we go with extreme cases, like whether the earth is the center of the solar system: sure, that caused a stir because of the religious stuff, but if someone was not invested in a particular reading of a passage of scripture, Galileo’s view would be interesting, but it wouldn’t change the price of bread or how people lived.

    Now sure, developing antibiotics, electric lights, etc. have had big effects on how we live. But even there, the effects on the decisions we make as we live are often indirect at best. We can live longer, but that doesn’t mean we have a better idea of what to do with the extra time. We learn more with the internet, but what should we learn about? What is important to learn? Learn to live even longer? To what end? The hope that someday we will learn what to do with the extra time?

    Using scientific beliefs as the model leads philosophers to think “beliefs” “should” solely be formed on the probabilities yielded by the evidence. (I put “beliefs” in quotes because I think that word could use clarification in philosophy, and I also put “should” in quotes because it is unclear how such epistemic norms should stack up against other norms, such as moral norms.) But what if we don’t have enough evidence to believe X and we don’t have enough evidence to believe not-X? Well, then some try to come up with “burdens of proof” and/or say we should “withhold belief” unless a certain burden is met, or something like that. Withholding our belief seems fine when we are talking about the age of the earth or even what celestial entities are orbiting others. But when it comes to certain moral or religious questions, life doesn’t really seem to allow that. You are either going to church on Sunday or you are not. You are either going to teach your children it is wrong to eat meat or you are not. You yourself will eat meat or not. What would “withholding belief” on the morality of these cases look like? Sometimes I eat a burger and go to church, sometimes I don’t, and it is just random? I talk about how I think the term “belief” has a couple of different meanings here:

    So we realize there are questions that are important to how we live our lives and questions that really aren’t that important (and as wonderful as science is, it often is not giving us the important answers). We don’t think it is rational to live your life with the goal of expunging from your mind as many beliefs as possible that do not meet a “burden of proof” and collecting as many as you possibly can that do. I mean, even if we thought that old telephone books were reliable enough to meet “the burden of proof” to rationally believe someone’s address shortly before publication, no one would think it is “reasonable” or “rational” to spend your time trying to memorize old phone books from 20 years ago so you could fill your head full of likely true beliefs about random people’s addresses.

    So I would distinguish two goals:

    1) People want to believe what is true

    2) People want to believe things that lead them to do the right thing

    I try to at least sketch out reasons to think the second goal can be more important than the first.


  2. Thanks, Dan. In private conversation, you offered a particular criticism that is definitely worth responding to. Feel free to rephrase or elaborate on my understanding of it.

    You wrote that it is something of a category mistake to respond to a proposed norm with a retort about how humans don’t behave that way. That, I guess, is because the very point of a norm is to move away from how humans DO behave into how they SHOULD behave. So, a philosopher can surely say “Look, I know humans form and evaluate beliefs in the way you describe. I’m just saying they are in some cases at least wrong to do so and I want to sketch and argue for a better way to do it, the way they SHOULD do it.”

    That’s fair, and I probably should have been more careful in the piece to attend to this. But I do think there is a point where a norm can become so ignorant of what it is realistic to expect of people that the norm becomes increasingly irrelevant, or maybe foreign, to the people you are recommending it to. Buddhists can say all they want that we should strive to be detached from everything, but many will find that impossible, and more will find it an ideal that so excises part of what makes their lives worth living that it is unattractive and uninspiring.

    I have, for instance, a huge problem with ideas about deliberative democracy for this very reason. Politics is partial and passionate; it just is, at least the way most people do and will always think about it. Even if we could attain some set of discursive rules where we discussed these things in non-rhetorical and impartial ways, changing our minds when arguments (rather than passions) dictate – and I reject that this is even coherent to begin with – the idea is understandably appealing to no one; it is too unhuman.

    Particularly, I think that as creatures evolved/designed/whatever to act in the world, we are practically destined to be partial and motivated (by survival and its extensions) in our reasoning. To say “But you shouldn’t be!” is to say “But you should aspire to be less human than you are,” and in a way that would be different from “You should aspire to be less greedy/vengeful/insert-human-frailty.” I don’t think partiality and motivation in reason is a frailty per se; removing it would cause as much damage to our ability to reason as leaving it as a player in the reasoning process.

  3. The reason I didn’t raise it in the comments is that once I read the entire piece, I felt you addressed it.

  4. Hi Kevin: This is, as Dan says, terrific. You are describing a very real problem, at least for anyone who thinks, as we do, that philosophy contains much that is valuable and yet often fails to connect with the lives and thought processes of the intelligent public. I especially like the way you don’t try to force a simple solution on the problem.

    I wrote an article last year which tries to grapple with the problem from the viewpoint of how to design an ethics course for students who are not philosophy majors. I try to show how an ethics focused on various forms of social practice is superior, for that purpose, to the usual ethics derived from ethical theories.

    This is the abstract:

    The purpose of this article is to discuss the concept and the content of courses on “social ethics”. I will present a dilemma that arises in the design of such courses. On the one hand, they may present versions of “applied ethics”; that is, courses in which moral theories are applied to moral and social problems. On the other hand, they may present generalised forms of “occupational ethics”, usually professional ethics, with some business ethics added to expand the range of the course. Is there, then, not some middle ground that is distinctively designated by the term “social ethics”? I will argue that there is such a ground. I will describe that ground as the ethics of “social practices”. I will then illustrate how this approach to the teaching of ethics may be carried out in five domains of social practice: professional ethics, commercial ethics, corporate ethics, governmental ethics, and ethics in the voluntary sector. My aim is to show that “social ethics” courses can have a clear rationale and systematic content.

    The article is accessible here: https://philpapers.org/rec/TAPWSB


  5. Joe,

    Thanks for the response. Yes, I think the two things philosophers use that get them away from how most people think are (a) a mode of science modeled on physics, and (b) the rules of logic. Both are great things, and if finding truth and refining argument is the goal, being grounded in them obviously helps.

    The problem is, as I say in the piece, that philosophy often puzzles with these methods over problems involving people (ethics, politics, epistemology) who don’t themselves tend to use or resemble these methods.

    The area I think of particularly is human psychology. The reason I find psychologists so much more compelling than philosophers in understanding human belief and mind has largely to do with the fact that psychologists are not so much trying to make consistent and decisive arguments as trying to describe, however fuzzy or contradictory those descriptions turn out to be. And when philosophers insist that such-and-such description of human cognition can’t be right because it falls afoul of logic (is inconsistent, contradictory, etc.), I wonder what about humans and our evolution makes anyone think that humans WOULD be logically coherent. I guess that’s where I side with the existentialists, and certainly the pragmatists (where it is interesting to note that James and Dewey were both psychologists before they were philosophers).

  6. […] in the pages of The Electric Agora, Kevin Currie-Knight pours cold water on this hyper-rational state of affairs and the role analytic philosophy plays in encouraging it. […]