By Kevin Currie-Knight
Early in her book, The Scout Mindset: Why Some People See Things Clearly and Others Don’t (Portfolio, 2021), Julia Galef comes right out and says: “Motivated reasoning is so fundamental to the way our minds work that it’s almost strange to have a special name for it; perhaps it should just be called reasoning.” Jesse Singal’s The Quick Fix chronicled how social scientists can fall prey to such motivated reasoning. If the ideal is objective scientists dispassionately following evidence wherever it leads, the reality is one of scientists with human commitments and biases sometimes coming to bad conclusions that they defend doggedly. So, what chance do the rest of us have of reasoning well?
The Scout Mindset is Julia Galef’s advice manual on how to reason better, and her approach differs a bit from that of comparable manuals that have come before it. Galef thinks the primary problem is that we reason as “soldiers” when we should be reasoning as “scouts.”
We are probably all familiar with those who reason as soldiers, and most of us have done our share of it ourselves. The aim – whether conscious or not – is to win. When one is challenged with data or arguments, the soldier’s job is to refute them rather than honestly consider them; to defend one’s existing position rather than dispassionately submitting it to challenge. Taking a cue from Lakoff and Johnson’s Metaphors We Live By, Galef reminds us that this type of reasoning is so typical that it is even reflected in the way we often talk about reasoning. Arguments are either forms of “attack” or “defense.” If we are not careful, someone might “poke holes” in our logic or “shoot down” our ideas. We might encounter a “knock-down” argument against something we believe. Our positions might get “challenged,” “destroyed,” “undermined,” or “weakened.” So we look for evidence to support, bolster, or buttress our position. Over time, our views become reinforced, fortified, and cemented, and we become entrenched in our beliefs, like soldiers in a foxhole.
The scout, by contrast, has a job that demands detachment from battle. The scout, writes Galef, is interested in seeing as clearly as possible, even when that means giving up on a strategy to which one was previously committed. Rather than reasoning as a means of vindicating our current beliefs, Galef wants us to become more like the detached scout, who remains ever open to challenge.
Since motivated reasoning has a social dimension (friendships, reputation, pride, etc.), Galef advises that we reframe how we think about error, from “I’m wrong about this” to “I’m recalibrating for future accuracy.” And since motivated reasoning can also stem from our biases, she suggests finding ways to examine our current beliefs at a level of imagined detachment (imagining ourselves placing a bet on our current position) or real detachment (learning ways to better acquaint ourselves with, and take seriously, opposing voices).
Most problematic, in my view, is Galef’s advice about learning to be more detached from our beliefs, or, as she puts it, “learning to hold your identity lightly.” This may be my own postmodernist tendencies talking, but this strikes me as asking something unrealistic of people. She spells it out here: “Intelligence and knowledge are just tools. You can use those tools to help you see the world clearly, if that’s what you’re motivated to do. Or you can use them to defend a particular viewpoint, if you’re motivated to do that instead.” [p. 49]
I find it implausible that there is any sort of divide between those who see the world clearly and those who argue as “soldiers,” let alone that such a divide stands much chance of being discovered through introspection. Anyone who has been involved in a heated argument knows how slim the chances are that “Hey, you should take a step back and see things more clearly” will change the tenor of the dispute. Soldiers rarely think that they are scouts, and being a soldier cannot be reduced to “not seeing the world clearly.” We form beliefs because we think they are correct, and once we do, our inclination is to hold them strongly, yielding only to what seem to us demonstrably better ones.
I think much of Galef’s advice is sensible, particularly the bits about finding ways to approach our beliefs as if they were not our beliefs. What I find unconvincing is her treatment of bad reasoning as a problem primarily of individual characteristics rather than social ones. For instance, Singal’s book is about how social scientists get things wrong by engaging in motivated reasoning, but a careful look suggests that most of these errors in reasoning are products of social incentives and structures. Scientists expressed overconfidence in their conclusions, but within a system of journal publishing that rewards exactly that. Scientists defended their claims against evidence that more neutral parties would have found convincing, but in very public forums where little good can come from publicly admitting error. We can and should expect social scientists to reason well, but we should also acknowledge the social factors that make it harder for them to do so.
If I reflect on the many times I’ve changed my mind – or when I haven’t! – I can readily spot the environmental factors that affected the outcome. I am more likely to change my mind after a heated argument (or many heated arguments), when I am able to calmly think to myself and do not have to admit, in the heat of the public moment, that my position was wrong. I am more likely to consider and admit others’ points when I respect the other person, when we are talking privately rather than in front of an audience, and when I trust that they are not looking for victory. I notice that debates on social media bring out the soldier in me faster than I’d like, and I suspect this has less to do with me than with the incentives social media companies have introduced after a lot of thought and research into what drives engagement.
None of this means that Galef’s recommendations are bad. Sometimes we can be better reasoners by pulling ourselves up by our cognitive bootstraps, however biased those bootstraps may be. I do wish, however, that some of her advice pertained to recognizing the importance of our surroundings and the incentives involved in how we reason. If you want to be less soldier-like on social media, Galef’s advice can probably move you a bit in that direction. But what will likely help even more is realizing the ways in which social media incentivizes you to remain in the soldier’s mindset.
Like Vygotsky and Fernyhough, I believe that most of what we do when reasoning critically is dialogic, by which I mean that we are essentially internalizing the type of dialogue we might have with differently situated others. “It is justified because x. Why do I think that? Because y. But couldn’t someone plausibly argue against y?” If this is right, better critical reasoning means being better able, and more willing, to incorporate the voices of intelligent critics into our internal dialogues.

In the end, I suspect that Aristotle made a mistake when he suggested that what distinguishes us from brutes is our rationality. It’s true to a degree, but I think that what really separates us has more to do with humans’ capacity to believe, and to believe strongly. Frankly, we are better at belief than at reasoning to (or out of) it. That’s why there are so many books on how we might reason better, while few find much need to produce books with advice on how to come to quicker and more strident beliefs. The last of my three book reviews will be on How to Keep an Open Mind, a recently translated set of writings by Sextus Empiricus. If the question is how best to form beliefs, given the possibility of error, Sextus’s advice is to do what you can to avoid forming them at all. We’ll see how that goes.