by Kevin Currie-Knight
Three new books came out in April 2021 that I have been eager to read: Jesse Singal’s The Quick Fix; Julia Galef’s The Scout Mindset; and How to Keep an Open Mind, Richard Bett’s abridged and annotated translation of the works of Sextus Empiricus. Having read them, it occurred to me not only that they share a common theme – belief despite the possibility of error – but that they can be read as if in conversation with one another. Singal’s book aims to show us how erroneous beliefs in the social sciences take hold and why. The other two books offer us different ways of thinking about belief in light of the possibility of error.
In an interview with its author, Matt Taibbi called The Quick Fix a “debunking book.” I wish Singal had offered a partial correction to that characterization. The book does indeed debunk eight TED-worthy ideas from various areas of psychology. But there are two reasons I would not call it a “debunking book.” First, Singal correctly notes that all of the ideas the book takes aim at contain kernels of truth. The point is not that they are wrong so much as oversold and oversimplified. Second, while each chapter does debunk, the focus is as much on explaining how these ideas gained so much traction as it is on deflating them.
So, this is a book that deflates eight ideas from the social sciences, but also – and maybe more importantly – it is a book that warns us to approach belief more cautiously. The more an idea seems to reduce large and complex problems to small and simple fixes, the more TED-ified the idea, and the more its authors have a vested interest in championing it, the more skepticism we should apply.
But first, the fun part. Let’s smash some stuff! Singal takes one of his hammers to Implicit Association Tests (IATs). Developed by three researchers, these tests purport to detect hidden biases of various kinds (racist, sexist, ageist, etc.). Test-takers decide by clicking a button whether an image and a word belong together. Sometimes the image is of a white person and the word is a positive word; other times, the image might be of a black person and the word a negative word. If the test-taker pairs certain kinds of images (say, white faces) with certain kinds of words (positive words) more quickly than other combinations, this is taken to indicate a bias.
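To make the underlying logic concrete, here is a toy sketch in Python of how a reaction-time comparison of this sort might be scored. The latencies are invented and the score is a deliberate simplification; the actual IAT uses a more elaborate scoring algorithm (the so-called D-score), which this does not reproduce.

```python
# Toy illustration of scoring a reaction-time bias measure.
# NOTE: invented data and a simplified score; the real IAT's D-score
# algorithm involves block structure and error penalties not shown here.

from statistics import mean, stdev

# Hypothetical latencies (ms) for "congruent" pairings (e.g., white face +
# positive word) and "incongruent" pairings (e.g., black face + positive word).
congruent_ms = [612, 580, 645, 599, 630]
incongruent_ms = [701, 688, 725, 674, 710]

# A crude effect score: how much slower the incongruent pairings were,
# scaled by the spread of all responses (a rough analogue of the D-score).
effect = (mean(incongruent_ms) - mean(congruent_ms)) / stdev(congruent_ms + incongruent_ms)

print(f"Mean congruent: {mean(congruent_ms):.0f} ms")
print(f"Mean incongruent: {mean(incongruent_ms):.0f} ms")
print(f"Crude bias score: {effect:.2f} (larger = faster on congruent pairings)")
```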
The real story, however, is the quick cultural ascendancy of the IAT relative to the data supporting it. In short order, the test’s creators wrote a book (which I will admit to having taught in one of my courses), and the test came to be used in a number of diversity training programs. The problem is that over the years, scholars have offered numerous conceptual and empirical criticisms of the test, from questioning whether it measures what it claims to measure, to concerns that the same respondents may get wildly different scores each time they take it. The test’s authors responded in an interesting way. In academic settings, they have admitted that some of the criticisms have merit and toned down their own claims for the test. But in public forums, the authors and their numerous defenders continue to make extravagant claims about it.
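To see why the reliability criticism bites, consider a toy illustration (invented numbers, not actual IAT data): if the same people’s scores across two administrations barely correlate, the test can hardly be measuring a stable trait in individuals.

```python
# Toy illustration of test-retest reliability (invented scores, not IAT data).
# A measure of a stable trait should yield highly correlated scores across
# sessions; critics argue the IAT's retest correlation falls short of that.

from statistics import correlation  # Python 3.10+

# Hypothetical bias scores for the same eight people, weeks apart.
session_1 = [0.62, 0.15, 0.48, -0.10, 0.33, 0.71, 0.05, 0.40]
session_2 = [0.10, 0.55, -0.05, 0.42, 0.68, 0.02, 0.49, 0.12]

r = correlation(session_1, session_2)
print(f"Test-retest correlation: r = {r:.2f}")
# Correlations well below the conventional ~0.7-0.8 threshold are generally
# considered too unreliable for diagnosing individuals, whatever the test's
# value for group-level research.
```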
Singal tells a similar story about power posing, the idea that one can improve confidence and bargaining position by adopting certain kinds of body language associated with strength. (The 2012 TED Talk given by one of the primary researchers, Amy Cuddy, quickly became one of the most-watched TED Talks of all time.) Again, the book deal and speaking engagements came, and the relevant claims became increasingly extravagant. Few seemed to notice the mounting evidence questioning the researchers’ initial findings. (Here, the big questions were about the statistical significance of the findings and took aim mostly at the researchers’ apparent reliance on ‘p-hacking’, a practice that may have led them to overstate the probability that their results weren’t simply a matter of chance.) Even after one of the original researchers became convinced that the data didn’t support the original conclusion, a star idea had nonetheless been born, and it has yet fully to die.
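For readers unfamiliar with the term, the following is a small, self-contained simulation of my own (nothing in it comes from the power-posing studies) showing why p-hacking inflates findings: if you test enough arbitrary slices of pure noise and report only the best p-value, “significant” results appear far more often than the nominal 5% rate.

```python
# Simulation of one form of p-hacking: run 20 independent tests on pure
# noise and keep whichever gives the smallest p-value. Even with NO real
# effect, "p < .05" turns up far more often than 5% of the time.
# Purely illustrative; the numbers have nothing to do with the actual studies.

import random
from math import sqrt
from statistics import NormalDist, mean, variance

random.seed(0)

def approx_p(a, b):
    """Two-sided two-sample p-value via a normal approximation."""
    z = abs(mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))
    return 2 * (1 - NormalDist().cdf(z))

TRIALS, TESTS_PER_STUDY, N = 2000, 20, 30
false_positives = 0
for _ in range(TRIALS):
    # 20 independent looks at pure noise, keeping the best (smallest) p-value.
    best_p = min(
        approx_p([random.gauss(0, 1) for _ in range(N)],
                 [random.gauss(0, 1) for _ in range(N)])
        for _ in range(TESTS_PER_STUDY)
    )
    false_positives += best_p < 0.05

print(f"'Significant' findings from pure noise: {false_positives / TRIALS:.0%}")
# Expected: roughly 1 - 0.95**20, i.e., about 64%, not the nominal 5%.
```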
Debunking is fun. It is like a detective story for nerds. Each chapter tells the tale of an idea’s hasty ascent, the accumulation of skeptical voices, and those voices subsequently being met with everything from incredulity (the continued ascendancy and employment of Implicit Association Tests) to reluctant and slow acceptance (the now-obsolete idea of superpredators).
But really, this is a book about belief and how it can go wrong. Not only the beliefs of a public eager for the next new thing, but those of journalists dying for another sexy science story and of policy makers craving the next cure for our myriad social ills. It is also a story about how the beliefs of well-trained, -credentialed, and -esteemed academics can go terribly wrong. And when we put all of those parties and all of that belief together … well, then.
One culprit, of course, is the lure of the simple and exciting story. As Singal notes, “It’s likely that just as our brains prefer simple stories, within psychology too the professional incentives point toward the development of simpler rather than more complex theories.” To combat racism, we could either build up the political and economic will to change the social systems that allow racism to flourish … or we could take a minutes-long Implicit Association Test that will allow us to detect our biases. To address unequal outcomes in education and elsewhere, we can either take a hard look at the structures that make such inequalities likely … or we can teach vulnerable kids to have more grit and to “eagerly power through.”
Another culprit is the thin line separating disinterested from interested scholarship and belief. For all their differences, each story has the same rough arc: scholars generate findings based on limited data; journalists pick up on the findings, and the scholars are eager to help; then come the TED Talk, the book deal, and other forms of celebrity; and then arrive the skeptical researchers, who point to flaws in the original studies and fail to generate similar results when the studies are replicated. Alas, by then, the original researchers have become champions of the idea and have a lot riding on it (reputation, grants, publicity), so they either do not see or are unwilling to admit that they have gotten things wrong.
We outsource a lot of our thinking to experts. Partly, this is because most of us lack the relevant knowledge and expertise on any number of subjects. So, we trust experts, and we justify this by appeal to their expertise and by trust in the processes by which the experts and their ideas are vetted. Yet Singal’s book reminds us not only that experts can have their objectivity and trustworthiness distorted (even beyond their own ability to perceive it), but that human biases and blind spots also creep into the vetting process itself, namely peer review.
Before a half-baked idea can arrive on the cultural scene, it must be born, and in academia, that means making it through the peer-review process. With some lament, Singal notes that “when an idea is stamped with BELIEVED BY IVY LEAGUE EXPERTS, that is often enough to spur its spread.” Unfortunately, Singal spends little time in the book identifying the weaknesses of the peer-review process. I’m familiar enough with it (both as submitter and reviewer) to briefly rehearse some of them. First, most who have been through the process know how tainted by subjectivity it is. My peers and I often share stories about how one reviewer will find fault with the sample size of our study, how question 4 on our survey was worded, why we didn’t incorporate x theoretical framework, and how our conclusion seemed a tad too strong for our findings. And yet a different reviewer will mention none of this and enthusiastically recommend the article for publication! Second, if you get a rejection from one journal, you can always just send the piece to another with a “better” acceptance rate. (Sure, that journal may be less prestigious, but your finding is now in a peer-reviewed publication!) Finally, a journal can only review what is written and submitted, and scholars have little incentive to undertake a study unless they think they will find something interesting. Sometimes, there are even creative ways to ensure that occurs!
All of this leaves us in an epistemic pickle, and it poses a challenge that the next two books I mentioned think they can answer. How should we go about deciding what to believe and what not to believe in a world where the experts can and do get it wrong? Some might choose a sort of universal skepticism, at least toward any findings from certain fields. (“That’s a finding from social science. You just shouldn’t believe it.”) This seems impractical, at least in situations where you have to act on some idea about what to believe. Others might go in the opposite direction and proceed full speed ahead. (“Well, the experts believe it and it seems plausible, so….”) But that just ignores the problem.
The next book I will look at, Julia Galef’s The Scout Mindset, proposes that we simply be more careful in how we believe. Galef readily acknowledges that humans are inextricably biased. But that we are biased doesn’t mean we can’t recognize our biases, nor that we can’t find creative ways to minimize their effects. The book I will review after that, How to Keep an Open Mind, is an abridged translation of some works by the Pyrrhonian skeptic Sextus Empiricus, who held that in as many cases as possible, the best thing to do is withhold any firm belief, something we can do by always reminding ourselves of reasons we shouldn’t be confident in our judgments. I will have problems with both of these approaches, but we’ll cross those bridges when we get to them.
 In order, the ideas are: (1) the self-esteem movement; (2) the criminology of “superpredators”; (3) the effectiveness of power posing; (4) the Penn Resilience Program; (5) grit theory; (6) Implicit Association Tests; (7) the psychology of priming; and (8) nudge theory. My own field of education, unfortunately, has fallen quite hard for (1), (5), and (6), and shows no signs of wavering.
 We might not defer to experts in entirely unthinking ways. We might check what they tell us against what we know from elsewhere (which opens up its own problem of why we trust that information), against whether it makes intuitive or common sense, and so on. Yet the farther an expert’s field lies outside our own skill set, the less checking of any sort we can do, and the more trust (or uninformed skepticism) we must extend.