Three New Books: One – Jesse Singal's "The Quick Fix"

by Kevin Currie-Knight

____

Three new books came out in April 2021 that I have been eager to read: Jesse Singal's The Quick Fix, Julia Galef's The Scout Mindset, and How to Keep an Open Mind, Richard Bett's abridged, annotated translation of the works of Sextus Empiricus. Upon reading them, it occurred to me not only that they share a common theme – belief despite the possibility of error – but that they can be read as if in conversation with one another. Singal's book aims to show us how erroneous beliefs in the social sciences take hold, and why. The other two books offer us different ways of thinking about belief in light of the possibility of error.

___

In an interview with its author, Matt Taibbi called The Quick Fix a “debunking book.” I wish Singal had offered a partial correction to that characterization. The book is indeed a work debunking eight TED-worthy ideas from various areas of psychology. [1] But there are two reasons I would not call this a “debunking book.” First, Singal correctly notes that all of the ideas the book takes aim at contain kernels of truth. The point is not that they are wrong so much as that they are oversold and oversimplified. Second, while each chapter does debunk, the focus is as much on explaining how these ideas gained so much traction as it is on deflating them.

So, this is a book that deflates eight ideas from the social sciences, but it is also – and maybe more importantly – a book that warns us to approach belief more cautiously. The more an idea seems to reduce large and complex problems to small and simple fixes, the more TED-ified the idea, and the more its authors have an interest in championing it, the more skepticism we should apply.

But first, the fun part. Let’s smash some stuff! Singal takes one of his hammers to Implicit Association Tests (IATs). Developed by three researchers, these tests purport to detect hidden biases of various kinds (racist, sexist, ageist, etc.). Test-takers decide, by clicking a button, whether an image and a word belong together. Sometimes the image is of a white person and the word is a positive word; other times, the image might be of a black person and the word a negative word. If the test-taker pairs certain kinds of images (say, white faces) with certain kinds of words (positive words) more quickly than other pairings, this is taken to indicate a bias.
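To make the scoring logic concrete, here is a minimal sketch in Python of the basic idea. This is my own toy illustration, not the IAT’s actual scoring procedure (which uses a standardized D-score with error penalties and trial filtering); the function name, the sample reaction times, and the 50 ms threshold are all assumptions for illustration.

```python
from statistics import mean

def naive_iat_score(compatible_ms, incompatible_ms, threshold_ms=50):
    """Toy scoring: compare mean reaction times (in milliseconds) between
    'compatible' pairings (e.g., white face + positive word) and
    'incompatible' pairings (e.g., black face + positive word).
    A large positive gap is read as evidence of an implicit bias."""
    gap = mean(incompatible_ms) - mean(compatible_ms)
    return gap, ("bias indicated" if gap > threshold_ms else "no bias indicated")

# Hypothetical reaction times (ms) from a single test-taker
compatible = [612, 580, 645, 598, 630]
incompatible = [702, 688, 715, 690, 705]
print(naive_iat_score(compatible, incompatible))  # gap of 87 ms -> "bias indicated"
```

Even this toy version hints at the reliability worry discussed just below: small session-to-session fluctuations in raw reaction times can flip the verdict from one administration of the test to the next.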

The real story, however, is the quick cultural ascendancy of the IAT relative to the data supporting it. In short order, the test’s creators wrote a book (which I will admit to having taught in one of my courses), and the test came to be used in a number of diversity training programs. The problem is that over the years, scholars have offered numerous conceptual and empirical criticisms of the test, from questioning whether it measures what it claims to measure, to concerns that the same respondents may get wildly different scores each time they take it. The test’s authors responded in an interesting way. In academic settings, they have admitted that some of the criticisms have merit and toned down their own claims for the test. But in public forums, the authors and their numerous defenders continue to make extravagant claims about it.

Singal tells a similar story about power posing, the idea that one can improve confidence and bargaining position by adopting certain kinds of body language associated with strength. (The 2012 TED Talk given by one of the primary researchers, Amy Cuddy, quickly became one of the most watched TED Talks of all time.) Again, the book deal and speaking engagements came, and the relevant claims became increasingly extravagant. Few seemed to notice the mounting evidence questioning the researchers’ initial findings. (Here, the big questions were about the statistical significance of the findings and took aim mostly at the researchers’ use of a practice called ‘p-hacking,’ which may have overstated the probability that their results weren’t simply a matter of chance.) Even after one of the original researchers became convinced that the data didn’t support the original conclusion, a star idea had nonetheless been born, and it has yet fully to die.
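Since ‘p-hacking’ may be unfamiliar, the term refers to trying many analytic choices (different outcome measures, subgroups, exclusion rules) and reporting whichever comparison clears p < 0.05. A toy simulation of my own (not a reanalysis of the power-posing data; the five tries, the group size of 20, and the crude t-style cutoff are arbitrary assumptions) shows why this overstates significance even when there is no real effect:

```python
import random
from math import sqrt
from statistics import mean, stdev

def looks_significant(a, b, cutoff=2.0):
    """Crude Welch-style check: flag 'significant' when |t| exceeds a cutoff
    roughly corresponding to p < 0.05 at these sample sizes."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return abs(mean(a) - mean(b)) / se > cutoff

random.seed(0)
trials, hits = 1000, 0
for _ in range(trials):
    # Null world: 'posers' and controls come from the same distribution,
    # so any 'effect' detected is a false positive.
    found = False
    # The p-hacking move: test five outcomes and report any that comes up
    # significant (modeled here, for simplicity, as independent tests).
    for _ in range(5):
        posers = [random.gauss(0, 1) for _ in range(20)]
        controls = [random.gauss(0, 1) for _ in range(20)]
        if looks_significant(posers, controls):
            found = True
            break
    hits += found

print(f"False-positive rate with five tries: {hits / trials:.1%}")  # well above 5%
```

A single honest test would come up ‘significant’ roughly 5 percent of the time by chance; with five tries, the chance of at least one spurious hit climbs toward 20–25 percent, which is the basic worry the power-posing critics raised.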

Debunking is fun. It is like a detective story for nerds. Each chapter tells the tale of an idea’s hasty ascent, the accumulation of skeptical voices, and those voices subsequently being met with everything from incredulity (the continued ascendancy and employment of Implicit Association Tests) to reluctant and slow acceptance (the now obsolete idea of superpredators).

But really, this is a book about belief and how it can go wrong. Not only the beliefs of a public eager for the next new thing, but of journalists dying for another sexy science story, and policy makers craving the next cure for our myriad social ills. It is also a story about how the beliefs of well-trained, -credentialed, and -esteemed academics can go terribly wrong. And when we put all of those parties and all of that belief together … well, then.

One culprit, of course, is the lure of the simple and exciting story. As Singal notes, “It’s likely that just as our brains prefer simple stories, within psychology too the professional incentives point toward the development of simpler rather than more complex theories.” To combat racism, we could either build up the political and economic will to change the social systems that allow racism to flourish … or we could take a minutes-long Implicit Association Test that will allow us to detect our biases. To address unequal outcomes in education and elsewhere, we can either take a hard look at the structures that make such inequalities likely … or we can teach vulnerable kids to have more grit and to “eagerly power through.”

Another culprit is the thin line between disinterested and interested scholarship and belief. For all their differences, each story has the same rough arc: scholars generate findings based on limited data; journalists pick up on the findings, and the scholars are eager to help; then come a TED Talk, a book deal, and other forms of notoriety; and then arrive the skeptical researchers, who point to flaws in the original studies and fail to generate similar results when they attempt to replicate them. Alas, by then, the original researchers have become champions of the idea and have a lot riding on it (reputation, grants, publicity), so they either do not see or are unwilling to admit that they have gotten things wrong.

We outsource a lot of our thinking to experts. Partly, this is because most of us lack the relevant knowledge and expertise on any number of subjects. So, we trust experts, and we justify this by appeal to their expertise and by trust in the processes by which the experts and their ideas are vetted. [2] Yet Singal’s book reminds us not only that experts can have their objectivity and trustworthiness distorted (even beyond their own ability to perceive it), but that human biases and blind spots also creep into the vetting process itself, namely peer review.

Before a half-baked idea can arrive on the cultural scene, it must be born, and in academia, that means making it through the peer-review process. With some lament, Singal notes that “when an idea is stamped with BELIEVED BY IVY LEAGUE EXPERTS, that is often enough to spur its spread.” Unfortunately, Singal spends little time in the book identifying the weaknesses of the peer-review process. I’m familiar enough with it (both as a submitter and a reviewer) to briefly rehearse some of them. First, most who have been through the process know how tainted by subjectivity it is. My peers and I often share stories about how one reviewer will find fault with the sample size of our study, how question 4 on our survey was worded, why we didn’t incorporate x theoretical framework, and how our conclusion seemed a tad too strong for our findings. And yet a different reviewer will mention none of this and enthusiastically recommend the article for publication! If you get a rejection from one journal, you can always just send it to another with a “better” acceptance rate. (Sure, that journal may be less prestigious, but your finding is now in a peer-reviewed publication!) Not to mention that a journal can only review what is written and submitted, and scholars have no incentive to undertake a study unless they think they can find something interesting. Sometimes, there are even creative ways to ensure that occurs!

All of this leaves us in an epistemic pickle, and gives us a challenge that the next two books I mentioned think they can answer. How should we go about deciding what to believe and not in a world where the experts can and do get it wrong? Some might choose a sort of universal skepticism, at least toward any findings from certain fields. (“That’s a finding from social science. You just shouldn’t believe it.”) This seems impractical, at least in situations where you have to act on some idea about what to believe. Others might go in the opposite direction and proceed full speed ahead. (“Well, the experts believe it and it seems plausible, so….”) But that just ignores the problem.

The next book I will look at, Julia Galef’s The Scout Mindset, proposes that we simply be more careful in how we believe. She readily acknowledges that humans are inextricably biased. But that we are biased doesn’t mean we can’t find creative ways to minimize the effects of those biases, nor does it mean we can’t recognize them. The book I will review after that, How to Keep an Open Mind, is an abridged translation of some works by the Pyrrhonian skeptic Sextus Empiricus, who holds that in as many cases as possible the best thing to do is to withhold any firm belief, and that we can do this by always reminding ourselves of reasons we shouldn’t be confident in our judgments. I will have problems with both of these approaches, but we’ll cross those bridges when we get to them.

Notes

[1] In order, the ideas are: (1) the self-esteem movement; (2) the criminology of “superpredators”; (3) the effectiveness of power posing; (4) the Penn Resilience Program; (5) grit theory; (6) Implicit Association Tests; (7) the psychology of priming; and (8) nudge theory. My own field of Education, unfortunately, has fallen quite hard for (1), (5), and (6), and shows no signs of wavering.

[2] We might not defer to experts in entirely unthinking ways. We might check what they tell us against what we know from elsewhere (which opens up its own problem of why we trust that information), against whether it makes intuitive or common sense, etc. Yet the farther outside our own skill set the expert’s domain lies, the less checking of any sort we can do, and the more trust (or uninformed skepticism) we must give.

Comments

  1. Psychology has been a minefield of misinformation ever since its beginnings in Freud and Wundt. The subject is so complex that simplifying things is the only way to survive. It’s just that there are an infinite number of simple ways to see things, so, more than a hundred years later, we’re still shooting in the dark. The more uncertainty and complexity in a subject, the more we seem to be subject to intellectual fads – a kind of weaker version of a “paradigm” that tends to go with the human-science territory. Philosophy still has a job here to ferret out the unacknowledged assumptions that back up these “sciences” and analyze their strengths and weaknesses. It’s a tough, thankless job!
    Since the internet came along I’m curious as to whether it will spell the demise of peer-reviewed philosophical journals. Some of the greatest pieces of twentieth century philosophy were written in this format: Quine, Russell, Frankfurt, Searle, etc. but the vast majority of this material is destined to be forgotten, and read by a minutely small cadre of philosophers. One of the most influential philosophers of the last century – Wittgenstein – never wrote a thing in a philosophical journal.
    I wonder how the internet might change all that by bringing philosophy in a more simplified, less technical form to a vastly larger audience. I hope so, because it seems to me that in philosophy, as opposed to science, the peer review process is largely stifling initiative. Why not let a thousand flowers bloom rather than have everyone standing on their tiptoes, afraid to take any intellectual risks?

  2. This is marvellous stuff and I have just procured all three books. I foresee some enjoyable reading.

    From the foreword to Singal’s book

    We’re living in what the Princeton historian Daniel Rodgers calls an age of fracture, the title of his invaluable 2011 book. “Conceptions of human nature that in the post–World War II era had been thick with context, social circumstance, institutions, and history gave way to conceptions of human nature that stressed choice, agency, performance, and desire,” he explains. “Strong metaphors of society were supplanted by weaker ones. Imagined collectivities shrank; notions of structure and power thinned out.” In this dispensation, we are taken to be discrete individuals floating around in markets, increasingly responsible for our own well-being and increasingly cut off from the big groups and institutions and shared ideas that gave American life so much of its feeling and texture and meaning in the past.

    This strikes me as a powerful insight.

    The age of fracture has elevated the agency of the individual to an almost god-like status. He has sole, undisputed control over his consumption, pleasures and sexuality. Unbridled narcissism and solipsism are the order of the day. He has become the curator of his beliefs, assembling them at will from sources that appeal to him without regard to epistemic authority. And now the sources of epistemic authority are being corrupted.

  3. Charles,

    the peer review process is largely stifling initiative

    I don’t think so but it does add difficulties. There are two levels of peer review. The first is what we traditionally understand as peer review, the gatekeeper process of getting your paper published. The second and real form of peer review happens when fellow academics read and assess your paper. This is a filtering process that ultimately separates the gold from the dross. And this is where Mao’s dictum comes into play.

    Mao said, “Let a hundred flowers bloom; let a hundred schools of thought contend.”

    The point here is that the contention of differing schools of thought, post publication, is a form of peer review, and the most brutal form of peer review is when your paper languishes in obscurity with few to no citations.

    1. My observation of the stuff that gets published in journals is that it is too technical for its own good, and that it is trivial and unimaginative. My observation of Philosophy graduates is that they are too inhibited and afraid to take risks – they know they’ll be shot down the moment they come up with what they think is an original idea. But I wholeheartedly agree with what you call the second kind of peer review – the feedback from peers – it is both necessary and at times devastating.

    2. I think Mao’s intention in saying “let a hundred flowers bloom” was to flush out the Chinese intellectuals so that he could liquidate them.

      1. > I think Mao’s intention in saying “let a hundred flowers bloom” was to flush out the Chinese intellectuals so that he could liquidate them.

        Gotta respect a man with a plan 😉

    3. Regarding peer review and its pros/cons, I have some thoughts:

      1. Peer review is at least an okay system with some justification to it. My problem is less that there is a lot of subjectivity/contingency in the process than that the public and academia venerate the process as if it didn’t have these systemic problems.

      2. In some sense, peer review surely stifles innovation. It – along with university publishing incentives – ensures that anyone who wants to publish in journals will likely write very safe (in format or finding) pieces that reviewers will like. It also widens the gap between when findings are made, written up, and appear in journals – often by a year or more.

      3. But what stifles innovation just as much is the cloistered nature of journals. In fact, the more ‘prestigious’ the journal, the more likely it is to be behind an increasingly expensive paywall that sets its findings far out of the reach of nonacademic readers. That peer-reviewed articles often contain inaccessible jargon is bad, but it is probably to be expected in any field of high specialization… though I suspect that the cloistering of journals within academic circles only makes that problem worse.

      4. [I was going to put this in the original review but didn’t, for reasons of space] Point 1 is one reason I was deeply unimpressed by Peter Boghossian, Helen Pluckrose and James Lindsay’s study-that-wasn’t-a-study-because-we-didn’t-know-about-IRB hoax thingy. Simply put, they said that getting a bunch of hoax articles published in grievance studies journals showed the bankruptcy of those fields. Maybe. But without a control group, all it shows – every bit as plausibly – is the gameability of and flaws within the peer review system.

  4. “How should we go about deciding what to believe and not in a world where the experts can and do get it wrong? Some might choose a sort of universal skepticism, at least toward any findings from certain fields. […] This seems impractical, at least in situations where you have to act on some idea about what to believe. Others might go in the opposite direction and proceed full speed ahead. (“Well, the experts believe it and it seems plausible, so….”) But that just ignores the problem.”

    Three points.

    1. This looks like a false dichotomy. (Oversimplification.)

    2. Sure, experts increasingly seem to be untrustworthy (and for a range of reasons). This is unfortunate and inconvenient. It necessitates that we exercise caution and skepticism and withhold assent on most claims. The range of issues which require some kind of action from us is very limited however. For the rest, we don’t really *need* to take a stance. Skepticism as a default position is, I would suggest, entirely practical. And very necessary today if one wants to maintain credibility in the long term.

    3. Why spend (waste?) time on these generalized, second-order questions at all?

    1. “1. This looks like a false dichotomy. (Oversimplification.)”

      Yes, it is, Mark. What I meant to say, and should have said, is that those are essentially the extreme poles between which there exist many other approaches.

      “The range of issues which require some kind of action from us is very limited however. For the rest, we don’t really *need* to take a stance. ”

      Potentially. Of course, I speak as an academic who teaches courses that touch upon a lot of research-based issues in social science, namely different areas of psychology. So, it may be that there are more decisions I must make (than others) about what research seems strongest and compelling enough to teach.

      “3. Why spend (waste?) time on these generalized, second-order questions at all?”

      Well, on a personal level, I find the question interesting, just as someone who finds metaphysical or ontological questions interesting (while others think they are complete wastes of time). On a broader level, we must act, and to act, we must form belief, and to form belief responsibly, we must figure out how to believe amidst the possibility of error. Not only how we will form belief (when evidence, say, is scant or less than straightforward), but how to handle being wrong (and how to make sense of why we were wrong). I think those are very human questions.

      1. Kevin

        Thank you for the reply. I accept that different people find different things interesting/worthwhile but there are still sometimes substantive issues at stake.

        “On a broader level, we must act, and to act, we must form belief, and to form belief responsibly, we must figure out how to believe amidst the possibility of error.”

        You accuse others of having a “very idealized” picture of human psychology, but this sounds idealized to me. Not only are you moralizing (the word “responsibly”), you are setting things out in such a way as to *create* an unnecessary (in my opinion) second-order problem.

        As I see it, we don’t, and don’t need to, “figure out how to believe…” in some general sense. We just need to figure out whether *particular claims* (about paranormal phenomena, or Covid, or the effectiveness of a particular teaching method, or human evolution, or quantum mechanics, or the origins of the universe…) are likely to be true.

        There might be generalized rules of thumb you could come up with, sure. But claims about the effectiveness of particular strategies need themselves to be qualified and tailored to specific contexts — i.e. particularized — so that they can be productively discussed, questioned, tested, etc..

        “Not only how we will form belief (when evidence, say, is scant or less than straightforward), but how to handle being wrong (and how to make sense of why we were wrong). I think those are very human questions.”

        What you are saying here is unclear to me. The question of “how we form belief when evidence is scant or less than straightforward” (which is most of the time!) is very general but you obviously could narrow it down and make it the basis of empirical studies. But this is not your real focus, is it?

        The question of “how to handle being wrong” is explicitly normative. And the question of “how to make sense of why we were wrong” could be unpacked in various ways. Human questions, yes. But human questions are always, in the end, personal and particular, are they not? (This is an insight which an appreciation of literature and film drives home.)

        1. “As I see it, we don’t, and don’t need to, “figure out how to believe…” in some general sense. We just need to figure out whether *particular claims* (about paranormal phenomena, or Covid, or the effectiveness of a particular teaching method, or human evolution, or quantum mechanics, or the origins of the universe…) are likely to be true.”

          This reads strangely to me, Mark. I understand what you are getting at, but I’m sort of at a loss as to how we can do the latter – at least when consciously deliberating between alternative beliefs – without something like the former already in place. Your statement reads to me like “When deliberating on what to order from a menu you’ve never seen before, you don’t need to first figure out what kinds of foods you might like or criteria on which to make your choice. You just need to choose what will most likely satisfy you!” But we can’t do that in this situation without first having some idea of how to decide between competing options.

          “There might be generalized rules of thumb you could come up with, sure. But claims about the effectiveness of particular strategies need themselves to be qualified and tailored to specific contexts — i.e. particularized — so that they can be productively discussed, questioned, tested, etc..”

          True, but I don’t see how that gets in the way of my point. I never argued that our criteria need to be context-independent, nor would I need to.

          “The question of “how to handle being wrong” is explicitly normative. And the question of “how to make sense of why we were wrong” could be unpacked in various ways. Human questions, yes. But human questions are always, in the end, personal and particular, are they not?”

          Again, I’m not sure how you see this as a point against what I’ve said. To the degree that we like to have good beliefs, and at any points where we deliberate on what we might believe (say, when confronted with conflicting advice, recommendations, or interpretations of data), we need to have some (heuristic?) idea of how to go about coming to belief. We might even need to have an idea of how to recognize when we should judge our current belief wrong. Nothing in that entails that these questions shouldn’t be answered in ‘personal and particular’ ways. If your point is that they must be answered in personal and particular – rather than universal and absolute – ways, then we are actually in agreement.

          1. Kevin

            I am questioning your way of presenting human beliefs and decision-making. You downplay unconscious elements. And I am suggesting that, before you can usefully bring in moral elements, you need to have a clear picture of what goes on when we make everyday (as well as more sophisticated, scientifically-informed) decisions about what is the case in the world around us.

            “Nothing in [what I said] entails that these questions shouldn’t be answered in ‘personal and particular’ ways. If your point is that they must be answered in personal and particular – rather than universal and absolute – ways, then we are actually in agreement.”

            You are misunderstanding me. I was distinguishing between moral (“human”) questions and questions about how the world is. My main problem with your approach is that you blur (or perhaps reject) this distinction.

          2. “I am questioning your way of presenting human beliefs and decision-making. You downplay unconscious elements. And I am suggesting that, before you can usefully bring in moral elements, you need to have a clear picture of what goes on when we make everyday (as well as more sophisticated, scientifically-informed) decisions about what is the case in the world around us.”

            I can sort of see where you got that blurring from what I wrote, but I did not intend to say (nor do I think I wrote) that we must have a clear or conscious idea of how to go about believing. We can, of course, and there is nothing per se wrong with trying to articulate some criteria for belief. But no, I do not expect that anyone has to have an articulated and worked-out idea of how to decide what to believe, only that when we choose whether to believe or not believe, or choose between conflicting possibilities, we always have some basis for doing so other than “just believe what is most likely to be true” (for we must have some idea of how to recognize what that is).

            “I was distinguishing between moral (“human”) questions and questions about how the world is. My main problem with your approach is that you blur (or perhaps reject) this distinction.”

            Ah, I see. Yes, I reject the distinction only in the sense that in order to have an account of what the world is, it must come through belief about the world. Put differently, if someone is struggling to figure out what to believe (or not believe), telling them that the best way to do this is to believe whatever fits best with how the world is solves nothing and only kicks the problem back a step. We can’t get to how the world is sans belief about how it is. It isn’t that a world ‘as it is’ doesn’t exist, but that positing that it does does nothing to solve the problem of what we should believe about it.

  5. I started out on Julia Galef’s book The Scout Mindset with happy anticipation, thinking that this was the answer. Sadly, I have concluded that she has got it fundamentally wrong, but it was a nice try, with much to commend it. This is just my warning shot that I am going to challenge the book’s argumentation, but first I need to marshal my arguments so that they have a coherent form. In the meantime I will watch the other responses and hopefully learn from them.

    1. I’ve always found her — and her gang of “rationalists” — distinctly underwhelming. Indeed some of them — I’m thinking of the ever cringeworthy Eliezer Yudkowsky — are flat out cranks.

      1. As I’ve commented before about ‘the new rationalists’: people who know everything, understand nothing, and have less than zero wisdom. Welcome to the religion of big data. How are your parents?

        1. They are hanging by a thread but ok for now. Very hard to see.

          It’s funny. As an atheist, I should be a fan of Galef and Co., but I’m not. I guess I find their brand rather facile.

          Also, an unkind thought, but one I have trouble ignoring. If Galef, rather than being gorgeous, looked like Bella Abzug, would anyone have any interest in her within the community? It’s a notorious nerd/sausage fest.

          1. I hope they’ll be ok. Virtual hugs to you.

            > If Galef, rather than being gorgeous, looked like Bella Abzug

            With or without the hat?

          2. “It’s funny. As an atheist, I should be a fan of Galef and Co., but I’m not. I guess I find their brand rather facile.”

            Dang, right there with you, Dan. I am not sure if it is my contrarian nature or something more thoughtful, but even when I was discovering (explicitly) my atheism, I’d frequent those atheist message boards and find myself ill at ease with the rationalists there. I’ve grown more so over time, but even then, when I was a fan of Dawkins and Harris, I just got the sense that they had a very idealized depiction of human psychology.

      2. Peter and Dan,

        You’ll see in my forthcoming review of Galef’s book that I think it is better than I anticipated, but not as convincing to me as she probably wants. Essentially, I read her as one in a line of authors whose books spend their first part talking about how bias is built into us and their latter parts telling us that we can ignore the “built in” part and find ways to step out of our biases. In fairness, she is better (because more cautious in that latter part) than most. But her book reads like this: “We humans are deeply biased and easily become attached to defending rather than really evaluating our beliefs. I have an idea: if we try really, really hard, we can use that unbiased part of our brain to step away from the biased part of our brain so that the unbiased part can recognize the biased parts.”

        1. Of all the groups I’ve run into online, the New Atheists were the most tribal, the most self-righteous, the most intolerant I’ve ever met. I met them when I first got broadband internet in around 2005 and they were the first blogs I chanced into.

          I’ve been an atheist since I was a child: my parents sent me to Jewish Saturday school, but I always found religion ridiculous and superstitious.

          However, having encountered the New Atheists led me to defend religion and all the good things about it. The New Atheists were as intolerant as the Spanish Inquisition or the Salem witch hunters.

          When I recall my encounters online with the New Atheists, an epigram from Nietzsche (Gay Science 209) comes to mind: “One way of asking for reasons.—-There is a way of asking us for our reasons that leads us not only to forget our best reasons but also to conceive a stubborn aversion to all reasons. This way of asking makes people very stupid and is a trick used by tyrannical people”.

          1. Although I don’t agree there really is such a thing as “New Atheism”, I do think that, as with most things, the Internet has been the best and worst thing that ever happened to atheism. The best because it brought it an accessibility and visibility it never could have had when local standards ruled the day. The worst because, like everything on the Internet, it became the domain of young males who think their excrement has no odor and are only there to argue points, not to understand the deeper questions of life. Very few of the atheists I have encountered online have much of a background in philosophy; they just jumped straight into the anti-religion arguments against a bunch of fundies. Consequently a lot of what they say is not informed by the voluminous history of the subject on both sides, and they are just trying to gainsay what someone else just said. If you try to correct them you are automatically in the enemy camp. So I always found it kind of funny when, after being an atheist for 35 years, I was accused of being an evangelical Christian because I dared to say that the Kalam has not been “debunked”, or that Jesus Mythicists are essentially making the exact same argument creationists are.

  6. Kevin, here’s what I’m wondering [in a similar vein as Mark].

    Belief has always had to stand alongside the possibility of error. That has never been a reason to adopt a general skepticism — the only sound form of which is formal and hypothetical — or to reject the expertise of experts.

    What seems far more dangerous to me today is the ideological capture of experts and expertise. That experts make mistakes is not a reason to generally distrust them. That experts lie and dissemble for ideological reasons — as when scores of public health officials and medical experts did about Covid, vis-à-vis the BLM protests — *is* a reason to generally distrust them.

    I would argue the same re: the IAT. The problem is not that the experts got it wrong. It’s that their ideological commitment led them to mislead everyone about its appropriate use.

    1. “The problem is not that the experts got it wrong. It’s that their ideological commitment led them to mislead everyone about its appropriate use.”

      I think both are problems, honestly. We have good cognitive reason to outsource much of our belief formation to expert others, and when there is any consequence to the beliefs we form when doing that, their getting things right more than wrong is pretty important. Yes, the problem gets worse when they get things wrong for systematic reasons – shitty incentives, overspecialization that vitiates the social nature of the science they are doing, etc. But if corporations, governments, and the general public are going to buy in, on a large scale, to the ideas of experts in certain fields, the important thing is that those experts are more likely to get things right than wrong.

      1. “More likely” has been added to the analysis, and I doubt it is true.

        Error and its possibility alone have never been sufficient reason for general skepticism.

    2. Do you think this is a problem that has gotten worse in recent years, and how much of the responsibility for it do you think can be laid at the feet of postmodernism’s effect on academia (the whole “science wars” controversy)?

  7. Let me start by challenging the book’s (The Scout Mindset) foundational premise:

    …painted an unflattering picture of a human brain hardwired for self-deception: We rationalize away our flaws and mistakes. We indulge in wishful thinking. We cherry-pick evidence that confirms our prejudices and supports our political tribe. That picture isn’t wrong

    Like all bad pictures, this picture has elements of truth. But to claim we are hardwired for self-deception is a staggering exaggeration, to say the least. 60,000 years ago we acquired the cognition that would allow self-deception and in an evolutionary blink of an eye we have made astonishing progress. I doubt if being hardwired for self-deception would have made this possible.

    There is something else going on that we need to take into consideration. We evolved in a hostile world of imminent threat and fleeting opportunity. This, more than anything else, has shaped the way our cognition works, because our very survival depended on successfully navigating these imminent threats and fleeting opportunities.

    To survive, decisions have to be made very quickly in the presence of fragmentary or incomplete information. That fleeting shadow might be the arrow destined for your heart. That glisten in the bushes might be the eyes of a leopard poised to spring on you. There are no second chances. You must act and you must get it right.

    And we have, which is why we dominate this planet’s ecosystem. We have got it right because

    1) we are watchful, on high alert for threats and opportunities
    2) we are accustomed to integrating fragmentary evidence into meaningful patterns.
    3) we instinctively employ probabilistic reasoning to select the best outcome.
    4) we act quickly on this information.
    But in a modern world of great informational complexity and reduced threat, these ingrained ways of responding can work against us. We don’t deceive ourselves; we perceive patterns embedded in incomplete information, as we are designed to do. We are dominated by hair-trigger responses to fragmentary information, as was necessary in the past. What once ensured our survival now complicates our survival. This is not self-deception but an evolutionary response that is mostly useful on the sports field.

    But it still matters in the many contexts where the ability to think quickly on one’s feet is decisive.

    1. So, “hardwired for self-deception” is the Robin Hanson/Kevin Simler view. The idea is this:

      1. We can tell, pretty well, when people are lying to us.
      2. We have great incentive to lie to people.
      3. The most successful way to lie to people is if we can believe our lies ourselves.
      4. So we have developed to be very good at self-deception.

      The main problem is that the best empirical evidence supporting 1 has proven to be made up. (Yes, that is highly ironic.) I don’t recall if Hanson and Simler relied on that evidence though.

      1. I won’t get into it much here, Robert, but Galef spends some time (very inadequately, as she relies almost exclusively on anecdote) arguing against 3 and, in effect, 4. She gives us reasons why the studies that led to 3 and 4 are flawed, and then talks about how Bill Gates and Jeff Bezos were not overly confident but got investors.

        I’ll be giving my review of her book (part of this three part series) to Dan shortly.

    2. Peter, I’ll be giving my review of Galef’s book to EA soon, so stay tuned for it.

  8. From Chapter 16 of Julia Galef’s book

    The example of intellectual honor I find myself thinking about most often is a story related by Richard Dawkins from his years as a student in the zoology department at Oxford.

    Ouch, that is an improbable pairing.

  9. Rage,
    Although I don’t agree there really is such a thing as “New Atheism”
    Militant, fundamentalist atheism is a better term for the phenomenon. What is “New” is their strident militancy and the dogmatic certainty of the fundamentalist that they exhibit.

    Jesus Mythicists are essentially making the exact same argument creationists are.
    Indeed. There are many forms of denialism and this one is especially bankrupt.

  10. When Julia Galef couples Richard Dawkins with intellectual honesty in a laudatory way she is displaying the directionally motivated reasoning that she decries throughout the book, which tempts me to discount her entire book. But that would be unfair because there are many useful things that she says and I have certainly learned from it.

    In the first sentence of her conclusion she says

    When people hear that I wrote a book about how to stop self-deceiving and view the world realistically,

    and this is where she has gone wrong. We are not fundamentally self-deceiving; the evidence of our cognitive success is just far too spectacular to permit this conclusion. Certainly it is one of our weaknesses, and some are especially prone to it. But it does not, on the whole, characterize the nature of our thinking.

    I would say instead that
    1) our minds are attuned to quick intuitive leaps of judgement, for the reasons I have given earlier.
    2) we are embedded in a social net that strongly shapes our thinking, with the nearest nodes in the net having the strongest effect. Our epistemic authorities are the nodes more closely connected to us in the net.
    3) the Internet has loosened the bonds of the social net so that we can curate our own set of epistemic authorities, unmooring ourselves from established authorities.

    She proposes, as a cure for what she calls self-deception, a set of rule-based behaviours. These are useful but flawed. They are flawed because honest thinking is at its heart a normative process, and until one attends to the norms, the rules are unmoored. In a nutshell, these norms are the intellectual virtues. Inculcating the intellectual virtues is the necessary foundation for the discernment of truth. She fails to understand the importance of a normative foundation. Now, some of the things she says can be seen as forms of intellectual virtue, but she fails to link them up and see them (the rules or guidelines) as kinds of intellectual virtue forming part of a coherent set within the virtue ethics framework.

    That changes things, because then the solution to the problems becomes developing intellectual virtues within a virtue ethics framework. I suspect modern, so-called “rationalists” are allergic to this concept and that rule-based behaviour comes more naturally to them.

  11. From Julia Galef’s book

    ACTUALLY PRACTICING SCOUT MINDSET MAKES YOU A SCOUT
    One evening at a party, I was talking about how hard it is to have productive disagreements, in which people actually change their minds, on Twitter.

    This is so interesting because she is not talking about changing her own mind, but rather about changing the minds of others. Note that this is in the context of advocating a scout mindset as opposed to a soldier mindset. The irony is that she is unconsciously displaying a soldier mindset while advocating a scout mindset.

    This happens because she is using military metaphors (scout and soldier). A scout has a clear purpose: to collect information in the service of one side, and that inevitably results in directionally motivated reasoning. For this reason her scout metaphor is fundamentally flawed, and with that, the whole book fails.

  12. Mark,

    first you say

    As I see it, we don’t, and don’t need to, “figure out how to believe…” in some general sense. We just need to figure out whether *particular claims* (about paranormal phenomena, or Covid, or the effectiveness of a particular teaching method, or human evolution, or quantum mechanics, or the origins of the universe…) are likely to be true.

    Then you say

    And I am suggesting that, before you can usefully bring in moral elements, you need to have a clear picture of what goes on when we make everyday (as well as more sophisticated, scientifically-informed) decisions about what is the case in the world around us.

    Pardon me, but this seems to be a contradiction. Please clarify.

    1. Peter

      In the first quote I am talking about the (broadly) scientific investigation of various things. But science can’t really deal with moral questions or questions of meaning and meaningfulness. As I said to Kevin: “I was distinguishing between moral (“human”) questions and questions about how the world is. My main problem with your approach is that you blur (or perhaps reject) this distinction.”

  13. Kevin

    “We can’t get to how the world is sans belief about how it is.”

    From its earliest experiences in the world the infant is learning about the world (and how it fits into this world). Each of us starts off without any beliefs. So you *could* say, taking a long view, that we get to know or have a view about how the world is from an original beliefless position. Does the newborn have beliefs? Does the foetus have beliefs?

    Any functioning human (apart perhaps from the newborn) has beliefs about how things are. So, to the extent that your claim is not wrong, it is redundant. *Of course* we come to every question with preconceived ideas. Nowhere did I suggest that this is not the case.

    “It isn’t that a world ‘as it is’ doesn’t exist, but that positing that it does does nothing to solve the problem of what we should believe about it.”

    Again, I didn’t suggest that it does.
