Liberalism, “Implicit Bias,” and Thoughtcrime: On the Subject of the I.A.T.

by Daniel A. Kaufman

One of the most fundamental values of a liberal society, beyond that of freedom of speech, is liberty of conscience.  One’s thoughts are one’s own prerogative and are thus rightfully kept private, if one wishes it.  The intrusion upon that mental landscape is the ultimate wrong that the state can do to the individual, and to invade and attempt to control it are the defining characteristics of the totalitarian impulse.  In a liberal society, the state’s – or any person’s – authority over others extends only as far as their behavior and only in those cases where that behavior causes tangible, quantifiable harm.  The notion that there are “thoughtcrimes” must therefore be abhorrent to the liberal sensibility, and the current effort by some progressives to redefine ‘harm’ to mean any thought or speech that a person dislikes or which causes his feelings to be hurt must be opposed at every turn, if liberal society is to survive.

These liberal values effectively constrain the ways in which we may permissibly try to change others’ minds.  If one is convinced that another person believes or feels wrongly, one may try to persuade him by way of an appeal to his conscious rationality and emotions, but efforts that penetrate below the surface – which bypass his conscious rationality and feelings – must be forbidden in a liberal society, as must be overt exercises in coercion, where one tries to force someone to change how he thinks or feels, by violence or threat.

The reason for engaging in this little refresher in liberal civics is to provide a backdrop against which to discuss the unfolding situation involving Harvard University’s “Project Implicit” and “Implicit Association Test,” just reported in the Chronicle of Higher Education. [1]  The test purports to identify “implicit bias” in people against blacks and other racial and ethnic minorities, homosexuals, women, etc.; that is, bias which would not be revealed if one asked them directly.  It claims to accomplish this by showing that people are slower to associate pictures of, say, black faces, with positive words than white ones or that they commit more mistakes in doing so.  And it has been enormously influential, having been taken a staggering seventeen million times, and is being used across a wide swathe of American society, from university classrooms to police departments.

One might object that the inference from reaction speed and accuracy to social bias seems an obvious non-sequitur and doubt whether any number of controls could possibly screen out all the potential variables that might come into play in affecting how quickly or accurately we respond to pairs of pictures and words, whatever they might be.  The creators of the test are social scientists, not perceptual psychologists or people otherwise expert in the sciences of vision or motor movement, and it could very well be that the relevant reaction times have nothing to do with social conceptions, but with variables that operate at more basic levels of perceptual experience — that is, at lower, non-intentional levels of description — which would tell us nothing about something as steeped in intentionality and representation as social bias.

The more interesting question for me, however, is why anyone would want a test for implicit bias, in the first place.

One answer that immediately comes to mind is that if implicit bias causes, or at least reliably predicts, discriminatory behavior, then it would be useful to know what implicit biases people have, and at first glance, this would appear to be the reasoning of the people behind the IAT. “It is well established that implicit preferences can affect behavior,” one reads on the test’s website, which also cautions that “it [is] important to know that…implicit biases can predict behavior.  When we relax our efforts to be egalitarian, our implicit biases can lead to discriminatory behavior…”  [2]  And yet, when one presses on, one discovers that the creators of the IAT also categorically reject using the test in any sort of interdictory fashion.  “People may use [the IAT] to make decisions about others (e.g. does this potential job candidate have racial bias?),” the site creators explain, “however, we assert that the IAT should not be used in any such ways…For example, using the IAT to choose jurors is not ethical…”

What should the test and the information it provides be used for then?  The answer we get is little more than a vague appeal to “education.”  We are told that while it would be “unethical” to use the test as part of the selection process for jurors, it would be appropriate to use it to “teach jurors about the possibility of unintended bias” and more generally that “at this stage in its development it is preferable to use the IAT as an educational tool, to develop awareness of implicit preferences and stereotypes.”  But why develop an actual test designed to unmask the unconscious racism, sexism, homophobia of real people and market it to the public, if the aim is simply to teach us that generally speaking, such bias is “possible”?

Perhaps the idea is that knowing about implicit bias and seeing it in my own case, I can endeavor to work on myself; to make myself less racist, sexist, homophobic, etc.  The trouble is that because the biases in question are unconscious, they are beyond the reach of conscious effort. The reason we need the test, remember, is not to identify the people who are overtly racist, homophobic, etc. – those dirty scoundrels are easy to find.  It’s the rest of us, the people who are overtly nice and who make no claim to racist or homophobic ideas whom we need to wonder about.  And the creators of the IAT admit that there is “not enough research to say for sure that implicit biases can be reduced, let alone eliminated.”  So, the idea that the purpose of the test is to provide us with the information necessary to improve ourselves also turns out to be unsustainable.

Rather than try to improve ourselves, the test’s creators tell us, we should “focus instead on strategies that deny implicit biases the chance to operate,” by which they mean the introduction of administrative institutions and practices like the blind reviewing of applications and more generally, the employment of transparent, publicly verifiable criteria.  But these sorts of practices are applied generally and across the board and certainly don’t depend on there being an actual implicit bias test that can identify unconscious racism, homophobia, and the like or on anyone taking such a test.

And this brings us back around to the recent article in the Chronicle of Higher Education.  Not only has a 2016 meta-analysis shown that the relationship between implicit bias and discriminatory behavior is minimal at best and that changes in implicit bias have little effect on behavior, it turns out that this critique has been raised against the test for some ten years now, led by Dr. Hart Blanton, who did his own meta-analysis (reaching similar conclusions) back in 2013.  As of now, at least, there is no scientific question, then, of there being some behavioral utility in having this sort of information about people’s unconscious biases.  As things currently stand, they appear to be largely unrelated to discriminatory behavior, and regardless, changing them has little effect on such behavior. [3]

And yet, the test has not been removed from the internet.  Indeed, not one word of any of this is to be found on the test’s website, which continues to state, unequivocally, that “it is well established that implicit preferences can affect behavior” and “implicit bias can predict behavior” and all the rest that I’ve quoted to you.  I find it difficult to avoid the conclusion that the sole point of the test is to encourage people to out themselves and others as subterranean racists, sexists, homophobes, etc. – in short, as guilty of thoughtcrime – with all the illiberal shaming, guilt-mongering, moral posturing, and social harassment that this makes possible, which is why current “Social Justice” types have embraced the test and deployed it with such enthusiasm and aplomb.  I expect as much from them, as they have never evinced much if any respect for liberal values and especially not for the freedom of speech and of conscience, but it is very disappointing to see it in professional scientists, whose job is not advocacy or activism, but rather, the pursuit of the truth.

I hope I am wrong and perhaps, with this latest revelation, the creators of the IAT will remove the test from the internet and put out a statement unequivocally opposing its use, not only until these serious scientific challenges have been adequately addressed, but until they can articulate a sound purpose for public use of the test.  As Jesse Singal wrote in a recent article on the subject:

[T]here’s a case to be made that Harvard shouldn’t be administering the test in its current form, in light of its shortcomings and its potential to mislead people about their own biases. There’s also a case to be made that the IAT went viral not for solid scientific reasons, but simply because it tells us such a simple, pat story about how racism works and can be fixed: that deep down, we’re all a little — or a lot — racist, and that if we measure and study this individual-level racism enough, progress toward equality will ensue. [3]

Alas, I must admit to not being hopeful on this front.  It’s not just the treatment of Blanton, whom the creators of the test apparently tried to smear and discredit, in response to his original critique (a shameful episode recounted in the Chronicle piece), but the fact that in today’s academic climate the idea of ideologically compromised social science is all too believable.

Notes

  1. http://www.chronicle.com/article/Can-We-Really-Measure-Implicit/238807
  2. https://implicit.harvard.edu/implicit/takeatest.html (All quotes from the website are taken from the “Ethical Considerations” and “FAQ” tabs, under “Education”)
  3. http://nymag.com/scienceofus/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html

Comments

  1. Hi Dan,

    You make a valuable contribution by pointing out the flaws in the IAT. The idea that delays in reaction time are proportional to negative bias with regard to different physical appearances is highly questionable. Other causes for hesitation must surely be considered. The fact that the testers so readily attribute observed delays to racial prejudice says more about their own preoccupation with the race question than about an objective pursuit of the true facts of the matter.

    I independently stumbled upon the test a few months ago and took it. My results indicated that I had a slight prejudice against black people, which was not too bad considering that only 17% of test takers showed no bias (if my memory is correct).

    I think overall it is a good thing for the general public to be exposed to scientific information. Unfortunately we will always have to be extremely vigilant because the social justice warrior types, and other such partisans, are guaranteed to try to exploit the situation.


  2. Going only by your description, count me as skeptical of the program.

    I remember hearing people say “all Chinese look alike”. At one time, I might have agreed. But now that I have many Chinese acquaintances, that seems wrong. I think it is a matter of familiarity. People who have seen very few Chinese faces are unable to read the emotions from those faces in the way that they can with more familiar faces. And, of course, the same would apply to black faces.

    So maybe the implicit bias test is itself biased by the importance of familiarity with different kinds of faces.


  3. Hi Dan, you are right and this seems extremely problematic to me.

    The fact that it is even named “Project Implicit” raises the question whether the researchers have an implicit bias toward the subject they are claiming to study. That is, they believe in the existence of implicit bias, and that it is meaningful, before having completed research to back such conclusions. While the researchers are likely (and hopefully) sincere, this appears to be the product of the new hype-based, headline-driven science culture that has been growing over the last few decades.

    On purely scientific grounds, you, Liam, and Neil have already pointed out that there might be other explanations for a delay in reaction (which they are rushing to conclude as bias). Neil raises the issue of familiarity which is exactly what I thought when I was reading about it at PF and again here. It should be noted that the authors address this possibility…

    “Research shows that IAT scores are not influenced by familiarity with the individual items to be categorized. Also, faces used in the IATs here should all be equally unfamiliar to everyone. That said, this is a tough question. Classic research in psychology shows that people tend to like things that they are familiar with. So, there may be a role for familiarity in liking of the categories. But also people avoid things that they don’t like, so it is possible that implicit bias is what leads to unfamiliarity.” (from their site, FAQs)

    Research shows, yet no citations. Faces should be equally unfamiliar and yet no mention of the very problem that Neil raised. One could argue that in that case the processing of negative vs positive should be equally delayed, but this is not clear (we may be prone to extra screening for positive where we are not familiar, or having exceptions where something is familiar).

    Their idea in the last sentence that (what they measure as) implicit bias would be a primary driver for unfamiliarity, rather than the other way around, seems a forced chicken-and-egg issue. That would assume most people start with mechanisms that allow for equal familiarity with every other ethnicity/culture, and come loaded with biases against them (through some unknown vector), instead of the more obvious explanation that people are finite beings and so have limited sets of things with which they will be most familiar.

    If they were going to be honest, they should have led with “That… is a tough question” rather than sinking it in the middle, where a casual reader may never get to it, having started instead with such positive statements of (unsubstantiated) scientific support. This kind of trick to affect people’s assumptions is known to psychology, and to conmen.

    I’m not sure if anyone got to their followup on “in group” effects…

    “A simple preference for the ingroup might partially explain implicit bias for white respondents. However, it is also more than that. There are plenty of tests on which people prefer one group or the other even when they do not belong to either group. For example, Asian participants tend to show an implicit preference for White people relative to Black people. In this sense the IAT might also reflect what is learned from a culture that does not regard Black people as highly as White people. It is also interesting to note that about half of Black participants show an implicit preference for White people relative to Black people… this would certainly not reflect an ingroup bias.” (again FAQs)

    That first sentence is a stunner. And the reasoning that follows is no less surprising (to a scientist). In a society that is majority white, why wouldn’t all minorities in that society relate to the majority as being part of their in-group? That explanation would also cover the results they discuss in the very next FAQ, where they found that all minorities show the same “bias” toward what would be the majority populations (whether racial, sexual, age, etc though maybe that would not be true for gender). But that’s not how they explain it…

    “Results from this website consistently show that members of stigmatized groups (Black people, gay people, older people) tend to have more positive implicit attitudes toward their groups than do people who are not in the group, but that there is still a moderate preference for the more socially valued group. ” (FAQs)

    Replace “stigmatized” with “minority” and “socially valued” with “majority” and there is no loss in its explanatory value. And again this can go beyond bias toward in-group and reduce to simple familiarity.

    Moving beyond the scientific, this seems tied to a political or ideological concept and movement. As I said in an earlier thread from now on I want to stay away from labels like liberal, progressive etc, though I acknowledge and agree with your usage of those terms (it just seems there are so many who don’t).

    To put it in a more simplistic fashion, if one believes in personal freedom, freedom of thought must be paramount. The “project” here is about trying to find fault with a person at the most rudimentary elements of their thoughts. Policing at the level of information processing. Something which they acknowledge cannot be used in any functional way (on the individual level), and yet act as if it has important implications (when taken as a whole). Indeed it seems it is not even a reliable test for an individual over time.

    I suspect the message here is much like “love the sinner, hate the sin”. They don’t want to blame individuals, but a “thing” that can be present in individuals, and thankfully (for them) fits within an ideology that lays blame on certain groups and praises others. It feeds a narrative that culture itself is biased and so we should start policing it (whether as individuals or society) in order to stop behavior we dislike at the very smallest of points… second to second information processing in the brain.

    Their first two solutions (blind oneself or compensate) were known without the insight of this study and are based on affecting one’s actions. Their third solution…

    “Although it has not been well-studied, based on what we know about how biases form we also recommend that people consider what gets into their minds in the first place. This might mean avoiding television programs and movies that portray women and minority group members in negative or stereotypical ways.” (FAQ)

    … is, as always, the call of the censor. Chop out, rather than deal with. From Plato on down. Do we really think that TV and movies are to blame for the implicit bias they have “recorded”? Is there any evidence for this?

    This is poor science and policy advocacy. It is a move against free thought.

    Sorry about the length. This thing got my goat, as it did yours.


  4. Calling this kind of bias ‘thoughtcrime’ is somewhat hyperbolic. Responding to stimuli as fast as possible is a situation where you are not really thinking.

    I believe this test is currently making the rounds in corporate HR environments as well. Too bad for people who did not grow up inside a Benetton ad. (But, on the other hand, since corporates seem to favour individuals with psychopathic traits, i.e. acting impulsively and without thought, maybe choosing a more egalitarian psychopath would still be an improvement?)


  5. Considering the 2nd paragraph in isolation, does advertising qualify as something intrinsically opposed to liberal values? Or will we assume that all those techniques do not aim at (or succeed in) piercing below the ‘conscious’ reasoning and feeling defensive surface?


  6. Hi miles,

    “Calling this kind of bias ‘thoughtcrime’ is somewhat hyperbolic. Responding to stimuli as fast as possible is a situation where you are not really thinking.”

    Technically, I guess that would make it “pre-thoughtcrime”. That’s heading into “Minority Report” territory, and so I would say that calling it “thoughtcrime” is not hyperbolic and, if anything, understates the issue.

    ………..

    Hi stolzy, I don’t think “advertising” would be against liberal values. That is simply telling people what you have to sell.

    If you mean the commercial propaganda industry that has been accepted as legitimate advertising in our culture, then yes that would be.

    Noam Chomsky has also argued this case, making the distinction I have here.


  7. Dan, db,
    I was just quibbling about the first part of the word: “thought” cannot be the subject of this type of test. The crime, or sin, is not having positive enough “feelings” associated with the target stimuli.

    Any technical critiques, such as of the accuracy of the test, will in the end only serve the people involved in developing it, by showing the necessity of investing in continuously calibrating the test, as is already done for IQ tests. If society at large thinks that this type of test is useful, it will be developed further.

    [Instead of Minority Report, which is about determinism, my mind was jumping more to the “therapied” concept in Queen of Angels by Greg Bear: if we can measure something in the brain that we think is harming us, the next logical step could be medical intervention.]


  8. I can’t believe I have to defend social psychologists! We know there is still considerable inequity within social institutions where there are now policies and laws against explicit discrimination on gender, race etc. We strongly presume that these persisting inequalities do not represent an explicit conspiracy by, for example, male philosophy professors, or even the result of beliefs that have been reached after much deliberate reflection. Whether implicit bias is less important as a factor than other more sociological/structural mechanisms for “maintenance of in-group social dominance”, or whether it is an intermediate variable within an individual, is a question social scientists might test.

    i. There are multiple methods measuring implicit bias; you might remember the various “shooter bias” paradigms. I don’t see any evidence of methodological problems per se in the literature about this test.
    ii. The Implicit Association Test has been applied to domains other than race, e.g. sexual attraction to children, where it successfully discriminates sex offenders and correlates with other measures
    iii. The strength of association with explicit behaviours is not large when expressed as correlation coefficients or mean differences, but this does not mean it is not highly important at the population level (this depends on base rates of behaviours etc).
    iv. Just because I have an “unconscious” tendency does not mean that I cannot adjust for it consciously, once it has been pointed out to me. It may make me more likely to support blinding of job resumes etc.
    v. Social dominance and numerical dominance of particular groups tend to be correlated. It is precisely when one lacks familiarity with a minority group that one is vulnerable to ascribing particular traits to it. So it is short-sighted to reduce these effects to being “just” familiarity and so not of interest.
    vi. When Jesse Jackson stated he is reassured when it turns out to be a white person walking behind him at night, we don’t think he is being unreasonable. We just don’t want those associational chains extending to affect schooling, work opportunities etc. And we can expect individuals with power to have a duty to scrutinize their own motivations, and try and minimize effects of their own biases. You might take a utilitarian line and suggest everyone should be given a fake IA test to improve their motivation, but a weakly predictive test where you are informed of the shortcomings might be regarded as the more liberal way to go.


    I took the online test four times. The first time, involving ‘Asian-Americans’, I played it straight. What I found was that the game is designed to trick the player, by reversing terms and key-playing strategies twice. This fails to account not for implicit bias but for a tendency toward routinization of behavior at the keyboard. Consequently I was informed I suffered a moderate racial bias toward Asian Americans, even though the number of presumed ‘errors’ was exactly one in every session, as I adjusted to using the keys per the directions.

    The second time, concerning youth and age, I attempted to answer alternate questions in opposite ways. So in the introductory questions, I remarked that I preferred young people to old people, and then that I felt warm toward (liked) old people, but felt cold toward (disliked) young people. In the test itself I simply alternated the ‘E’ and ‘I’ keys at a steady pace. In the third test, concerning weight, I chose random selection: in the introductory questions by closing my eyes, and in the ‘E’ or ‘I’ phase by simply pressing both keys at the same time and letting the computer determine which impulse to send to the test. After both these efforts, I was informed that ‘not enough trials had been taken on which to determine bias.’ This should not have been the case had the parameters of the test been properly defined. What this suggests to me is, on the contrary, that the software was programmed with a hidden bias toward *consistency* – it reads the trajectory of answers and continues the game to completion (determination of bias) only if that trajectory is met with in the answers; otherwise, it effectively reads oppositional play as such and shuts down the game at the end point – you can play the game through, but get no result. I suspect this was programmed to avoid the wild gaming I engaged in; but it obviously doesn’t guard against overt, consistent dishonesty.

    Which I did in the fourth game, by adopting the identity of a conservative white woman age 30, who felt strongly that the sciences could be identified with male activities, and the humanities with female. I got exactly the result I intended by doing so, convincing the program I hold beliefs that aren’t mine.

    One doesn’t need to be a specialist to recognize the problems here: the test has an implicit bias of its own. If you play the game according to this bias, you will be rewarded with the result you want; if you play by disregarding this bias, the programmers will get the result they want; and if you play freely and inventively, there will be no result at all.


    So, the point I wanted to make is that the IAT is really a game (and rather a silly one; I laughed quite a bit playing it). I would need a much stronger case, but I feel that many, perhaps all, such ‘psychological’ tests really have to be re-addressed as games, rather than tests. Consequently, using such tests for any reason other than data generation is using them misleadingly. And using them for data generation raises the question: to what purpose?

    I also am worried that application of terms like ‘thought-crime’ might be a little too strong, Dan. But I will brush off my own bug-a-boo, to clarify, concerning the bureaucratization of social living. I really think this is more worrisome. We are tested, retested, categorized, subject to demands on the job and from the state that we conform, and that we allow ourselves to be recorded and stuck in one niche or another, and, yes, there are penalties if we don’t, from petty fines to lawsuits to threats of job loss. I am sure this is happening at college campuses; it’s happening everywhere. It is all certainly intended to control human behavior, but I don’t think it’s because anyone gives a damn what we think. After all, without thoughts outside the social norm, there can be no marketing to our darker instincts through advertising. No, it appears to be about something else – the worry that differences, allowed to blossom, might go unmeasured, unanalyzed, uncategorized. (I actually kind of pity the LGBT community, because, in their effort to reach social parity, they have not only allowed but invited such measurement/analysis/categorization and pigeon-holing of their differences from the mainstream. There has been some benefit to this, but there is also some loss. The more one conforms, the less one’s differences truly matter.) And I see this bureaucratization only getting worse in the future, as there are ever more people to measure, ever more differences to categorize.

    For the bureaucratic fear is that we could end up with a part of the population, perhaps a large part, made up of people who are simply unknown – perhaps unknowable. Perhaps that’s why so many fear undocumented aliens. Although they complain about drug dealers and gangstas among the undocumented, drug dealers and gangstas usually end up in the criminal justice system – a bureaucracy wherein they are measured and pigeon-holed, etc. No, the real fear here is of the honest, hard-working, family-oriented undocumented aliens, because these operate below the radar. Sociologists examine them repeatedly, of course; but they change every generation. What long-range plans can be made for them? What marketing can be made to appeal to them?

    But I am using the undocumented alien only as an example. My point is, that in the current era, it is the unknown man or woman who is really cause for concern among the bureaucrats. So perhaps the future of resistance is living disguised. Everybody’s been trying to ‘come out’ as something or other, politically, sexually, ethnically… this only makes us available to being filed away with the proper bureaucracy. Perhaps we need, instead, to find a good closet to inhabit – not in order to hide, but to preserve unknowable personalities.


  11. Very interesting read, both OP and comments. Don’t know enough about the technicalities to make useful comments on the test – or ‘game’ as ejwinner sees it.

    On the broader questions involved, I agree that privacy and freedom of thought are both valuable and at risk. What this discussion brought to mind was an historical point that certain early 20th-century liberals made. They contrasted the pagan Roman Empire (which did not seek to control thinking) with subsequent Christian-based civilizations (which did). So you could say that the so-called ‘soft totalitarianism’ of our secular Western culture has more in common with Christian rather than with classical ways of operating. I wouldn’t want to push this point too far however, because obviously scientific and technological developments have changed the picture so radically.


  12. Hi David, your reply contained several important errors that must be addressed.

    ……………..
    “I can’t believe I have to defend social psychologists!”

    Here you have conflated one research program (or trend in research) with all of social psychology. Dan was certainly not attacking social psychology as a field of study nor all social psychologists, and in fact cited critics of that one research program that were themselves social psychologists.

    There are valid criticisms of the assumptions, mechanics, and interpretations used in that program. I gave some above. So have others in the thread. It is dangerous to equate criticism of a line of research (or trends in research) as an attack on a field as a whole.

    ……………..
    “Whether implicit bias is less important as a factor than other more sociological/structural mechanisms for “maintenance of in-group social dominance”, or whether it is an intermediate variable within an individual, is a question social scientists might test.”

    This seems to ignore what the actual findings are, as reported (or conceded) by that group, as well as missing entirely the point of Dan’s essay.

    Scientists can come up with all sorts of viable questions that could be tested. And we have many ethics committees, rules, and laws to limit investigation when it might violate important principles regarding personal liberty and privacy, regardless whether they are valid questions.

    …………….
    “ii. The Implicit Association Test has been applied to other domains than race eg sexual attraction to children, where it successfully disciminates sex offenders and correlates with other measures”

    Here we have a definitive answer to everyone dismissing Dan’s use of the word “thoughtcrime” to discuss IAT.

    What you describe is an explicit use of the IAT to try to identify people with “bad thoughts” so as to lump them in with people who have committed crimes. That is, unless you are advocating that the IAT be restricted to testing for such “sex offender” markers only in those already convicted.

    But that is to take your claim of its success seriously, which I don’t. Unless you know of studies I am unaware of, outside of some “headline science” claims, there has been no such conclusive identification of IAT “markers” for people who have committed or will commit any crimes (sexual or otherwise)… beyond patent self-reporting or publicly documented activity, which means we don’t need the IAT.

    The best you can have is correlations within populations with known criminal activity, such that we can “discriminate” *against* people who share such correlated scores.

    ……………….
    “iii. The strength of association with explicit behaviours is not large when expressed as correlation coefficients or mean differences, but this does not mean it is not highly important at the population level (this depends on base rates of behaviours etc)”

    The study we are talking about involves differences in the response time required to associate different categories. The idea that this may mean nothing at the individual level, but might at the population level (because there are so many individuals?), involves a deep confusion about what is being studied. One cannot appeal to some “butterfly effect” or treat the response times between individuals as something that has additive effects. Either they influence the individual or they do not. If they do not, then there is no effect at the population level.

    …………….
    “v. Social dominance and numerical dominance of particular groups tend to be correlated. It is precisely that one lacks familiarity with a minority group that one will be vulnerable to ascribing particular traits. So it is short-sighted to reduce these effects to being “just” familiarity and so not of interest.”

    While you make a valid point, what has not been excluded is that familiarity itself will lead to differences in response times (the thing being measured), while making no difference with regard to individuals’ actual feelings or behaviors toward groups.

    You are arguing for what could be, rather than for the most straightforward explanation (sans ideology), which actually fits the data: data that shows (they admit) little value (or consistency) at the individual level, and has implications at the population level only via hand-waving techniques with little relevance to science.

  13. Hi EJ, see my response to David for why I disagree with your downplaying of Dan’s use of “thoughtcrime,” though I agree with the separate issue you discuss.

    “My point is, that in the current era, it is the unknown man or woman who is really cause for concern among the bureaucrats. So perhaps the future of resistance is living disguised. Everybody’s been trying to ‘come out’ as something or other, politically, sexually, ethnically… this only makes us available to being filed away with the proper bureaucracy. Perhaps we need, instead, to find a good closet to inhabit – not in order to hide, but to preserve unknowable personalities.”

    Beautiful.

  14. Hi ej. See

    Kim, D. (2003). Voluntary controllability of the Implicit Association Test (IAT). Social Psychology Quarterly, 66, 83. doi:10.2307/3090143

    Hi db. I should have put a smiley rather than an exclamation mark, or made it “any sort of social psychologist”. Perhaps I should be pleased that you read as closely as you do, given it is just a blog comment. And I might add what I almost added originally: any work of mine vaguely in this area is in behaviour genetics. So I actually have an implicit bias against much social psychology, and a natural tendency to deflate my estimate of the true size of any effects they report, one that is only amplified by my knowledge of statistics. That is to say, I would agree that measures of implicit bias, such as the IAT, are not strongly correlated with the underlying trait (within-occasion Cronbach alpha 0.6–0.8, but test-retest correlation over two weeks is only ~0.3), and as such the size of the associations with behaviour will be small.

    “the actual findings are, as reported (or conceded) by that group”: I don’t think any of the criticisms brought up in those recent magazine and newspaper articles are new. Many of the studies in these more recent meta-analyses would have been turned over in earlier rebuttals eg

    http://www.sciencedirect.com/science/article/pii/S019130850900023 [Paywalled]

    In this article, we respond at length to recent critiques of research on implicit bias, especially studies using the Implicit Association Test (IAT). Tetlock and Mitchell (2009) claim that “there is no evidence that the IAT reliably predicts class-wide discrimination on tangible outcomes in any setting,” accuse their colleagues of violating “the injunction to separate factual from value judgments,” adhering blindly to a “statist interventionist” ideology, and of conducting a witch-hunt against implicit racists, sexists, and others. These and other charges are specious. Far from making “extraordinary claims” that “require extraordinary evidence,” researchers have identified the existence and consequences of implicit bias through well-established methods based upon principles of cognitive psychology that have been developed in nearly a century’s worth of work. We challenge the blanket skepticism and organizational complacency advocated by Tetlock and Mitchell and summarize 10 recent studies that no manager (or managerial researcher) should ignore. These studies reveal that students, nurses, doctors, police officers, employment recruiters, and many others exhibit implicit biases with respect to race, ethnicity, nationality, gender, social status, and other distinctions. Furthermore—and contrary to the emphatic assertions of the critics—participants’ implicit associations do predict socially and organizationally significant behaviors, including employment, medical, and voting decisions made by working adults.

    Make of that what you will.

    The Forscher et al paper is not yet published. It addresses particular questions: a) are there interventions that can change implicit or explicit bias, and b) could such changes in implicit bias – as measured by the IAT etc – cause the changes in explicit bias. They conclude that there ARE such interventions that affect both, that they do not have particularly large effects (~0.35 SD), that the effects on implicit bias always seem LARGER than the effects on explicit bias, and that a simple model assuming implicit bias is well measured (as noted above, this is wrong) concludes that the intervention effects on explicit bias are NOT mediated by implicit bias. There were few longitudinal studies, and none long term – this one, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3603687/ , claims effects persisting over 2 months. Forscher detects a lot of between-study variance, some of which can be explained, eg interventions are more effective in studies of college students v. those of non-students – not particularly surprising.

    “ii…thoughtcrime”: this was merely to address whether implicit association tests work in multiple domains (incl depression, alcohol dependence), which they do. And what a weird response. You don’t think there are neurological or psychological traits that predispose to behaviours? If it can be shown adoptees with a biological parent who was an alcoholic are at increased risk of alcohol dependence, then they have been found guilty of “thoughtcrime”, or perhaps just of having a “hereditary taint”? You think I am labelling people with a perfectly natural and understandable learnt predisposition as “criminals” – bah! Look at the above cited Devine et al paper – they merely use these results to try and convince medical students to be more thoughtful of others, a procedure closer to ordinary moral teaching than “statist intervention”. I don’t see any evidence they tell participants that these tests have low reliability, but that is up to the IRB.

    “Either they influence the individual or they do not”: no more so than in a drug trial where there is variation in effect between individuals

  15. Actually, as I indicated in my essay, the test creators very explicitly say that the test should be used *neither* for self-improvement purposes nor for institutional ones. It is thus perfectly reasonable to wonder what purpose the thing serves, and given the current ideological culture in the academy and especially the liberal arts and social sciences, it is not too much of a stretch to make the inferences I and others have made.

  16. Excellent roundtable on this:

    http://philosophyofbrains.com/2017/01/17/how-can-we-measure-implicit-bias-a-brains-blog-roundtable.aspx

    One of many moneyquotes:

    The recent history of the implicit association test is just the most recent episode in this sad history of irrational exuberance followed by disappointment. We were told that the IAT measures a novel type of attitude—mental states that are both unconscious and beyond intentional control, which we’ve come to know as “implicit attitudes”—and that people’s explicit and implicit attitudes can diverge dramatically: As we’ve been told dozens of times, the racial egalitarian can be implicitly racist, and the gender egalitarian can implicitly be a sexist pig! And law enforcement agencies, deans and provosts at universities, pundits, and philosophers concerned with the sad gender and racial distribution of philosophy have swallowed this story.

    But then we’ve learned that people aren’t really unaware of whatever it is that the IAT measures. So, whatever it is that the IAT measures isn’t really unconscious. And we’ve learned that the IAT predicts very little proportion of variance. In particular, only a tiny proportion of biased behavior correlates with IAT scores. We have also learned that your IAT score today will be quite different from your IAT score tomorrow. And it is now clear that there is precious little, perhaps no, evidence that whatever it is that the IAT measures causes biased behavior. So, we have a measure of attitude that is not reliable, does not predict behavior well, may not measure anything causally relevant, and does not give us access to the unconscious causes of human behavior. It would be irresponsible to put much stock in it and to build theoretical castles on such quicksand.

    Lesson: Those who ignore the history of psychology are bound to repeat its mistakes.

  17. Hi David, I guess a smiley would have helped 🙂

    Interesting that you are in behavioral genetics. My (previous) lab had recently merged with a behavioral genetics group. It sounds like we would agree about the IAT’s lack of correlation.

    …………
    1) I couldn’t get to the first article so I can’t say anything about it, but the quote you gave doesn’t tell me anything other than that the proponents do not like the critics and tout the success of their method, despite the many issues found in the literature (including the two very well written layman’s pieces Dan cited).

    ………….
    2) The second article was… problematic. And that is to be diplomatic. Not sure where to begin with the methodological issues. Sample size, composition, potential confounding factors, assumptions, interpretations. It was pretty bad. To take one interesting item, look at figure 2 and tell me how you would interpret it.

    Their read is that some change has happened to the test group because of their “intervention” which has had long-term impact. My first question is where have they excluded Rosenthal/Hawthorne type effects? My second would be, and this is much more interesting given the subject… what the hell happened to the control group? They do not address at all the equal spike and extended relative increase in IAT score of the control group. If their “intervention” was the necessary explanation for what happened to the test group, doesn’t something need to be invoked for the effect seen in the control?

    Heck, they don’t even address why the test group’s IAT scores start well above control’s, and wind up largely within the SD of the control’s starting IAT score. Indeed, despite falling within the SD of the untreated control group’s initial scores, they report their result as follows…

    “As shown in Figure 2, the intervention was successful. Following the manipulation, intervention group participants had lower IAT scores than control group participants… These data provide the first evidence that a controlled, randomized intervention can produce enduring reductions in implicit bias.”

    Given the problems I mention above, this statement of results is pure BS (bad statistics). And how do they start their discussion of the results in total (keyed to that problematic graph)?…

    “Overall, our results provide compelling and encouraging evidence for the effectiveness of our multifaceted intervention in promoting enduring reductions in implicit bias. As such, this study provides a resounding response to the clarion call for methods to reduce implicit bias and thereby reduce the pernicious, unintended discrimination that arises from implicit biases.”

    That is not good science. That is commercial/propaganda style advertising.

    ……………
    On your comment about thoughtcrime, you have switched the subject.

    First, the fact of hereditary effects, or even of immediate neuro- or psycho- traits underlying behavior, does not argue that tests should be used (if they could be) to pre-judge someone. You brought up discriminating sex offenders from others based on IAT scores. How were you suggesting that could be used… and how might it be used by others? A desire to “discriminate” between people based on scores associated with their thoughts, with the idea that the identified group has a problem that must be corrected, is to create “thoughtcrime” and “thoughtcriminals”. That some group has not yet used the results of a study to go quite that far means very little.

    If your point was that you yourself didn’t mean to use it that way, and would not advocate its use in that way… are you sure? Have you taken an IAT to be sure? 🙂

    ……….
    ““Either they influence the individual or they do not”: no more so than in a drug trial where there is variation in effect between individuals”

    ??? I would agree that a person’s IAT score might mean more for one person than another. That’s the point. But your statement means nothing in relating IAT scores of individuals to populations (which was the subject), especially when one has not even found significant relations between IAT scores and individual behavior.

  18. Hi db. Reading backwards…

    “But your statement means nothing in relating IAT scores of individuals to populations”: If you are measuring using a noisy instrument, any hope of detecting an effect requires you to aggregate over individuals, and the chance you can say anything sensible about any one individual is low.
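    David’s aggregation point can be illustrated with a tiny simulation (all the numbers here are illustrative assumptions, not taken from any IAT study): a true group difference much smaller than the measurement noise is invisible in any one person’s score, yet easy to detect in group means, because the standard error of a mean shrinks as 1/√n.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2  # illustrative true group difference
NOISE_SD = 2.0     # illustrative measurement noise, 10x the effect

def measure(group_shift, n):
    """Simulate n noisy measurements of a trait shifted by group_shift."""
    return [random.gauss(group_shift, NOISE_SD) for _ in range(n)]

n = 10_000
control = measure(0.0, n)
treated = measure(TRUE_EFFECT, n)

# For any single individual a score tells you almost nothing: the
# noise SD (2.0) dwarfs the effect (0.2). But the standard error of
# the difference in means is sqrt(2 * NOISE_SD**2 / n) ~= 0.028, so
# the aggregate comparison picks the effect out clearly.
observed_diff = statistics.mean(treated) - statistics.mean(control)
se_of_diff = (2 * NOISE_SD**2 / n) ** 0.5

print(f"observed difference: {observed_diff:.3f}")
print(f"standard error of difference: {se_of_diff:.3f}")
```

    None of this rescues individual-level prediction, which is db’s complaint; it only explains why population-level estimates can still be extracted from a noisy instrument.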

    “look at figure 2…what the hell happened to the control group?”: If you look at the error bars, you will see that none of the changes over time in the 38 controls were statistically significant, while the treatment group’s means are well clear of each other across time – recall that information is coming from the before-and-after comparison as well as from the control group. “why the test group IAT scores start well above control” – again, those differences are consistent with chance. It is a small study, and the overall discussion is more nuanced than your quote, esp 2nd and 3rd last paras, and the discussion of gaming the IAT.

    I cited the first abstract just to show blood has been running high for some years – I haven’t read it.

    Here’s my study that engaged in cunning manipulation of participants. Maybe I can trick people into citing more too 😉

    https://genepi.qimr.edu.au/contents/p/staff/CV295.pdf

  19. Hi David,

    ““But your statement means nothing in relating IAT scores of individuals to populations”: If you are measuring using a noisy instrument, any hope of detecting an effect requires you to aggregate over individuals, and the chance you can say anything sensible about any one individual is low.”

    Again you change what is being discussed. You likened the issue to drug trials and variations in effect between individuals. I was responding to that analogy. Now you are discussing noisy instruments and trying to pull a signal out by aggregation. I don’t see how on earth you can make that second analogy for this subject.

    And worse it seems to cut against the original claim that appeared to suggest effects at the individual level create a combined effect at the population level, not that there is some signal we can amplify for detection (from noise) by aggregation.

    ……………….

    “If you look at the error bars, you will see that none of the changes over time in the 38 controls were statistically significant, while the treatment group’s means are well clear of each other across time.”

    Actually, I didn’t see any indication of statistical significance anywhere (on the graph), though I may have missed that. It is true that for the test group the dip in IAT is such that at later time points the SE (or SD) falls outside the SE of the test group’s initial range. And it is also true that the control group’s increase does not fall entirely outside of the SE of the initial range for control. But that does not really affect my point.

    They are sort of trying to have it both ways: treating the values of test and control as similar to each other at W0, with their differences not significant (or worth discussing), and control at W0 the same as at W4 and W8, with those differences not significant, because in all cases the SEs overlap to some degree – and then arguing that the test group differed significantly at W4 and W8, even though its SEs overlap with control’s at W0 and W8.

    That is, they want to say the test group was the same as control at W0 (and that control stayed the same throughout), but when it comes time to look at changes, they compare test only with itself.

    The point of having a control is to show a normal range of values. Yeah, maybe control didn’t vary as much as the test group did with itself, but it is clear the test group did not deviate significantly from the normal range seen in control (if we have to treat overlapping SEs as nonsignificant). Thus the best you could say is that it went from the middle of the normal range to the low end. But it didn’t deviate that much from the norm (no more than control did from W0 to W4).

    Really, the degree of change seen argues for much longer (and more frequent) time points to get a real base range of values.

    …………..

    “It is a small study, and the overall discussion is more nuanced than your quote, esp 2nd and 3rd last paras, and the discussion of gaming the IAT.”

    Yes it is a small study, which was part of my initial criticism. Also, I did not deny they had more nuanced discussion elsewhere.

    My point was that they started with a hyperbolic claim which, on top of blowing the results out of proportion, tied itself to a “clarion call” for intervention measures (to solve a problem it suggested was well supported) that it was hoping to deliver.

    That is not science. That is advertising. It suggests the authors were not approaching the question in an objective fashion, and it acted to front-load the idea that the results were successful, which might be all a casual reader would see.

  20. Hi db. I don’t think we should continue the journal club too much longer, but briefly: the logic of the analysis they have done is that there is no systematic effect of occasion other than that of the treatment (they were interested to see if there would be some kind of effect of just completing the IAT in controls, and decided there was none). Therefore, they essentially (via the GLM they fit) combine the 3 multiple measures of the controls AND the first occasion in the treatment group into a single control value which is compared to the t2 and t3 treatment means. You are correct that none of the simple comparisons (T at t2 v. C at t2, T at t1 v. T at t2, etc) are formally significant (I was thinking the bars were 95%CIs, but they are just SEs). I’m too lazy to replicate the analysis using Tables 2 and 3 but will just note their intervention effect size of 0.19/0.42 = 0.45 SD is about what the Forscher meta-analysis found. To detect an effect size delta=0.45 SD in a straight two sample comparison, they would have needed 105 individuals in each arm (90% power), and 172 per arm for delta=0.35.
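    The sample-size arithmetic above can be checked with the standard normal-approximation formula, n = 2(z₁₋α/₂ + z₁₋β)²/δ² per arm (the approximation gives 104 for δ = 0.45, one below the t-based 105 quoted above):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, alpha=0.05, power=0.90):
    """Normal-approximation sample size per arm to detect a
    standardized mean difference delta in a two-sample comparison."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_beta = NormalDist().inv_cdf(power)           # ~1.28
    return ceil(2 * (z_alpha + z_beta) ** 2 / delta ** 2)

print(n_per_arm(0.45))  # 104 (exact t-based software gives ~105)
print(n_per_arm(0.35))  # 172
```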

    Re “advertising”, we are talking about an educational intervention not a biological experiment – you see similar rhetoric when talking about smoking cessation etc. There are no long term data showing if education programs actually reduce subtle biases of the kind we presume partly underlie obvious inequalities by gender and race in academia and industry, or reduce inequity. But these programs are being carried out. If we see these various measures of implicit bias as tools to evaluate short-term intermediate outcomes from these kinds of programs, then I think the weaknesses are less important.

    Re “effects at the individual level create a combined effect at the population level”: Actually I don’t really understand what you are saying. This literature does discuss “bandwagon” effects, so if enough individuals change behaviour then it will spread. But my mental model is more along the lines of the (factor analytic) Measurement Model used in psychometrics and epidemiology: there is something real, a predisposition to favour particular groups of people, and the reason there is low correlation between measurements of this predisposition on different occasions or by different methods is because measurement is hard. Then, treatment that has a real effect on the latent trait will be hard to pick out – the estimated effect will be attenuated by measurement error (at least halved for the IAT, based on the reliability). On top of this there is the causal relationship from predisposition to actual behaviour. This also will be affected by noise, in the randomness of opportunities to express bias, and is also hard to pick out.
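    The “at least halved” remark follows from the classical attenuation formula: on a measure with reliability ρ, a true standardized effect d appears as only d·√ρ. A minimal sketch, treating the ~0.3 test-retest figure as the reliability estimate:

```python
from math import sqrt

def attenuated_effect(true_d, reliability):
    """Observed standardized effect on a noisy measure.

    If a fraction `reliability` of observed-score variance is
    true-score variance, a true effect of `true_d` latent SDs shows
    up as only true_d * sqrt(reliability) observed SDs.
    """
    return true_d * sqrt(reliability)

# With test-retest reliability ~0.3, any true effect is roughly
# halved on the observed scale: sqrt(0.3) ~= 0.55.
print(round(attenuated_effect(1.0, 0.3), 3))  # 0.548
```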

  21. Hi David, yeah we can wrap up the journal club. I’ll just briefly address a couple comments.

    …………

    “the logic of the analysis they have done is that there is no systematic effect of occasion other than that of the treatment (they were interested to see if there would be some kind of effect of just completing the IAT in controls, and decided there was none)”

    Yes, that is what they want to say, but that is questionable along the lines I brought up (and that is in addition to the other problems I mentioned only in passing, though you already agreed the sample size was small).

    ………….

    “Re “advertising”, we are talking about an educational intervention not a biological experiment – you see similar rhetoric when talking about smoking cessation etc. ”

    I’m not sure how “other people are doing it too” makes it any better. Especially given the topic at hand (and subject of the essay). It certainly tries to come off as scientific and to be part of a body of science on the topic. It is trying to generate buzz for itself and the field (without sufficient evidence for either).

    To be honest, it should be trying to be more rigorous (like a biology experiment). Yes, I know that it can’t reach such rigor, but it can try harder than what was seen here.

    …………

    “There are no long term data showing if education programs actually reduce subtle biases of the kind we presume partly underlie obvious inequalities by gender and race in academia and industry, or reduce inequity. But these programs are being carried out. If we see these various measures of implicit bias as tools to evaluate short-term intermediate outcomes from these kinds of programs, then I think the weaknesses are less important.”

    Wow. If you do not see that as problematic, we simply have very different worldviews on how science should be conducted as well as public policy.

    …………..
    “Actually I don’t really understand what you are saying. This literature does discuss “bandwagon” effects, so if enough individuals change behaviour then it will spread. But my mental model is more along the lines of the (factor analytic) Measurement Model used in psychometrics and epidemiology: there is something real, a predisposition to favour particular groups of people, and the reason there is low correlation between measurements of this predisposition on different occasions or by different methods is because measurement is hard”

    If enough individuals change behavior… which it has not been shown that any have. And your mental model is an assumption which begins to sound like “it is caused by fairies… which we know must be there, but are very hard to detect because they are fairies.”

    Frankly this whole section on relating individual to population had nothing to do with the paper but started with my reaction to your statement (which follows)…

    “iii. The strength of association with explicit behaviours is not large when expressed as correlation coefficients or mean differences, but this does not mean it is not highly important at the population level (this depends on base rates of behaviours etc)”

    That seems to say that effects of implicit bias (or IAT) scores may be weakly associated with explicit behavior at the individual level, but highly associated at the population level. That is what I was intending to challenge. Kind of a you can’t get there from here issue.

    …………….

    As a more directly on topic question… is there any level of probing into personal thoughts, and attempts to control actions of individuals directly (through bypassing the will of the agent using neuro- psycho- techniques) that you would find problematic? And if so, how do you define that line?

    “may be weakly associated with explicit behavior at the individual level, but highly associated at the population level”: Oh no, I was making the statistical point that a factor with a small (Pearson) correlation to an outcome can have a large attributable risk (population-level health effect). A dinky example: the overall rate of microaggressions carried out in a population is about 4.6%, with half the population being low implicit bias with a 3.6% rate, and half high bias with a 5.5% rate. The Pearson correlation between bias and microaggression is then only 0.05, but reducing the entire population to the 3.6% rate by a fabulous education program reduces aggression proportionately by (4.55 − 3.6)/4.55 ≈ 21%, which would be a worthwhile outcome.
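    Working the dinky example through numerically (the two subgroup rates imply an overall rate of ≈4.55%, and the proportional reduction from moving everyone to the low rate comes out near 21%):

```python
from math import sqrt

# Subgroup rates from the example: half the population low-bias,
# half high-bias, with differing microaggression rates.
p_low, p_high = 0.036, 0.055
p_overall = (p_low + p_high) / 2  # 0.0455

# Phi (Pearson) correlation between the binary bias indicator
# (P = 0.5 for each group) and the binary outcome.
cov = 0.5 * p_high - 0.5 * p_overall
phi = cov / (0.5 * sqrt(p_overall * (1 - p_overall)))

# Proportional reduction if everyone were moved to the low rate.
reduction = (p_overall - p_low) / p_overall

print(f"phi correlation: {phi:.3f}")               # ~0.046
print(f"proportional reduction: {reduction:.0%}")  # 21%
```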

    As to bypassing the will, we haven’t banned nicotine, caffeine, alcohol, advertising, gaming machines (well the latter might depend on your state). I’m happy for “nanny state” interventions at the same level of these undoubtedly effectual agents, given they are generally accepted by most communities. I even think compulsory voting is a good idea.

  23. Hi David,

    “Oh no, I was making the statistical point that a factor with a small (Pearson) correlation to an outcome can have a large attributable risk (population level health effect). ”

    Okay, then that was not clear.

    …………….
    “A dinky example: the overall rate of microaggressions carried out in a population is about 4.6%, with half the population being low implicit bias with a 3.6% rate, and half high bias with a 5.5% rate.”

    Again, there is an assumed connection here which I would dispute, and you just opened another can of worms with “microaggressions.” I get what you are saying, and would agree in cases where there was solid evidence for causation, but in this case we can leave it with “I disagree”.

    …………….
    “As to bypassing the will, we haven’t banned nicotine, caffeine, alcohol, advertising, gaming machines (well the latter might depend on your state).”

    So much wrong here. Actually the US did ban alcohol, and some states and communities still do ban it (I lived in and around a “dry town”). Advertising with respect to many different things is highly regulated and in some cases banned. Same goes for gaming machines.

    Further, many of those things are banned for people under 18 and in some states 21, which is not what is being advocated for education to alter minds.

    But most importantly, none of these (with the possible exception of advertising) are being forced on people; if they affect anyone’s will, it is with the (at least initial) consent of the individual. And I guess it is worth pointing out that they (or their “against the will” effects) are not thought to contribute to the public (or individual) good. Once it affects explicit behavior, individuals generally try to get it off their backs.

    …………
    “I’m happy for “nanny state” interventions at the same level of these undoubtedly effectual agents, given they are generally accepted by most communities. I even think compulsory voting is a good idea.”

    It sounds like you would be for “lacing the water” programs by the state if it provided some measurable outcome you preferred? I suppose if it was with a fine whisky I might support that, but my guess is most people would not.

    Seriously though… have you considered how much power you want to grant a “nanny state” when that “nanny” could be Trump and Pence? I suppose saltpeter and anti-gay drugs will be just fine to slip into the water systems.

    While I am not for compulsory voting, there is a vast difference between that (which is policing explicit behavior) and trying to effect control of who people would vote for at the level of the subconscious (which is the issue here).

  24. 1. There is no such thing as a “microaggression.” The concept is part of the larger effort to redefine terms like ‘harm’, in the civic context, so as to justify violating liberal values like freedom of speech.

    2. Glad to hear that you’re “happy for the nanny state.” Alas, the US was designed on a blueprint of liberal values — hence the Bill of Rights — with which nanny-statism is incompatible to a great degree.

  25. “I hope I am wrong and perhaps, with this latest revelation, the creators of the IAT will remove the test from the internet and put out a statement unequivocally opposing its use, not only until these serious scientific challenges have been adequately addressed, but until they can articulate a sound purpose for public use of the test.”

    I would hope so too, but so far they have responded to criticism by first acknowledging the problems (and there are a lot), followed by a restatement of their confidence in their procedure for detecting racism. Why they continue I don’t know, but part of it may be that economic factors are biasing their judgment (is there an IAT for that?).

    “Consulting: Project Implicit scientists and staff provide legal, business and research consulting services” – http://www.projectimplicit.net/consulting.html

    Personally, I think implicit bias is a feature of cognition and IATs are one way of showing this. At the same time, I think it’s very important to acknowledge that the tests can’t reliably predict racism (in any useful sense of the term), and, like various supposed lie-detection tests, I’m completely against their being used in any justice proceedings.

    That’s not to say that I think all procedures that use a measure “designed to detect the strength of a person’s automatic association between mental representations of objects (concepts) in memory” are without merit.

    Liked by 1 person

    Hi db. “so much wrong here”… as I said, generally accepted by most communities (perhaps that needs to be “many communities within the West”). I’m curious whether you or Dan K thinks banning alcohol consumption by adults (or even 20-year-olds) is nanny-statism or a completely acceptable act of a democratically elected government under the aegis of the US Bill of Rights.* I totally agree with db’s point that such effective methods of belief and behaviour change need to be closely regulated by nanny-statist democratic governments with a view to the common good, rather than wielded willy-nilly by private parties to their own advantage.

    I think we have slid from the original question, which I will now characterize as “Does the IAT give useful information to individuals?”, to “Do interventions designed to alter mean implicit bias, as measured by the IAT, in a population reduce inequity in the longer term?” DB has argued no, but this is mainly a scientific question, which can be answered if we are willing to spend enough time and money. Since the interventions usually involve the person consciously practicing techniques that may minimize intuitive biases in judgments, I would argue that even if they are ineffective, they are not illiberal.

    We know implicit bias is important as one cause of injustice – I was just reading about another example affecting legal proceedings:
    http://www.brandeis.edu/departments/psych/zebrowitz/publications/PDFs/1990s/Zebrowitz_McDonald_1991.pdf
    Again, we assume judges and jurors are not consciously being swayed by aesthetics, so it is only just that we require these individuals to work at reducing these biases by whatever methods work best. That is, there are expectations and duties on members of the liberal society acting in such roles as doctors, educators, employers and judges.

    *In Australia, the argument was if you are old enough to be conscripted and killed in warfare, then you are old enough to drink.

    Like

  27. Hi Marc, I liked your last comment, but want to make clear I disagree with one part: “I think implicit bias is a feature of cognition and IATs are one way of showing this”

    Bias is a loaded term which I don’t think is useful with regard to what is being measured. The same could be said of ‘implicit’. Differences in cognitive processing of information at subconscious levels are interesting. Measurement of implicit bias requires ideological assumptions and interpretations and, as you point out, can be used to drive a business model catering to such ideologically minded people.

    ………….

    Hi David, I believe that banning alcohol and just about anything else (I’ll leave out exceptions for space) is a form of nanny-statism. I would be against it. But I should make clear that such laws would be about regulating behavior. Even worse, to me, would be engaging in propaganda campaigns (aka education) that use mechanisms bypassing the will of the individual in order to get people to avoid drinking alcohol.

    “DB has argued no, but this is mainly a scientific question, which can be answered if we are willing to spend enough time and money.”

    I would go one step further. While I agree what you stated can be raised as a scientific question, it is not currently being researched in such a fashion. As I mentioned at PF I believe current assumptions, methods, and interpretations have placed this field largely on the line between pseudo-science and bad science.

    “Again we assume judges and jurors are not consciously being swayed by aesthetics, so it is only just that we require these individuals to work at reducing these biases by whatever methods work best. ”

    It seems like people keep trying to portray the positions taken by Dan and myself as arguing that subconscious bias does not exist. That it would never play a factor in people’s decisions. Or that we should not care about putting in place corrective measures to limit such effects.

    That is not the issue.

    The issue is what kind of corrective measures would be the most consistent with values of personal freedom and privacy, and which are anathema to that.

    The example you just gave of personal bias in the justice system has several built-in checkpoints to try to overcome it. The first is trial by jury; the second is jury selection (and the ability to move the trial location); the third (and fourth, etc.) is the ability to appeal decisions to higher courts, as well as to other branches of gov’t for leniency.

    Now let’s say none of that can make up for aesthetic bias. Then a case might be made that trials should be held in a way that reduces the potential further. Trials could be held without the judges and juries being able to see the defendants and plaintiffs, and perhaps even using generic names to avoid race and sex being factors.

    There are multiple ways we could change the system, rather than working on (or being concerned with altering) the subconscious or even conscious mental activity of citizens.

    Like

  28. db,

    “Bias is a loaded term which I don’t think is useful with regard to what is being measured …”

    I agree. I only realized the problems after I posted. I was thinking of bias in connectivity or at the neural level, but people usually mean bias as an issue at the social or cultural level.

    Liked by 1 person