Politics and Brain Surgery

Orin Kerr famously wrote “brilliant people agree with me.”

One of the consequences of confirmation bias is that we are overly impressed by ideas that we happen to share. It’s a natural instinct, if not watched carefully. If you read something that reflects or resonates with your own views, you’ll agree with it. Upon agreeing with it, you’ll think it is highly persuasive. And if it’s highly persuasive, it’s probably brilliant.

That was in 2010, back when times were simpler and our world bucolic. Today the adjectives have expanded to “moral,” “decent” and “just.” All moral, decent and just people will certainly agree, because to disagree makes you immoral, indecent and unjust. This is a subtle but significant paradigm shift, where any disagreement no longer relates solely to your intelligence, but the goodness of your soul. Before you were stupid. Now you’re stupid and venal.

It’s within this framework that Harvard lawprof Cass Sunstein and cognitive neuroscientist Tali Sharot raise the “Republican Doctor” question.

Suppose you need to see a dermatologist. Your friend recommends a doctor, explaining that “she trained at the best hospital in the country and is regarded as one of the top dermatologists in town.” You respond: “How wonderful. How do you know her?”

Your friend’s answer: “We met at the Republican convention.”

Knowing a person’s political leanings should not affect your assessment of how good a doctor she is — or whether she is likely to be a good accountant or a talented architect. But in practice, does it?

An experiment was conducted that addressed two factors: competency and ideology. Would people ignore conclusively demonstrated competency in favor of ideology? The researchers created a fictitious thing they called a “blap,” and went to town using a less-fictitious reward they called money.

To make the most money, the participants should have chosen to hear from the co-player who had best demonstrated an ability to identify blaps, regardless of that co-player’s political views. But in general, the participants did not do this. Instead, they most often chose to hear about blaps from co-players who were politically like-minded, even when those with different political views were much better at the task.

In addition to choosing more often to hear from co-players who were politically like-minded, when making their decisions about whether a shape was a blap, participants were also more influenced by politically like-minded co-players than co-players with opposing political views.

In short, people sought and then followed the advice of those who shared their political opinions on issues that had nothing to do with politics, even when they had all the information they needed to understand that this was a bad strategy.
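The incentive structure of the experiment can be made concrete with a toy simulation. All parameters here are illustrative assumptions, not the study’s actual numbers: suppose one advisor calls blaps correctly 90% of the time and a politically like-minded one only 55% of the time, with a dollar paid per correct call.

```python
import random

# Toy sketch of the advisor-choice setup described above. The accuracy
# figures (90% vs. 55%) and the $1-per-correct-call payout are assumptions
# for illustration only, not the experiment's actual parameters.

random.seed(0)

def payoff(advisor_accuracy: float, trials: int = 10_000) -> float:
    """Average dollars per trial from always following one advisor."""
    correct = sum(random.random() < advisor_accuracy for _ in range(trials))
    return correct / trials  # $1 per correct classification

accurate_stranger = payoff(0.90)   # opposing politics, demonstrably skilled
likeminded_friend = payoff(0.55)   # shared politics, barely above chance

# Following demonstrated skill beats following affiliation, every time.
assert accurate_stranger > likeminded_friend
```

The arithmetic is not subtle: the participants had everything they needed to see that the like-minded advisor cost them money, which is precisely what makes their choice a “bad strategy.”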

The rationale behind this “bad strategy” was chalked up to the “halo effect”:

If people think that products or people are good along one dimension, they tend to think that they are good along other, unrelated dimensions as well. People make a positive assessment of those who share their political convictions, and that positive assessment spills over into evaluation of other, irrelevant characteristics.

While the aptly named halo effect may provide the psychological basis for this facially poor choice, it falls short of capturing the depth of its force in the current atmosphere. People have always believed that they are generally smart and correct, and thus assumed that others who agree with them are similarly smart and correct. As Steven Duffield summed it up:

BREAKING: if you trust a person’s judgment, you trust that person’s judgment.

But that doesn’t do justice to the experiment or its outcome. How does one “trust a person’s judgment” when you have objective proof that they aren’t very good at the very thing for which you’re reposing trust? Even if you agree with someone’s politics, do you really want them to do your brain surgery when you know none of their patients ever survived?

There is a difference beneath the surface today that seems not to be visible to the unwilling. There is no tolerance for disagreement. Reasonable minds cannot differ. There is no possibility that your dogma isn’t true, and similarly no possibility that a conflicting view could be correct. It’s no longer about better or worse solutions between well-intended people seeking to achieve their goals in good faith, but a battle of good and evil.

Even when you know, you conclusively know, that the player whose beliefs you find abhorrent and venal is the best person to follow when it comes to the performance of a discrete task, you refuse to do so, to heed the person’s choices, to win. You would rather lose, rather fail, than side with someone who is immoral, indecent and unjust.

The old platitude, that “reasonable minds may differ,” required one to accept the premise that disagreement was reasonable. This was possible only if reason was the foundation for decision-making, that we approached issues with the mindset of finding the most reasonable view.

The “halo effect” is so well-named because it refers to that shiny circular thing above an angel’s head. But it’s more than a mere metaphor these days, as views are embraced or rejected based not on reason but emotion, ideology and blind belief. Not even conclusive proof that the atheist’s solution is correct will shake you off your resolve to believe your bible.

The experiment was quite fascinating, and reveals something far deeper than our confirmation bias and natural tendency to extend approval in one dimension to acceptance in completely unrelated dimensions. It reveals that the depth of our bias is so great that we will ignore facts, objectively conclusive facts, if they conflict with whom and what we want to believe. Where once we believed the people who agree with us are brilliant, we now believe that people who disagree with us are evil. There’s no reasoning your way out of evil.

19 thoughts on “Politics and Brain Surgery”

  1. Spencer McGrath-Agg

    You probably need to know the researchers’ politics before you can evaluate their conclusions.

    Do you have the link to the quoted article (apologies if I missed it in the post)? I’d like to see how much skin the participants had in the game.

    1. SHG Post author

I neglected to include a link to the Sunstein op-ed in my post, which has now been corrected. Here’s the link to SSRN for the experiment, which was included in the op-ed that I failed to link.

  2. Patrick Maupin

    People killing themselves by going to the wrong doctor for political reasons is just the tip of the iceberg. How can any workplace, much less congress or society, function effectively if everybody on the “other side” is an evil idiot who is too stupid to teach you anything, and who might stab you in the back at any time?

    1. SHG Post author

      Using a physician as the foil was a wise choice, as it’s both objective and benign. What docs do is both scientific and devoid of political influence. We extrapolate from the conclusions, but the concept can be applied by either tribe based upon the neutrality of using a physician.

      1. Patrick Maupin

        Yes, choice of doctor is a good paradigm, but the extreme example of that (which probably makes the case even clearer to most people) is that some people will see “western medicine” itself as evil and political, and thus won’t see any doctor.

        These research findings are consistent with, and perhaps even completely explained by, a generalization of the Dunning-Kruger theory to extend it beyond self. An individual who mistakenly and consistently believes in his own high competence relative to that of others has already demonstrated both that he lacks objectivity in evaluating competence, and that he has an innate need to believe highly of himself.

        Those two factors taken together practically mandate that he rank the competence of others according to the few criteria he is capable of easily evaluating, including how well their beliefs match his own.

          1. Patrick Maupin

            Yes, we can agree on barbecue, ergo we are both brilliant. No wonder all the vegans are shrilly struggling.

            1. Patrick Maupin

              I tried to send one of them to Ted Nugent to learn some hunting skills, but that didn’t work out so well.

            2. Richard Kopf

              SHG and Friends,

              Barbecue can be faked. Mix up some barbecue sauce and you can throw it on the meat from an old cow that even a slaughterhouse won’t take and call it BQ.

              In contrast, Gentlemen, I give you bacon. Fry and eat bacon and your intellectual powers to discern truth from BS–regardless of one’s tribal instincts–become as sharp as SHG’s tongue.*

              All the best.

              RGK

              * Speaking of Tongue, can you say, “tacos de lengua?” Yum.

            3. SHG Post author

              For the record, I love bacon and my tongue will not appear on the menu of any food truck in the foreseeable future.

  3. JR

    I think the Russians have a saying that goes something like “Idiot’s thoughts soon converge.”

  4. Jack

It would have been interesting to see the results of the study broken down by the political leanings of the participants. Obviously people on the same side as me would have done better.

  5. Lucas Meyer

Andrew Gelman, a statistics professor at Columbia University who blogs about over-hyped research, blogged about this yesterday.

    His summary is that there are giant leaps from what was really researched to what appeared in the NYT article. The text below comes from his blog:

    “ ‘Participants were required to learn through trial and error to classify shapes as ‘blaps’ or ‘not blaps’, ostensibly based on the shape’s features. Unbeknownst to the participants, whether a shape was a blap was in fact random.’ And later they had to choose ‘who the participant wanted to hear from about blaps and how they used the information they received.’

    (…)

    Huh? I’ll remember that, next time I go to a doctor and ask him or her to categorize geometric shapes for me.”

    1. Andrew Cook

      Whether or not the blap classification was random isn’t important for the study, as the participants didn’t know it was random. To them, blap classification was a skill you could learn, just as one could learn medicine or law. If they behaved rationally, the participants should have listened to and received guidance from “expert” blap classifiers, even though they became experts by random chance alone. Instead, they listened to their feelz and chose their in-group, including the worst blap classifiers of the bunch.

      … That explains so much about the comments section.
