Orin Kerr famously wrote “brilliant people agree with me.”
One of the consequences of confirmation bias is that we are overly impressed by ideas that we happen to share. It’s a natural instinct, if not watched carefully. If you read something that reflects or resonates with your own views, you’ll agree with it. Upon agreeing with it, you’ll think it is highly persuasive. And if it’s highly persuasive, it’s probably brilliant.
That was in 2010, back when times were simpler and our world bucolic. Today the adjectives have expanded to “moral,” “decent” and “just.” All moral, decent and just people will certainly agree, because to disagree makes you immoral, indecent and unjust. This is a subtle but significant paradigm shift, where any disagreement no longer relates solely to your intelligence, but the goodness of your soul. Before you were stupid. Now you’re stupid and venal.
It’s within this framework that Harvard lawprof Cass Sunstein and cognitive neuroscientist Tali Sharot raise the “Republican Doctor” question.
Suppose you need to see a dermatologist. Your friend recommends a doctor, explaining that “she trained at the best hospital in the country and is regarded as one of the top dermatologists in town.” You respond: “How wonderful. How do you know her?”
Your friend’s answer: “We met at the Republican convention.”
Knowing a person’s political leanings should not affect your assessment of how good a doctor she is — or whether she is likely to be a good accountant or a talented architect. But in practice, does it?
They conducted an experiment addressing two factors, competency and ideology: would people ignore conclusively demonstrated competency in favor of ideology? They created a fictitious thing they called a “blap,” and went to town using a less-fictitious reward they called money.
To make the most money, the participants should have chosen to hear from the co-player who had best demonstrated an ability to identify blaps, regardless of that co-player’s political views. But in general, the participants did not do this. Instead, they most often chose to hear about blaps from co-players who were politically like-minded, even when those with different political views were much better at the task.
In addition to choosing more often to hear from co-players who were politically like-minded, when making their decisions about whether a shape was a blap, participants were also more influenced by politically like-minded co-players than co-players with opposing political views.
In short, people sought and then followed the advice of those who shared their political opinions on issues that had nothing to do with politics, even when they had all the information they needed to understand that this was a bad strategy.
The rationale behind this “bad strategy” was chalked up to the “halo effect”:
If people think that products or people are good along one dimension, they tend to think that they are good along other, unrelated dimensions as well. People make a positive assessment of those who share their political convictions, and that positive assessment spills over into evaluation of other, irrelevant characteristics.
While the aptly-named halo effect may provide the psychological basis for this facially poor choice, it falls short of explaining the depth of its force in the current atmosphere. People have always believed that they are generally smart and correct, and thus assumed that others who agree with them are similarly smart and correct. As Steven Duffield summed it up:
BREAKING: if you trust a person’s judgment, you trust that person’s judgment.
But that doesn’t do justice to the experiment or its outcome. How does one “trust a person’s judgment” when you have objective proof that they aren’t very good at the very thing for which you’re reposing trust? Even if you agree with someone’s politics, do you really want them to do your brain surgery when you know none of their patients ever survived?
There is a difference beneath the surface today that seems not to be visible to the unwilling. There is no tolerance for disagreement. Reasonable minds cannot differ. There is no possibility that your dogma isn’t true, and similarly no possibility that a conflicting view could be correct. It’s no longer about better or worse solutions between well-intended people seeking to achieve their goals in good faith, but a battle of good and evil.
Even when you know, you conclusively know, that the player whose beliefs you find abhorrent and venal is the best person to follow when it comes to the performance of a discrete task, you refuse to do so, refuse to heed that person’s choices, refuse to win. You would rather lose, rather fail, than side with someone who is immoral, indecent and unjust.
The old platitude, that “reasonable minds may differ,” required one to accept the premise that disagreement could be reasonable. That was possible only if reason was the foundation for decision-making, if we approached issues with the mindset of finding the most reasonable view.
The “halo effect” is so well-named because it refers to that shiny circular thing above an angel’s head. But it’s more than a mere metaphor these days, as views are embraced or rejected based not on reason but on emotion, ideology and blind belief. Not even conclusive proof that the atheist’s solution is correct will shake your resolve to believe your bible.
The experiment was quite fascinating, and reveals something far deeper than our confirmation bias and natural tendency to extend approval in one dimension to acceptance in completely unrelated dimensions. It reveals that the depth of our bias is so great that we will ignore facts, objectively conclusive facts, if they conflict with whom and what we want to believe. Where once we believed the people who agree with us are brilliant, we now believe that people who disagree with us are evil. There’s no reasoning your way out of evil.