Cite?

The obvious problem with a great many studies is that they’re cited for a proposition by people who never read them. They become the myth of a study, such as the Lisak study persistently used to show that only 2-10% of rape accusations are false. That’s not remotely what the study says, and yet it’s become an article of faith, repeated constantly, believed without question. But that’s just one issue.

For people who care enough, as opposed to people who simply cite a study assuming its validity, there are numerous problems that arise, from a study conflating definitions or issues (such as a study about “rape” that includes in its definition the “ear rape” of hearing unwanted words) to methodological problems of sample size, self-selection, and payment or incentive. It’s as if I did a study of what everyone living at Casa de SJ thought about something. It might look as official as any other study, but it wouldn’t be of much value to anyone but us.

But there is another huge gap in the knowledge base upon which we rely to ground our claims of truth. What if someone with a couple of letters after their name had a thesis they desperately believed to be true and kinda made sure their study proved it? What if it was just complete nonsense, but was embraced as a darling study, cited a million times to prove a thesis that conformed with whatever belief was consistent with the current elite orthodoxy?

Except there are gatekeepers, the people who decide what studies get published and what studies do not. Granted, they can be pranked, as Sokal Squared demonstrated, and there are journals that pretend to be legit but are merely house organs for junk science grifters, but there are serious journals too, the ones we all know and believe, like the New England Journal of Medicine. Surely they can be trusted to limit what they publish to serious studies about serious matters?

Maybe not.

Academic publishing is famously brutal. You might have a great manuscript that is under review, then rejected based on the comments of one anonymous reviewer who thinks you use too many exclamation points. Or a reviewer who is bitter because you didn’t cite his particular work. Or a reviewer who didn’t really read the manuscript and who goes on to criticize your work for neglecting some important statistical process that you, in fact, implemented plainly and correctly.

And this is just the tip of the iceberg.

I know, because I have published more than 100 academic pieces in my career to date. I’ve pretty much been through it all.

Glenn Geher tried to get a paper published “on the topic of political motivations that underlie academic values of academics,” inspired by a talk by Jonathan Haidt, who founded Heterodox Academy. Nobody wanted to publish it.

Each rejection came with a new set of reasons. After some point, it started to seem to us that maybe academics just found this topic and our results too threatening. Maybe this paper simply was not politically correct. I cannot guarantee that this is what was going on, but I can tell you that we put a ton of time into the research and, as someone who’s been around the block when it comes to publishing empirical work in the behavioral sciences, I truly believe that this research was generally well-thought-out, well-implemented, and well-presented. And it actually has something to say about the academic world that is of potential value.

I’ve never had a paper that was so difficult to publish. Not even close.

Since no journal would take it, he ended up taking the advice of Clay Routledge and publishing it on his own.

Honestly, this suggestion seemed kind of genius to me. After all, I don’t need more publications for any extrinsic reason at all. I’ve held tenure since 2004. Further, I know full well that my Psychology Today blog posts receive way more views than do my academic articles. And I know that, in fact, many of these views come from academics themselves.

A bit of irony is that blog posts are often far more widely read than academic articles, but lack the ascribed credibility of “serious” journals. Not to mention, they don’t cite as well, so they’re easily dismissed. But what did Geher’s study find?

We designed a study with academics in mind. In short, we surveyed nearly 200 academics from around the US and asked them to rate the degree to which they prioritize each of the five following academic values:

  • Academic rigor
  • Knowledge advancement
  • Academic freedom
  • Students’ emotional well-being
  • Social justice
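
To make this concrete, here’s a minimal sketch of the kind of analysis such ratings invite, correlating self-reported political orientation with each value rating. The numbers and column layout are invented for illustration; this is not Geher’s dataset or code.

    # Illustration only: invented data, not from the study.
    from statistics import correlation  # Python 3.10+

    # Each row: political orientation (1 = very liberal ... 7 = very
    # conservative), then ratings (1-7) of the five values, in the
    # order listed above.
    respondents = [
        (2, 5, 6, 6, 7, 7),
        (6, 7, 7, 6, 4, 3),
        (4, 6, 6, 5, 5, 5),
        (1, 4, 5, 6, 7, 7),
        (7, 7, 6, 7, 3, 2),
    ]

    values = ["rigor", "knowledge", "freedom", "well-being", "social justice"]
    orientation = [row[0] for row in respondents]

    # Pearson's r between orientation and each value's ratings.
    for i, name in enumerate(values, start=1):
        ratings = [row[i] for row in respondents]
        print(f"{name:>15}: r = {correlation(orientation, ratings):+.2f}")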

Do the “gatekeepers” of cites value academic rigor and the advancement of knowledge, or do they value ideological objectives?

Some highlights of the findings are as follows:

  • Relatively conservative professors valued academic rigor and knowledge advancement more than did relatively liberal professors.
  • Relatively liberal professors valued social justice and student emotional well-being more so than did relatively conservative professors.
  • Professors identifying as female also tended to place relative emphasis on social justice and emotional well-being (relative to professors who identified as male).
  • Business professors placed relative emphasis on knowledge advancement and academic rigor while Education professors placed relative emphasis on social justice and student emotional well-being.
  • Regardless of these other factors, relatively agreeable professors tended to place higher emphasis on social justice and the emotional well-being of students.

Of course, if you want to know more than just the highlights, or whether these highlights are legitimate, or whether the methodology of the study is sound, you would have to read the actual study, even if these highlights confirm what you always suspected about what’s become of academia.

Then again, you won’t be able to cite to this study in a prestigious journal, because they would have nothing to do with it, unlike the Lisak study, which has since been debunked as a worthless piece of crap but remains irrefutable in campus rape mythology.

13 thoughts on “Cite?”

  1. Howl

    “After some point, it started to seem to us that maybe academics just found this topic and our results too threatening.”

  2. Richard Kopf

    SHG,

    I have secondhand knowledge about the thrust of your post. From that vantage point, your post is spot on. Even the hard sciences are sometimes infected by a peer reviewer’s bias.

    Hypothetically, let’s say it would be good to know about the age, growth, and reproductive dynamics of a certain fish in the southwest Pacific Ocean. After 90 days at sea, eyes strained to the breaking point peering into a microscope, and many dissections of the critters, one might conclude that the fish was doing pretty well.

    A highly regarded peer reviewer might note that the study, while seemingly valid, could be used for nefarious purposes. Perhaps, it was implied, the author could tone it down. If the report’s acceptance by the reviewer meant a PhD, and rejection of the dissertation meant no PhD after years and years of study, what would you do, as the author and researcher, if you believed what you wrote was right?

    You would probably put in enough caveats that even Greenpeace wouldn’t give a shit. And science, buttressed by the vaunted peer review process, trudges on.

    All the best.

    RGK

  3. B. McLeod

    Trashcan “studies,” designed to “prove” the wokey fad du jour, are leading to the demise of the credibility of alleged science altogether. Once people become aware of a few “ear rape” studies and realize how this publishing thing is gamed, maybe they decide not to believe in pandemics either. By not policing its excesses, the scientific academy is sending itself over a cliff.

    1. SHG Post author

      Why should science get a free ride when the unduly passionate have chosen to destroy the credibility of all political institutions?

  4. Rengit

    When I took psych 101 in college, we learned about the groupthink phenomenon in social psychology, and the paradigmatic case used to explain it was Kennedy’s “best and brightest” leading us into the Vietnam War quagmire. Over 50 years later, despite knowledge of this phenomenon, a great many academics apparently think they’re immune to it. “We all think the same thing, and this study we did confirms the things we all believe? No, that can’t be groupthink, it has to be because we’re right!”

  5. phv3773

    I read a little of the paper to see how the data was collected. The study is based on an online questionnaire. One hundred seventy presumably self-selected people began the questionnaire, and 140 completed it. So about one in six self-deselected. How many of those were rigor-demanding conservatives? Most of them, maybe?

    If you’re writing a paper quoting statistics, you need to work a little harder selecting your sample.
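
    For what it’s worth, the dropout arithmetic, taking the 170 and 140 from the paper; the rest is just division:

      # Numbers from the paper; the computation is simple arithmetic.
      started, completed = 170, 140
      dropout = (started - completed) / started
      print(f"Dropout: {dropout:.1%}, roughly 1 in {1 / dropout:.0f}")
      # Dropout: 17.6%, roughly 1 in 6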

  6. DaveL

    While the difficulty encountered in trying to publish the study is certainly cause for concern, I’m heartened by the small effect size they actually found. The strongest correlation between their measure of political conservatism and any of the listed academic values was -0.34. That’s an r-squared value of 0.1156, which could be interpreted as left/right political leanings explaining only a little over 10% of the observed variation. The male/female difference was similar, with the difference in averages between the sexes being roughly a tenth of the typical variation within each sex. So, we’re not talking about separate camps with incompatible values.
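
    To put rough numbers on that, a minimal sketch; the -0.34 is the paper’s figure, the d = 0.1 reading of the sex difference is my gloss above, and the overlap formula is the standard one for two normal distributions:

      from statistics import NormalDist

      r = -0.34                      # strongest correlation reported
      print(f"r^2 = {r ** 2:.4f}")   # 0.1156 -> ~11.6% of variation

      # Two groups whose means differ by d within-group standard
      # deviations overlap by 2 * Phi(-d / 2), if roughly normal.
      d = 0.1
      overlap = 2 * NormalDist().cdf(-d / 2)
      print(f"Distribution overlap at d = {d}: {overlap:.0%}")  # ~96%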

    1. delurking

      Well, I’ve published a bunch of articles, and my most recent one was also, in my opinion, unfairly rejected. “Shit happens” is as reasonable an explanation for both my and Geher’s rejections as anything else. Geher’s paper doesn’t say much.

      “Overall, regardless of any gender differences, academic rigor and advancing knowledge were the most highly endorsed of the core values.”
      “Interestingly, field of study was not, in and of itself, significantly related to political orientation”
      “No significant effect of the covariate (political orientation) was found” on views on academic freedom.
      etc., etc.

      I’m not going to do the statistical analyses, but just scrolling through the paper and looking at how many different ANCOVAs they ran rings the researcher-degrees-of-freedom alarm bells. Maybe the rejection is reasonable and Geher is just grumpy that the statistical standards are higher now than they were 20 years ago.
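
      The arithmetic behind that alarm bell, for anyone curious; the test counts here are hypothetical, since I haven’t tallied how many the paper actually ran:

        # With a 0.05 threshold, the chance of at least one false
        # positive grows quickly with the number of independent tests.
        alpha = 0.05
        for k in (1, 5, 10, 20):
            fwer = 1 - (1 - alpha) ** k
            print(f"{k:2d} tests: P(>=1 false positive) = {fwer:.0%}")
        # 1 test: 5%; 5: 23%; 10: 40%; 20: 64%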

      1. DaveL

        Do journals really have such alarm bells? I can’t count the times I’ve seen some newspaper article about scientists finding a link between (for example) weedkiller and heart disease, then when I find the original study, it’s like “we compared exposure to 20 common herbicides to incidence rates of heart disease, brain cancer, bowel cancer, psychiatric disturbance, renal failure, pneumonia, lupus, stroke, and flatulence…”

        1. delurking

          I don’t know. I doubt that looking for such things has become formalized as part of journal acceptance standards, but the behavioral science community, and psychology more broadly, embarrassed by the replication crisis, is certainly paying more attention. I’m still skeptical about medicine, though. I also see a lot of suspicious popular press articles these days about diet and some ailment.

  7. verylosingtrader

    ‘A bit of irony is that blog posts are often far more widely read than academic articles, but lack the ascribed credibility of “serious” journals’

    This is why you won’t be nominated for SCOTUS.

    Well, there’s that and all the mind-raping insults you’ve doled out.
