In the constant and age-old struggle to end crime, or at least alleviate it to the extent possible, we’ve often turned to social scientists, criminologists and others, to tell us what works. They, in turn, conduct experiments and reach conclusions. We, in turn, adopt their conclusions and base approaches and policies on their experiments. And crime continues unabated.
An article at City Journal rips social “science” to shreds, and in the process, teaches a lesson that many need desperately to learn.
To understand the role of experiments in this context, we should go back to the beginning of scientific experimentation. In one of the most famous (though probably apocryphal) stories in the history of science, Galileo dropped unequally weighted balls from the Leaning Tower of Pisa and observed that they reached the ground at the same time. About 2,000 years earlier, Aristotle had argued that heavier objects should fall more rapidly than lighter objects. Aristotle is universally recognized as one of the greatest geniuses in recorded history, and he backed up his argument with seemingly airtight reasoning. Almost all of us intuitively feel, moreover, that a 1,000-pound ball of plutonium should fall faster than a one-ounce marble.
Yet Aristotle, certainly more brilliant than you or me, was wrong. The critical point is that we can argue, with what we believe to be unassailable logic, all day long, but that doesn’t make it so. Yet this anecdote relates to science, an experiment that can be replicated over and over. Social science, on the other hand, isn’t worth spit.
Crime, like any human social behavior, has complex causes and is therefore difficult to predict reliably. Though criminologists have repeatedly used the nonexperimental statistical method called regression analysis to try to understand the causes of crime, regression doesn’t even demonstrate good correlation with historical data, never mind predict future outcomes reliably. A detailed review of every regression model published between 1968 and 2005 in Criminology, a leading peer-reviewed journal, demonstrated that these models consistently failed to explain 80 to 90 percent of the variation in crime. Even worse, regression models built in the last few years are no better than models built 30 years ago.
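To make the "failed to explain 80 to 90 percent of the variation" claim concrete: in regression terms, that corresponds to an R² of roughly 0.1 to 0.2. Here is a minimal sketch, using invented data rather than actual crime statistics, of what such a weak fit looks like — the predictor name is purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictor (think: an unemployment-like index) and outcome.
# The signal is deliberately weak relative to the noise, mimicking a model
# that leaves most of the variance unexplained.
x = rng.normal(size=500)
y = 0.4 * x + rng.normal(scale=1.0, size=500)

# Ordinary least-squares fit of a straight line
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

# R^2 = 1 - (residual sum of squares / total sum of squares)
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"R^2 = {r_squared:.2f}")  # in the neighborhood of 0.1-0.2
```

An R² in that range means the model accounts for only 10 to 20 percent of the variation in the outcome — the other 80 to 90 percent is, as far as the model is concerned, noise.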
So since the early 1980s, criminologists have increasingly turned to randomized experiments. One of the most widely publicized of these tried to determine the best way for police officers to handle domestic violence. In 1981 and 1982, Lawrence Sherman, a respected criminology professor now at the University of Cambridge, randomly assigned one of three responses to Minneapolis cops responding to misdemeanor domestic-violence incidents: they were required to arrest the assailant, to provide advice to both parties, or to send the assailant away for eight hours. The experiment showed a statistically significant lower rate of repeat calls for domestic violence for the mandatory-arrest group. The media and many politicians seized upon what seemed like a triumph for scientific knowledge, and mandatory arrest for domestic violence rapidly became a widespread practice in many large jurisdictions in the United States.
But sophisticated experimentalists understood that because of the issue’s high causal density, there would be hidden conditionals to the simple rule that “mandatory-arrest policies will reduce domestic violence.” The only way to unearth these conditionals was to conduct replications of the original experiment under a variety of conditions. Indeed, Sherman’s own analysis of the Minneapolis study called for such replications. So researchers replicated the randomized field trial six times in cities across the country. In three of those studies, the test groups exposed to the mandatory-arrest policy again experienced a lower rate of rearrest than the control groups did. But in the other three, the test groups had a higher rearrest rate.
Why? In 1992, Sherman surveyed the replications and concluded that in stable communities with high rates of employment, arrest shamed the perpetrators, who then became less likely to reoffend; in less stable communities with low rates of employment, arrest tended to anger the perpetrators, who would therefore be likely to become more violent. The problem with this kind of conclusion, though, is that because it is not itself the outcome of an experiment, it is subject to the same uncertainty that Aristotle’s observations were.
The article goes on to state, and demonstrate, the inability of social scientists to craft programs that succeed and produce results that can be replicated. Jim Manzi, the author, concludes, among other things, that there’s just no magic, no matter how high-sounding or logical the solutions appear. The human condition is too varied, complex and unpredictable to be transformed so easily.
This is really a very important piece to consider on many levels, particularly for those of us who promote ideas that we believe will improve criminal justice and its outcomes for people. Ideas that make so much sense to us, or which social scientists tell us will solve the problems that vex us, receive our strong, and often blind, support. We support them at the expense of other concerns and priorities, believing they will cure a disease and are therefore worth the commitment, only to discover long afterward, having suffered for the choice, that they weren’t the magic bullet solution we thought, or hoped, they were.
It would be easy to lift thousands of words from the article to make Manzi’s point, but better that you read it yourself and give it some thought.
H/T Keith Lee at An Associate’s Mind
What’s frightening is that while human subjects research in the hard sciences (esp. biomedicine) requires investigators to jump through all sorts of hoops and subject their plans to all sorts of institutional review in order to minimize the potential of harm to research subjects (or at least to balance it with the goals of the research being undertaken), no such ethical constraints seem to exist in social science policy experiments of this sort.
I would imagine that being arrested would constitute harm to a “research subject” (community resident) in these experiments. That there’s no ethical oversight of these studies is disturbing.
Agreed.
It’s pretty awful that these policies are being implemented in real world settings when there is no accurate, empirical evidence backing them up.
There just isn’t a method to accurately model or predict human behavior to the degree that the “social sciences” seem to purport. I don’t want to completely blast them – I’d prefer that they attempt to come up with repeatable, predictable tests on which to base their hypotheses – as opposed to mere speculation. However, as you noted, these “test” subjects are real people, in the real world, with real consequences. And I don’t think anyone would argue that they volunteered to be test subjects.