Chesterton’s E-Fence

Michelle Alexander’s seminal work, The New Jim Crow: Mass Incarceration in the Age of Colorblindness, was a brilliant exposé of how “our prison system is a unique form of social control, much like slavery and Jim Crow, the systems it has replaced.” Buried in there is a problem atop a problem: the New Jim Crow was born of reform of the old Jim Crow. Or to be blunt, we fixed one problem by creating a new problem.

And now Alexander raises the newest Jim Crow.

Since 2010, when I published “The New Jim Crow” — which argued that a system of legal discrimination and segregation had been born again in this country because of the war on drugs and mass incarceration — there have been significant changes to drug policy, sentencing and re-entry, including “ban the box” initiatives aimed at eliminating barriers to employment for formerly incarcerated people.

This progress is unquestionably good news, but there are warning signs blinking brightly. Many of the current reform efforts contain the seeds of the next generation of racial and social control, a system of “e-carceration” that may prove more dangerous and more difficult to challenge than the one we hope to leave behind.

By “e-carceration,” she refers to our adoption of algorithms as the colorblind, scientific fix for human frailty.

Bail reform is a case in point. Thanks in part to new laws and policies — as well as actions like the mass bailout of inmates in New York City jails that’s underway — the unconscionable practice of cash bail is finally coming to an end. In August, California became the first state to decide to get rid of its cash bail system; last year, New Jersey virtually eliminated the use of money bonds.

But what’s taking the place of cash bail may prove even worse in the long run. In California, a presumption of detention will effectively replace eligibility for immediate release when the new law takes effect in October 2019. And increasingly, computer algorithms are helping to determine who should be caged and who should be set “free.” Freedom — even when it’s granted, it turns out — isn’t really free.

As a wag might put it, we’re just changing the head on the corpse: identifying problems that are undoubtedly real, but substituting simple fixes that passionate advocates are certain, just certain, will fix everything, only to realize after the shit hits the fan that their beloved solutions not only fail to fix the problem, but create new and worse problems.

There is a reason intractable systemic legal problems are intractable and systemic. There’s a reason all those smart and passionate folks who came before you failed to bring you a perfect world, and it’s not that they’re so stupid and you’re so much smarter. Nor is it, as so many of the most passionate advocates believe with all their heart and soul, that until this wondrous moment in history everyone was hateful and cynical, deliberately creating a system designed to be racist and sexist.

No, the problem is that problems can be very hard to fix, that there may be no simple, easy, painless solution that will miraculously eradicate awful things and bring us a new age of good feelings.

Finding solutions to problems begins with identifying the problem to be solved. Even this step has become nearly impossible to accomplish, as potential root causes are removed from consideration for their failure to comport with social justice ideology. This doesn’t mean, for example, that black guys are more criminalish, but maybe the dreaded bourgeois values of hard work, sacrifice, education and family structure have a lot to do with how kids do in school, how they come out on the back end, whether to college or prison.

Or maybe the problem is our “American Dream” belief that a college degree, as opposed to a skilled trade, is a magical path to financial security and happiness. And someone still has to empty the garbage cans in Utopia, or we’ll be awash in garbage. Maybe the path we promote leads us from one problem to another because we refuse to come to grips with the fact that ideology can’t change reality, so every easy fix that suits progressive desires crashes against the impenetrable wall of facts.

When and if we define a problem, we’re next confronted with how to fix it. People want fixes, and the fixes they want not only have to mesh with their beliefs, but be relatively painless. This Menckenian approach isn’t surprising. Who doesn’t want magic bullet solutions that solve everything and cost nothing? Except if it were that easy, it would have been fixed long ago, except of course for the fact that everybody until now was evil and hateful.

As Alexander correctly notes, we’ve entered another stage of reform to fix that last stage of reform that fixed the original Jim Crow, and it’s got problems.

Under new policies in California, New Jersey, New York and beyond, “risk assessment” algorithms recommend to judges whether a person who’s been arrested should be released. These advanced mathematical models — or “weapons of math destruction” as data scientist Cathy O’Neil calls them — appear colorblind on the surface but they are based on factors that are not only highly correlated with race and class, but are also significantly influenced by pervasive bias in the criminal justice system.

As O’Neil explains, “It’s tempting to believe that computers will be neutral and objective, but algorithms are nothing more than opinions embedded in mathematics.”
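To make O’Neil’s point concrete, here’s a minimal sketch in Python of what a pretrial “risk score” reduces to. Everything in it is invented for illustration: the feature names, the weights and the cutoff are not any vendor’s actual model. But every number is somebody’s opinion dressed up as math, and the zip-code feature shows how a “colorblind” input can proxy for race.

```python
# A sketch of a pretrial "risk score" with invented features, weights, and
# cutoff (not any vendor's actual model). Every number here is an opinion:
# someone chose the features, someone set the weights, and someone decided
# where "release" ends and "detain" begins.

WEIGHTS = {
    "prior_arrests": 0.25,   # inflated wherever policing is heavier
    "age_under_25": 0.20,
    "unemployed": 0.15,      # correlated with class
    "high_crime_zip": 0.25,  # a near-proxy for race in segregated cities
    "failed_to_appear": 0.15,
}
THRESHOLD = 0.5  # also an opinion

def risk_score(defendant: dict) -> float:
    """Weighted sum of 0/1 features, capped at 1.0."""
    return min(sum(WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS), 1.0)

def recommend(defendant: dict) -> str:
    return "detain" if risk_score(defendant) >= THRESHOLD else "release"

# Two defendants with identical conduct, different addresses:
same_conduct = {"prior_arrests": 1, "age_under_25": 1}
print(recommend({**same_conduct, "high_crime_zip": 0}))  # release (0.45)
print(recommend({**same_conduct, "high_crime_zip": 1}))  # detain (0.70)
```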

The reliance on algorithms is a great palliative measure, taking the onus off judges and putting it on mathematics. “Hey, it’s not me being racist, but the algorithm says so.” We substitute proxy criteria for racist criteria and end up in the same place, or worse. And the fix, ironically enough, is to then ignore the disparate impact of algorithms by jerry-rigging them to achieve the ideological outcome we expected them to produce, but they didn’t, which undermines the entire point of using algorithms if we’re going to override them as soon as they fail to give us the results we wanted.
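What that jerry-rigging looks like is simple enough to sketch. Assume, with invented scores that come from no real system, a model whose single cutoff produces a racial disparity in detention rates. The tempting “fix” is a hand-picked cutoff per group, at which point the cutoffs, not the algorithm, are doing the deciding.

```python
# Invented scores from a hypothetical upstream risk model.
scores = {
    "group_A": [0.30, 0.45, 0.55, 0.70],
    "group_B": [0.40, 0.50, 0.60, 0.80],
}

def detention_rate(vals, cutoff):
    return sum(v >= cutoff for v in vals) / len(vals)

# One uniform cutoff: the model alone decides, and a disparity appears.
for g, vals in scores.items():
    print(g, detention_rate(vals, 0.5))   # group_A: 0.5, group_B: 0.75

# The "fix": hand-picked per-group cutoffs that equalize the rates.
PER_GROUP_CUTOFF = {"group_A": 0.50, "group_B": 0.55}
for g, vals in scores.items():
    print(g, detention_rate(vals, PER_GROUP_CUTOFF[g]))  # both now 0.5
```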

Challenging these biased algorithms may be more difficult than challenging discrimination by the police, prosecutors and judges. Many algorithms are fiercely guarded corporate secrets. Those that are transparent — you can actually read the code — lack a public audit so it’s impossible to know how much more often they fail for people of color.
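The audit that’s missing isn’t mysterious, just undone. A minimal sketch of one, on hypothetical records rather than any real dataset: take the tool’s predictions and the actual outcomes, then compare how often people who never reoffended were flagged high risk anyway, group by group.

```python
from collections import defaultdict

# Hypothetical records, not a real dataset:
# (group, flagged_high_risk, actually_reoffended)
records = [
    ("white", True, True), ("white", True, False),
    ("white", False, False), ("white", False, False),
    ("black", True, True), ("black", True, False),
    ("black", True, False), ("black", False, False),
]

counts = defaultdict(lambda: {"fp": 0, "neg": 0})
for group, flagged, reoffended in records:
    if not reoffended:            # only people who did NOT reoffend
        counts[group]["neg"] += 1
        if flagged:               # ...but were flagged high risk anyway
            counts[group]["fp"] += 1

for group, c in counts.items():
    print(f"{group}: false positive rate = {c['fp'] / c['neg']:.0%}")
# white: 33%, black: 67% -- the number a secret algorithm never has to show
```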

These algorithms are created by corporations whose business model is to sell them to police and courts. For obvious reasons, they want to protect their IP, since revealing it would mean anyone could create their product and sell it. Except without transparency, the system is making determinations about people’s lives based on “the black box says so,” without any idea of how the black box makes its decisions.

But even if the black box’s secret is revealed, and the algorithm turns out to be just as discriminatory as the system before it, that doesn’t mean there is a better algorithm that will produce the results we want. At best we’re identifying what appears to be a problem, but that doesn’t mean there is any solution, or a solution that won’t be too painful for us to accept.

So the New Jim Crow is now the Newest Jim Crow, until it becomes the even Newer Jim Crow as we continue to make the same mistakes for the same reasons. In the meantime, we’ve torn down the systems that came before, which may well have been better, or at least more transparent, than the reforms that replaced them to the huge applause of passionate but simplistic advocates.

6 thoughts on “Chesterton’s E-Fence”

  1. Guitardave

    If you explore how the algos are working out for traders, it seems their biggest problem is that they find patterns within the noise, causing more problems (AI rabbit holes). Human traders consistently get 5-10% higher gains than A.I. This is an extremely bad idea… and it ain’t gonna be someone’s money that gets ‘lost’.
    (Thanks for the dystopian idiocracy nightmares, Admiral.) But I’m just a dinosaur…

  2. Ayoy

    Experian, the credit rating agency, are partnering with UK police to do this kind of profiling.

    Big Brother Watch looked into the data they are using to power the model: health data, exam results, postal code and ethnicity, to name a few.
