It was going to be our savior. It was going to remove the human factor, the bias, whether explicit or implicit, from the mix. Artificial intelligence would rid us of the inherent racism of humans, which we had spent decades trying to shake but which never seemed to go away. After all, an algo can’t be racist. An algo has no feelings. An algo can’t love or hate. An algo is just an algo. Algorithms would save us.
Neither “woke” nor “social justice” was in vogue yet, so it would be unfair to characterize its proponents as such. They were against racism in the legal system, as were we all, but they weren’t “anti-racists” as that word is used today to characterize the new racism. That was back when eliminating racism was the goal rather than substituting new racism for old racism. And algos were the answer.
Then people began to realize that an algo was neither more nor less racist than its developer coded it to be. It used cold data, but the data came from humans. It applied that data without favor, but fear might be built into its source. It was a dilemma. Factors like jobs and family ties were strong predictors of defendants returning to court or not committing new crimes, but jobs and families favored white people over black people. Algos worked, to the extent math works, but ended up producing similarly disparate outcomes. Since the belief was that these outcomes were, per se, racist, the algos were racist.
The failing was made abundantly clear in Cathy O’Neil’s “Weapons of Math Destruction,” which demonstrated that the assumptions that went into the numbers perpetuated error in AI while hiding it behind a seemingly cold, neutral readout. Since then, AI has been flipped on its head: another evil black box that enables racism to remain in place while masking it with “science.” Prawf Frank Pasquale argues that it’s not enough that some realize this; evil algos must be prohibited.
[T]here is a risk of discrimination or lack of fair process in sensitive areas of evaluation, including education, employment, social assistance and credit scoring. This is a risk to fundamental rights, amply demonstrated in the United States in works like Cathy O’Neil’s “Weapons of Math Destruction” and Ruha Benjamin’s “Race After Technology.” Here, the E.U. is insisting on formal documentation from companies to demonstrate fair and nondiscriminatory practices. National supervisory authorities in each member state can impose hefty fines if businesses fail to comply.
Frank has a point about not trusting AI to do our dirty work any more than Tesla crashes instill confidence that self-driving cars (hear much about them lately?) won’t crash into big rigs. But the E.U. regs he promotes define an evil AI by its outcome, “nondiscriminatory practices.”
If the AI is predicated, for example, on prior criminal history, and black people are going to have more significant priors because cops treat them like dirt, focus their attention on black people, and consequently cause a feedback loop where racist policing gives rise to black people with more priors, which gives rise to AI treating black people as more prone to criminality, which is used to justify greater deployment in black neighborhoods such that more black people are arrested and have more priors, then the math works but it’s garbage in, garbage out.
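That loop can be sketched in a few lines of code. The numbers below are invented for illustration only: two neighborhoods with identical underlying offense rates, where each year's patrol allocation is justified by the accumulated arrest records.

```python
import numpy as np

# Toy model of the feedback loop above (invented numbers, illustration only):
# two neighborhoods with IDENTICAL underlying offense rates, where next
# year's patrol allocation is justified by the accumulated arrest records.
offense_rate = np.array([0.1, 0.1])   # same actual behavior in both places
patrol = np.array([0.4, 0.6])         # slightly skewed starting deployment
priors = np.zeros(2)

for year in range(20):
    arrests = patrol * offense_rate   # you find crime where you look for it
    priors += arrests                 # arrests become criminal histories...
    patrol = priors / priors.sum()    # ...which justify the next deployment

# Twenty years later, patrols and priors remain skewed 40/60, even though
# both neighborhoods offend at exactly the same rate.
```

The point of the sketch is that the loop is self-consistent: the data always “confirms” the deployment that generated it, so the initial skew never corrects itself. The math works; it is still garbage in, garbage out.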
But that doesn’t mean all data that ends up with disparate outcomes is discriminatory, either. Poverty correlates with crime, and not just for the obvious reason that poor people commit crimes because they have nothing to eat. There are cultural factors that come into play: two-parent families, good role models, a strong appreciation of education, what are derisively called “bourgeois values.” The arguments about why don’t do much to change the short-term effects of black-on-black crime, violence, drugs, theft and people being physically harmed. When someone is about to shoot you, it’s not a good time to argue over social welfare programs or whether the SAT is racist.
A.I. developers should not simply “move fast and break things,” to quote an early Facebook motto. Real technological advance depends on respect for fundamental rights, ensuring safety and banning particularly treacherous uses of artificial intelligence. The E.U. is now laying the intellectual foundations for such protections, in a wide spectrum of areas where advanced computation is now (or will be) deployed to make life-or-death decisions about the allocation of public assistance services, the targets of policing and the cost of credit.
There is an ideological assumption that these three factors, “fundamental rights, ensuring safety and banning particularly treacherous uses of artificial intelligence,” aren’t in internal conflict. Frank’s point is that AI developers should be careful not to develop algos that perpetuate racist input and then wrap it up in a pretty math bow. At the same time, what purpose is there to AI if it does its job well, is fundamentally sound, and still produces what Frank calls “treacherous uses” because it results in disparate outcomes?
Just as it’s wrong to hide racist assumptions within the data used by algos, is there any point to putting math to use when it’s only allowed to tell us 2+2=5 because that’s the outcome we want it to tell us? Don’t be falsely discriminatory, but also don’t be falsely non-discriminatory. But is there any support for accurate AI that doesn’t comport with our preconceived biases? Not if any algo that results in disparate outcomes is per se prohibited, no matter how factually accurate it may be.

I think the Sentence-o-matic 1000 will work just fine after we get those pesky, unpredictable humans straightened out…
Not even then.
Ken Jennings, after losing to Watson on Jeopardy: “I, for one, welcome our new computer overlords.”
How’s that Watson doing lately? Turns out that winning at Jeopardy is easier than serving any actual useful function.
Watson is used for all sorts of things. A lot of them are problems where the answer can be checked. For example, if it’s used to take a list of symptoms and suggest a diagnosis, the docs are going to confirm with additional tests if available.
I don’t think EU-style, hyper-bureaucracy is going to find a solution in the criminal justice context.
It’s not as if Watson wasn’t the subject of a “where are they now” story less than two weeks ago. You ever wonder if people just make up stuff about which they have no clue?
What’s curious is how this was once such a huge idea with vast potential, and now it’s either uncontroversial or no one cares anymore. You’ve handled this with nuance, as you usually do, and I would have expected this to warrant heated debate.
Yet here we are, on a sleepy Saturday, with barely a thoughtful comment.
Kinda makes me wonder why I bother.
I worked on a project whose goal was to identify fraud in a government program using AI. The process went like this: Investigations were performed on a randomly chosen group of participants and a yes/no determination of whether or not there was fraud was produced for each. A data set was also provided containing values such as age, sex, race, zip code, marital status, family composition, and so on for each member of the group. The (obvious) goal was to develop an algorithm that predicted fraud from these variables.
The AI algorithm we developed was around 95% accurate. This was duly communicated to the project sponsors, who liked the result just fine but didn’t like the fact that one particular variable from the list above was the key to the algorithm’s effectiveness. I bet you can guess which variable that was.
When that variable was removed from the data set, accuracy dropped to around 70%. Not good. Then someone got the bright idea of first predicting that one variable from the others, then feeding that into the original algorithm. Accuracy went up to around 90%. Good enough.
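The proxy effect in that anecdote can be reproduced on synthetic data (hypothetical numbers, not the project’s actual variables or accuracies): a sensitive variable drives the outcome, the “neutral” variables each correlate with it only weakly, and a simple majority vote over them reconstructs it well enough to recover most of the lost accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def flip(x, p):
    """Copy x, flipping each bit with probability p."""
    return np.where(rng.random(x.shape) < p, 1 - x, x)

# Hypothetical synthetic data: sensitive variable s drives the label, and
# three "neutral" variables (think zip code, family composition) are each
# only loosely correlated with s (right 80% of the time).
s = rng.integers(0, 2, n)
proxies = np.stack([flip(s, 0.2) for _ in range(3)])
label = flip(s, 0.05)                 # the outcome tracks s almost exactly

acc = lambda pred: (pred == label).mean()

acc_with_s = acc(s)                              # keep the sensitive variable
acc_one_proxy = acc(proxies[0])                  # drop it, lean on one proxy
s_hat = (proxies.sum(axis=0) >= 2).astype(int)   # majority vote rebuilds s
acc_reconstructed = acc(s_hat)                   # feed the reconstruction back
```

Removing the sensitive column doesn’t remove the information; it just launders it through the variables that correlate with it.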
This result was communicated to the project sponsors, and that was that. (They never called, they never wrote…)
Now, the primary source of bias here should be obvious: The investigations. These were necessarily done by people in the field, and it seems … unlikely that they looked at everyone equally. Definitely problematic.
Or was it? I lied about a couple of things in this account: This happened in 1980, when I was still in college. There was no “AI” involved, only a fairly simple regression analysis courtesy of SPSS. The government program was welfare, and investigations were done as part of some kind of oversight process. The problem was that the more fraud the oversight process turned up, the less money you got. So the goal wasn’t to detect fraud per se, but rather to be able to direct the limited local investigatory resources toward finding the fraud the oversight process was likely to detect.
The only thing AI might have changed is that a machine learning approach could have teased out that variable on its own, and if it had, it’s highly unlikely you could figure out that’s what happened from examining the resulting neural net or whatever.
In conclusion, two points. First, this issue has been around for a long time, and has very little to do with the tools involved. There are a lot of reasons to be skeptical about AI specifically, but this isn’t one of them.
Second, whether or not something counts as bias depends on your true goals. In this case, had there been a source of completely unbiased data, the results would not have been as good.
Perhaps you have a point, but one would never know it from what you’ve written.