Tuesday Talk*: Is AI In Law Enforcement Worth It?

While calling it “artificial intelligence” is somewhat new, the use of algorithms in law enforcement has been going on for a while now, and nobody really knows whether the benefits outweigh the costs.

The Office of Management and Budget guidance, which is now being finalized after a period of public comment, would apply to law enforcement technologies such as facial recognition, license-plate readers, predictive policing tools, gunshot detection, social media monitoring and more. It sets out criteria for A.I. technologies that, without safeguards, could put people’s safety or well-being at risk or violate their rights. If these proposed “minimum practices” are not met, technologies that fall short would be prohibited after next Aug. 1.

As tech has emerged purporting to provide new mechanisms for law enforcement to be more effective, it’s been adopted without either fanfare or critique. Facial recognition, for example, is some really cool stuff in the movies, but it has also produced some spectacular failures. Notably, the failures tend to be very much racial, as its accuracy in recognizing black people doesn’t seem to be nearly as good as it is for white people. Much as we don’t leap to find excuses to blame racism, this is very much a racial problem.

Consider the cases of Porcha Woodruff, Michael Oliver and Robert Julian-Borchak Williams. All were arrested between 2019 and 2023 after they were misidentified by facial recognition technology. These arrests had indelible consequences: Ms. Woodruff was eight months pregnant when she was falsely accused of carjacking and robbery; Mr. Williams was arrested in front of his wife and two young daughters as he pulled into his driveway from work. Mr. Oliver lost his job as a result.

All are Black. This should not be a surprise. A 2018 study co-written by one of us (Dr. Buolamwini) found that three commercial facial-analysis programs from major technology companies showed both skin-type and gender biases. The darker the skin, the more often the errors arose. Questions of fairness and bias persist about the use of these sorts of technologies.

Other technologies, from license plate readers to ShotSpotter, have been criticized for a variety of issues, from intrusiveness to error to the ease with which manipulation can be hidden behind the curtain of tech neutrality. They may be great when they work, but are they great enough to outweigh the harm when they don’t? How would we know?

As scholars of algorithmic tools, policing and constitutional law, we have witnessed the predictable and preventable harms from law enforcement’s use of emerging technologies. These include false arrests and police seizures, including a family held at gunpoint, after people were wrongly accused of crimes because of the irresponsible use of A.I.-driven technologies including facial recognition and automated license plate readers.

The Office of Management and Budget is proposing “minimum practices” so that oversight can catch up to the technology’s use and create a paradigm for deciding whether it’s, on the whole, a good thing or a bad thing, whether we are willing to suffer the cost of errors for the benefits tech purports to provide.

Here are highlights of the proposal: Agencies must be transparent and provide a public inventory of cases in which A.I. was used. The cost and benefit of these technologies must be assessed, a consideration that has been altogether absent. Even if the technology provides real benefits, the risks to individuals — especially in marginalized communities — must be identified and reduced. If the risks are too high, the technology may not be used. The impact of A.I.-driven technologies must be tested in the real world, and be continually monitored. Agencies would have to solicit public comment before using the technologies, including from the affected communities.

In the rush to embrace cool technology as it appears on the market, law enforcement has done little to implement safeguards and limits on its use. If it makes their job easier, or is at least believed to, they buy in. They don’t ask the public whether it’s a good idea. They don’t admit to its failings, which are usually swept under the rug since nobody wants to admit that their shiny new toy sucks, at least as to some people. And the determination of whether the tech is worth it is largely left up to law enforcement itself, without the rest of government or the public getting a chance to question it or call bullshit on its implementation.

Should law enforcement be empowered to latch onto any new tech that promises to be the cool new solution to crime and capture, or should it first require public comment and, to the extent anyone in government cares, approval? Do we wait until facial recognition is proven to be no more valid than dog sniffs to have our say, long after it’s become too deeply incorporated into police practice, and likely the law, to ever disentangle, because it turns out to be mostly a big sham? But what if it really does work, and all the harm it might have stopped is inflicted while we dither around with its potential flaws?

*Tuesday Talk rules apply, within reason.

7 thoughts on “Tuesday Talk*: Is AI In Law Enforcement Worth It?”

  1. rxc

    It is good that you mention drug dogs, because they are actually a form of AI. Most people think AIs are like ChatGPT, which interacts with humans in natural language. But there are other AIs that do not talk to you; instead, they alert or spit out a rating number.

    Dogs are “trained” to alert when they detect something, and most of these AIs “alert” when they decide that the alert trigger has been detected. Or they give you a number that indicates the probability that an alert trigger value has been reached. In both cases, the owner/designer of the AI cannot give you a specific explanation of why the alert occurred, just as a drug dog handler cannot explain exactly why the dog alerted. All they can say is that the dog or the AI was trained to alert. Bad/wrong alerts just require additional training. (A bare-bones sketch of what such an alerting model looks like follows at the end of this comment.)

    This guidance document about the use of AI is not very useful, because it just says that users of AI should use it carefully and try not to let it make mistakes. It does not explain how to evaluate the internal algorithms for errors, or how to check the data used to train them.

    Garbage in, garbage out has been a fundamental warning to everyone who has used computers since the very beginning. Now we have AIs, which have no way to do QA on the input data. They just ingest it into their unfathomable algorithms and spit out answers. To paraphrase Mary McCarthy: you should never assume that anything said by a computer is correct, including “and” and “the”.
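    To make the alert/threshold point concrete, here is a minimal sketch, with hypothetical feature names, weights and threshold that resemble no real vendor’s product, of the kind of AI that only emits a rating number and a yes/no alert:

    ```python
    # Minimal sketch of an "alerting" AI: it returns a probability-like score and
    # an alert decision once a trigger threshold is reached. The feature names,
    # weights and threshold below are hypothetical, for illustration only.
    import math

    TRAINED_WEIGHTS = {"feature_a": 1.7, "feature_b": -0.4, "feature_c": 2.3}  # assumed "learned" weights
    BIAS = -3.0
    ALERT_THRESHOLD = 0.8  # alert when the score reaches 80%

    def score(features: dict) -> float:
        """Return a probability-like rating between 0 and 1 (a logistic score)."""
        z = BIAS + sum(TRAINED_WEIGHTS[name] * value for name, value in features.items())
        return 1.0 / (1.0 + math.exp(-z))

    def alert(features: dict) -> bool:
        """True once the trigger value is reached -- the only 'explanation' on offer."""
        return score(features) >= ALERT_THRESHOLD

    sample = {"feature_a": 2.0, "feature_b": 1.0, "feature_c": 0.5}
    print(round(score(sample), 2), alert(sample))  # a rating number and a yes/no, nothing more
    ```

    All the consumer of such a model gets is the rating and the alert; the “why” is buried in the trained weights, which is exactly the drug-dog problem in software.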

    1. rxc

      The following headline popped up in my feed this morning from Ars Technica, which is an unabashed supporter of AI:

      “ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate”

      “It was bad at recognizing relationships and needs selective training, researchers say”

      Just think what it could do with the justice system.
