What Role Should AI Play In Judging?

A couple of federal judges were humiliated when it was revealed that they had used AI to write their decisions, as AI did what AI does and included fake, hallucinated citations. Oopsie. But fake cites are merely the easiest problem to find. When cites and quotes don’t exist, it’s just a matter of someone doing the legwork of checking, whereupon the error becomes obvious. Unfortunately, such obvious mistakes are not the only way in which AI is infiltrating the judiciary.

When Xavier Rodriguez, a Texas-based federal judge, prepares for a hearing, he usually begins by turning to artificial intelligence. He feeds the relevant court filings into an AI tool that quickly produces a timeline of the case and the claims that parties are making for him to review.

“My law clerks would be wasting 30, 45 minutes, an hour, developing a chronology of events,” Rodriguez told The Washington Post. “This thing does it instantaneously.”

Before a hearing, Rodriguez might also ask AI to suggest questions to ask an attorney or identify weaknesses in a plaintiff’s argument. In an area of law in which he feels particularly well-versed, Rodriguez sometimes — after deciding on his judgment — uses AI to draft the ruling he will issue.

In all likelihood, AI handles what Judge Rodriguez asks of it pretty well most of the time, even if it’s dubious to characterize the time law clerks spend making sure it’s done accurately as “wasted.” After all, the iron triangle makes no exception for judges. Much of what judges do in preparation isn’t exactly brain surgery, but rather mundane drudgery. Still, it has to get done, and why not have AI do the drudgery?

A Northwestern University study published this week, co-authored by Rodriguez, collected responses from 112 federal judges and found that more than 60 percent reported using one of a set of popular AI tools at least once in their judicial work. Around 22 percent of the judges said they used AI daily or weekly in their duties.

That judges use AI doesn’t mean AI works well, or well enough. It just means some judges are early adopters or find it more convenient. And given the likelihood of its ubiquity, it’s unsurprising that it’s found its way into legal tech in general, and into legal tech directed toward government use by the judiciary in particular.

Courts are also pursuing partnerships with legal vendors developing AI tools for judicial work. The Los Angeles County Superior Court announced a pilot program in March with Learned Hand, a legal start-up that develops an AI tool for judges. Learned Hand’s AI is also being used in trial courts in 10 states and the Michigan Supreme Court, the company said. Legal research companies Thomson Reuters and LexisNexis have contracts to provide AI tools to the federal judiciary.

Judges claim to be cautious about AI, as well they should be. But its use, and consequently reviews of its use, conceals an insidious problem. If AI is being used for background work,* from providing caselaw to summarizing papers (which some lawyers labor over to make sure every word is precisely what they mean) to identifying weaknesses in arguments, what if it’s wrong? Not by some huge, flagrant margin, but in the little details that distinguish a win from a loss? On the surface, it will appear as if AI is doing a swell job, providing a generic overview or a plain vanilla summary, but critical nuance is lost to AI and the judge, consequently, never knows it.

Judges say they’re aware of the risks, even as some experts worry that AI’s unreliability could compromise their authority.

“Judges, they’re responsible for making decisions that are very important to people and resolving disputes that are very significant,” said Eric Posner, a law professor at the University of Chicago. “They just can’t gamble with a technology that is not fully understood and that is known to hallucinate.”

The upside, proponents say, could be a more efficient judiciary better equipped to process heavy caseloads.

Fast, easy and less work, or as lawyers are prone to say, an efficient use of judicial resources. But Posner is only partially right. While judges are, indeed, responsible for their decisions, they certainly can, and do, gamble with a technology that is not fully understood. After all, it’s not as if judges were perfect before and never erred. That’s why they built appellate courthouses, right?

Except judges were chosen, for better or worse, to wear the robes, sit on the bench and, hopefully, not ruin people’s lives or fortunes. Some judges are hard-working and brilliant. Some are lazy and, well, not the sharpest knife in the courtroom. Either way, they should take responsibility for doing the job no matter how facile some Lexis AI tool might be in getting them out to the golf course to make that 3 pm tee time.

*Addressed here is the use of AI in judicial performance not involving reaching decisions or writing opinions, which raises very different concerns.



7 thoughts on “What Role Should AI Play In Judging?”

  1. MollyGodiva

    I am torn. Judicial decisions and all the legal reasoning need to be done by a human judge. On the other hand, courts move way too damn slow and AI could help. A judge would enter the decision, the legal reasoning, and all the citations. AI would produce the document that the judge reviews. That is acceptable.

    1. LY

      That is called a “clerk”. And hopefully they won’t make stuff up to add to the document.

  2. Skink

    None. Federal judges have both permanent and temporary lawyers as clerks, plus student clerks. AI adds nothing to the responsibility to see if the papers are legit. The same is true of the opposing lawyers, only more so. After all, it’s their case.

    Made-up stuff by lawyers has been happening for a couple centuries. “It takes too much time” is an idiot’s response.

  3. Hunting Guy

    Is it a screwdriver being used as a chisel?

    You can use a screwdriver as a chisel, but it would make a mess of the hinge slot on a door and a chisel would make a poor screwdriver.

    On the other hand, a sharp chisel would cut wood cleanly and a proper sized screwdriver would tighten the screws quickly.

    Is the AI being used appropriately, has it been trained for legal use rather than general purposes? Then it becomes a useful tool.

    Does it make mistakes? How does the error rate compare to human error doing the same job?

    I need more information before I condemn it or embrace it.

  4. Michael Miller

    AI for the judiciary is appealing for something like the same reason as that old Saturday Night Live sketch for a restaurant called “Pre-Chew Charlie’s.”

  5. Uriah Cheetham

    At our Firm, we have found that much of what is described here as “judgment” is in practice the product of sufficiently structured inputs. Once those are in place, the resulting decisions rarely surprise.

Comments are closed.