Beware The “Likelihood Ratio”

There are two things that judges and jurors absolutely adore: cool names for pedestrian things and the reduction of complex, difficult scientific and mathematical concepts into something so simple even a third-grader could get it. Meet the “likelihood ratio,” coming to a courtroom near you.

Two experts at the National Institute of Standards and Technology (NIST) are calling into question a method of presenting evidence in courtrooms, arguing that it risks allowing personal preference to creep into expert testimony and potentially distorts evidence for a jury.

The method involves the use of the Likelihood Ratio (LR), a statistical tool that gives experts a shorthand way to communicate their assessment of how strongly forensic evidence, such as a fingerprint or DNA sample, can be tied to a suspect. In essence, the LR allows a forensics expert to boil down a potentially complicated set of circumstances into a single number, providing a pathway for experts to concisely express their conclusions based on a logical and coherent framework. The LR’s proponents say it is appropriate for courtroom use; some even argue that it is the only appropriate method by which an expert should explain evidence to jurors or attorneys.
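For the curious, the number itself is nothing exotic: it’s the probability of seeing the evidence if the prosecution’s hypothesis is true, divided by the probability of seeing it if the defense’s hypothesis is true. A minimal sketch follows, with the function name and the probabilities invented purely for illustration, not drawn from the NIST paper or any real case.

```python
# Sketch of how a likelihood ratio is computed. The probabilities are
# hypothetical, supplied by the expert's own judgment or model.

def likelihood_ratio(p_evidence_given_prosecution: float,
                     p_evidence_given_defense: float) -> float:
    """LR = P(evidence | prosecution hypothesis) / P(evidence | defense hypothesis)."""
    return p_evidence_given_prosecution / p_evidence_given_defense

# Hypothetical example: the expert judges the evidence near-certain if the
# suspect is the source (0.99) and rare otherwise (0.001).
lr = likelihood_ratio(0.99, 0.001)
print(f"LR = {lr:.0f}")  # LR = 990: the evidence is ~990x more probable under the prosecution's story
```

Note that both inputs come from the expert, which is exactly where the trouble discussed below begins.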

After word started to trickle out that the “experts'” claims of a one in 27 gazillion chance the defendant wasn’t totally the murderer might overstate the probability by about 27 gazillion, a new paradigm was needed. After all, if experts can’t condemn the guilty on behalf of their paymaster, how will they make the payments on that new Mercedes?

And like Johnny-on-the-spot, the Likelihood Ratio appeared. It’s essentially a measure of the expert’s confidence in the outcome.

However, in a new paper published in the Journal of Research of the National Institute of Standards and Technology, statisticians Steve Lund and Hari Iyer caution that the justification for using the LR in courtrooms is flawed. The justification is founded on a reasoning approach called Bayesian decision theory, which has long been used by the scientific community to create logic-based statements of probability. But Lund and Iyer argue that while Bayesian reasoning works well in personal decision making, it breaks down in situations where information must be conveyed from one person to another, such as in courtroom testimony.

Who doesn’t love Bayesian decision theory, right?

Bayesian reasoning is a structured way of evaluating and re-evaluating a situation as new evidence comes up. If a child who rarely eats sweets says he did not eat the last piece of blueberry pie, his older sister might initially think it unlikely that he did, but if she spies a bit of blue stain on his shirt, she might adjust that likelihood upward. Applying a rigorous version of this approach to complex forensic evidence allows an expert to come up with a logic-based numerical LR that makes sense to the expert as an individual.
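The pie scenario maps directly onto Bayes’ rule in its odds form: posterior odds equal prior odds times the likelihood ratio. A minimal sketch, with the sister’s prior and the stain’s LR invented purely for illustration:

```python
# Hedged sketch of the blueberry-pie example above, with made-up numbers.
# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.

def update_odds(prior_prob: float, lr: float) -> float:
    """Return the posterior probability after updating prior_prob by a likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * lr
    return posterior_odds / (1 + posterior_odds)

# Hypothetical prior: a 10% chance her sweets-averse brother ate the pie.
# Hypothetical LR for the blue stain: 20x more likely if he ate it than if he didn't.
posterior = update_odds(prior_prob=0.10, lr=20)
print(f"Posterior probability: {posterior:.0%}")  # roughly 69%
```

The arithmetic is mechanical; the prior and the LR are not. Both reflect the sister’s, or the expert’s, personal judgment.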

There are two prongs to this manner of reasoning. First, that logic gives rise to greater confidence, even if the hoofbeats you hear will occasionally come from zebras. Second, that the degree of confidence is tacitly grounded in the expert’s life experience, his “common sense,” which may not be universal. But isn’t logic better than the alternative?

The trouble arises when other people—such as jurors—are instructed to incorporate the expert’s LR into their own decision-making. An expert’s judgment often involves complicated statistical techniques that can give different LRs depending on which expert is making the judgment. As a result, one expert’s specific LR number can differ substantially from another’s.

“Two people can employ Bayesian reasoning correctly and come up with two substantially different answers,” Lund said. “Which answer should you believe, if you’re a juror?”
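To see how the same evidence can produce very different numbers, consider a toy sketch. The frequencies below are invented, not taken from Lund and Iyer’s paper: each expert must assume how common the matching feature is in some reference population, and that assumption alone can swing the LR by an order of magnitude.

```python
# Illustrative only: two experts examine the same reported match, but assume
# different reference populations for how common the matching feature is
# among people other than the suspect.

def lr_from_match(feature_frequency: float, p_match_if_source: float = 1.0) -> float:
    """LR for a reported match, given an assumed population frequency of the feature."""
    return p_match_if_source / feature_frequency

expert_a = lr_from_match(feature_frequency=1 / 1000)  # assumes the feature is rare
expert_b = lr_from_match(feature_frequency=1 / 100)   # assumes a broader, more common reference class
print(f"Expert A reports LR = {expert_a:.0f}, Expert B reports LR = {expert_b:.0f}")
# Expert A reports LR = 1000, Expert B reports LR = 100 -- same evidence, very different numbers
```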

Then again, maybe both experts are wrong. Just because somebody claims to be an expert, is allowed by a judge to testify as to her opinion, has an impressive array of letters following her name and speaks with an air of authority, doesn’t make her right.

“We’re a bit presumptuous as expert witnesses that our testimony matters that much,” said forensic scientist Christophe Champod, a proponent of the LR. “LR could perhaps be more statistically pure in the grand scheme, but it’s not the most significant factor. Transparency is. What matters is telling the jury what the basis of our testimony is, where our data comes from, and why we judge it the way we do.”

The NIST authors, however, maintain that for a technique to be broadly applicable, it needs to be based on measurements that can be replicated. In this regard, LR often falls short, according to the authors.

All of this sounds heartwarming, but that’s from the scientific side of the courtroom. On the jury side, the string of words proffered in the interest of transparency sounds like “blah, blah, blah,” in a Charlie Brown special. What they will hear loud and clear is the Likelihood Ratio number.

“Our success in forensic science depends on our ability to measure well. The anticipated use of LR in the courtroom treats it like it’s a universally observable quantity, no matter who measures it,” Lund said. “But it’s not a standardized measurement. By its own definition, there is no true LR that can be shared, and the differences between any two individual LRs may be substantial.”

Note that the “saving grace,” to the extent there is one, is the assumption that there will be two LRs offered in the courtroom. Except that’s not always the case, and indeed, is often not the case. The prosecution has the money and access to bring in experts, while the defense has neither. That means the jury only hears one LR, and it’s not from the defense.

“Just because we have a tool, we should not assume it’s good enough,” Lund said.

But it’s a tool, and it’s got a cool name, and it relieves the judge and jury from having to listen to all that boring science-y talk, do math and think hard. One number that tells you if the defendant is guilty. The only problem is that the number is little more than the data as filtered through the subjective personal malarkey of the guy being called an expert. What could possibly go wrong?



18 thoughts on “Beware The “Likelihood Ratio””

  1. LTMG

    Correctly using basic statistics is generally beyond the typical juror. It’s not that they aren’t intelligent enough to learn, some of them, but if applying statistical results were purely mechanical, then the differences between two LRs could not be substantial as stated by Lund. Lund and Iyer are very right to caution how LRs are used. Can an attorney really instruct the judge and jury within the scope of a trial so they truly learn the nuances of LRs, and what they can and can’t do? Very unlikely.

    1. grberry

      Sigh. I have a Bachelors in Mathematics from an engineering school with coursework focused on statistics. I took ten college courses in probability and statistics. So I’d hope I’d do better than the typical juror in using statistics.

      But it has been multiple decades, and I don’t use statistics on a daily basis. So I don’t trust myself to get them right when I do need to use them for work. I keep a statistics textbook on my shelf at work (more at home), and also make use of Dr. Google. If I was a juror I’d want to – but be forbidden to – use the same tools in order to get it right.

      Having now used those tools, yuck – this is information that often ought to be discarded as a distraction/prejudicial.

      1. SHG Post author

        There aren’t too many judges who experienced and survived GIRs. The prosecution only needs one to give this validation the gloss of judicial approval.

  2. Pedantic Grammar Police

    There are two kinds of experts. There are experts who do things; for example martial arts experts, or computer programming experts, and there are experts who make pronouncements that we are expected to believe; for example bite mark experts or hair analysis experts. 99% of the second type of expert are bullshit artists. This LR nonsense is an attempt to rehabilitate this second type of expert. Everyone but prosecutors and judges knows based on the DNA evidence SNAFU that this second type of expert is probably nothing but a con artist. If they don’t do something, this common knowledge will eventually filter up to the judges and then they will have to get real jobs.

      1. Pedantic Grammar Police

        In law, an “expert” is a con artist who is permitted to testify as to his opinion. FIFY

        1. Jeff Gamso

          In 1995, the New Mexico Senate unanimously approved an amendment to a bit of legislation:
          “When a psychologist or psychiatrist testifies during a defendant’s competency hearing, the psychologist or psychiatrist shall wear a cone-shaped hat that is not less than two feet tall. The surface of the hat shall be imprinted with stars and lightning bolts.

          Additionally, a psychologist or psychiatrist shall be required to don a white beard that is not less than 18 inches in length, and shall punctuate crucial elements of his testimony by stabbing the air with a wand. Whenever a psychologist or psychiatrist provides expert testimony regarding a defendant’s competency, the bailiff shall contemporaneously dim the courtroom lights and administer two strikes to a Chinese gong.”

          Sadly(?), it did not become law.

          1. DaveL

            You can’t just walk into court, declare that you’re a wizard, and expect the court to believe you. You have to have a certificate that says you’re a wizard, then you’re all set. Hey, it works for Drug Detection Experts.

            1. Jeff Gamso

              No small thing, that certificate.

              I have one identifying me as an “ANTI-TERRORIST EXPERT.”

              It’s posted in my office, and says I’m a certified graduate of the “Anti-Terrorist Training Program” having “met the marksmanship qualifications and standards set forth by the rules and regulations of the performance and certification committee.”

              I paid, as I recall, $5.95 for it. Perhaps there was an additional charge for postage, but I don’t actually recall.

            2. SHG Post author

              Dang. Mine cost $6.95. You got a good deal. But I was also ordained a Pastafarian minister, included in the price.

      2. Frank

        Expert (n): From the term “Ex-” meaning “has been” and “spurt” meaning “a drip under pressure.”

  3. KP

    “Your Honour the defendant has an L/R ratio of 73%. The other suspect had one of 71% and we decided he didn’t do it.”

    What could go wrong?

  4. John Neff

    I guess it is better to have a judge decide what statistical mumbo-jumbo is admissible than Charles Grassley.

  5. Charles Morrison

    So it’s no longer “within a reasonable degree of scientific certainty.” Rather, it’s based upon “my very scientific understanding of the facts and my common sense.” Cool… not that the former was great, but every time folks come up with a new way of communicating guilt, we ought to be suspicious. Color me crazy, but I’d like to defend the ‘ole style.

    The problem, from a defense perspective, admittedly, is that we’re always chasing/herding cats. When we find out they’re dogs, people have already suffered. At least this time, maybe some in the community have an issue before too many bodies have been buried?

    This is no different than fingerprint experts, but as I learned (the hard way, by the way), jurors often don’t care if an expert admits his willingness to connect A to Z is looser than another’s. That’s the only expert before them, right?

    Thanks for the post, SJ, I’ve not seen this yet… but it’s on the horizon, even in Ohio, I bet.
