This comment by John Neff got me thinking. Lawyers have this nasty habit of believing in the validity of our own methods. We proclaim it, usually in some catchy phrase, repeat it incessantly, and then it becomes our truth. Actually, truthiness. It is because we believe it is. One such belief is that we can determine who is lying and who is not.
This reminded me of Eugene Volokh’s post about the first court test of the fMRI, coming in a San Diego juvenile sex-abuse case. Not surprisingly, it will be offered by the defense to prove that the defendant is innocent. According to Wired:
The company that did the brain scan, No Lie MRI, claims their test is over 90 percent accurate, but some scientists and lawyers are skeptical…. The company’s report says fMRI tests show the defendant’s claim of innocence is not a lie.
Laboratory studies using fMRI, which measures blood-oxygen levels in the brain, have suggested that when someone lies, the brain sends more blood to the ventrolateral area of the prefrontal cortex. In a very small number of studies, researchers have identified lying in study subjects with accuracy ranging from 76 percent to over 90 percent (pdf). But some scientists and lawyers like [Stanford law professor Hank] Greely doubt that those results will prove replicable outside the lab setting, and others say it just isn’t ready yet….
[On the other hand,] even if the science behind a technology isn’t fully established, Brooklyn Law School’s Edward Cheng, who studies scientific evidence in legal proceedings, said it might still be appropriate to use it in the courtroom.
“Technology doesn’t necessarily have to be bulletproof before it can come in, in court,” Cheng said.
It’s that last line by Cheng that should scare you. Lawyers have an entirely different level of tolerance for ambiguity than scientists, largely because we have such an awful understanding of science and, more importantly, probability. And don’t forget, judges, those gatekeepers of science in the courtroom, used to be lawyers before they went on the public dole.
Do we applaud the “No Lie MRI” scan? Is this the miracle of modern science that will cleanse the courtroom of lies and deception? After all, anything that proves a person’s innocence can’t be all bad, right? And certainly no one can fault the defense for trying to find something, anything, to prove the defendant innocent when sincere but mistaken testimony would likely convict him otherwise. But then, once the fMRI camel’s nose is inside the tent, it won’t be long until the test is used as a sword rather than shield, and defendants will be convicted based on their lies as well.
So what of the validity of the fMRI? The hearing will consist of the proponents of the test showing that it is 90% valid. Ninety percent is pretty good when it comes to lies or truth. It’s certainly a high enough percentage, on its face, to raise a reasonable doubt. Is it enough to eliminate a reasonable doubt? Well, since no court has ever established a percentage belief required of a jury, that remains a mystery. But if the scan is accepted into evidence, perhaps courts will be forced to decide whether a 90% valid test is sufficient to prove guilt beyond a reasonable doubt.
Still, there’s a problem. One that wouldn’t occur to most lawyers or judges. On its face, we look at a 90% accuracy rate and assume that to mean that it will be correct 9 times out of 10, and conversely have a failure rate of 1 in 10. Hah! More lawyerly truthiness. We are so arrogant.
Unless you have some serious interest in probability, it would never even occur to a lawyer to consider that this number is simplistic and misleading, since, as noted here, tests have 2 basic accuracies and 2 predictive values. Didn’t know that, did you. But what about the Base Rate Fallacy, raised here? Consider this example:
In a city with 100 terrorists and one million non-terrorists there is a surveillance camera with an automatic face recognition software. If the camera sees a known terrorist, it will ring a bell with 99% probability. If the camera sees a non-terrorist, it will trigger the alarm 1% of the time. So, the failure rate of the camera is always 1%.
Suppose somebody triggers the alarm. What is the chance he/she is really a terrorist?
Imagine that all 1,000,100 people pass in front of the camera. About 99 of the 100 terrorists will trigger a ring — and so will about 10,000 of the million non-terrorists. Therefore 10,099 people will be rung at, and only 99 of them are terrorists. So, the probability that a person who triggers the alarm is actually a terrorist is 99 in 10,099 (about 1/100).
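The arithmetic in the quoted example is just Bayes’ rule, and it can be checked in a few lines of Python. This is a sketch using only the numbers given above (100 terrorists, one million non-terrorists, 99% true-positive rate, 1% false-positive rate); nothing here comes from the original post beyond those figures.

```python
# Bayes' rule applied to the surveillance-camera example:
# P(terrorist | alarm) = true alarms / (true alarms + false alarms)

terrorists = 100
non_terrorists = 1_000_000

true_positive_rate = 0.99   # alarm rings, given a terrorist
false_positive_rate = 0.01  # alarm rings, given a non-terrorist

true_alarms = terrorists * true_positive_rate        # ~99 people
false_alarms = non_terrorists * false_positive_rate  # 10,000 people

p_terrorist_given_alarm = true_alarms / (true_alarms + false_alarms)
print(round(p_terrorist_given_alarm, 4))  # prints 0.0098 — about 1 in 100
```

Note how the “99% accurate” camera yields an alarm that is wrong about 99 times out of 100, purely because non-terrorists vastly outnumber terrorists. The same logic applies to any lie-detection test run on a population where liars (or truth-tellers) are rare.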
Things aren’t as simple as they first appear, which gives rise to this generous conclusion:
When it comes to the accuracy of diagnostic testing, the cluelessness of lawyers and courts is staggering.
But given our tendency to truthiness, a test like the fMRI, should it be accepted by the judge, could well displace the fact-finding function and obviate the need for those icky, messy jury thingies. After all, who needs twelve people to rubber-stamp what a machine has already concluded? And if a judge decides that the fMRI is sufficiently accurate to be admitted into evidence, it then is sufficiently accurate, regardless of whether it can discern a lie from truth 9 times out of 10.
Cheng’s observation, that technology doesn’t have to be perfect to be admissible, is an accurate reflection of the law, which is why it should scare the daylights out of us. This machine will, on its own, free or imprison people. We know that juries love to shift the responsibility of deciding people’s fate elsewhere, and will defer to any “expert” at the drop of a hat. We also know that jurors, like lawyers, judges and college students, are bored to tears with in-depth discussions of statistical analysis and probability, making it unlikely that counter-experts will carry any weight in challenging the conclusion of the fMRI. And finally, we know that there is no viable definition of “beyond a reasonable doubt” (assuming the jury can follow the instructions at all), and that most convictions derive from the jury’s determination that a defendant is more likely guilty than not.
So in our quest for truth, we trade one version of truthiness for another, this one far more difficult to get around by argument than any we’ve faced before. But hey, we’re lawyers! There’s nothing we love more than legal fictions. And since Daubert allows it, and if a judge will admit it, we’ll finally have that magic machine we’ve always dreamed of. So what if it’s imperfect. Isn’t everything in the law?
Very interesting post, but when DNA was first introduced in court, the jury was unconvinced and disregarded the evidence.
From what I have learned about memory, given sufficient time it would be possible for a person to alter their memory so that they think their version of the story is the truth. We already have evidence that this can happen in cases of identification by an eyewitness. In such cases the truth test (by jurors or machine) would be either false or ambiguous.
It does not seem practical to use an MRI on all witnesses and it appears to me that the application is premature.
Whenever a new technology is introduced, scientists immediately try to find uses for it, and some of the early applications get a lot of publicity. Once the new results have been tested, a common outcome is that the early publicity turns out to be hype.
Good post. I was halfway through when a bunch of red flags got raised in my mind — and then some were addressed in the rest of the post, and in the comment by John Neff.
It’s not just lawyers being “arrogant” about understanding probability and statistics; people in general are terrible at intuiting probabilities, if not absolutely unable to do so. If you ask someone to generate a series of random digits from 1 to 10, they just can’t do it. And Bayesian inference and base rates, as the references in the post show, are also not intuitive. There’s a good YouTube video on this topic.
Another good example is the Monty Hall paradox, popularized by Marilyn vos Savant in Parade magazine in the early ’90s. Even professional mathematicians had a hard time overcoming their intuitions in attempting to properly analyze that situation.
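For anyone who still doubts the Monty Hall result, it is easy to verify by simulation rather than argument. This is a minimal sketch (not from the original comment): a player picks a door, the host opens a losing door, and we tally how often staying versus switching wins.

```python
import random

def monty_hall(trials=100_000):
    """Simulate the Monty Hall game; returns (stay win rate, switch win rate)."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        prize = random.randrange(3)  # door hiding the prize
        pick = random.randrange(3)   # contestant's initial choice
        # Host opens a door that is neither the pick nor the prize.
        opened = next(d for d in range(3) if d != pick and d != prize)
        # Switching means taking the one remaining unopened door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == prize)
        switch_wins += (switched == prize)
    return stay_wins / trials, switch_wins / trials
```

Running this shows staying wins about 1/3 of the time and switching about 2/3 — exactly the counterintuitive answer vos Savant gave.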
I think Martha Farah’s work on false memories supports John Neff’s hypothesis about fMRI, or for that matter, any physiological “lie detection” method: people who genuinely believe a false memory are going to come up as truthful in any test. Too much reliance on “scientific lie detection tests” would be a big mistake, as I bet appropriate studies will eventually demonstrate.
Great stuff. Keep it coming.
I wonder how much attention the courts pay to research on memory or to memory impairment associated with age, alcohol/drug abuse and disease.
Obviously they have to take Alzheimer’s and dementia into account in commitment cases, but if they do not allow jurors to take notes, they don’t know much about medium-term memory.