For years, decades really, I’ve explained and argued why most “new big things” in legal tech are either not going to work or fill a need that doesn’t exist. Legal tech guys really hated me for calling bullshit on their baby. Some who paid me to “consult” thought their money would buy my endorsement of their mutt. They learned. I was regularly accused of being a tech hater, but as Keith Lee succinctly explained, “It’s not that lawyers are anti-technology, it’s that they are anti-bullshit.”
AI is all the rage at the moment. Remember blockchain? Remember NFTs? Remember self-driving cars? Remember Google Glass? The newest billion-dollar baby is ChatGPT. How’s that working out for lawyers?
The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. When the circumstance was called to the Court’s attention by opposing counsel, the Court issued Orders requiring plaintiff’s counsel to provide an affidavit annexing copies of certain judicial opinions of courts of record cited in his submission, and he has complied. Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations. Set forth below is an Order to show cause why plaintiff’s counsel ought not be sanctioned.
That was from Judge Kevin Castel, SDNY. He was not amused by this tech faux pas. The lawyer on the case explained that he relied on the work of another lawyer with 30 years’ experience. The other lawyer explained that he relied on ChatGPT.
[7.] It was in consultation with the generative artificial intelligence website Chat GPT, that your affiant did locate and cite the following cases in the affirmation in opposition submitted, which this Court has found to be nonexistent: …
[8.] That the citations and opinions in question were provided by Chat GPT which also provided its legal source and assured the reliability of its content. Excerpts from the queries presented and responses provided are attached hereto.
[9.] That your affiant relied on the legal opinions provided to him by a source that has revealed itself to be unreliable.
[10.] That your affiant has never utilized Chat GPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false.
[11.] That is the fault of the affiant, in not confirming the sources provided by Chat GPT of the legal opinions it provided.
This was insufficient to assuage Judge Castel’s irritation at being fed papers with phony case names and cites, and there is an Order to Show Cause pending as to why the lawyer shouldn’t be sanctioned. The lawyer may not have intended to deceive the court, but as he admits, he failed to demonstrate any diligence as a lawyer before submitting papers, and that’s his fault entirely. That he was unaware that this new, shiny magic tech “solution” was a massive failure and just made shit up doesn’t diminish his duty, both to the court and to his client.
To the legal tech aficionados who can muster an excuse for any failure, the problem is that AI suffers delusions delightfully called “hallucinations,” because calling them massive fuck ups would be bad for business. For now, knowing that AI is untrustworthy and just makes things up is supposed to be good enough, shifting the burden to users to verify that anything ChatGPT does is accurate. The contention is that it’s less effort to check its cites than to find the cites in the first place, justifying using ChatGPT despite its failings.
Bullshit.
Initially, most people, and this is particularly true for lawyers, won’t do it. They won’t bother. Few enough lawyers care about anything more than getting words on paper and submitted to the court on time. Whether it’s good work, bad work, or phony work just isn’t the critical factor for a lawyer who has a deadline coming and nothing to show for it. They won’t check. They won’t be bothered to check. So what if ChatGPT’s work is, at its very best, pedestrian, unimaginative and almost certainly ineffective? It’s words on paper submitted on time, and that’s all they care about.
Sure, some lawyers are lazy, and can bill 20 hours for an opposition to a summary judgment motion that took them less than an hour with ChatGPT. The incentives are obvious. The pressures are clear. The outcome is, well, hopefully not sanction worthy, or at least not unforgivable. Stercus accidit, right?
Secondarily, even if the cites were legit and checked out, and the lawyer put in the time to run them through Lexis to make sure, at the very least, they existed, the work product is still crap. Sure, it may be better crap than the lawyer can produce, not because ChatGPT is any good at lawyering, but because the lawyer who relies on it is even worse than ChatGPT.
There’s a reason why we have to jump through hoops on the way to being entrusted as lawyers with other people’s lives and fortunes. We are supposed to be competent lawyers. We are supposed to think long and hard about how to zealously represent our clients. We are supposed to put in the time, the effort, the thought and the imagination it takes to win cases, or at least give our clients every possible chance to prevail.
ChatGPT does not. It’s just a program that generates words, strung together to create the appearance of legitimate legal thought without the spark that distinguishes a lawyer from, well, a computer program. Would you turn to a computer engineer to represent you? Then why turn to a computer engineer’s product to do your lawyering for you?
Maybe there will be a place for AI in law, filling in the boilerplate that no one reads and even fewer care about, but it will not be a substitute for a good lawyer. It will never be a substitute for a great lawyer. It might be a substitute for a crap lawyer, but that’s not really an endorsement of AI as much as a condemnation of incompetent lawyers.
This is the legal equivalent of the Babble Hypothesis – i.e., the thought that writing or saying a lot of words in a superficially persuasive manner is effective regardless of the words’ lack of substance or coherence.
Wish this were just an AI problem. The number of times major firms cite cases (usually real) for concepts or quotes THAT DO NOT APPEAR in the cases is … astonishing. And, all too frequently, the courts take them at their word – or in some cases, will cite stuff for exactly the opposite of what the prior court said – often echoing lawyers’ misrepresentations. And don’t get me started on prestige law reviews.
There isn’t anything more fun in a brief than making fun of a nonexistent quote.
‘Here are the facts of the case he/she cited, and boy do they love me,’ is better in a utilitarian sense, but not more fun.
Oh come on, oh come on, oh come on . . .
False, non-reproducible “hallucinations” are a growing problem in scientific research.
Or maybe it always has been, and it’s just easier to discover and publicize today.
“AI” is a bit of a misnomer, similar to “Full Self Driving” (into fire engines and police cars). ChatGPT is a “Large Language Model” in which huge amounts of information (crap) are scraped from the Internet and fed into a computer program that uses it to predict the most likely response to a given prompt. In essence, it’s a clever search engine. There is no validation, and there can’t be. Anyone who relies on the correctness of any “AI” result without checking it is headed for trouble.
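For what it’s worth, the commenter’s “predict the most likely response” description can be made concrete with a toy sketch. Below is a minimal bigram next-word predictor in Python; the corpus and function are invented for illustration, and ChatGPT itself is a vastly larger neural network, but the family resemblance holds: the program picks statistically likely continuations, and nothing anywhere checks whether the result is true.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model." The corpus below is a made-up illustration;
# real LLMs train neural networks on billions of documents, but the task
# is the same: predict a plausible next token. Nothing more.
corpus = (
    "the court held that the motion was denied "
    "the court held that the motion was granted "
    "the court found that the claim was barred"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def generate(start, length=8):
    """Emit statistically plausible text. Note: no truth check anywhere."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Sample the next word in proportion to how often it was seen.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the court held that the claim was barred"
```

The output reads like a legal sentence because the training text did, not because any court said it. Scale the same mechanism up and you get fluent, confident, citation-shaped prose about cases that were never decided, since plausibility, not truth, is all the model optimizes.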
AIs just generate phrases that sound plausible. Words and phrases are used to describe concepts used to build stuff, grow food, treat illnesses, and argue legal and philosophical matters. The AIs do not understand the underlying principles or concepts, so their output is worse than unreliable; it is truly dangerous, because people who do not understand the concepts will believe that the AI speaks truth.
I’ve learned enough around here to know you are correct that AI lacks the capacity to replace a lawyer in the present. But I caution anyone who thinks they can predict the evolution of technology. Silicon Valley has a funny way of saying “Hold my beer” every time someone says “tech will never replace…”
If a good legal tech app ever happens, its lead software developer will be both (1) a coder and (2) a bona fide lawyer with specialized experience. If you want to create a great app for trial lawyers, you need to be a trial lawyer first and a coder second. There is no way a generic coder will figure it out by consulting with lawyers.