In one of the first examples of a massive failure of artificial intelligence, Southern District Judge Kevin Castel was not amused when he realized that a brief submitted to him included citations to non-existent cases. The attorneys involved were ordered to explain, and when they admitted that they had relied on a new technology, ChatGPT, which had a tendency to engage in what’s curiously called “hallucinations,” they were sanctioned for their misfeasance.
Plaintiff admits that he used Artificial Intelligence (“AI”) to prepare case filings. [This yielded hallucinated citations to nonexistent cases. -EV] The Court reminds all parties that they are not allowed to use AI—for any purpose—to prepare any filings in the instant case or any case before the undersigned. See Judge Newman’s Civil Standing Order at VI. Both parties, and their respective counsel, have an obligation to immediately inform the Court if they discover that a party has used AI to prepare any filing. The penalty for violating this provision includes, inter alia, striking the pleading from the record, the imposition of economic sanctions or contempt, and dismissal of the lawsuit.
In contrast, the Eastern District of Texas has issued a general rule to “alert” pro se litigants to AI’s failings.
Litigants remain responsible for the accuracy and quality of legal documents produced with the assistance of technology (e.g., ChatGPT, Google Bard, Bing AI Chat, or generative artificial intelligence services). Litigants are cautioned that certain technologies may produce factually or legally inaccurate content. If a litigant chooses to employ technology, the litigant continues to be bound by the requirements of Fed. R. Civ. P. 11 and must review and verify any computer-generated content to ensure that it complies with all such standards. See also Local Rule AT-3(m).
The Local Rule is directed toward lawyers.
If the lawyer, in the exercise of his or her professional legal judgment, believes that the client is best served by the use of technology (e.g., ChatGPT, Google Bard, Bing AI Chat, or generative artificial intelligence services), then the lawyer is cautioned that certain technologies may produce factually or legally inaccurate content and should never replace the lawyer’s most important asset—the exercise of independent legal judgment. If a lawyer chooses to employ technology in representing a client, the lawyer continues to be bound by the requirements of Federal Rule of Civil Procedure 11, Local Rule AT-3, and all other applicable standards of practice and must review and verify any computer-generated content to ensure that it complies with all such standards.
Is there a reason why any of these rules should exist? It’s understandable that some lawyers find the use of generative AI an easy way to get their papers written without the muss and fuss of actually doing the work. Whether they tell their clients what they did, and whether they charged their clients as if they had done the work themselves, is another matter. But that’s not a problem with AI; it’s a problem with lawyer honesty. If you didn’t do 20 hours of writing, then you don’t charge for 20 hours of writing. Does this really require a rule?
The mechanics of how a lawyer produces work product is entirely up to the lawyer. Maybe he does his work himself. Maybe he hands it off to an associate or paralegal. Maybe he uses generative AI. Regardless of how the work product is produced, the lawyer is completely and without reservation responsible for both its accuracy and its competence. Some lawyers produce dreck because that’s the best they can do. There are a lot of really poor lawyers out there pumping out pro forma crap. Is AI any worse? Granted, fake citations are about as bad as one can get, but bad writing out of a human isn’t much different than bad writing out of a chatbot. And there’s a decent chance the chatbot will write better than bad lawyers.
But the point is that it has always been, and will always remain, the responsibility of the lawyer to provide effective assistance of counsel. Regardless of who does the work, the lawyer is responsible. Regardless of whether papers are produced by chatbot or partner, the lawyer is responsible. If there is a cite to a non-existent case, the lawyer is responsible. If there is an argument that misstates the law, the lawyer is responsible. If a critical argument is left out, the lawyer is responsible. If the papers contain a lie, the lawyer is responsible. A trend is beginning to emerge.
Having tested AI a few times now, it is my view that it’s not remotely trustworthy to produce work, even as a foundation for a lawyer to finalize. It is grossly unreliable and doesn’t come close to the depth of understanding and analysis that would be expected of a modestly competent lawyer. In other words, it sucks, and anyone using AI at this stage is begging for sanctions, although neither admonition nor monetary sanction is sufficient to make the point to any lawyer who would be so cavalier with his client’s life.
If your primary concern is your own well-being, financial or otherwise, then consider what using AI says about the competency and quality of your work. Your clients will not be pleased should you fail them. If the judge spanks you, your reputation within the legal community will be even worse than it already is. You will neither feel good about yourself nor be able to sustain a practice should clients feel that retaining you is tantamount to flushing their money down the toilet.
But to the extent you care about the clients (you remember them, the people for whom the legal profession exists?), you fail them. They have entrusted you, not a chatbot or AI, with their lives and fortunes. If you fail them, you are wholly responsible, regardless of whether you used generative AI or wrote your briefs in crayon. There is no need for a new rule, as the old rule more than suffices. You are responsible. To yourself, to the court, and most of all, to your client.