In one of the first examples of a massive failure of artificial intelligence, Southern District Judge Kevin Castel was not amused when he realized that a brief submitted to him included citations to non-existent cases. The attorneys involved were ordered to explain, and when they admitted that they had relied on a new technology, ChatGPT, which had a tendency to engage in what's curiously called "hallucinations," they were sanctioned for their misfeasance.
Since then, judges have been crafting rules about the use of AI, including its general prohibition.
Plaintiff admits that he used Artificial Intelligence (“AI”) to prepare case filings. [This yielded hallucinated citations to nonexistent cases. -EV] The Court reminds all parties that they are not allowed to use AI—for any purpose—to prepare any filings in the instant case or any case before the undersigned. See Judge Newman’s Civil Standing Order at VI. Both parties, and their respective counsel, have an obligation to immediately inform the Court if they discover that a party has used AI to prepare any filing. The penalty for violating this provision includes, inter alia, striking the pleading from the record, the imposition of economic sanctions or contempt, and dismissal of the lawsuit.
In contrast, the Eastern District of Texas has issued a general rule to "alert" pro se litigants to AI's failings.