Despite Eugene Volokh’s almost daily posts about lawyers (and occasional judges) shamed for using AI to perform their work, producing hallucinated citations, phony quotes and generally mediocre content, it has nonetheless managed to find its way into the offices of a great many law firms. To say they’ve been warned is unhelpful. AI is quick, easy and crappy, unable to defy the iron triangle. Yet that is apparently good enough for these lawyers.
But unlike the use of AI to produce insipid written argument in court, what could possibly be wrong with using it as the magic note-taker to reduce discussions to writing, whether as a helpful way to remember the details or for later use as evidence of what transpired? Note taking is a bore. It’s often difficult for the note taker to keep up. Oftentimes, the note taker fails to appreciate the “hidden meaning” or subtleties behind words and misstates the meaning when written in shorthand notations. Could AI be any worse?
Productivity powered by artificial intelligence is all the rage. Skipping meetings and sending an A.I. note taker instead has been called “the latest office power move.” Wallet-size recorders that use A.I. to log live interactions have become a product category. And at least one C.E.O. has endorsed the idea of adding an A.I. board member. (Maybe one programmed to behave like Warren Buffett?)
But to lawyers like [Jeffrey] Gifford, inviting an A.I. bot to meetings introduces a ticking time bomb of legal risk.
Technology is seductive. If there is an easier way to do things, why not? If it takes a burden off your shoulders, why not? Plus, it makes you look kind of hip and cutting edge, rather than one of those olds who prefers a yPad to an iPad. Who wants to be the dinosaur in the room? So what could possibly go wrong?
A.I.-generated transcripts, which some video call apps allow users to turn on by default, preserve all sorts of things — offhand comments, quickly corrected statements, jokes — that humans would rarely write in the meeting minutes. And they show up in meetings that would otherwise not be recorded.
In a lawsuit or an investigation, that can make every word uttered discoverable.
One of the hallmarks of AI is its lack of humanity, its inability to distinguish between things that matter and things that don’t, or shouldn’t, in the course of discussion. To a bot, words are words, without regard to humor or sarcasm. People don’t speak the way we write, with the ability to review our words and correct them to be sure they accurately reflect our point or intentions. When memorialized by AI, and parsed at some later point in time during discovery, words spoken in jest or mistakenly used become just as conclusive as words written after thoughtful deliberation and careful phrasing.
Even worse, say corporate lawyers: Sharing the meeting with an A.I. bot may void attorney-client privilege, making conversations that would not otherwise be subject to discovery fair game in a lawsuit.
There is a very real question whether the AI bot is a “member” of your legal team, and thus covered by your attorney/client privilege, or a third party tool introduced into a meeting that vitiates the privilege. SDNY Judge Jed Rakoff held that a non-lawyer’s use of AI for legal advice fell outside the privilege. Detroit Judge Gershwin Drain held that it’s protected, and a pro se plaintiff could not be compelled to reveal her chatbot’s content under the Third Party Doctrine. Which way it will ultimately go remains a mystery at this point, but are your clients willing to take the risk?
And then there’s the problem of error.
One concern is accuracy. An A.I. transcript could, for example, record “does matter” as “doesn’t matter.” If that sentence comes up in court years later, the mistake may be difficult to remember.
Sometimes, we enunciate poorly, or speak with an accent or in jargon shorthand. Will the AI get it? Will anyone notice or care at the time? But it may be critical years later when the specific words are the lynchpin between a win and a crushing defeat. That’s when the problem hits you square in the face. The AI bot wrote what it wrote, and it’s not as if you can put the bot on the stand and challenge its efficacy, its memory, its competence. It’s a machine, kids, and it’s going to do what machines do, which is whatever it’s programmed to do. Claude can be absolutely dead wrong, but it cannot lie.
Warnings about the limits, failings and risks inherent in AI abound. And yet, lawyers keep using it because, well, they’re lazy, crappy or “cutting edge” early adopters. It’s hard to practice law right, exercising the degree of care and competence that protects your clients from the unanticipated and unnecessary risks from which they expect you, as their lawyer, to shield them. Sure, it’s more work, more time, more effort, that you wouldn’t need to exert now that AI is there to do the work for you. But then, is it about making your life easier, at least for the moment, or about zealously representing your clients?