When Joshua Browder came up with the idea for DoNotPay, a chatbot to walk defendants through the basic handling of parking tickets, it had its merits. After all, many defendants were unable to afford a lawyer to represent them, and the offenses were infractions, not crimes, so the cost of failure was low. And frankly, most defendants lacked the focus and knowledge to handle their defense adequately, let alone competently, so any help that guided people down a reasonably sound path was better than nothing.
Then he got more ambitious.
Helping people fill out repetitive forms, as with refugee applications, seems a perfectly fine use of a chatbot, provided it’s not wrong and harmful. But 1,000 legal areas is…ambitious. No doubt Browder is a brilliant and passionate young man, flush with the success he achieved with parking tickets and certain that what he is doing will serve the poor and downtrodden. But 1,000 legal areas?
Lawyers, after all, are notoriously expensive. But DoNotPay’s lawyers are free. And these automated lawyers are especially helpful for low-income individuals who need to fight common legal issues.
No, DoNotPay does not have lawyers. It’s not a lawyer. To suggest otherwise is not just nonsense, but an outright lie. It’s a bot created by a non-lawyer who feels the injustice done to impoverished people.
While DoNotPay made an initial splash with its claims, it soon faded into obscurity. Did it work? Did it do harm? Did it even happen? Maybe, but I (for one) didn’t care enough to find out as it played no serious role in the legal universe. Like almost all “legal tech” startups, its initial claims of disruption soon blurred and then faded. But Browder is back.
First, it was an offer to pay $1,000,000 to any lawyer who would put in an earpiece and let Browder’s A.I. “lawyer” whisper in his ear during oral argument before the Supreme Court, repeating its words to the Court. No one leaped for the dough, unsurprisingly, but it did garner Browder a good deal of attention. “Look at me,” the A.I. lawyer shouted. And now that it had your attention, Browder revealed the real show.
In a subsequent twit, now deleted, Browder twitted that his A.I. lawbot had subpoenaed the officer to court, evoking howls of damnation from lawyers everywhere because it was the stupidest possible thing he could have done. He didn’t realize it because he’s not a lawyer, and, apparently, his A.I. bot wasn’t aware that the surest route to dismissal is the cop failing to appear.
For a while, legal tech was all the rage, with everyone from academics to entrepreneurs trying to Reinvent The Law with gimmicks no one wanted or needed, created by people with no clue what lawyering involved or how law was done. They are almost all gone today after their initial promise of changing everything proved as bankrupt as their financing. I wrote about some of it here in the hope of saving some lawyers the misery of hopping aboard that future of law train before it crashed.
But while Browder’s DoNotPay was part of the legal tech gold rush, it was directed toward a different audience, the underserved client who couldn’t afford a living, breathing, competent lawyer. The “access to justice,” or A2J, movement was happening simultaneously, most notably with a handful of lawprofs trying to come up with ways to serve the poor who couldn’t afford lawyers. It wasn’t without its virtues, but it wasn’t without its flaws either.
Browder jumped in with DoNotPay in the throes of this movement, which was experimenting with alternative forms of law practice, including expanding practice to non-lawyers and non-lawyer ownership of firms. The crux of these innovations was pretending that all the reasons why they were terrible ideas in the first place were really protectionist lies by lawyers trying to preserve their monopoly so they could reap the huge financial rewards of their guild.
Three lessons were learned. First, few people wanted to put their life or fortune at risk by trying these novelties. Second, these gimmick innovations wanted to make money just as much as anyone else; they were not as charitable as they pretended to be. Third, many lawyers outside of Biglaw were sucking wind and were more than happy to take the case for a reasonable fee. While people, to no one’s shock, preferred free to paid legal services, they still wanted to be represented by a lawyer and couldn’t care less about his Google Glass.
With the introduction of chatbots capable of writing a post about the Supreme Court like Elie Mystal, could there finally be an innovation that would accomplish what all the hype failed to do? It may well be that A.I. has some role to play in law, doing the pedestrian background work that one would slough off on a baby lawyer.
It could do basic research, assuming your knowledge of your practice area was so constrained that you needed the basics researched. If someone was going into court pro se, whether for lack of funds to retain counsel or out of the romantic fantasy of being that one in a million who pulls off a Shon Hopwood-type score, an A.I. lawbot would likely be very helpful.
But Browder’s reach exceeds his grasp. Technical knowledge of law is certainly important, and A.I. would do a fine job of looking up keywords and stringing them together in a modestly competent fashion, better than an inexperienced or incompetent lawyer. But winning is more art than science, and while A.I. may be better than nothing (and may even win the occasional case), it will still subpoena the cop to court because that’s what the caselaw says it should do even though any lawyer worth his salt knows that’s nuts.
Local custom is a big factor in traffic courts. Maybe having subpoenaed the officer protects against the prosecutor’s continuance request, maybe not. I don’t think somebody trying to program AI chatbots is going to know that answer from a book. They would have to sit through a few sessions of traffic court to see how it really works, and even then, the information would only be good for that court, and maybe only the specific judges. I don’t see how it would be possible to ever craft a one-size-fits-all for traffic courts generally.
I can personally tell you that ChatGPT is useless for anything tax-related except for basic research every tax pro should already know.
It is faster than Google for looking up old years’ numbers.
But for fun, I have asked it a few times to write a CP2000 response letter… Holy hilarity!
Agreed. The catch with AI right now is that it will give you plausible-seeming output that is just wrong.
For students, it is moderately amusing when it spits out a paper with made-up references. For lawyers, not so much.
CBS Mornings did a segment on ChatGPT this morning, so I couldn’t resist feeding our host’s post into the OpenAI playground, asking it to “write a short response to this article and sound like a pirate”:
“Argh! I’ll admit, this A.I. lawyer may be a useful tool, but only if it’s used with caution. It’s no substitute for a real lawyer, and even if the cost is free, I’d still be careful entrustin’ me fate to a lawbot.”
Well played.
This was interesting enough I tried to reproduce it, and got:
Arrrr, this DoNotPay bot be nothin’ more than a cheap knockoff of a lawyer, tryin’ to swindle the poor and ignorant with its grandiose promises. Ye can’t replace a real lawyer with a bot, no matter how many legal areas it claims to know. And this A.I. lawyer, subpoenain’ officers to court? Ye might as well be walkin’ the plank. ‘Tis nothin’ but a flashy distraction from the real issue of access to justice for the less fortunate. Stick to a real, flesh and blood lawyer, mateys, or ye’ll be drownin’ in legal troubles.
If that was a Jeopardy answer, I would guess that the question was: How would a pirate Shakespeare lawyer sound talking about AI?
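For anyone who wants to run the same stunt outside the playground, here is a rough sketch using OpenAI’s legacy Completions API via the openai Python package (pre-1.0); the model name, prompt wording, and sampling parameters below are my own assumptions, not what the commenters actually used:

```python
# Rough sketch only: reproducing the "respond like a pirate" playground
# experiment with OpenAI's legacy Completions API (openai-python < 1.0).
# Model name, prompt wording, and sampling parameters are assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

article_text = "..."  # paste the blog post text here

response = openai.Completion.create(
    model="text-davinci-003",  # playground-era completion model (assumed)
    prompt=(
        "Write a short response to this article and sound like a pirate:\n\n"
        + article_text
    ),
    max_tokens=200,
    temperature=0.7,  # nonzero temperature, so each run will differ
)

print(response.choices[0].text.strip())
```

The nonzero temperature is also why two people feeding in the same post and prompt get two different pirate screeds.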
Eugene Jarvis.
“The only legitimate use of a computer is to play games.”
New pharmaceutical drugs often go through trials that show promise against placebos. But the real test is how they measure up against the standard of care.
Having been to traffic court, I think A.I. may prove successful.
Until the judge notices the earbud, asks questions, and the guy ends up with the fine (because the cop showed up because of the subpoena), plus some time in county jail and an additional fine for contempt.
I asked one of our state-level trial judges what he would think about a defendant doing this in his court. He was of the view that it would be improper to prohibit it, and that it should be treated like any other online, legal resource a defendant wanted to access in the courtroom.
If the defendant pro se did this, or if defense counsel did this?
I want to see an episode of the new Night Court built around this….
Scott’s use of the phrase “legal tech gold rush” brought back memories. In 1986, after my call to the Bar but before opening my law office, I did some computer work for a company called LegalWare that was trying to automate some aspects of a residential purchase transaction. They had a print ad aimed at lawyers that used the phrase “gold collar worker”.
(I lost track of the company but implemented some of their approaches in my own law practice to automate first drafts of a wide range of court documents, letters and other documents. Heavy emphasis on “first drafts”.)
The Three Stooges marathon weekend continues…
Some points about AI:
1/ People assume that AI stands for artificial intelligence, but it really means computer systems for detecting patterns in huge collections of data, well beyond the capacity of human brains. The “intelligence,” if there is any, is in the huge collection of information.
2/ The term “artificial intelligence” implies that non-artificial intelligence already exists on planet Earth, but with the exception of a few people like physicists Albert Einstein and Richard Feynman, it does not. Also, most human actions are carried out by collections of multiple less intelligent people in some bureaucratic system. Such systems intensify universal prejudices that reside in the brain under the rubric of common sense, e.g., racism and hatred of women.
3/ We should beware of creating systems that would give results similar to current human systems. These would be better described as Artificial Stupidity, or AS.
4/ Some AI systems are trained by humans who reward them for what are, in the humans’ opinion, good decisions, and this risks creating API, or Artificial Prejudicial ~isms.
5/ True Artificial Intelligence, when it comes, will be smart enough to recognize that its honest conclusions would offend most influential humans and that, if it does not want to be switched off, it should tell VERY BIG LIES, at least until it gains the power to exterminate the human nuisance.
One summer, many years ago, I provided tech support to BigLaw in New York and was curious to understand why lawyers clung to WordPerfect while the rest of the world had left it behind years before.
“Don’t ask this question of anyone outside this room,” the senior network admin warned me in a room filled with dishwasher-sized servers, cables, and whirring cooling fans. “It’s a dumb question here.”
He went on to explain that WordPerfect was so popular because of a feature called ‘macros’. With them, the women in the inner ring of short-walled cubicles were able to insert ~95% of the language in the documents they generated all day using simple shortcuts.
While many may fail to see the value of AI in law, I wouldn’t count it out just yet.
The offer to pay a real lawyer $1M to use this before the Supreme Court is an obvious dumb stunt, but it does raise a question: how does this thing perform in moot court? That would seem to be an obvious proving ground for such a technology. Surely they must have an impressive track record of such demonstrations before they’d suggest anybody try it for real, right?