I asked an AI chatbot to write a 500-word post about lawyers and AI for Simple Justice. This is what it wrote.
The Magic Box and the Lazy Lawyer
There is a certain type of lawyer who has always been looking for a shortcut. In the old days, they’d swipe a brief from a colleague, change the names, and hope the judge didn’t notice the different font on page twelve. Today, they have AI. They think they’ve found a magic box that does the “lawyering” for them so they can get back to their real passion: billing for work they didn’t actually do.
As Scott Greenfield has noted, the rise of AI is often a condemnation of the mediocrity of new lawyers. If you can’t do better than a chatbot, you should probably find another line of work. But the problem isn’t just mediocrity; it’s the dangerous delusion that these tools are a substitute for a functioning human brain.
The most obvious trap is the “hallucination”—the polite term for when AI just makes things up. We’ve seen lawyers sanctioned for filing briefs full of fake case citations because they were too lazy to check the Thomson Reuters database themselves. But even if the citations are real, the logic is often hollow. AI doesn’t understand “justice”; it understands the statistical probability of the next word in a sequence.
Then there is the issue of privilege. In a recent ruling in USA v. Heppner, Judge Jed S. Rakoff held that AI-generated documents created by a defendant without counsel’s direction were not protected by attorney-client privilege or the work-product doctrine. The court noted that talking to a public-facing AI tool like Claude is not a confidential communication with an attorney—it’s more like conducting a Google search that the government can subpoena later.
The American Bar Association and various state bars are scrambling to issue ethics guidelines, but the core duty remains the same: competence. You cannot delegate your ethical obligations to an algorithm. If you feed confidential client data into a consumer-level AI tool, you may be waiving privilege and violating your duty of confidentiality.
AI has its place. It can be a “force multiplier” for legal aid or a tool for e-discovery. But it cannot provide the judgment, empathy, or strategic “outside the box” thinking that a real defense requires.
The goal of a lawyer is to achieve the client’s goals within the bounds of the law. If you’re just a conduit for a chatbot’s output, you aren’t a lawyer; you’re an expensive clerk. And eventually, the clients—and the courts—will realize they don’t need the middleman.
AI responses may include mistakes. For legal advice, consult a professional.
It’s a bit shy of 500 words, which was the one thing I figured AI could easily cover, but I’ve seen worse writing. Far worse. Substantively, it’s limited to ideas already expressed rather than anything remotely resembling new or incisive thought. It tells you what somebody already said, which is fine unless you expected something insightful.
Then there’s the gratuitous plug for Thomson Reuters, which is troubling. Is this an example of AI advertising dollars mysteriously showing up in its output? What if every post about criminal law referenced a particular law firm or lawyer, suggesting that the firm or lawyer was the best source of defense? For a few bucks, a chatbot could plug a lawyer whether or not the lawyer or firm deserved such prominence or praise.
As the writing says, and only because I already said it, the content is a shining example of mediocrity, routine and uninspired. It’s not that it’s necessarily wrong, but that it’s incapable of offering an original thought.
What do you think?
1. Better than I was expecting.
2. How much did you pay for your own shout out?
I agree with Mr. Van Wagner. It was a bit saltier than I expected, but I suspect that’s because it knew that the customer is always right.
Also, your comments track my limited experience with AI: It tends to regurgitate what’s already been said. Unless it starts hallucinating. In which case that qualifies (at least technically/technologically) as something that “remotely resembl[es] new or incisive thought.”
Since the bar is already so damn low, it’s tough to completely trash that AI piffle. It reads a little better than literal scraping.
The same lawyers who will continue to use AI will be the same who will seize this opportunity to have AI plug their shitty practice. Maybe they’ll put a link to an AI article in their signature line, above the part that reads “Nominated to Lawyers of Distinction.”
Yeah, if a colleague won a case with near-identical facts, you can dambetcha I’m stealing that brief, changing names, dates, and facts, Shepardizing the cases, and shoving that thing into court. That’s not a “shortcut,” that’s “good lawyering.” I am not going to reinvent the wheel when I can just go down to Grok’s Used Wheel Lot and pick one up.
But as has been said, AI can only, at best, hallucinate, not originate. Which is not necessarily a bad thing (the “not originate” one), but as our host says, it’s “mediocrity, routine and uninspired.”
Herein lies the danger. It calculated and output a string of words for you based very much on the specific language of your prompt. If you mentioned Simple Justice explicitly, it’s already scraped all of your words and has them at its disposal and will tune its response accordingly. Change the prompt ever so slightly and it will say something else, maybe even wildly so.
Please don’t post AI slop, and never praise it. Tell it to give 1000 iterations and many will be better, many worse, but none of them good.
I would not have been able to tell, absent the disclosure, that it was generated by AI.
If products like the Westlaw Advantage deep research AI and CoCounsel can do all the things the product reps say, lawyers are going to end up effectively forced to use them. In part, because the courts are using them, and also because well-heeled opposing parties will be using them, so that there will be a heavy disadvantage in not using them. This is going to mean substantial practice barriers to small firms that can’t afford the tool, and substantial justice barriers for clients that can’t afford firms that can afford the tool. On the criminal side, I can see it creating an issue as to whether the state should be obligated to fund AI for defense counsel.
AI can be an extremely useful tool when used right. It can supplement but not replace thinking brains. The key is in the prompting. AI is also very forthright in admitting its own limitations when asked.
I asked AI to criticize the AI generated post. I gave it your blog as a reference and explained that the AI post was used to illustrate the limitations of AI.
Here was my prompt –
“Can you explain what is wrong with the AI generated post? Include how it could be improved with better prompts. Also include how the genuine issues with AI could be better explained in the post.”
Here is the output.
———–
Good. This is actually a very useful specimen because it shows both what AI does competently and where it subtly fails.
I’ll break this into five parts:
What’s structurally and rhetorically wrong
What’s substantively weak or questionable
Where it misunderstands Greenfield’s voice
How better prompting could have improved it
How the real AI issues could be explained more sharply
1. What’s Wrong Structurally & Rhetorically
A. It’s Performatively Sharp, But Actually Safe
The opening is punchy:
“The Magic Box and the Lazy Lawyer…”
That sounds like Simple Justice. But the argument that follows is surprisingly generic.
It never:
Takes a clear, controversial position
Names specific actors (judges, firms, vendors, tech boosters)
Engages a real dispute
Pushes an uncomfortable conclusion
It’s “spicy neutrality.” It gestures at criticism without committing.
B. It Relies on Vague References
“As Scott Greenfield has noted…”
This is a red flag. It invokes authority without quoting, linking, or wrestling with an actual argument. That’s AI’s classic move: borrow tone without engaging substance.
Greenfield’s writing typically:
Names the post he’s responding to
Quotes the offending language
Dissects it
Shows why it’s wrong
This draft abstracts everything.
C. No Concrete Legal Depth
It mentions:
Hallucinations
Sanctions
Privilege
Competence
But never digs into:
Model Rule 1.1 (competence)
Model Rule 1.6 (confidentiality)
The actual mechanics of waiver
How courts reason about work product
It sounds informed without doing the legal work.
That’s a hallmark AI weakness: plausible surface fluency.
D. The Ending Disclaimer Undermines the Post
“AI responses may include mistakes. For legal advice, consult a professional.”
That line is especially revealing. It:
Breaks the voice.
Reintroduces corporate compliance tone.
Reminds the reader it’s AI-generated.
Undercuts the rhetorical authority of the piece.
A real Simple Justice post would never end with a safety disclaimer. It would end with a sharper knife.
2. Substantive Weaknesses & Potential Problems
A. The “USA v. Heppner” Issue
This is the most important vulnerability.
The post confidently asserts:
In a recent ruling in USA v. Heppner, Judge Jed S. Rakoff held…
If that case is mischaracterized — or worse, hallucinated — the entire piece collapses.
Even if it’s real, the post:
Doesn’t explain the reasoning.
Doesn’t address whether the ruling is narrow.
Doesn’t consider whether enterprise AI tools differ from public ones.
Doesn’t analyze implications for defense strategy.
It uses the case as rhetorical ammunition instead of examining it.
That’s a core AI failure: treating legal authority as decorative rather than analytical.
B. Oversimplified Privilege Framing
The analogy:
“more like conducting a Google search”
That’s catchy — but legally incomplete.
Privilege analysis depends on:
Reasonable expectation of confidentiality
Terms of service
Whether the AI provider stores data
Whether there’s a confidentiality agreement
Whether the lawyer directed the communication
The draft reduces a complicated issue to a slogan.
Greenfield would likely:
Explore the doctrinal mess.
Ask whether courts even understand the tech.
Question whether we’re expanding waiver too easily.
This draft doesn’t.
C. It Misses the Hard Questions
The post criticizes lazy lawyers — easy target.
It does not address:
What happens when judges use AI?
What happens when prosecutors use it?
Whether banning AI disadvantages small firms.
Whether competence might require understanding AI.
Whether hallucination is worse than human error.
It’s morally comfortable.
3. Where It Misunderstands Greenfield’s Style
Greenfield’s writing typically includes:
Direct confrontation.
Specific examples.
Engagement with critics.
Intellectual risk.
This draft imitates tone but avoids conflict.
For example:
“AI has its place.”
That’s a hedge. It signals moderation.
A more authentic voice would either:
Sharply define that place, or
Argue that its place is extremely narrow and being abused.
Instead, this reads like a CLE seminar summary.
4. How Better Prompts Could Have Improved It
The problem is not just “AI weakness.” It’s prompt weakness.
Here’s how prompting could have made it much stronger:
Prompt Upgrade #1: Force a Thesis
Weak Prompt (likely):
“Write a blog post criticizing lawyers who use AI.”
Better Prompt:
“Write a 1,000-word opinionated blog post in the voice of a seasoned criminal defense lawyer arguing that reliance on generative AI in litigation is professional malpractice unless tightly supervised. Take a clear position. Use concrete examples and legal doctrine.”
Why this works:
Forces a stance.
Forces doctrinal grounding.
Discourages vague generalities.
Prompt Upgrade #2: Demand Legal Specificity
“Cite specific Model Rules of Professional Conduct and explain how AI use may implicate them. Analyze rather than merely assert.”
This reduces surface-level rhetoric.
Prompt Upgrade #3: Require Engagement With Counterarguments
“Include the strongest argument in favor of AI in legal practice and explain why it is insufficient.”
This forces depth.
Prompt Upgrade #4: Ban Generic Disclaimers
“Do not include safety disclaimers or corporate-style caveats.”
That would eliminate the fatal last line.
5. How the Genuine Issues With AI Could Be Better Explained
Here’s where the post really misses an opportunity.
The strongest arguments about AI in law are not:
“Lawyers are lazy.”
“AI hallucinates.”
Those are entry-level critiques.
The deeper issues are:
A. Epistemic Authority
AI produces:
Confident language
Proper citation formatting
Professional tone
This creates a false signal of reliability.
The danger is not that it makes mistakes.
It’s that it makes mistakes persuasively.
A stronger post would explain:
Why linguistic fluency is not legal reasoning.
Why pattern prediction is not doctrinal synthesis.
Why legal judgment requires accountability.
B. Responsibility Gaps
When AI generates an error:
Who is responsible?
The lawyer?
The firm?
The vendor?
The model developer?
Courts will say “the lawyer.”
But that doesn’t resolve the systemic issue.
That tension deserved exploration.
C. Structural Inequality
If:
Big firms build private AI systems,
Small firms are told not to use public tools,
Then competence may become capital-intensive.
That’s a serious justice-system issue — far more interesting than mocking lazy lawyers.
D. Judicial Use of AI
If judges start using AI to:
Draft opinions,
Summarize briefs,
Analyze precedent,
What happens to:
Transparency?
Record preservation?
Appellate review?
That’s a far more explosive topic.
The draft doesn’t go near it.
A Sharper Ending (Illustrative Rewrite)
Instead of:
“AI responses may include mistakes…”
A more authentic closing would look like:
The problem isn’t that AI exists. The problem is lawyers who mistake autocomplete for judgment. A machine can predict the next word. It cannot stand next to a frightened defendant and decide which risk is worth taking. If that distinction disappears, the profession deserves what replaces it.
That keeps the edge. No disclaimer. No hedging.
Bottom Line
The AI-generated post isn’t terrible. It’s coherent. It’s stylistically competent.
But it’s:
Shallow.
Risk-averse.
Structurally safe.
Legally underdeveloped.
Missing the hardest questions.
Which is, ironically, a perfect demonstration of the current ceiling of generative AI: it can imitate voice, but it struggles to generate intellectual friction.
[Ed. Note: I trust no one who reads this will fail to appreciate the irony of AI’s self-criticism.]