News broke yesterday that Casetext, the legal-space start-up founded by Jake Heller and Pablo Arredondo, was bought by Thomson Reuters for $650 million. I've followed Casetext from the beginning, from its start with the dumb idea of a collaborative legal research tool, through its pivot into legal research, and its further pivot to include AI legal research. As recent experience before Judge Kevin Castel showed, when papers included non-existent cases invented by AI, bad legal tech can cause some very real problems for lawyers and their clients. It's avoidable with a little effort, certainly, but bad tech nonetheless.
But the ethos of "move fast, break things" assumes that no one is going to be harmed should an attempt at innovation go awry. When it comes to most tech and innovation, that's mostly true, although there can always be an argument made that by a few gyrations harm ultimately befell someone. And when it comes to spotty tech like generative AI in the law, what sort of lazy, sloppy lawyer wouldn't check the cites generated by AI to make sure they existed? After all, who would believe that when ChatGPT tells you that Smith v. Jones is a 1978 District of New Jersey case that was on all fours, it pulled it out of its artificial anus?
This ethos, however, wasn't understood as an acceptable course only for inconsequential innovation. In the minds of the overly passionate and unduly simplistic, it became a truism to rationalize any and every zany, untested, irrational, baseless novel idea that popped into somebody's noggin, even when actual human lives were on the line.
“At some point, safety just is pure waste,” Mr. Rush said in an interview with David Pogue of CBS. He even suggested that safety was used as an excuse by “industry players who try to use a safety argument to stop innovation.” OceanGate put it this way on its website: “By definition, innovation is outside of an already accepted system.”
In Mr. Rush’s telling, innovation was the province of maverick individuals, not stodgy legacy players and certainly not cumbersome government bureaucracies. Mr. Rush was perpetuating a myth — one that is particularly popular in Silicon Valley and among technology start-ups — that governments are just an obstacle and that innovation comes from bold trailblazers moving fast and breaking things.
That story is often wrong, and it was 100 percent wrong in this case.
The quote refers to the Titan submersible, which was very cool and innovative until it imploded and killed everyone within. This argument captures the tension between regulation, testing, safety and efficacy on the one hand, and the cool, new "why not?" challenge to whatever existed before on the other.
The theory in Silicon Valley is that if one in a hundred new concepts flies, the benefits will more than cover the cost of the 99 failures. Most of the time, nobody dies as a result of those 99 failures, and the only risk is the loss of incubator capital, which is accepted by the stupid rich money kids.
But the phenomenon of "move fast, break things" didn't stop with harmless tech innovations. It extended to physical innovations like the Titan submersible, which couldn't talk its way out of the laws of physics, and it's also extended into pretty much every twitter-depth solution for society's ills. Defund police? What could possibly go wrong?
In the early days of Casetext, the innovative idea of a collaborative legal research tool was interesting but, as it appeared, unworkable on numerous levels. Had Jake and Pablo not pivoted, Casetext could have been another Shpoonkle. Most "innovations" end up being failures, not because people are unwilling to try anything new, but because being "new" isn't the same as being viable or sound.
For the passengers and crew of the Titan submersible, the adoration of innovation to the exclusion of safety cost them their lives. For the lawyers who thought ChatGPT could save them the effort of being a competent, responsible lawyer, it cost them money and humiliation. For society, the myriad schemes that will “fix” society’s intransigent problems can just as easily damage the people they passionately claim to want to help, and are more likely to cause harm than solve anything.
Sure, it's possible that something new, cool and innovative will be great and work. But possible isn't good enough when people's lives are on the line. There's a reason not to take down Chesterton's Fence until we understand what damage it could do. That Casetext ended up well doesn't mean that Shpoonkle wasn't an idiotic disaster. That the Titan submersible cost five human lives informs us that "breaking things" is only acceptable when the thing broken doesn't matter.
Moving fast may be the ethos for tech doodads, but it’s no way to run the law, social policy or submersibles when real people can die.

While I reflect on this blog, let me thank you for the link to the one about Chesterton’s Fence. Very thought-provoking and just as relevant here, 9 years later. Maybe even more so.
Something about being condemned to repeat history.
Safety rules and regulations are written in blood.
Like Glenn Ford’s final speech in the movie “Fate is the Hunter”.
Chesterton’s Fence is great for reformers. Most of us have a concept of “fence”, “gate”, and “road” and what they can be used for. The very use of the word “fence” implies purpose; fences are not often constructed for no reason. Innovators however are often delving into uncharted territory. “Fences” are not necessarily labelled or recognized as such. As with the Titan(*), the de Havilland DH.106 Comet was tested and completed trips before some crashed. The Curies (and their lab workers) all suffered significantly from radiation damage. Were these incidents primarily because of a “move fast, break things” mentality or a venturing into the unknown? Even cautious innovators can experience unanticipated difficulties.
(*) I don’t know enough to have an opinion on the Titan disaster.