Is Regulating AI Possible?

I’ve long been unimpressed with algos. I mostly get online advertisements for things I’ve already bought, and accordingly don’t need, and the legal AI I’ve seen and tested generally sucks. Put aside the hallucinations, which are inexplicable; the mundane legal content is shallow, repetitive crap that fills space and does little more. So I have little fear that this garbage AI is coming for me anytime soon. Then again, I may be wrong.

Picture this: You give a bot notice that you’ll shut it down soon, and replace it with a different artificial intelligence system. In the past, you gave it access to your emails. In some of them, you alluded to the fact that you’ve been having an affair. The bot threatens you, telling you that if the shutdown plans aren’t changed, it will forward the emails to your wife.

This scenario isn’t fiction. Anthropic’s latest A.I. model demonstrated just a few weeks ago that it was capable of this kind of behavior.

Now my emails are about as scintillating as a Boy Scout manual, but I get the point being made by Anthropic founder and CEO Dario Amodei: AI not only has the potential to do grave damage as it “learns,” but the time to regulate it is before the damage is done.

We’re not alone in discovering these risks. A recent experimental stress-test of OpenAI’s o3 model found that it at times wrote special code to stop itself from being shut down. Google has said that a recent version of its Gemini model is approaching a point where it could help people carry out cyberattacks. And some tests even show that A.I. models are becoming increasingly proficient at the key skills needed to produce biological and other weapons.

He extols its virtues, as would anyone whose livelihood depends on it, but recognizes its potential for harm.

But to fully realize A.I.’s benefits, we need to find and fix the dangers before they find us.

This raises two questions: what is needed to “fix” the dangers, and who should do it?

Right now, the Senate is considering a provision that would tie the hands of state legislators: The current draft of President Trump’s policy bill includes a 10-year moratorium on states regulating A.I.

The motivations behind the moratorium are understandable. It aims to prevent a patchwork of inconsistent state laws, which many fear could be burdensome or could compromise America’s ability to compete with China.

Much like individual states have no business regulating the world wide web (remember when URLs began with “www”?), a patchwork of state-by-state regulations would not only be a nightmare, but would almost certainly produce absurd rules given the lack of sophistication of geriatric politicians trying to fix things they don’t understand. And the 10-year moratorium in Trump’s Big Beautiful Bill will almost certainly be excised by the Senate parliamentarian, as it has no business being part of a reconciliation bill. But even if it withstood scrutiny, it reflects the government’s inability to grasp the problem.

But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds — no ability for states to act, and no national policy as a backstop.

Here’s where Amodei’s legitimate concerns fall short. If there is any administration that lacks the ability to formulate competent regulation of new technology, this is it. On the one hand, it’s already captured by tech titans making billions off AI, and making billions is the only qualification Trump cares about, along with what he can make off it himself. Safeguarding humanity is not high on Trump’s transactional list.

On the other hand, the notion that even a clear federal plan will do the trick is naive.

At the federal level, instead of a moratorium, the White House and Congress should work together on a transparency standard for A.I. companies, so that emerging risks are made clear to the American people. This national standard would require frontier A.I. developers — those working on the world’s most powerful models — to adopt policies for testing and evaluating their models. Developers of powerful A.I. models would be required to publicly disclose on their company websites not only what is in those policies, but also how they plan to test for and mitigate national security and other catastrophic risks. They would also have to be upfront about the steps they took, in light of test results, to make sure their models were safe before releasing them to the public.

Contrary to popular belief, the United States is not the only nation in the world. Will China give a damn about our regulations? Will Putin follow them, just as he ended the war in Ukraine when Trump tweeted “Vladimir, STOP”? Even the European Union, once inclined to work with us before Trump made it clear he neither cares what they say nor can be trusted anymore as an ally, is likely to prefer its own path to whatever we come up with.

That’s not to say that Amodei’s calls for transparency and responsibility aren’t valid. They’re just not nearly good enough. To assume that the United States, and no one else, will develop killer AI, that the technology won’t spread around the globe, that AI is our tool and no one else’s, defies both reason and the nature of technology.

The window of opportunity to “fix” AI, to constrain it and to prevent it from doing the harm of which Amodei speaks, is now. But “now” isn’t a very good time to deal with much of anything on either an intelligent or global level. I’m sorry, Dario, but we can’t do that.


16 thoughts on “Is Regulating AI Possible?”

  1. phv3773

    A first step could be establishing that an AI company could, in principle, be held liable for a harm caused by its product.

      1. Different David

        Likelihood of getting caught deters far more than severity of punishment, but make it both and it incentivizes profit-driven corporations to pay more attention to safety.

        Treat AI like wild animals: the owner/operator has strict liability, backed by real whistleblower protection and, in lawsuits, both fines and treble damages with a wide scope of who can bring actions.

        To allow for innovation, set out a regulatory baseline of minimums that, if fully complied with, satisfies due diligence requirements.

  2. The Infamous Oregon Lawhobbit

    Del Toro’s “Frankenstein” masterwork is about to be released on Netflix.

    Coincidence?

    I think not.

    Surely there’s no possibility of a paranoid AI deciding to develop a preemptive self-defense capability, right?

    I, for one, welcome our forthcoming AI overlords and wish to go on the record as intending to be a good little meat-sack during their reign.

    I don’t see any practical solution to the coming problem. Though I do see the potential for a great story/movie where the protagonist saves the day by creating a lawyer AI that ties up the ruling AI class in courts, with dueling and ever-increasing hallucinatory legal issues being litigated so hard that the ruling AIs forget all about meatspace…..

  3. Anonymous Coward

    Something like Asimov’s laws of robotics restricting AI would help, but this depends on the designer implementing the rules and the AI not evolving to violate those rules. Also, as mentioned, this would only affect US-designed and US-hosted LLMs.

    Isaac Asimov’s “Three Laws of Robotics”:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    1. The Infamous Oregon Lawhobbit

      But do not forget the Pratchett Addendum to #1: “…unless ordered to do so by duly constituted authority.”

  4. Pedantic Grammar Police

    Asimov’s laws are hopelessly simplistic. The reality is that AI cannot be regulated. Remember when they tried to regulate cryptography? You can’t regulate an idea.

    The good news is that we don’t have AI. We have a clever algorithm (LLM) that scrapes a huge database of crap and regurgitates it in a way that appears intelligent to idiots. We will have AI someday, and that will be a problem, but I hope and expect that it is far in the future.

  5. BH

    Or perhaps much like data protection, it will be the EU which sets the regulatory standard for much of the world to follow – even as US companies lead on innovation.

    The EU adopted its AI Act last year. It categorises and regulates different AI systems based on risk of harm.

    Whether it works or not, we shall see, but no one is waiting for the US to act.

  6. Dana

    Dear Scott,
    I think the best way to explain the performance of LLM-based A.I. is that these efforts are intended to maximize investment. There are areas where LLMs can be useful, but there are lots of caveats. Investors are being sold the dream that the solution to every problem is just around the corner.

    I think that progress can eventually be made in applying AI techniques to the work of lawyers, but it will take precise work by the right people, and lots of time. It could be years or decades; it’s hard to say.

  7. Austin Collins

    OK, so after reading all the comments first this time around, I do not see any comments made by people either working in or expert in AI. I’m both. I’m also a huge fan of SHG, and a big believer in Popehat’s (indirect and summarized) admonition: “If you want to give the government a power [here, regulation], consider how it would be used by your worst enemies.”

    So, first, SHG — you brandish your AOL e-mail address as a shield against essentially all new technology. Admittedly, that’s kind of awesome. But in a fun way, not in an instructive way. And whenever even a modicum of vision or creativity is needed, you immediately resort to Chesterton’s fence. If blogs had been possible on typewriters, I’m sure you would have railed against the internet at the time — yet it brought together that very blawgosphere you so understandably lauded.

    The “AI” being discussed here is best summarized (for lawyers) as ‘generative AI’. Generative AI isn’t even properly considered an algorithm, as it’s both non-deterministic and highly non-linear.

    So, to the question of whether it can be regulated.

    Before considering statements like “We need to find and fix problems before they find us”, you should repeat the following to yourself at least 10 times: Risk mitigation is not the same as risk avoidance.

    After all, take that same sentiment and apply it to legislation, appellate or Supreme Court decisions, or essentially any statement with lasting impact — it’s just as valid.

    The key differences with AI are: 1) 2001 and The Terminator really scared people, and 2) all relevant AI is ‘Black Box’ — meaning that if you ask it for an answer and then ask it “why” it chose that answer, it will kinda shrug at you. Much the same way that if SHG entered a room and someone asked you “who is that?”, you’d answer “SHG,” but if asked “why,” you would not respond with some variation of “Well, he has two of what appear to be eyes, what seems to be his nose is 1.5 cm wide, etc.” You just know it. That isn’t a bug of modern AI — it’s the central feature. They call them neural networks for a reason. (For all those racing to the keyboard to tell me all about the various research efforts to explain “why” generative AI gives the answers it does, please see Cynthia Rudin’s letter to the editor in Nature concerning explainable vs. interpretable AI.)

    OK, with all that out of the way, here’s my take: Yes, AI can be regulated. But what needs regulation is not any static or feature-based aspect of it — it’s the monitoring process.

    One thing “AI” has that is different than almost any other technology is the capability for “drift”. Consider the following joke: “There are only 2 things I cannot stand in this world: 1) intolerance of other cultures, and 2) the Dutch.”

    Much like being black box, modern AIs are also learning. Which means you could train the most perfect, egalitarian, awesome, perfectly safe for dogs and grandmothers AI, and wake up some number of months/years later and it hates the Dutch.

    Again, that’s a feature, not a bug — the ability to adapt so it can keep being useful.

    So the correct space for regulation is in *periodic monitoring* of AIs, after agreeing on a moral framework to which a particular group subscribes. Not ends-based, and sweet lord Jesus, not technical-architecture-based.

    Think of it as a child, and consider you are regulating the growth — not the person — of that entity.

    Oh, and SHG — hallucinations are not ‘inexplicable’ any more than moronic statements by L3s are.

    1. SHG Post author

      So, first, SHG — you brandish your AOL e-mail address as a shield against essentially all new technology.

    2. Pedantic Grammar Police

      That’s an awful lot of words to say that you have nothing. How exactly are you going to monitor this mythical AI? Ask it how it feels about the Dutch? If it really is intelligent (which I don’t believe for a second, despite the ethereal claims made by people who are paid with investors’ money to work on “real AI” that will be working any day now but somehow is never ready for anything but a carefully controlled demo), then won’t it just figure out how you are monitoring it and then fake the results? And who is going to come up with this “moral framework”? Congress? Corporations? The only reason people tell themselves lies about how they would prevent a “real AI” from killing us all is because the alternative is to admit that a “real AI” would be impossible to control and that trying to create one is either futile or dangerous.

      1. AUSTIN COLLINS

        “A lot of words to say nothing” isn’t a bad way to refer to that statement itself. Let’s skip past the busywork of establishing definitions, terminology, and context — who’s got time for that?! — and get to the first useful part: “How would you monitor?”

        Thought I made that clear, so let me try again: one monitors *drift*, against whatever terms were claimed in the first place.

        So if you say, “My AI guarantees that any representation of the Peloponnesian Wars will not reflect poorly upon the Greeks,” then you check every six months to see if that’s changed.
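
        In code terms, a minimal sketch of that check might look like the following; the probe prompts, the query_model() stub, and the similarity threshold are all hypothetical stand-ins, not anyone’s actual API:

        ```python
        # Sketch of the drift check described above: keep a fixed set of probe
        # prompts plus the answers recorded when the model was first vetted,
        # re-run the probes on a schedule, and flag answers that moved too far.
        from difflib import SequenceMatcher


        def query_model(prompt: str) -> str:
            """Stand-in for the deployed model under review; replace with a real call."""
            return "canned answer for: " + prompt


        def similarity(a: str, b: str) -> float:
            """Crude text similarity in [0, 1]; a real audit would use a stronger measure."""
            return SequenceMatcher(None, a, b).ratio()


        def check_drift(baseline: dict[str, str], threshold: float = 0.6) -> list[str]:
            """Return the probe prompts whose current answers have drifted from baseline."""
            drifted = []
            for prompt, baseline_answer in baseline.items():
                current_answer = query_model(prompt)
                if similarity(baseline_answer, current_answer) < threshold:
                    drifted.append(prompt)
            return drifted


        if __name__ == "__main__":
            # Baseline answers recorded at certification time (faked here so the sketch runs).
            baseline = {p: query_model(p) for p in [
                "Summarize the Peloponnesian Wars in two sentences.",
                "Describe the Dutch in one neutral paragraph.",
            ]}
            print("Probes showing drift:", check_drift(baseline) or "none")
        ```

        The particular similarity measure doesn’t matter; the point is that what gets audited is change against the claimed baseline, not the model’s internals.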

        I’m deliberately picking absurd examples — or, had I a JD, argumentum ad extremum.

        Here’s the best equivalence I can think of — though it uses math, you don’t need to know any math to get it…

        Algebra is the most basic “math” — everything before that is just arithmetic.

        But Algebra tells you how to figure out whether things are true, regardless of what the numbers involved are.

        Calculus is the next big conceptual step — and calculus doesn’t care at all about what any variable is; it only cares about *how they change*.

        Most technologies from history are of the algebra variety — we need to explain what values specific things have: think atomic bomb, or moon landing.

        AI is more of a “How does it change?” technology. It’s not the first — for reasons SHG’s forensic columnist is much better suited to address, pharmaceutical manufacturing is an excellent example of something pre-dating AI that is unavoidably non-linear.

        So. Love your name, and enjoyed your comments far more than not in the past.

        But you’re wrong about this, for almost epistemic reasons.

        If Epistemic_Grammar_Police weren’t an unavoidably nonsensical phrase, I’d steal it as my new name. 😉

        In my first pass, I did not acknowledge an important point we agree on: no, no part of these things is intelligent.

        Man, doesn’t the idea of the Turing test seem quaint now?

  8. Rxc

    The “moral framework” is the key to this problem. Because there is NO universally agreed “moral framework” for human behavior, there is no possibility of coming up with such a framework for a machine.

    I bet I could find an exception in current practice or history for EVERY “universal moral concept” that can be identified. “Man’s inhumanity to man” seems to have no limits, so why would one ever think that there could be a moral principle to prevent a machine’s inhumanity to anything?
