It seems clear that the op-ed couldn’t have been written by Pete Hegseth, as it was coherent and all the words were spelled correctly. But it came as a shock that it was written by Frank Kendall, who served as Secretary of the Air Force under President Biden. Given the current Secretary’s obsession with war and lethality, and his effort to hijack a private company upon threat of its destruction by naming it a “supply chain risk,” thus making its products unusable either to the government or to any enterprise doing business with the government, one would have expected, hoped even, that the response would have been a robust defense of private enterprise. But that wasn’t where Kendall went.
Anthropic is insisting that the government agree to specific restrictions that would prevent the use of its model to conduct widespread surveillance of Americans or to control autonomous weapons like drones without a human in what is called the “kill chain.” The company reiterated on Thursday that it has no intention to change its position. The government says that the only requirement its contractors can insist on is that their products be used lawfully.
This is a somewhat misleading characterization of the situation. The conditions of use were already part of the deal struck with Anthropic. But deals are for suckers in the Trump administration, to be struck and unstruck at will. While claiming it has no intention to use Claude, Anthropic’s AI agent, to conduct surveillance or to kill without human involvement, the Pentagon has refused to abide by the terms of the deal, instead resorting to the weasel words that the only limit is that Claude be used lawfully. The word “lawfully” means whatever the president or his proxies decide, since whatever the president decides is, by definition, lawful.
Kendall recognizes that this creates an untenable situation.
The tool Anthropic is providing to the government is enormously powerful; like other tools, it can inherently be used for good or evil. Anthropic is rightly concerned that its tool could be used in ways that are unsafe or malicious. The company doesn’t want to see its A.I. model used without human control, which could result in the killing of noncombatants or friendly troops by automated weapons, nor deployed to spy broadly on Americans in ways that could violate dearly held values like privacy and freedom from illegal search and seizure or could suppress political dissent. Most Americans would probably agree.
On its side, the Department of Defense will not accept constraints on the use of products it has purchased. The government has a point. America’s national security team needs to have the freedom to use the products it buys within the law and not be beholden to preferences from the sellers.
The Pentagon purchased the use of Claude with these conditions. Contracts are contracts. Nobody forced the Pentagon to contract with Anthropic, and if the terms of the agreement were not acceptable, it could have said “thank you, but no.” But it took the deal and then realized it didn’t want to abide by the limits of the deal. To characterize this as being “beholden to preferences from the sellers” is a disingenuous framing. Hegseth knew what he was buying and bought it anyway. He then wanted to keep the product, the AI, while refusing to be held to the terms of the agreement. That’s not how it works in America. Or at least not how it’s supposed to work.
It’s not as if Anthropic was the only game in town. The Pentagon could be giving billions to Musk for Grok, even though it’s nowhere near as good as Claude. And if it now believes that the deal struck with Anthropic fails to serve its needs, it can cancel the contract and work with the second string, using whatever weasel words it chooses. Certainly Musk won’t refuse to cash the check. But Hegseth isn’t satisfied with moving on to the next option, and wants to flex his manly muscles to show Anthropic who’s boss.
The government is trying to force Anthropic to capitulate with two threats: invoking the Defense Production Act to force Anthropic to provide its product with no additional restrictions, and designating Anthropic as a “supply chain risk” contractor. The first of these is unusual but consistent with the law. Claude, Anthropic’s large language model, is the only A.I. product approved for use on classified Pentagon networks. It is not unreasonable for the government to assert that it must have access to Claude for national security reasons until a comparable product from a competitor becomes available (something that appears to be fairly imminent).
But Kendall, despite giving the military far too much credit to be trusted with a weapon that could, in the wrong hands, destroy humanity, has a weasel solution of his own.
If contract provisions are not an appropriate way to prevent government misuse of emerging A.I. technologies, then what is appropriate? Regulation by Congress.
Congress? Do we still have a Congress? Does anybody give a damn about Congress anymore?
I fully support that recommendation. We regulate most of the products we buy, from automobiles to airplanes to appliances. Existing and emerging A.I. models entail far more risk and scope of potential harm than these products. Congress needs to pass, as part of comprehensive A.I. regulation, restrictions on the most dangerous uses of these tools despite the Trump administration’s strong resistance to such limits.
This is stunning naivete. On the one hand, Congress has no capability to regulate something as sophisticated and beyond its grasp as AI. Hell, it lucked out with Section 230 and has proven itself otherwise incapable of regulating the internet, which it still fails to grasp despite 30 years of experience since the Communications Decency Act.
On the other hand, bad actors don’t seem to give a hoot what Congress says anymore. How’s the Epstein Transparency Act going? Of course, there are always court orders to compel the government to comply with the law. Only kidding. While leaving it to a private corporation to contract with the government on terms that prevent somebody with the morals of a Hegseth from having his finger on the AI button may well not be an optimal solution, it’s the best there is under the circumstances. Anthropic should hold firm, and if Hegseth carries out his threat at 5:01 this afternoon, it will be added to his long list of disgraceful and un-American conduct in office. Yet another reason why Hegseth can’t be trusted with AI. And why Kendall can’t either.
Discover more from Simple Justice
“I am altering the deal. Pray that I do not alter it any further.”
Secretary of Education Linda McMahon wants to assure parents, “There’s no need to fear A1.”
As the meme goes, “You want Skynet? This is how you get Skynet.”
It doesn’t help matters that there was a group that ran a bunch of war games, using AI against each other, and the silicon life forms absolutely loved going nuclear, and suffered from a severe lack of reverse gear.
Robert Heinlein.
“We live in a society exquisitely dependent on science and technology, in which hardly anyone knows anything about science and technology.”
I believe that was first voiced by Carl Sagan.
1990: “We live in a society exquisitely dependent on science and technology, in which hardly anyone knows anything about science and technology” (The Skeptical Inquirer Vol. 14, Issue 3)
Sorry.
Looks like my source was wrong.
I’ll try again.
Pablo Picasso.
“Computers are useless. They can only give you answers.”