Killer Robots, Useful, Cheap and Ubiquitous

In 1942, Isaac Asimov came up with the Three Laws of Robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In 2016, the Dallas police decided they didn’t really like the rules, so they sent a robot to execute a cop killer.  Most people are okay with the decision, since there isn’t a big fan club for Micah Johnson (though that there is any fan club at all is rather remarkable). But even if the use of an execution robot seems okay this time, what about next time?

While the intelligentsia ponder the ethics of robots, factories are gearing up to turn their Roombas into killing machines. Oh come on. You didn’t see this coming?

Robot-maker Sean Bielat says he’s fine with the Dallas Police Department’s apparently unprecedented use of a police bomb-disposal robot to kill a gunman on Thursday. “A robot was used to keep people out of harm’s way in an extreme situation,” said Bielat, the CEO of Endeavor Robotics, a spinoff of iRobot’s military division. “That’s how robots are intended to be used.”

Putting aside the fact that Bielat stands to make a killing off robot sales, he’s right. Using a piece of hardware, a bunch of metal, wires and circuit boards, to save a human life is what robots are good for.  That they also took a human life in the process, well, it couldn’t be helped. Sometimes, a guy needs killing, as the Texans like to say.

On Sunday, speaking to Face the Nation, Dallas Mayor Mike Rawlings blessed the operation. “The chief had two options and he went with this one. I supported him completely because it was the safest way to approach it,” he said.

The other option was to wait Micah Johnson out. He’ll get hungry. He’ll get tired. It would mean a ton of overtime. But for all the post hoc hand-wringing and pearl clutching, they had a situation to deal with, and the mayor made the call. His call? Kill him. Send in the robot and kill him.

But some ethicists are worried.

“My initial reaction was that we have just got onto the slippery slope,” said Heather Roff, a senior research fellow at Oxford and a research scientist at Arizona State University’s Global Security Initiative. “This is going to be very hard to put back and that the militarization of police capabilities means that they may now feel that it is reasonable to use robotics in this way to ensure compliance…If one doesn’t have to talk to a subject and can demand compliance, then this may mean more forceful or coercive demands are made.”

Hard to put the genie back in the bottle? More like impossible. It didn’t just work well. It worked great. It did exactly what it was supposed to do, executed Johnson while putting no cop at risk. The dawn of killer robots just happened, and we applauded.

Bielat believes that incidents like the one in Dallas, in which police used a Northrop Grumman Remotec Andros F5 to carry explosives close enough to a gunman to kill him, won’t become “a common occurrence,” in part because robots like the Andros F5 and his company’s own celebrated PackBot cost upwards of $100,000 apiece. But he also believes that military-grade robots are on the cusp of getting a lot cheaper and more capable, due to decreases in the cost of processing power, advances in 3D printing, and other factors.

At $100,000 apiece for military grade robots, they’re kinda pricey.  But with the consumer market door suddenly flung wide open, that’s about to change.

“A lot of the components have come down in price because of consumer applications,” he said. “You now have more capability in the camera on a cellphone than you did on the…camera on the PackBot when it was first built.”

How long before every police department can afford not just a robot, but a killer robot, one designed not for bomb disposal but for human disposal?  And even if they’re a tiny, itty-bitty department in the middle of nowhere, there’s a plan to cover them.

Those advances, along with the 1033 program, which allows local and state law-enforcement agencies to request military technology, will make police robots useful, cheap, and ubiquitous, Bielat says. The program wouldn’t provide armed military robots to police, but police may decide to arm them, says Bielat.

Useful, cheap and ubiquitous? You bet. But the best part is that whenever there is a situation that presents the potential for a violation of the First Rule of Policing, there will no longer be a need to put heroes, fathers, mothers, the Finest, at risk. Just send in the robots.

Except the robots don’t talk to people. They don’t warn. They don’t de-escalate. They don’t adjust to changing circumstances. They don’t take prisoners. They just do what they’re programmed to do.  Rather than risk 100 cops in front of 1000 protesters, why not send your robot to herd them where you want them? If they don’t comply, it’s not like they can sass a robot. At least, not effectively.

Asimov crafted three very simple rules to prevent what he foresaw as the obvious problem with unfeeling, unthinking, machines that could kill.  Out of the box, Dallas decided the rules didn’t apply. While the nattering naboobs of negativity try to sort out their feelings on death by robot, Sean Bielat and others are busy retooling their military versions for a police department near you. And they’ll get that price way down so no police officer will ever have to stand in harm’s way again.

22 thoughts on “Killer Robots, Useful, Cheap and Ubiquitous”

  1. Marc Whipple

    I agree that this is a big deal.

    But that is not a robot.

    It’s a remote-controlled car with a manipulator arm. A human steered it, a human watched the sensor feed, and a human detonated the device.

    This is nothing like as big a deal as the day – which is not yet here, but is not far off – when a human says to an actual robot, “go kill that guy,” or even worse, “go to X location and kill whoever is there.” When THAT day comes, and all the humans can say, “hey, it wasn’t me, it was the robot,” whatever individual accountability is left in the system, assuming there is any, will be well and truly gone.

    1. SHG Post author

      So you didn’t get this post either. But at least you agree. I can’t tell you how much that means to me.

      1. Dan Rosendorf

I apparently don’t either and would be grateful if you could elaborate. Whatever killed Micah Johnson, it was not a robot that could possibly obey Asimov’s laws, since it was not a *robot* in the sense of Asimov’s laws. Not only did it not have actually independent thought that might pass for sentience, it didn’t even have independent thought that could carry out anything more than utterly rudimentary tasks such as ensuring balance. It couldn’t choose to not kill Micah Johnson and so it couldn’t follow the laws.

        If your argument is strictly about further militarization of the police I understand that, but then Asimov’s laws seem really just like a cheap gimmick.

        1. SHG Post author

          I apparently don’t either and would be grateful if you could elaborate.

          Sorry, no. I write the posts. I do not interpret them for readers who don’t understand what I’ve written.

          Edit: Eh, you asked nicely, and I have the time. This post isn’t about the robot that killed Micah Johnson, but that killer robots are already in development for use by police in the US. While the robot that killed Johnson performs only rudimentary tasks without AI, that won’t necessarily be the case in the future, as they continue to develop, and we can anticipate the robots will improve in capabilities and sophistication, reaching the point where Asimov’s laws come into play.

  2. B. McLeod

    Really more of a “drone.” Nothing all that new. As far as using them to coerce compliance, they might actually help to avoid fiascoes like the Philando Castile shooting. The drone could not feel threatened, and the remote operator would not feel threatened, so the “First Rule of Policing” would not come into play.

  3. Austin Texas piñata

When I heard how the standoff negotiation had been ended I was rocked. As dangerous as this person was, he was contained. I reviewed the NIJ use of force continuum and there was a word missing that I always assumed was there but didn’t notice before: imminent. “Lethal Force — Officers use lethal weapons to gain control of a situation. Should only be used if a suspect poses a serious threat to the officer or another individual.”

    All my prior training and experience had been that if a suspect isn’t an imminent threat, containment and time are the options, not lethal force. But I guess since Waco the militarists and ‘time is money’ movement have eroded that premise.

    1. DaveL

      So we have a situation where a deliberate decision is made to kill a suspect because he represents an imminent threat of death to others, and a robot is used so that no one other than that suspect will be exposed to danger. It’s a safe way for officers to be in imminent and deadly danger, don’t you see? Nothing could be simpler.

      1. MKr

        It’s not so much, specifically, that they used a robot to kill someone who was an “imminent threat of death”. It’s that they made the decision to kill him, then cast about for a tool to do it. And when they found said tool, they did not consider any of the other things said tool could do. Like, for instance, incapacitate the shooter with gas, concussion grenades, flashbangs, anything. Or even just use it for reconnaissance. Those choices were closed the moment they determined that he had to die.

        But hey, don’t blame the robot. It was only following orders.

  4. Charles

    All I have to offer is the obligatory XKCD.


    (No links, I know. But I don’t have Barleycorn’s level of access to embed material in comments…).

    [Ed. Note: While you don’t, I do.]

    1. John Barleycorn

I probably should contain this rogue threat in a reasonable, necessary and proportionate manner, seeing as how the health, safety and wellbeing of the entire universe is at stake when it comes to subversive propaganda linkage.

      But then again… I best not miss Jeopardy tonight, so on second thought. Meet my little friend.




      Well at least Asimov was right about one thing:

      “Violence is the last refuge of the incompetent”

      But fuck it. Competence is hard!

Who gives a shit, if all it’s gonna take is a robot or two and some C-4 now and then when things get scary and the heroes need a helping hand. Responders got to respond, right?

Sure, the K-9 cops will be a little bummed out at first but let’s face it, the retirement ceremonies for the robots are gonna be way cooler than pinning service medals on Fido’s collar.

Besides, the inherent rules of K-9 training prevent the dogs from handing out candy with their handlers at 4th of July parades.

But I can’t wait to see the robot squadrons shooting candy into the crowd. Heck, put them right after the overweight Shriners zipping around in their go-karts and goodbye police recruitment problems.

      Ooh Rah!!!

  5. Zachary North

    The lesson of the Three Laws of Robotics was that they didn’t work. The AI interpreted the rules in a way we hadn’t intended in order to do what it decided was best for us…

    Funny, that sounds a lot like some 4th amendment cases I’ve read.

    I’d hate to think of how much more expensive it would be to equip any kind of robot with the sort of features that would even come close to the deescalation abilities a good cop could bring to the table. Hell, Joe could buy another tank for that kind of money.

    1. SHG Post author

In Asimov’s story where he introduced the three rules (“Runaround”), they worked, but they failed in his subsequent Robot stories. His lesson wasn’t that they didn’t work, so much as an apocalyptic vision of how humans and circumstances can create dilemmas where even the most well-conceived rules can be doomed. It was a warning as much as a prophecy.

  6. losingtrader

    You have to plagiarize quotes from Spiro Agnew?
    And here, I was compiling your SHGisms.
    This really lowers my opinion of you, which I know you value so highly:[(3)I can’t tell you how much your agreement means to me]
    And no, changing the last word from “negativity ” to “negativism” doesn’t get you off the hook.

    1. SHG Post author

      I cannot believe I blew the quote. Dammit. Memory is the 37th thing to go.

      But you missed my “improvement” to another word.

    2. Turk

      You have to plagiarize quotes from Spiro Agnew?

      Please. William Safire wrote that for him. Credit where it’s due.

  7. Patrick Maupin

Sean Bielat and others are busy retooling their military versions for a police department near you. And they’ll get that price way down so no police officer will ever have to stand in harm’s way again.

    Escalation is never single-sided. The proletariat will have capable weapons.

    The cops will be in harm’s way. The only question is if they mitigate the harm by helping to form a societal consensus on the appropriate use of this technology, or if they escalate by continuing to imperiously make up their own ROE that endanger the rest of us.

    Because whether it’s environmentalists, or anti-abortionists, or radical religious zealots, or BLM activists, enough angry passionate people will always eventually attract the attention of sociopaths who decide to “help out.” In the specific case of BLM, the obvious target for these sociopaths is the police. Any actions that police departments undertake that undermine trust and cohesion are eventually going to backfire.

  8. JimEd

You guys are so smug about the 3rd amendment. What if the PD/Fed/Whatever required you to pay for a police robot capable of lethal force to be installed in every apartment building? For your protection, of course.

    Also, there is a zeroth law of robotics that I find profoundly troubling. I think Asimov was letting go when the Foundation series went there:

    0. A robot may not harm humanity, or through inaction allow humanity to come to harm.

    Obviously that can go wrong a million+ ways and assigns responsibilities far beyond any theoretical capability of robots or humans. But he wanted to keep writing Foundation books.

Comments are closed.