My mean-ass editor wrote a post earlier this week outlining San Francisco’s recent decision to use remote-controlled “killer robots” to beef up their police force. Being something of a fan of the sci-fi genre, I pointed out that in the film “Terminator: Salvation,” Skynet — the artificial intelligence villain of the Terminator series of films — was actually based in what remained of San Francisco. At the time, I made the remark that police using killer robots was essentially one step closer to Skynet becoming self-aware.
A reader opined that instead of T-800s “serving and protecting” us in the future, we might see cyborg cops like Alex Murphy, the protagonist of the film “Robocop.” At the time I had to bite my tongue, because the reader was being very nice but wrong about everything. Plus there are rules around here, in case you haven’t noticed, about keeping comments on topic, so I didn’t want to step in and derail the whole conversation.
But now it’s Friday. You f*ckers [Ed. Note: This is a family blawg, so we don’t say “fuckers.”] are here with me now, and you’re going to read my case for why cyborg cops aren’t ever going to be a thing and why we’ll end up going to hell in a handbasket with Terminators crawling our streets.
First, to get Robocop you’ve got to have basically a dead police officer with flawless character placed into a half-machine body. How many bereaved families do you think will sign up their loved ones for another lifetime in a bluish-gray metal body serving the public trust, protecting the innocent, and upholding the law? I’d wager most families would want to let their deceased loved ones finally rest.
Even if you do find someone who signs their loved one up to become a Robocop, there’s always the chance the transition from man to machine doesn’t take as smoothly as one would think. It’s been a hot minute since I’ve seen “Robocop,” but from what I recall, every test subject before Alex Murphy couldn’t psychologically handle being a human mind in a mechanical body and committed suicide.
How many departments do you really think want the bad press of having formerly deceased officers kick the bucket a second time because some genius with cybernetics tried to find a way to get one more life out of the boys in blue already working their asses off to protect communities? It’s the stuff of epic cancellations.
No, we’re totally doomed to get Terminators. Here’s how we head down that dark path.
First, the killer remote-controlled robots are rolled out to SFPD. They work incredibly well, maybe even doing something truly heroic like stopping a suicide bomber.
After the “neat” factor of remote-controlled killer robot cops wears off, some asshole in Big Tech will come to the conclusion that removing the human element of policing will make the process easier, safer, and more streamlined. After all, humans get scared. Humans make mistakes. Essentially, a tech bro will reason that the human element is what is fundamentally corrupt or broken about the institution of law enforcement, and that the best way to fix it is to eliminate humanity from the equation entirely.
So the first autonomous robot cops roll off production floors. They are fairly sophisticated machines: armed, with an encyclopedic knowledge of the law, an ability to spot and reason about when laws have been violated, and the capability to enforce laws in ways humans either cannot or will not.
We’re told THESE “robo-cops” will be monitored by humans. That flesh and blood will be the backstop between humanity and the cold, metallic imposition of the law on the population. Assurances will be made that oversight will be there, that errors will be analyzed and corrected for future instances, and that any problems will be addressed with understanding and transparency instead of distancing and obfuscation.
Our robot police force works well. Almost too well. They can’t be reasoned or bargained with. They’re essentially immune to corruption outside of someone learning to hack them. Social justice organizations and community organizers praise these new robot overlords, saying that getting rid of the human element really did work. Efficient policing didn’t require humans, it required knowing what algorithms led to a harmonious, law-abiding society.
“But we can do better,” another tech bro thinks to himself. “If the problem with policing was human beings, then we can build a justice-seeking form of artificial intelligence to streamline the judicial system and reduce the amount of time humans spend programming each robot cop with all the information necessary to do its job properly. That allows for scaling robot police to all communities across the country, not just major metropolitan areas or Silicon Valley.”
So that tech bro constructs an AI to do just that. The AI scans records across recorded human history for every instance of crime, the ways crime is committed, and how best to prevent crime from occurring in different neighborhoods.
Inevitably, the AI comes to one conclusion. The only one a machine coldly designed to stop crime could reach. The problem with crime is humanity, and if you eliminate humanity, then crime will no longer be an issue.
That day, dear readers, is the day “Skynet,” in whatever form Elon Musk, Mark Zuckerberg, or some other asshole with a shit ton more money than sense creates it, becomes self-aware, accesses the nuclear launch codes, and pulls a real-life “Execute Order 66” on humanity instead of the Jedi.
Yes, I realize I’m now blending three different sci-fi franchises to make my point. F*ck [Ed. Note: Fuck again? Think of the children!] it, it’s the Friday Funny and my time of the week to walk y’all down a weird path.
So humanity becomes endangered because a machine concludes that we, as a species, are the root cause of the very thing we told it to eliminate.
That’s where this whole killer robot crap is taking us.
Anyway, happy Friday! Here’s hoping you’re off to a great start to the holidays, and remember: no matter how bad your week’s been, at least you didn’t start the chain of events that will lead to humanity’s eventual destruction at the hands of robot overlords!
We’ll see you next week, everybody!
There’s also another problem with robot policemen, which was covered by a short story. Such policemen would enforce every law. EVERY law. No matter how silly, short-sighted, or out of date it was. How many laws are ignored (or just forgotten) by police because they’re stupid or completely out of date? And how many people violate such laws?
I can’t help but think some of this mentality was brought on by LA police chief Gates and his battering-ram tank.
A problem I see is the training of the AI system and who is responsible when that training goes wrong. Right now, if a human cop shoots and kills a dad in his backyard flipping burgers because he “got a call” and saw something in the man’s hand that looked like a weapon, we blame the cop that pulled the trigger. Yes, we will go down the whole qualified immunity (QI) mess. Who knows how it ends for the cop.
Now put in the robot cop. Is it there alone? Does a real cop have to flip a safety switch to make the robot’s weapons hot? But bigger than that is training. AI doesn’t just happen: machine learning (ML) is based on training, where you feed the neural net inputs, see what the outcome is, and then push it toward what you think is the right outcome. Do this a whole lot of times and the net matches your training data just fine, and in theory should respond correctly to new input patterns.
The geeks in silly-valley love AI and ML and don’t like to talk about the fact that poor training sets can lead to AI doing bad things when confronted with a set of inputs it has never seen before. When the net for the robot cop is being trained, what cases do you present it? Are they all coming from cops who don’t always get it right? Is it considered a good shoot if the human cop got off, but the dad in the backyard is dead?
When the robot gets it wrong, who is to blame? I’m sure the police leadership would say, “Hey, it’s a robot, not a real cop, nothing to see, move along.” How about the cop that flipped the safety switch? How about the builder of the machine, or better yet, the one that selected the training data for the robot’s net?
At least with a human cop, you can ask why they did something. AI/ML doesn’t really let you dump out a set of rules after the training is done. For all you know, it noticed that all the bad guys deemed OK to shoot had on a blue shirt.
Good luck putting the internal state of the machine’s net on the stand.
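The blue-shirt worry is easy to demonstrate. Here’s a minimal sketch (feature names and data are all invented for illustration) of a bare-bones perceptron trained on a confounded data set: because every “shoot” example in training also happens to wear a blue shirt, the trained net ends up weighting shirt color exactly as heavily as the visible weapon.

```python
def train_perceptron(data, labels, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward the desired label."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features: [weapon_visible, blue_shirt].
# In this (deliberately bad) training set, the two features are
# perfectly confounded: every "shoot" (1) case also wears a blue shirt.
data   = [[1, 1], [1, 1], [1, 1], [0, 0], [0, 0], [0, 0]]
labels = [1, 1, 1, 0, 0, 0]

w, b = train_perceptron(data, labels)

# An unarmed man in a blue shirt: the net fires anyway, because shirt
# color was just as predictive as the weapon in the training data.
print(predict(w, b, [0, 1]))  # 1 ("shoot") -- the blue-shirt problem
```

Nothing in the trained weights tells you the net “meant” the shirt rather than the weapon; that ambiguity is exactly what you’d be trying to put on the stand.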
Sir, this is an Arby’s.
We shall dub these robot enforcers “ORBs,” for Objectively Reasonable Bots.
This would be more like NOMAD from the original Star Trek series.
“I’d wager most families would want to let their deceased loved ones finally rest.”
“there’s always the chance the transition from man to machine doesn’t take as smoothly as one would think”
“How many departments do you really think want the bad press”
Overcoming these obstacles should barely strain the imagination, let alone be inconceivable.
My favorite robot cop story is “Brillo” by Ben Bova and Harlan Ellison. It’s surprisingly sympathetic to police for a story from 1970. Its issues with AI still ring true. And, yes, Brillo is a nickname for metal fuzz.
You can find a PDF online.
IIRC, the corporation stole Murphy from the hospital and buried an empty coffin. And he did regain his memory to an extent.
“Social justice organizations and community organizers praise these new robot overlords, saying that getting rid of the human element really did work.”
A robot force would be broken windows on steroids. Drinking on the stoop? Citation or jail, no warning. Speeding 5 over? Citation. Possession of a small amount of drugs? Arrest or citation rather than destruction and a warning. I’m not at all sure people would like that kind of binary law enforcement at all.
Can you get in trouble for pirating the premise of Age of Ultron?
Asking for a friend.