Who’s Really Driving That Shiny Car?

We’re past the proof-of-concept stage of the shiny death of that great American pleasure, cruising down the highway, top down, wind in your hair. Route 66 is closed. Instead, we get to sit there like uninvolved blobs, because that’s what shiny-lovers really want from their toys: the freedom to check their text messages instead.

Cool future? Maybe not. As Karl Bode at Techdirt explains:

As Google, Tesla, Volvo, and other companies make great strides with their self-driving car technology, we’ve started moving past questions about whether the technology will work, and started digging into the ethics of how it should work.

Wait, what?  Are you saying it’s not just groovy technology making our life even more tech-tastic?

Just how much power should law enforcement have over your self-driving vehicle? Should law enforcement be able to stop a self-driving vehicle if you refuse to? That was a question buried recently in this otherwise routine RAND report (pdf) which posits a number of theoretical situations in which law enforcement might find the need for some kind of automobile kill switch:

“The police officer directing traffic in the intersection could see the car barreling toward him and the occupant looking down at his smartphone. Officer Rodriguez gestured for the car to stop, and the self-driving vehicle rolled to a halt behind the crosswalk.”

Well, okay, sort of. After all, no reason for a cop to get mowed down by texting mommy in a minivan, right? Seems legit. Except that’s just the start of the questions raised by the report, commissioned by the National Institute of Justice.

“Imagine a law enforcement officer interacting with a vehicle that has sensors connected to the Internet. With the appropriate judicial clearances, an officer could ask the vehicle to identify its occupants and location histories. … Or, if the vehicle is unmanned but capable of autonomous movement and in an undesirable location (for example, parked illegally or in the immediate vicinity of an emergency), an officer could direct the vehicle to move to a new location (with the vehicle’s intelligent agents recognizing “officer” and “directions to move”) and automatically notify its owner and occupants.”

Or, to provide a little context, sell your stock in Harris Corp., makers of the infamous StingRay cell tower spoofer, because the cops won’t be needing it once you’re cruising in your google car, which can give up your every move and location. And, of course, there is no reason to suspect that “appropriate judicial clearances” won’t keep the lid on government excess, because the automobile exception has worked so well for the Fourth Amendment.

But once the government has the magic passkey to every self-driving car on the road, enabling it to do all those noble things the government says it needs to do to protect the children, the power to control cars is available to anyone with sufficiently mad tech skillz as well.

Thanks to what will inevitably be a push for backdoors to this data, we’ll obviously be creating entirely new delicious targets for hackers — who’ve already been poking holes in the papier-mâché-grade security currently “protecting” vehicle electronics.

When you return to where you left the car after a fun night of getting totally shitfaced, secure in the knowledge that your google car will get you home safely, and it’s not there, who you gonna ask?  Do you even speak Russian?

But all this aside, there are some deeper, more disturbing ethical conundrums at work here that really need to be considered. Remember the Trolley Problem?

“Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that’s why you have this switch. But on the alternate track there’s more trouble: Your child, who has come to work with you, has fallen down on the rails and can’t get up. That switch can save your child or a bus-full of others, but not both. What do you do?”

This is a long-standing ethical and philosophical conundrum, in which great minds have differed as to the “right” outcome.  Yet, when it’s a google car, it’s all safety first.

What would a computer do? What should a Google, Tesla or Volvo automated car be programmed to do when a crash is unavoidable and it needs to calculate all possible trajectories and the safest end scenario? As it stands, Americans take around 250 billion vehicle trips killing roughly 30,000 people in traffic accidents annually, something we generally view as an acceptable-but-horrible cost for the convenience. Companies like Google argue that automated cars would dramatically reduce fatality totals, but with a few notable caveats and an obvious loss of control.

Due to the law of large numbers, the Trolley Problem and its innumerable variations will certainly arise, and arise with some frequency.  The only difference is that it won’t be the driver deciding where to come out, or taking some heroic measure that improbably ends with everyone surviving.

Nope. The good folks at Google will program the car so that someone will die.
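To make that concrete, here is a deliberately crude, entirely hypothetical sketch of what programming that choice amounts to. This is not actual Google, Tesla or Volvo code; every name, scenario and number in it is made up for illustration.

```python
# Hypothetical illustration only: how an unavoidable-crash policy might
# reduce to a programmer's moral choice. Not real vendor code.

def choose_trajectory(options):
    """Pick the trajectory with the lowest expected fatalities.

    `options` is a list of (maneuver, expected_fatalities) pairs for
    every maneuver the car can still physically execute. The tie-breaker
    -- who dies when the numbers come out equal -- is decided right
    here, by a programmer, long before the crash.
    """
    # min() breaks ties by returning the first option in list order,
    # so even the ordering of this list is a moral choice someone made.
    return min(options, key=lambda opt: opt[1])

# An unavoidable-crash scenario: every remaining option kills someone.
options = [
    ("stay course, hit school bus", 12.0),
    ("swerve left into oncoming truck, kill occupant", 1.0),
    ("swerve right off the bridge, kill occupant", 1.0),
]

print(choose_trajectory(options))
```

However the fatality estimates are weighted, the point stands: the rule that decides who dies was written by an engineer at a desk, not chosen by the driver at the wheel.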

This isn’t to say that the safety otherwise programmed into the grandma-mobile won’t save thousands of lives. It likely will. It’s just that the life it saves may not be yours. But it’s still shiny, and isn’t that enough?


26 thoughts on “Who’s Really Driving That Shiny Car?”

  1. Keith

    “Which model would you prefer, sir?”

    I’d like the Kantian car programming, please. That Bentham car will get you killed.

  2. Bill O'Brien

    the philosophical question that the “Trolley Problem” concerns is whether killing someone is morally worse than letting someone die. you can act, by pulling the lever or whatever, and thereby save 10 people and kill 1, or refrain from acting, thereby letting 10 people die but not killing the one. my answer is that you morally ought to save the 10, because killing is not inherently worse than letting die. But however you answer that philosophical question, it’s hard to see how it has any bearing on automated cars.

      1. Bill O'Brien

        the moral choice would be made by those (persons, you know, moral agents) who set up the programming. they have to ask ‘should we set it up so the car, under conditions x, y, and z, will turn the wheel and run over 1, or not turn the wheel and run over ten?’ the moral distinction (if there is one) between action and deliberate omission doesn’t come up. so, you’re right. it’s not hard..
        ..unless you think computer programs make moral choices. …..then i guess they could be criminally prosecuted. i could use the business but i don’t know where you’d draw the line: tornadoes? the common cold?

        1. SHG Post author

          Bill, you’re wearing thin. Yes, the moral choice is made by the persons who do the programming. Not sure how that eluded you before. But your grasp of the Trolley Problem’s implications here is too tenuous for further discussion.

          1. Bill O'Brien

            your rejoinder had no bearing on my point because it didn’t address how the trolley problem applied to either the program or the programmers.

            But, I’m “wearing thin.” Fine then. I’m not going to go back and forth on this. You wrote about something you didn’t quite understand and you got it wrong. not a big deal. but your (for lack of a better term) intellectual dishonesty makes a discussion like this pointless.

            i won’t be back, but I will say before i leave that you write a lot of good posts on legal issues, and I thank you for the work that you put into it.

            1. SHG Post author

              Well, obviously if I don’t agree with you, I’m being intellectually dishonest. And worse still, if you won’t be back, I will be terribly lonely here. And there’s nothing remotely psychotic about your comment. Bye, Bill.

            2. David


              Nobody cares that you came. Nobody cares that you left. But thank you for announcing it, just in case anyone else shares your delusion that you’re the center of the universe.

    1. Keith

      But however you answer that philosophical question, it’s hard to see how it has any bearing on automated cars.

      You lack imagination.

      Your Google automated car is about to get into an accident and possibly kill 2 motorcyclists. It can avoid it by swerving and hitting a single motorcyclist, but that rider isn’t wearing a helmet like the ones about to be hit. The moral question is whether it should hit the motorcyclists wearing helmets (with a greater chance of survival) or the one who isn’t.

      Who gets sued, who gets a summons, and what discovery is permitted when the automated software runs a real life version of the trolley problem?

      1. SHG Post author

        If you’re going to play that game, consider the empty school bus versus the motorcycle. Or the truck filled with workers in the back versus the Ferrari. There are innumerable permutations.

        But no matter what variation is chosen, the fact remains that the choice is made by google, not the driver who is about to die.

  3. Taco

    I think the button that pops the trunk, rolls down every window, and swings all the doors out is what the cops are really excited about. Officer safety!

  4. David Stretton

    I’m assuming that the programming of vehicles will be done by the same good folks who do our smartphones. So an Android car full of nuns and an Apple car full of paediatric cancer surgeons are racing headlong towards each other. A malfunction! Oh, no! Too late to avoid catastrophe! The Android car decides to veer off into the anti-vaxxer rally, and the Apple car decides to head for the Westboro Baptist Church protest…

    Interesting times ahead. Oh yeah.

  5. Nigel Declan

    The countdown until a self-driving car is pulled over and disassembled under the pretext that it might have achieved sentience continues unimpeded.

  6. Jyjon

    You guys seem to be thinking tech is perfect or something. It’s going to ‘glitch’ and select the option that kills everyone, including you, because the cop is going to be the hero and shoot you for running everyone over. And his defense will be that he was scared he was going to be run over too, and that he saw the car cross the line, even though all the surveillance cameras in the area show the car didn’t. But it happened in Illinois, where officer testimony trumps cameras.

    Self-driving cars, though, will be a huge boon for smugglers. Program the route with the goods inside; somewhere along the route, they stop it and extract the goods. If the police find the car and no one is inside, how are they gonna find the smuggler?

    1. SHG Post author

      As the devs here are quick to remind me, it’s not the tech but the programming. Tech is perfect. Humans, not so much.

  7. Mort

    With the appropriate judicial clearances, an officer could ask the vehicle to identify its occupants and location histories.

    Time to start restoring that ’67 Impala and stockpile parts – otherwise I’ll have to murder that feature with fire and hammers, and I suspect that will void the warranty…

  8. Patrick Maupin

    Published reports indicate that google is using deep neural networks for detecting the environment the car is operating in. So far, deep neural networks that humans build seem to have blind spots, perhaps even big enough for the proverbial Mack truck.

    So, if you ask the car “why did you make that choice?” the answer might (as with a person) be “I didn’t see the pedestrian,” even when the pedestrian is perfectly obvious on the video captured by the same camera the computer was using.

    Since neural nets are trained rather than programmed, if google is also using neural nets for the car’s decision-making, the answer to the question “Why did you run over the schoolchildren instead of the nun?” might not even be articulable by the computer in a fashion that is understandable by mere humans.

Comments are closed.