We’re past the proof-of-concept stage in the shiny death of the great American pleasure of cruising down the highway, top down, wind in your hair. Route 66 is closed. Instead, we can sit there like uninvolved blobs, checking our text messages, because that’s what shiny-lovers really want out of their toys.
Cool future? Maybe not. As Karl Bode at Techdirt explains:
As Google, Tesla, Volvo, and other companies make great strides with their self-driving car technology, we’ve started moving past questions about whether the technology will work, and started digging into the ethics of how it should work.
Wait, what? Are you saying it’s not just groovy technology making our lives even more tech-tastic?
Just how much power should law enforcement have over your self-driving vehicle? Should law enforcement be able to stop a self-driving vehicle if you refuse to? That was a question buried recently in this otherwise routine RAND report (pdf) which posits a number of theoretical situations in which law enforcement might find the need for some kind of automobile kill switch:
“The police officer directing traffic in the intersection could see the car barreling toward him and the occupant looking down at his smartphone. Officer Rodriguez gestured for the car to stop, and the self-driving vehicle rolled to a halt behind the crosswalk.”
Well, okay, sort of. After all, no reason for a cop to get mowed down by texting mommy in a minivan, right? Seems legit. Except that’s just the start of the questions raised by the report, commissioned by the National Institute of Justice.
“Imagine a law enforcement officer interacting with a vehicle that has sensors connected to the Internet. With the appropriate judicial clearances, an officer could ask the vehicle to identify its occupants and location histories. … Or, if the vehicle is unmanned but capable of autonomous movement and in an undesirable location (for example, parked illegally or in the immediate vicinity of an emergency), an officer could direct the vehicle to move to a new location (with the vehicle’s intelligent agents recognizing “officer” and “directions to move”) and automatically notify its owner and occupants.”
Or, to provide a little context, sell your stock in Harris Corp., makers of the infamous StingRay cell tower spoofer, because the cops won’t be needing it once you’re cruising in your google car, which can give up your every move and location. And, of course, there is no reason to suspect that “appropriate judicial clearances” won’t keep the lid on government excess, because the automobile exception has worked so well for the Fourth Amendment.
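Strip away the gee-whiz framing and what the report imagines is just a remote query-and-command interface with a warrant check bolted on. Here’s a minimal sketch of how that might look, in Python, with every name and field invented for illustration rather than drawn from any actual system:

```python
# A hypothetical sketch of the RAND scenario: an in-car agent that
# answers law enforcement queries and movement orders, gated on the
# "appropriate judicial clearances." Names and fields are invented.
from dataclasses import dataclass, field

@dataclass
class WarrantToken:
    case_id: str
    scope: set = field(default_factory=set)  # e.g. {"occupants", "location_history", "move"}

class VehicleAgent:
    def __init__(self, occupants, location_history):
        self._data = {"occupants": occupants,
                      "location_history": location_history}

    def query(self, warrant: WarrantToken, item: str):
        # Hand over data only if the warrant's scope covers it.
        if item not in warrant.scope:
            raise PermissionError(f"warrant {warrant.case_id} does not cover {item}")
        return self._data[item]

    def relocate(self, warrant: WarrantToken, destination: str):
        # The "officer" plus "directions to move" path: the car complies
        # and notifies its owner, no driver consent involved.
        if "move" not in warrant.scope:
            raise PermissionError("no authority to move this vehicle")
        print(f"Relocating to {destination}; notifying owner and occupants.")

car = VehicleAgent(["J. Doe"], ["home", "office", "the good bar"])
warrant = WarrantToken("2014-CR-0042", {"occupants", "location_history"})
print(car.query(warrant, "location_history"))  # everywhere you've been
```

Everything turns on that scope check, and on who holds the keys that mint the tokens. Which brings us to the passkey problem.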
But once the government has the magic passkey to every self-driving car on the road, enabling it to do all those noble things it says it needs to do to protect the children, the power to control cars is available to anyone with sufficiently mad tech skillz as well.
Thanks to what will inevitably be a push for backdoors to this data, we’ll obviously be creating entirely new delicious targets for hackers, who’ve already been poking holes in the papier-mâché-grade security currently “protecting” vehicle electronics.
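To see why, here’s a toy model, every detail of it invented, of what a fleet-wide backdoor reduces to: one shared secret guarding every car, so one leak unlocks them all:

```python
# Why a "magic passkey" is a hacker's jackpot: a toy model, not any
# real system. Every vehicle validates commands against the same key.
import hmac, hashlib

GOVERNMENT_MASTER_KEY = b"one-key-to-stop-them-all"   # the backdoor

def sign_command(key: bytes, command: str) -> str:
    return hmac.new(key, command.encode(), hashlib.sha256).hexdigest()

def vehicle_accepts(command: str, signature: str) -> bool:
    # The cop's stop order and the car thief's stop order are
    # indistinguishable; both are just valid signatures.
    expected = sign_command(GOVERNMENT_MASTER_KEY, command)
    return hmac.compare_digest(expected, signature)

# Once the key leaks (breach, insider, careless vendor), anyone can
# issue a valid "pull over" to any car on the road:
stolen_key = GOVERNMENT_MASTER_KEY
forged = sign_command(stolen_key, "pull_over:VIN123")
assert vehicle_accepts("pull_over:VIN123", forged)
```

Rotate the key and the whole fleet needs an update; leak the key and the whole fleet belongs to whoever has it.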
When you return to where you left the car after a fun night of getting totally shitfaced, secure in the knowledge that your google car will get you home safely, and it’s not there, who you gonna ask? Do you even speak Russian?
But all this aside, there are some deeper, more disturbing ethical conundrums at work here that really need to be considered. Remember the Trolley Problem?
“Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that’s why you have this switch. But on the alternate track there’s more trouble: Your child, who has come to work with you, has fallen down on the rails and can’t get up. That switch can save your child or a bus-full of others, but not both. What do you do?”
This is a long-standing ethical and philosophical conundrum, one over which great minds have differed as to the “right” outcome. Yet, when it’s a google car, it’s all safety first.
What would a computer do? What should a Google, Tesla or Volvo automated car be programmed to do when a crash is unavoidable and it needs to calculate all possible trajectories and the safest end scenario? As it stands, Americans take around 250 billion vehicle trips annually, killing roughly 30,000 people in traffic accidents, something we generally view as an acceptable-but-horrible cost for the convenience. Companies like Google argue that automated cars would dramatically reduce fatality totals, but with a few notable caveats and an obvious loss of control.
Due to the law of large numbers, the Trolley Problem and its innumerable variations will certainly arise, and arise with some frequency. The only difference is that it won’t be the driver deciding where to come out, or taking some heroic measure that, against all odds, ends up with everyone surviving.
Nope. The good folks at Google will program the car so that someone will die.
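In code, that decision is depressingly mundane. A minimal sketch, with invented numbers and names, assuming the planner simply scores each physically available trajectory by expected casualties and picks the minimum:

```python
# A hypothetical crash planner: enumerate the trajectories still
# physically available, score each by expected casualties, and
# deterministically pick the minimum. All figures are invented.
def expected_casualties(trajectory: dict) -> float:
    # Sum of (probability of outcome) x (deaths in that outcome).
    return sum(p * n for p, n in trajectory["outcomes"])

def choose_trajectory(options: list) -> dict:
    # No split-second human judgment, just an argmin.
    return min(options, key=expected_casualties)

options = [
    {"name": "brake straight", "outcomes": [(0.9, 1), (0.1, 0)]},  # likely hits the pedestrian
    {"name": "swerve left",    "outcomes": [(0.7, 0), (0.3, 2)]},  # gambles on oncoming traffic
    {"name": "swerve right",   "outcomes": [(1.0, 1)]},            # sacrifices the occupant
]
print(choose_trajectory(options)["name"])  # the code, not the driver, decides
```

No agonizing, no heroics, no wild swerve that somehow saves everyone. Just an argmin, chosen in a conference room long before the crash.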
This isn’t to say that the safety otherwise programmed into the grandma-mobile won’t save thousands of lives. It likely will. It’s just that the life it saves may not be yours. But it’s still shiny, and isn’t that enough?