Cool as the self-driving cars that are definitely going to be our future may be, especially for those who feel that grandma drives too fast and too recklessly, or who can’t bear to be off Instagram for even a second, it’s already understood that choices will be programmed into the vehicles that could spell your death. The Trolley Problem is no longer a mere moral exercise.
Who makes these choices? How will they be made? MIT, which will likely produce some of the engineers whose hands will be on the wheel of this moral dilemma, has crafted a video to help out with what it calls the “Moral Machine.”
Morality is one of those great bases for decisions: each of us can make a decision with as much thought as we care to put in, and we remain immune to challenge because there is never a right answer. It’s nothing more than what we feel to be right. And if someone else feels differently, well, so what? Their feelz are no more right than our feelz.
The best part is that there is no requirement to explain it. It is what it is, without anything more.
Except in this case, what it is may be your life, or your child’s or parent’s or loved one’s life. A machine will make a decision that ends the life of someone who has done nothing to deserve what’s about to happen, because that’s what a programmer told it to do.
Most people agree that the decision shouldn’t be left to a programmer, even a gaggle of them, or their prof or supervisor, or the CEO of Tesla. The MIT video offers an alternative, to crowdsource the death choice. But that too raises a host of questions.
Do you want your life to hinge on a crowd of very smart but, perhaps, socially challenged nerds? Do you want a bunch of 22-year-olds of the sort who spend their days watching YouTube in charge of life and death? Should this be out there for an entire nation to decide? Maybe there should be a Washington version so that elected officials, or perhaps “expert” bureaucrats, can be the ones to vote on life or death.
Or does any of this matter at all? Choices will have to be made and no matter which one prevails, someone will believe it wrong. In a way, it’s merely an “acceptance decision,” like whether to drive on the right or left side of the road. One side is not inherently better than the other, so it doesn’t matter which one wins, but one side must win or we crash into each other.
There may be choices between equivalents, like which side of the road to drive on, but there may also be choices that give a leg up to certain preferred people. For example, what if the program distinguished between someone driving an expensive car and someone driving a cheap one, deciding that the person in the wealthier car was more deserving of survival because he contributed more to society?
In the alternative, what if there were built-in protections based on social justice preferences? Say the choice was between the car running down a group of people crossing the street or crashing itself into a tree at high speed, and the machine detected that the people crossing the street were of a particular race or gender.
What if the car was programmed to give a preference to one race or gender? The car will run down people of one race but not another (pick which one deserved to be preferred).
Too simple? Fair enough. What if a point system were developed, where individuals were given points based on 27 factors that made them more or less societally worthwhile? It could include everything from race and gender to net worth to education to social contributions. Find a cure for cancer, you get 20 points. Shoot heroin, you get two points. Children would be tough, as one can never be sure if they’ll grow up to be Charles Manson or Einstein.
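To see just how arbitrary such a scheme would be, here is a minimal sketch of what that point system might look like in code. Every factor, weight, and function name below is invented for illustration; nothing resembling this exists in any real vehicle.

```python
# Hypothetical sketch of the "societal worth" point system described above.
# All factors and weights are invented for illustration only.

def societal_worth(person: dict) -> int:
    """Sum invented 'worth' points for a hypothetical person."""
    score = 0
    # A handful of the imagined 27 factors:
    if person.get("cured_cancer"):
        score += 20        # find a cure for cancer, you get 20 points
    if person.get("uses_heroin"):
        score += 2         # shoot heroin, you get two points
    score += person.get("education_years", 0)  # e.g., years of schooling
    return score

def who_survives(group_a: list, group_b: list) -> str:
    """The dystopian rule: spare the group with the higher total score."""
    total_a = sum(societal_worth(p) for p in group_a)
    total_b = sum(societal_worth(p) for p in group_b)
    return "A" if total_a >= total_b else "B"
```

The code is trivial; that is the point. The hard part is not the programming but deciding who picks the 27 factors and their weights, and the tie-breaking rule (`>=` here) quietly decides who dies when the scores come out even.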
When the Affordable Care Act was just a twinkle in President Obama’s eye, an objection was raised to what was coined “death panels.”* People went nuts at the notion of a group of bureaucrats deciding who was worthy of extreme health care measures and who was not. While the phrase’s political use and implications were inaccurate, the fact remains that death panels exist and are an unfortunate necessity.
Today, insurance companies have gnomes in back offices deciding whether to pay for extreme care, whether you’re worthy of the amount of money it would cost to allow a patient to have very expensive treatments with questionable chances of success. Sometimes, they claim the treatments are experimental, an exception to insurance policies. Other times, they hide behind medical necessity, claiming that the treatment can’t be justified.
To the patient who wants to live, who is willing to do anything possible to survive, the denial of treatment is a death sentence. And the decision is made by someone who gets a check from your insurance company every two weeks.
Is this wrong? Well, not entirely. There is a cost associated with treatment, and the cost is often extreme. If the insurance company paid for whatever anybody wanted, it would bankrupt itself in short order. So should it engage in triage, choosing to provide well-baby care to many poor people rather than one hugely expensive, marginally viable operation for one person?
We are, and always have been, confronted with moral choices of how to distribute scarce resources that impact life and death. Somebody has to make the decision. This time, with fully-autonomous cars at stake, and a decision that will be programmed in, who should make the choice?
If you think you should get to make the choice for yourself, will anyone else agree to put their life at risk for your morality? But if not you, then are you willing to put your life at risk for their choice? Or is MIT right, and we should just put it on YouTube and let the kids decide? After all, children are our future, right?
*The phrase was attributed to Sarah Palin. It’s hard to imagine she came up with it, as it involved putting two words together.