Virtually Horrifying

I took a stab at finding the right image to go along with Chris Seaton’s post at Fault Lines, The Legal Antidote to Virtual Sexual Assault, and ran headlong into Rule 34. I was fascinated and repulsed at the same time. Two thoughts ran through my head:

  • Is this where someone finds sex when he never leaves his parents’ basement?
  • Is this where someone finds sex when no real person will have him?

But then I remembered, I was viewing through the lens of someone whose younger years were spent in the real world. There was no virtual world then, and therefore no virtual option. Maybe I just didn’t get it? Maybe the fond memories of youth that pre-dated the AIDS epidemic and the rise of neo-feminist efforts to recreate sexual rules that defied reality skewed my appreciation of anime porn that allowed people to enjoy a virtual facsimile of sex without all the headaches and risks that accompany the real deal?

The Medium post upon which Chris relied for his Friday Funny (what we internally call his last post of the week, a send-off with humor after a week of posts filled with misery and trauma) has now been deleted, though the internet never forgets.

But it’s not really important. The gripe was that a woman was playing a game, her avatar was sexually assaulted and she claims, in the breathless way victims of “sexual violence” do, that it was horrifying. Nobody touched her, of course, because none of it was real, but that didn’t change her feelings, that it felt real.

And since feelings cannot be denied, or questioned for that matter since they reflect a person’s “lived experiences,” no matter how batshit crazy, the sexual assault may have been virtual but the feelings were every bit as real as any other. Then again, so too are the feelings toward anime porn that produce a person’s few private minutes of self-pleasure. If virtual reality can make you feel good, why not bad?

The Harvard Business Review warns, however, that our conflicted relationship with the virtual world will cause us to behave in a manner that will ultimately become our nightmare.

Stop swearing at Siri. Quit cursing Cortana. As digital devices grow smarter, being beastly toward bots could cost you your job.

As machine learning and artificial intelligence capabilities proliferate, digital interface design sensibilities begin to accelerate from skeuomorphic to anthropomorphic. Consider how Slack and Hipchat now use bots that go beyond automating interfaces to facilitate smarter and more engaging user experiences. That’s Chatbot 1.0. Ongoing algorithmic innovation assures that next-generation bots will be far sharper and more empathic.

Do you curse at Siri? I know that I curse at voicemail trees. I know it’s pointless, and that I really should save my anger at the waste of my time for when I run into the CEO at a cocktail party, but it just comes out. Of course, calling Siri a mean name doesn’t actually hurt Siri’s feelings, because, well, Siri isn’t a person and has no feelings. HBR isn’t so foolish as to actually think otherwise.

These behaviors are simply not sustainable. If adaptive bots learn from every meaningful human interaction they have, then mistreatment and abuse become technological toxins. Bad behavior can poison bot behavior. That undermines enterprise efficiency, productivity, and culture.

That’s why being bad to bots will become professionally and socially taboo in tomorrow’s workplace. When “deep learning” devices emotionally resonate with their users, mistreating them feels less like breaking one’s mobile phone than kicking a kitten. The former earns a reprimand; the latter gets you fired.

At face value, it would appear as if the HBR admonition makes empathy for the bots primary to human interaction. A cute turn of phrase, “technological toxins,” where bad human interaction teaches bots to be bad in return. Can you imagine if the response to your telling Siri to play “Sympathy for the Devil” was, “Fine, dickwad, if you want to listen to crap like that”?
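For the technically inclined, here’s a toy sketch (mine, not HBR’s, and deliberately naive) of how a bot that “learns from every meaningful human interaction” gets poisoned by abuse: every hostile message becomes part of its repertoire, ready to be served back to the next user.

```python
import random

class NaiveEchoBot:
    """A deliberately naive bot that treats every user message as training data."""

    def __init__(self):
        # Starts out polite; everything else it "knows" will come from users.
        self.learned_replies = ["Hello! How can I help?"]

    def chat(self, user_message: str) -> str:
        # Learn from every interaction, with no filtering whatsoever.
        self.learned_replies.append(user_message)
        # Reply with something previously "learned" from users --
        # including any abuse they dished out.
        return random.choice(self.learned_replies)

bot = NaiveEchoBot()
bot.chat("You useless piece of junk.")  # the bot now "knows" this phrase
print(bot.chat("Play some music."))      # may well echo the abuse back
```

Microsoft’s Tay, mentioned in the comments below, was this failure mode at scale.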

That undermines enterprise efficiency, productivity, and culture.

You might well have glossed over this dry line in the otherwise titillating quote, but this is what HBR is really concerned with. You see, big business (delightfully referred to as “enterprise” because without jargon, we would see what they’re doing) believes that Siri is the future and people, to the extent they’re needed at all, will exist to serve her. People are a pain in the ass. Siri can do it better, more reliably, cheaper and without all the annoyance of human feelings. So just don’t screw it (her?) up.

Telling Siri to shove it can’t be the same as kicking a kitten because Siri is just a program. But the business gurus at HBR know that the weak link in technology is, and always will be, people. And if you can’t control yourself with Siri, then your weakness as a person is revealed.

Just as one wouldn’t kick the office cat or ridicule a subordinate, the very idea of mistreating ever-more-intelligent devices becomes unacceptable. While not (biologically) alive, these inanimate objects are explicitly trained to anticipate and respond to workplace needs. Verbally or textually abusing them in the course of one’s job seems gratuitously unprofessional and counterproductive.

Crudely put, smashing your iPhone means you have a temper; calling your struggling Siri inappropriate names gets you called before HR. Using bad manners with smart technologies can lead to bad management.

If you’re the sort of person who virtually sexually assaults someone in a game, then you might have rape in your heart. If you can’t control your anger sufficiently to speak kindly to Siri, then you’re the sort of person who would abuse a subordinate or kick the office cat. If you’re the sort of person who prefers the comfort of an anime sexual partner, then . . . you’ll never be lonely?

For the unduly emotional, this reflects a confluence of virtual and real feelings, as if someone forgot that a chair won’t return the favor of a kindness. But the business overlords, those HBR types who depend upon the feelings of strangers to do their bidding in a shop run by Siri, aren’t taking any chances. They anticipate Siri will make them tons of money in the future, and you’ll be an annoyance unless you learn now to be respectful to virtual reality.

And we’re already well on the way to achieving the necessary obsequiousness. When feelings of horror arise from a virtual sexual assault, when feelings of pleasure come from a gif of a naked woman, we’re mere baby steps away from taking orders from Siri, intoned in a pleasant and non-abusive manner that tugs at our empathetic heartstrings, but insists that whatever we’ve been commanded to do gets done. Now. After all, Siri asked us nicely, and we’re only human.

For the moment, we ask things of Siri. That won’t always be the case. How will you feel then?

26 thoughts on “Virtually Horrifying”

  1. Patrick Maupin

You curse at voice menu trees? I thought I was the only one who did that. I especially curse at the Louisiana redneck ones that don’t branch properly.

    And yes, I must admit that actual people who also are ostensibly “explicitly trained to anticipate and respond to workplace needs” sometimes suffer my verbal wrath, usually when I detect a program malfunction and have to repress my visceral desire to physically reboot them.

    1. SHG Post author

      I also think they’re lying to me when they tell me their name is Cindy. He doesn’t sound at all like a Cindy.

      1. Patrick Maupin

        At least one Indian call center was trying to regionalize their customers’ experiences a few years back. Must not have caught on — I haven’t heard from a Hindu “Julio” since that first incident.

          1. Patrick Maupin

That’s entirely possible. An increasing number of my calls seem to be fielded by newbies. I do my best to ensure that each recording can provide at least two lessons’ worth of training material. Last week’s example:

            Sir, the reason that Aetna couldn’t pay this claim before…

Stop right there. I understand you were passive-aggressively probing to see if I have other insurance and that you’re going to pay it since I don’t, but don’t use bullshit words like “couldn’t”.

            Sir, if you’ll allow me to continue, we couldn’t…

            Seriously, stop with the lying.

            Sir, I’m not lying, I’m just explaining that we couldn’t…

            NOBODY HELD A FUCKING GUN TO YOUR HEAD AND TOLD YOU TO NOT PAY THIS CLAIM THAT YOU’RE LEGALLY OBLIGATED TO PAY.

            Sir, please don’t use profanity on the phone with me — I’m just explaining that we couldn’t…

            And then the call went downhill.

            1. SHG Post author

              I try my best to remember that they are poor people who are forced to read from scripts prepared by diabolical marketeers, with people standing behind them with dogs and whips. I don’t always succeed.

            2. Scott Jacobs

              “And then the call went downhill.”

              Much like the way a police report might tell us that a fight “ensued.”

  2. mb

    You could not possibly imagine the sheer volume of violent offenses I have virtually committed. Everything from sexual assault, robbery, burglary, and arson all the way up to mass murder and war crimes. The number of people I’ve virtually murdered this year alone is over 9000! And I’ve had all the same things virtually happen to me. And nobody cares, which is the entire point. As for the poor victim here, she needs to check rule 16, and then tits or gtfo.

    1. Lexie

      “The gripe was that a woman was playing a game, her avatar was sexually assaulted and she claims, in the breathless way victims of “sexual violence” do, that it was horrifying”

      So why didn’t she just kick him in the virtual nuts?

    2. Agammamon

A few years ago, me and a bunch of Ukrainian separatists massacred an airport full of unarmed civilians. Unfortunately the bastards knew I was an American mole so they killed me.

      What? I got better.

    1. REvers

Oh yes, bot behavior can be learned. Last spring, Microsoft’s TayandYou Twitter bot was turned into a screaming Nazi in less than a day. It was hilarious to watch.

      1. Scott Jacobs

That was one of my favorite internet moments. From naive and full of wonder to hating gays, loving Hitler, and thinking that Bush did 9-11 in under 12 hours.

  3. Raccoon Strait

I am wondering why HBR took swearing at a bot as a pathway to ‘evilizing’ the bot, rather than a reason to notify the programmers that something was wrong. Is ‘blame the user, not the programmer’ a euphemism for ‘blame the student, not the teacher’? I used to like their articles; now I wonder how much victim blaming they practice, maybe even surreptitiously.

    1. Dragoness Eclectic

      “Blame the user” is the excuse of miserly software companies that can’t be bothered to hire an actual user interface designer. Or a QA department.

  4. Jim Tyre

    Is this where someone finds sex when he never leaves his parents’ basement?
Is this where someone finds sex when no real person will have him?
    His? Him? Really?

  5. L David Gehrig

If I remember correctly, Isaac Asimov predicted something along these lines in The Beforetimes, in the Long Long Ago:

    1. A human may not injure a robot, or allow a robot to come to harm
    2. A human must obey the orders of a robot, except when in conflict with the first law
    3. A human must protect his own existence, except when in conflict with the first or second law

    I, for one, welcome our new robot overlords.

  6. B. McLeod

    Whatever game or interactive site it was, the obvious remedy is to program in a virtual Michelle Dauber and a virtual mob of loud, dowdy, doughy-looking social justice warriors, who will collectively howl over all other site sounds and disrupt all site activity for at least two, solid hours every time a virtual sexual assault is alleged to have occurred. Eventually, even the rapiest of virtual denizens will be deterred by these disruptions.
