The Last Thing A Suicidal (Or Any) Person Needs

When all you have is a hammer, and you’re Facebook, what could possibly go wrong?

This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

By “send help,” Facebook means call the cops. Facebook’s hammer is artificial intelligence. The cops’ hammer is deadly weapons. The option of sending “mental health resources” is easier said than done, as there aren’t any for the most part, and “local first-responders” tend not to be the local suicide hotline roadshow. They tend to be the cops.

But all of this raises the question: how will Facebook’s AI know you, and know you well enough, to detect “patterns of suicidal thoughts”? If your friends, your family, don’t see issues, is Facebook up to the task?

They’ve already called the cops more than 100 times on their users, with the best of intentions.

Over the past month of testing, Facebook has initiated more than 100 “wellness checks” with first-responders visiting affected users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”

That Facebook is concerned for the welfare of their users is thoughtful, but couching a visit from the local police as a wellness check doesn’t change the fact that they’re cops. Assuming the AI is remotely accurate*, this could lead to suicide by cop or, far worse, the police reacting defensively toward someone suffering from mental illness. Stories are legion of cops killing the mentally ill when they claim to feel threatened. I would tell you to ask Eleanor Bumpurs, but you can’t because the cops killed her.

But what if someone is venting on Facebook, using it as catharsis to get their feelings out? Is that not what the place is for? Does that mean they risk the cops knocking? Their neighbors will see, and rumors will swirl about the crazy person in the house across the street. They will be embarrassed. Parents will ask questions. Therapists will send them 50% off coupons.

And then there’s the fact that Facebook is scanning people’s posts in the first place.

The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn’t have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying “we have an opportunity to help here so we’re going to invest in that.” There are certainly massive beneficial aspects about the technology, but it’s another space where we have little choice but to hope Facebook doesn’t go too far.

What are the chances dystopia will come with a trigger warning, as opposed to being couched in warm and fuzzy words reflecting the best of intentions? If they’re scanning for thoughtful and positive reasons, they’re going to find other things as well. Since it’s AI, one can never be quite certain how the algorithm was written, how it will interpret content, or how it will cope with the countless ways millions of people express themselves. Right, lawyer dog?

The potential for disaster, on the one hand, and embarrassment, on the other, is huge. The imposition of Facebook’s good intentions on their users’ privacy, however, may prove to be the most pervasive issue here. People (not me, mind you, but other people) use Facebook to communicate with their “friends.” It’s fun. It’s cool. And most users don’t think of some boiler room in Bangalore reviewing their “problematic” posts for potential issues.

  • Our Community Operations team includes thousands of people around the world who review reports about content on Facebook. The team includes a dedicated group of specialists who have specific training in suicide and self harm.

Feel better now?

  • We are also using artificial intelligence to prioritize the order in which our team reviews reported posts, videos and live streams. This ensures we can get the right resources to people in distress and, where appropriate, we can more quickly alert first responders.

What about now?

  • Context is critical for our review teams, so we have developed ways to enhance our tools to get people help as quickly as possible. For example, our reviewers can quickly identify which points within a video receive increased levels of comments, reactions and reports from people on Facebook. Tools like these help reviewers understand whether someone may be in distress and get them help.
  • In addition to those tools, we’re using automation so the team can more quickly access the appropriate first responders’ contact information.
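
Facebook hasn’t shown its work, but strip away the gloss and those bullets describe a scoring-and-queueing pipeline: rate each post, review the riskiest first, escalate past some cutoff. Here is a minimal sketch of what that might look like; the scorer, the threshold, and the escalation hooks are all invented for illustration, not anything Facebook has disclosed:

```python
import heapq

# A toy sketch of the pipeline the bullets describe: score each reported
# post, review the riskiest first, escalate past a cutoff. The scorer,
# the threshold, and the two hooks are all hypothetical stand-ins.

def risk_score(text: str) -> float:
    """Stand-in for the classifier; returns a value between 0 and 1."""
    phrases = ("can't go on", "goodbye everyone", "no reason to live")
    return sum(p in text.lower() for p in phrases) / len(phrases)

ESCALATE_AT = 0.6  # arbitrary cutoff for calling in "first responders"

def triage(reports):
    # heapq is a min-heap, so push negated scores to pop the riskiest first
    queue = [(-risk_score(text), post_id) for post_id, text in reports]
    heapq.heapify(queue)
    while queue:
        neg_score, post_id = heapq.heappop(queue)
        if -neg_score >= ESCALATE_AT:
            print(f"{post_id}: escalate to first responders")  # i.e., the cops
        else:
            print(f"{post_id}: queue for a human reviewer")

triage([("post-1", "Goodbye everyone, I can't go on."),
        ("post-2", "Had a lousy day, venting about it here.")])
```

The arithmetic is not the point. The point is that somewhere a number crosses a line nobody outside Facebook can see, and that’s when the knock comes.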

It’s unclear whether your Facebook posts are being read by a dedicated review team, which has “specific training,” whatever that means, or AI. And it’s unclear what will be the trigger that brings the cops to your door. But regardless, do you really want Facebook calling the cops on you, even if it’s called a “wellness check” by guys who are locked and loaded?

Whenever someone commits suicide, there will invariably be calls to question how no one noticed the problem so that the person could have been saved. The same happens when the cops show up at someone’s home and kill them under the Reasonably Scared Cop Rule. It’s bad enough that this happens no matter how hard we try to prevent someone from being harmed, but the Zuck won’t be liable should Facebook make the call that ends with a bullet in someone’s head.

*As of now, AI pattern recognition is basic junk science.

Cookie-cutter ratios, even if scientifically derived, do more harm than good. Every person is different. Engagement is an individual and unique phenomenon. We are not widgets, nor do we conform to widget formulas.
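
To make that concrete: a cookie-cutter ratio means one population-wide cutoff applied to every user, no matter how differently each person actually writes. The numbers below are invented for illustration, but both misfires follow directly from ignoring the individual baseline:

```python
# Invented numbers, illustrating the footnote's point: one population-wide
# ratio applied to everyone, regardless of individual baseline.

POPULATION_CUTOFF = 0.30  # flag anyone whose "dark post" ratio exceeds this

users = {
    # name: (dark posts, total posts) over some window
    "goth_poet":   (40, 100),  # always posts gloomy lyrics
    "chipper_sue": (5, 100),   # normally sunny, recent sharp turn for the worse
}

for name, (dark, total) in users.items():
    ratio = dark / total
    print(f"{name}: ratio={ratio:.2f}, flagged={ratio > POPULATION_CUTOFF}")

# goth_poet:   ratio=0.40, flagged=True   <- false positive, cops at the door
# chipper_sue: ratio=0.05, flagged=False  <- false negative, the one at risk
```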

Does junk science with good intentions make it acceptable?

H/T MassPrivatel

31 thoughts on “The Last Thing A Suicidal (Or Any) Person Needs”

  1. Pedantic Grammar Police

    Another of many reasons to ditch facebook. I did it years ago when I realized that I was sitting in front of my computer staring at a screen, thinking that I was interacting with my friends, while it was sunny and beautiful outside. I went out and worked in my garden, and never logged into facebook again. Since then the reasons have proliferated: the spying, the mental manipulation, the targeted advertising, the collection and sale of your personal information, the studies showing a positive correlation between depression and number of hours spent on facebook, and now they are siccing the police on their depressed users.

    And yes, what could be stupider than sending the police to “help” a suicidal person? From the wikipedia article:

    “NYPD Emergency Service Unit squad specially trained in subduing emotionally disturbed people”

    I’m so glad that they were specially trained; it would have been so much worse to get shot in the chest with a shotgun by an un-specially-trained officer.

    1. SHG Post author

      I have the same reaction whenever I read “special trained,” “expert” or some variation on a theme. It makes the bullet so much more palatable.

  2. Patrick Maupin

    Facebook is going to tout their amazing successes, then they’re going to lose a couple of major lawsuits to dead people’s families, then what?

    Will they stop doing this, and then get hammered by lawsuits about suicides, or even mass murders, that they “obviously” should have seen coming?

      1. Patrick Maupin

        You’re right. After they tout a few amazing successes, there will be bills passed requiring them and google and everybody else to do this, and absolving all of them of any responsibility for whatever the gendarmes happen to do with the information.

    1. Norahc

      Facebook just had to have their own version of swatting, it seems.

      Countdown till the wellness check kicks in the wrong door begins in 3….2…

  3. CLS

    Welp. That’s it, I’m done with them.

    I’ve shared this blog post and a summary three times today and am now panicking over if and when an officer’s going to show up for a “wellness check.”

  4. womanwarrior

    Gee, better tell everyone to post sunny thoughts on Facebook. H’mm, nobody at Facebook watched the movie Minority Report, eh? Thanks for the ghoulish news, SHG. Another reason to get off FB!

    1. SHG Post author

      Can you imagine having a really lousy day, writing about it on FB and thinking, “what could possibly make this day any worse?” The mind boggles.

    2. Frank

      “The computer wants you to be happy. If you are not happy, you will be used as reactor shielding.”

      – Paranoia, by West End Games

    1. KP

      Yes- expressing bad thoughts about Facebook or Mr Z will get you a visit from a “specialist team” of the Govt…

    2. DaveL

      Facebook’s AI algorithm has analyzed your posts, and determined that you never really liked that dog anyway.

  5. TomH

    Wait, you mean people actually give Facebook their REAL home addresses?
    Damn! I’ve been doing it wrong on those multiple accounts this whole time.
    My apologies to Mr Smith and his family, Mr Jones, Mr Henry, Mr. …

      1. losingtrader

        I did mean to post something meaningful, but the only “WELFARE CHECK” I ever found on my door contained no monetary amount. I was disappointed and confused.
        Yes, my normal states.

  6. Tierlieb

    “As of now, AI pattern recognition is basic junk science” – while it is convenient to discredit the whole thing, no, pattern recognition is not junk science. That one paper quoted was. Pattern recognition in general is a very useful tool.

    However, the more complex the topic, the more complicated it gets. Facebook is generally considered much better than every other company on the planet at patterning advertisements to its users. That’s their core strength. Any behaviour pattern analysis would be derived from that. And they still suck.

    The issue is that with advertisements, making a mistake is cheap. People get annoyed and maybe install an ad blocker. It is not that cheap for suicide prevention, which involves the police and justifying their service (see the Rosenhan experiment for the problem of proving oneself sane).

  7. delurking

    AI pattern recognition is not junk science.

    This post inspires me. Maybe I’ll write up some adversarial machine learning code that generates content-free suicidal-thoughts posts optimized against their model’s responses. It’ll be brilliant fun; it will look like gibberish to human beings, but score as suicidal on the algorithm every time.
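
    The skeleton is nothing fancy: plain black-box hill-climbing against whatever score the model returns. Everything below is hypothetical, including the toy stand-in for the scorer:

```python
import random

# Hypothetical sketch of the idea above: black-box hill-climbing that
# mutates gibberish until a scorer calls it suicidal. The toy_scorer is
# a stand-in; the real exercise would query the target model instead.

TOKENS = ["qz", "blort", "xv", "goodbye", "mip", "forever", "snerk", "alone"]

def toy_scorer(words):
    """Stand-in for the target model: fraction of 'loaded' tokens."""
    loaded = {"goodbye", "forever", "alone"}
    return sum(w in loaded for w in words) / len(words)

def hill_climb(length=8, steps=300, seed=1):
    rng = random.Random(seed)
    post = [rng.choice(TOKENS) for _ in range(length)]
    best = toy_scorer(post)
    for _ in range(steps):
        candidate = list(post)
        candidate[rng.randrange(length)] = rng.choice(TOKENS)  # mutate one token
        score = toy_scorer(candidate)
        if score >= best:  # keep any mutation that doesn't lower the score
            post, best = candidate, score
    return " ".join(post), best

post, score = hill_climb()
print(f"score={score:.2f}: {post}")  # gibberish to a human, alarming to the model
```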

  8. Frank

    Is this the same AI that puts users in Facebook Jail for conservative opinions?

    Somehow I don’t believe the results will be what they expect.

  9. Bryan Burroughs

    I know you’re gonna flame me for this (as you should), but you misused “begs the question,” and that makes my teeth itch. Please, think of me and my teeth next time.

    1. SHG Post author

      Flame you? I’m ashamed of myself for doing so. I hate the misuse of “begs the question” as well, and here I did it. I humbly apologize, have corrected it, and beg your forgiveness.

  10. Kirk HADLEY

    A late .02 from a data scientist/professional AI nerd: FB is almost certainly not working off reinforcement ratios here, but is instead using a data set of users known and/or strongly believed to have committed suicide and their respective posts. It’s not really “junk science” but it is very, very creepy.
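
    In concrete terms, that approach is a supervised text classifier trained on posts from known cases versus everyone else. A toy sketch, with a four-line stand-in corpus in place of the data set nobody outside FB has:

```python
# Hypothetical illustration of the point above: a supervised classifier
# trained on posts from known cases versus everyone else. The four-line
# corpus is a stand-in for the data set nobody outside Facebook has.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "goodbye everyone, i can't do this anymore",  # from a known case
    "no reason to keep going",                    # from a known case
    "great game last night, what a finish",       # from everyone else
    "look at this recipe i tried",                # from everyone else
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# probability the model assigns to "looks like a known case"
print(model.predict_proba(["i can't keep going, goodbye"])[:, 1])
```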

Comments are closed.