If you’ve ever had the sense that no one takes note of what you put out on social media, maybe that you’re not quite as fascinating as you believe you are, there’s hope. At least if you’re a college student.
Campus Safety Magazine reports that the University of Virginia contracts with a service called Social Sentinel for $18,500 a year to monitor its students’ public social media posts. It works by scanning student social media accounts based on a “library of harm” of thousands of words curated by Social Sentinel in addition to words tailored to the specific school contracting with them. Posts from students containing words on these lists are forwarded to the police, who then decide whether or not to investigate the students.
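As described, the mechanics amount to little more than context-free keyword matching: if a post contains a listed word, it gets forwarded. A minimal sketch of that approach (the word list and sample posts here are invented for illustration, not Social Sentinel's actual list) shows why it can't tell a threat from a sports tweet:

```python
# Sketch of context-free keyword flagging, as the scanning is described:
# any post containing a listed word is flagged and forwarded.
# The word list and posts below are invented for illustration only.

LIBRARY_OF_HARM = {"shoot", "shot", "kill", "bomb"}

def flag_post(post: str) -> bool:
    """Return True if any word in the post appears on the list."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not words.isdisjoint(LIBRARY_OF_HARM)

print(flag_post("Great shot by our goalie tonight!"))     # True
print(flag_post("I'm going to kill this exam tomorrow"))  # True
print(flag_post("Library closes at midnight"))            # False
```

Both of the first two posts trip the filter even though neither is a threat, and every such hit goes to the police to sort out.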
Unless the school has a major donor with the last name “harm,” this is a bit disturbing. It likely surprises no one that there is a list of forbidden words, or that someone has created an algorithm to search students’ social media accounts for their appearance. That schools are doing so is Big Brother enough; but that they forward students’ posts to the police should they contain a verboten word sinks the whole enterprise to an entirely new depth.
But it’s not just the appearance of a word from the library of harm that causes colleges to tell guys with guns to gear up.
“We look at the whole context of the post,” says University of Virginia police officer and crime analyst Beth Davis.
Feel better now? FIRE isn’t feeling it.
And therein lies a major problem. The “whole context” of a post is almost never available to someone as far removed from the post as an officer or school administrator reading it. Innocuous or inside jokes, and a whole host of other protected speech, will often be completely lost on them and could appear threatening without that crucial context.
“You have to translate the old mentality of ‘see something, say something’ to seeing threats online and reporting them and acting on them if necessary,” said Officer Ben Rexrode, community service and crime prevention coordinator for the University of Virginia police.
While it’s certainly a major problem, there is nothing about this scenario that isn’t deeply problematic. Who comes up with this list of words? Why are colleges stealthily surveilling their students? And what recourse is there when someone learns that good ol’ Officer Ben or Beth didn’t get the joke and decided instead to send in the SWAT team to take junior out?
The impetus for such methods is the cry that troubled students, whether suicidal or homicidal, somehow manage to evade notice until they take to the hallways with a gun in hand. Then the cries of “how did this happen” ring out, followed by demands for proactive screening of students’ conduct and speech to identify those at risk of doing harm. As if it were that simple.
There are a great many things that kids say and do which could raise alarms. The vast majority of students will harm no one, but there will be one buried in there who ends up being dangerous. There will be false positives and false negatives, so that we can never prevent someone from turning violent and, in our effort to do so, will send in the cops for kids whose only offense is going for some lulz.
Will human eyes improve the outcome where algorithms can’t?
A crucial difference between a report from an algorithm and from a student is that when a student reports a post to the police, they are exercising their judgment to decide that a tweet could be threatening, and then the police are using their judgment about whether or not to investigate. Forwarding any post that pings an algorithm to police, and then having the police make a judgment about its context, eliminates from the equation those most capable of judging a social media post’s context: the post’s audience. It’s a scattershot tactic in an arena where precision is paramount.
The officers may be able to intuitively eliminate some “false positives” that the algorithm spits out, such as students tweeting about “good shots” at a sporting event, as mentioned in the article. However, since their job is to keep the campus safe, they are incentivized to investigate borderline cases, and that incentive works to increase the number of false positives.
This reflects a faith in the competence and good intentions of campus police that they may not deserve. Bear in mind, any time the police are called in to address a situation, there is the potential for harm, even death, to result. What if the cops show up for a borderline twit and the dorm room has some weed on the desk, so the student is reluctant to open his door and put the investigators’ concerns to rest? What do the cops do then, but break down the door and seize the student, who resists and, well, bad things ensue.
But even the students who have no reason to fear will react to the knowledge that Big Brother is monitoring their every Instagram pic.
Put yourself in the shoes of a student on campus. What would you do, knowing that anything you post may be flagged for the school administration or police because it contains one of the thousands of keywords in Social Sentinel’s library of harm? Do you tweet less? Do you restrict your posts to friends only? It’s hard to imagine how you could moderate your tweets to avoid thousands of words when you have no idea what they are.
And assume you do get flagged and questioned by police. Many people would probably change their behavior. And while people might want to be mindful of what they post publicly online, fear of police and their school monitoring them and misinterpreting their messages shouldn’t be something students have to navigate.
Are students “mindful” of what they post on social media? Should they have to be? While FIRE’s concerns over Social Sentinel, its library of harm and its resort to police when something strikes a campus cop as troubling are appreciated, they may not go nearly far enough. There is a strong smell of monitoring for political correctness, lest the “threat” of “violence” by offensive speech cause more sensitive students to fear for their “safety.”
As the notion of what constitutes a threat, and what gives rise to harm, has been reduced from paper cut to mean word, the chilling effect on student speech, if not the actual arrival of a cop at your little darling’s door, is outrageous. If you wonder why tuition is so high, perhaps the cost of monitoring your child’s social media mean words is part of the problem. But only the first part, as every aspect of this Social Sentinel program has the potential, if not likelihood, of disaster.