If you’ve ever had the sense that no one takes note of what you put out on social media, maybe that you’re not quite as fascinating as you believe you are, there’s hope. At least if you’re a college student.
Campus Safety Magazine reports that the University of Virginia contracts with a service called Social Sentinel for $18,500 a year to monitor its students’ public social media posts. It works by scanning student social media accounts against a “library of harm,” thousands of words curated by Social Sentinel plus words tailored to the specific school contracting with it. Posts from students containing words on these lists are forwarded to the police, who then decide whether or not to investigate the students.
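As described, the mechanics amount to little more than matching words against a blacklist. A minimal sketch of what such flagging presumably looks like follows; Social Sentinel’s actual word list and matching logic aren’t public, so every name and term here is an assumption for illustration only:

```python
# Hypothetical sketch of keyword-based post flagging, assuming a simple
# blacklist match. Nothing here is Social Sentinel's actual code.

HARM_LIBRARY = {"shot", "bomb", "kill"}   # stand-in for the "thousands of words"
SCHOOL_TERMS = {"rotunda"}                # assumed school-specific additions

def flag_post(post: str) -> bool:
    """Return True if any watched word appears in the post, context be damned."""
    words = {w.strip(".,!?#").lower() for w in post.split()}
    return bool(words & (HARM_LIBRARY | SCHOOL_TERMS))

posts = [
    "Great shot at the game tonight!",   # the sports false positive from the article
    "This exam is going to kill me",     # hyperbole, also flagged
    "Lovely day on the Lawn",            # passes through
]
for p in posts:
    if flag_post(p):
        print(f"Forwarded to police: {p!r}")
```

Note what the sketch makes obvious: the match carries no context at all. Everything after the flag depends on a human reader guessing what the poster meant.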
Unless the school has a major donor with the last name “harm,” this is a bit disturbing. It likely surprises no one that there is a list of forbidden words, or that someone has created an algorithm to search students’ social media accounts for their appearance. That schools are doing so is Big Brother enough; but that they forward students’ posts to the police should they contain a verboten word sinks this to an entirely new depth of problems.
But it’s not just the appearance of a word from the library of harm that causes colleges to tell guys with guns to gear up.
“We look at the whole context of the post,” says University of Virginia police officer and crime analyst Beth Davis.
Feel better now? FIRE isn’t feeling it.
And therein lies a major problem. The “whole context” of a post is almost never available to someone as far removed from it as an officer or school administrator reading it. Innocuous jokes, inside jokes, and a whole host of other protected speech will often be completely lost on them, and could appear threatening without that crucial context.
“You have to translate the old mentality of ‘see something, say something’ to seeing threats online and reporting them and acting on them if necessary,” said Officer Ben Rexrode, community service and crime prevention coordinator for the University of Virginia police.
While it’s certainly a major problem, there is nothing about this scenario that isn’t deeply problematic. Who comes up with this list of words? Why are colleges stealthily surveilling their students? And what recourse is there when someone learns that good ol’ Officer Ben or Beth didn’t get the joke and decided instead to send in the SWAT team to take junior out?
The impetus for such methods is the cry that troubled students, whether suicidal or homicidal, somehow manage to evade notice until they take to the hallways with a gun in hand. Then the cries of “how did this happen” ring out, and the next level consists of demands for proactive screening of students’ conduct and speech to identify those at risk of doing harm. As if it were that simple.
There are a great many things kids say and do that could raise alarms. The vast majority of students will harm no one, but buried in there will be the rare one who ends up dangerous. There will be false positives and false negatives, so we will never prevent every violent act and, in the effort, will send in the cops for kids whose only offense is going for some lulz.
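The arithmetic behind that point deserves a sentence. A back-of-the-envelope calculation, with every figure assumed purely for illustration rather than taken from any real screening tool, shows why even a very accurate screen buries the rare real threat under false alarms:

```python
# Hypothetical base-rate arithmetic; all figures below are assumptions
# for illustration, not statistics about Social Sentinel or any school.
students = 20_000            # roughly a UVA-sized campus
truly_dangerous = 1          # assume a single genuinely dangerous student
sensitivity = 0.99           # assume the screen catches 99% of real threats
false_positive_rate = 0.01   # assume it wrongly flags 1% of harmless students

true_flags = truly_dangerous * sensitivity                        # ~1
false_flags = (students - truly_dangerous) * false_positive_rate  # ~200

print(f"Harmless students flagged: {false_flags:.0f}")
print(f"Chance a given flag is real: {true_flags / (true_flags + false_flags):.2%}")
```

Under those assumed numbers, a flag is real well under one percent of the time. Two hundred kids get a visit so that one might be caught, and loosening the screen to miss fewer real threats only swells that crowd.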
Will human eyes improve the outcome where algorithms can’t?
A crucial difference between a report from an algorithm and from a student is that when a student reports a post to the police, they are exercising their judgment to decide that a tweet could be threatening, and then the police are using their judgment about whether or not to investigate. Forwarding any post that pings an algorithm to police, and then having the police make a judgment about its context, eliminates from the equation those most capable of judging a social media post’s context: the post’s audience. It’s a scattershot tactic in an arena where precision is paramount.
The officers may be able to intuitively eliminate some “false positives” that the algorithm spits out, such as students tweeting about “good shots” at a sporting event, as mentioned in the article. However, since their job is to keep the campus safe, they are incentivized to investigate borderline cases, and that incentive works to increase the number of false positives.
This reflects a faith in the competence and good intentions of campus police that they may not deserve. Bear in mind, any time the police are called in to address a situation, there is the potential for harm, even death, to result. What if the cops show up for a borderline twit and the dorm room has some weed on the desk, so the student is reluctant to open his door and put the investigators’ concerns to rest? What do the cops do then but break down the door and seize the student, who resists and, well, bad things ensue?
But even the students who have no reason to fear will react to the knowledge that Big Brother is monitoring their every Instagram pic.
Put yourself in the shoes of a student on campus. What would you do if you’re aware that anything you post may be flagged by the school administration or police for containing one of the thousands of keywords in Social Sentinel’s library of harm? Do you make the decision to tweet less? Do you restrict your posts to friends only? It seems hard to imagine how you could moderate your tweets to avoid thousands of words when you have no idea what they are.
And assume you do get flagged and questioned by police. Many people would probably change their behavior. And while people might want to be mindful of what they post publicly online, fear of police and their school monitoring them and misinterpreting their messages shouldn’t be something students have to navigate.
Are students “mindful” of what they post on social media? Should they have to be? While FIRE’s concerns over Social Sentinel, its library of harm and its resort to police when something strikes a campus cop as troubling are appreciated, they may not go nearly far enough. There is a strong smell of monitoring for political correctness, lest the “threat” of “violence” by offensive speech cause more sensitive students to fear for their “safety.”
As the notion of what constitutes a threat, and what gives rise to harm, has been reduced from paper cut to mean word, the chilling effect on student speech, if not the actual arrival of a cop at your little darling’s door, is outrageous. If you wonder why tuition is so high, perhaps the cost of monitoring your child’s social media for mean words is part of the problem. But only the first part, as every aspect of this Social Sentinel program has the potential, if not likelihood, of disaster.
One possible answer would be for students to simply pepper their every post with random trigger words #MurderDeathKill #AardvarkRape.
#AllAardvarksMatter
Back in the ’90s, after a government Internet surveillance program/proposal came to light (whose name escapes me at the moment), this became pretty common: people would put “NSA bait” in their signatures with words like gun, bomb, assassinate, etc. Have to wonder if it had any effect.
Echelon.
How annoying it was to receive emails from supposedly sane coworkers with a block of copypasta at the end.
ATF DOD WACO RUBY RIDGE OKC OKLAHOMA CITY MILITIA GUN HANDGUN MILGOV ASSAULT RIFLE TERRORISM BOMB DRUG KORESH PROMIS MOSSAD NASA MI5 ONI CID AK47 M16 C4 MALCOLM X REVOLUTION CHEROKEE HILLARY BILL CLINTON GORE GEORGE BUSH WACKENHUT TERRORIST
Copypasta al dente:
DRUG MOSSAD ATF DOD
MILGOV HANDGUN ASSAULT CID
BILL CLINTON GEORGE GORE
REVOLUTION C4
MALCOLM X WACKENHUT CHEROKEE!
Do you get the sense that this is an effort by the school to help the kids or more of an effort to create a system that tries to shed liability should something happen in the future?
I’d imagine the former could be dealt with by reason, especially when the threat of harm comes with those tasked with checking out some student’s twit. The latter could follow the pattern of corporate HR departments (to reduce potential harm to the institution, kids be damned).
That’s a good question, to which I have no answer. I get the sense that automating oversight reflects the institutional need for a facile means of dealing with problems, real or perceived, together with the belief that technology will cure whatever ails us. But whether this is really about their creating plausible deniability or actual concern is a matter of cynicism, as schools will undoubtedly claim they do it “for the children.”
Maybe they’re just preparing the kids to deal with our coming Chinese overlords.
Dear Papa,
I dream of a future where surveillance is the norm. Schools in the future are negligent for NOT surveilling their students. Student deaths that could have been prevented by surveillance by schools are now the basis for wrongful death suits by devastated family members who say, “The school should have known. It should have done more. How did it miss the obvious warning signs?” What’s “obvious,” of course, is always decided after the fact.
Monetized hysteria. I’m never going back to school if I can help it, what do I care?
Best,
PK
So close…
Never going back to my old school
Cracking down on future crime. Shades of “Minority Report.”
Why do you hate children so much?
This comment will go outside the scope of your post. There’s a very troubling issue not addressed in either the article you link to or in any of the articles it links to. How does Social Sentinel know which social media accounts are those of students (or university employees), and which are those of some hapless dude who just happens to live near UVA? The former may have consented, probably unknowingly, through some obscure school policy (which doesn’t make it right). But the latter wouldn’t have. Can Social Sentinel distinguish between Sammy Sophomore, who lives in apartment 3F off-campus, and the rando unaffiliated dude who lives in 3G?
What you write about is plenty troublesome, but it’s not the only trouble.
You assume they’re monitoring the random ether rather than the specific social media accounts of students (@JoeAtUVA). While the means by which they identify the accounts being monitored isn’t mentioned, there is nothing to suggest they’re scanning the airwaves at random as opposed to student accounts, which seems extremely unlikely, if not impossible.
That said, you’re always free to discuss it at great length on your blawg if you think this is such a troubling issue that needs to be discussed.
So kissing random goats is different than kissing the ones that you’re already milking?
I pity the machine that reads that stuff.