One of the saddest realizations that comes of spending time online is that there are a lot of really sick, disgusting people out there. I mean, a lot more than I ever suspected before the internet.
Sure, we always knew there were sick people out there, but they weren’t in our faces. And they weren’t able to find each other easily, which meant they didn’t have the support and comfort of knowing that they weren’t alone in their embrace of whatever warped thing turned them on.
But then, what protects the rest of us from suffering their pleasure? Adrian Chen at Wired gives us a view into the perverse world of content moderation.
As social media connects more people more intimately than ever before, companies have been confronted with the Grandma Problem: Now that grandparents routinely use services like Facebook to connect with their kids and grandkids, they are potentially exposed to the Internet’s panoply of jerks, racists, creeps, criminals, and bullies. They won’t continue to log on if they find their family photos sandwiched between a gruesome Russian highway accident and a hardcore porn video.
Let’s dispense with the obvious up front. There is no First Amendment violation when private businesses decide that they don’t want to host images of beheadings. Only the government can violate your rights. Private businesses or citizens have no similar obligation, even though you may really wish they did.
And because the article was written by Adrian Chen, who struggles to separate his personal agenda from his journalism, he had to toss in his pet peeves like racists and bullies, which are sufficiently vague to cover conduct that might well be moderated but more likely would not be, as the point isn’t to silence thought, but to protect grandma’s eyes.
Social media’s growth into a multibillion-dollar industry, and its lasting mainstream appeal, has depended in large part on companies’ ability to police the borders of their user-generated content—to ensure that Grandma never has to see images like the one [21-year-old Filipino Michael] Baybayan just nuked.
And that’s where it all gets totally weird.
So companies like Facebook and Twitter rely on an army of workers employed to soak up the worst of humanity in order to protect the rest of us. And there are legions of them—a vast, invisible pool of human labor. Hemanshu Nigam, the former chief security officer of MySpace who now runs online safety consultancy SSP Blue, estimates that the number of content moderators scrubbing the world’s social media sites, mobile apps, and cloud storage services runs to “well over 100,000”—that is, about twice the total head count of Google and nearly 14 times that of Facebook.
Perhaps you knew of this army of people whose purpose was to delete the images that make other people gasp and retch, but I didn’t. It reminds me of how naïve I am, how hard it is for me to fathom why there is any need for this. But clearly, there is a need, and the need is huge.
A list of categories, scrawled on a whiteboard, reminds the workers of what they’re hunting for: pornography, gore, minors, sexual solicitation, sexual body parts/images, racism. When Baybayan sees a potential violation, he drills in on it to confirm, then sends it away—erasing it from the user’s account and the service altogether—and moves back to the grid. Within 25 minutes, Baybayan has eliminated an impressive variety of dick pics, thong shots, exotic objects inserted into bodies, hateful taunts, and requests for oral sex.
This job isn’t nearly as much fun as it may seem.
Eight years after the fact, Jake Swearingen can still recall the video that made him quit. He was 24 years old and between jobs in the Bay Area when he got a gig as a moderator for a then-new startup called VideoEgg. Three days in, a video of an apparent beheading came across his queue.
“Oh fuck! I’ve got a beheading!” he blurted out. A slightly older colleague in a black hoodie casually turned around in his chair. “Oh,” he said, “which one?” At that moment Swearingen decided he did not want to become a connoisseur of beheading videos.
Who are we? What are we doing? I realize that there are many who see this as part of the lulz factor, the occasional bizarre image that gives us that taboo thrill of being naughty and grossed out. But when you realize that it takes 100,000 people to clean up this cesspool, does it not make you realize that this reflects far too much sickness to explain away?
The cost of cleaning up the internets, both social and economic, is huge, even when the work is outsourced to the Philippines so that internet companies don’t have to pay US wages to get it done. But that doesn’t make it any better or more justifiable. Filipinos are human beings too, and they don’t need to see these images any more than some dude in Idaho.
But what this says about people is the most basic concern. Are we really as sick and disgusting as this article suggests? Do we really have to be? Is the anonymity of the internet an aphrodisiac for our worst, most disgusting impulses?
By no means do I suggest that the solution is to enact laws to keep the worst of human nature from expressing itself. But the flip side is that we don’t have to indulge our worst natures, even if we can do so anonymously, even if we can get away with it.
Clean up your act. Before posting some pic that will make grandma cry, ask yourself if this is really who you are. Ask yourself if this is really who you want to be. If not, then don’t do it. Just because you can be sick and disgusting doesn’t mean you have to be.