Early this morning, Brian Tannebaum started twitting angrily, which in itself is nothing particularly noteworthy. This time, however, it was about the banning from twitter of @1938loren, the twitter account of Loren Feldman. I never followed Feldman, but apparently he was banned under twitter’s harassment policy. Banned.
Without any clue what Feldman may have said or done to be banned, it doesn’t change what this means. For those who felt (note the word “felt,” rather than “thought”) that they were harassed by whatever it is that Feldman twitted, their actions in having him banned from the twitters mean that those who wanted to read Feldman’s twits are denied. He has been silenced for all, not just those individuals who found his twits disturbing.*
This reflects the next extension of the purification of the interwebz, the delusion of Cyber Civil Rights feminism that not only demands the right never to hear or see thoughts that hurt their feelings, but demands that these thoughts be silenced, denied to everyone. This is where it goes beyond their claim of right to never be subjected to unpleasant thoughts, to their claim of right to ban all ideas they find disagreeable to everyone.
At The Guardian, Jessica Valenti panders to the sensitivities of would-be censors by asserting that technology companies, the platforms like twitter that transmit speech, could end online harassment tomorrow if they wanted to.
When money is on the line, internet companies somehow magically find ways to remove content and block repeat offenders. For instance, YouTube already runs a sophisticated Content ID program dedicated to scanning uploaded videos for copyrighted material and taking them down quickly – just try to bootleg music videos or watch unofficial versions of Daily Show clips and see how quickly they get taken down. But a look at the comments under any video and it’s clear there’s no real screening system for even the most abusive language.
There are only two types of readers who won’t immediately furrow their brow at this false equivalency: those who, like Valenti, are hell-bent on eliminating speech that displeases them from the internet, and those who don’t comprehend how the magic of technology works.
At Techdirt, Mike Masnick explains the technological emptiness of Valenti’s claim, noting Sarah Jeong’s post about how Content ID, the YouTube algorithm designed to match videos against a database of copyrighted content, suffers endlessly from Type 1 and Type 2 errors. She goes on to explain the blunt use of algorithms in past efforts to address harassment, which weren’t effective or scalable.
But more to the point, harassment isn’t the same animal to begin with. Algorithms don’t think about what they’re doing. They’re a big blunt club that beats to death words and phrases that some programmer plugged into them. They suffer from no doubt, no concern for overbreadth, no appreciation of sarcasm or satire, no context. They neither feel nor think; they just do.
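The bluntness described above can be shown in a few lines. What follows is a toy sketch, not any platform’s actual filtering system: the phrase list and examples are invented for illustration, and the point is only that pattern matching on words has no grasp of context, producing both Type 1 errors (flagging the harmless) and Type 2 errors (missing the harmful).

```python
# Toy sketch of a blunt keyword filter; purely illustrative.
# The blocked-phrase list is invented for the example and does not
# reflect any real platform's moderation rules.
BLOCKED_PHRASES = ["you idiot", "drop dead"]

def is_flagged(message: str) -> bool:
    """Flag a message if it contains any blocked phrase, with no
    regard for tone, sarcasm, satire, or context."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

# A genuine insult is caught...
print(is_flagged("Drop dead, loser"))                      # True
# ...but so is obvious banter between friends (Type 1 error,
# a false positive)...
print(is_flagged("Haha, you idiot, that pun was great"))   # True
# ...while a menacing message containing no listed phrase sails
# through untouched (Type 2 error, a false negative).
print(is_flagged("I know where you live."))                # False
```

The filter neither feels nor thinks; it just matches, which is exactly why scaling this approach up buys overbreadth and underinclusion at the same time.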
Should we eliminate all curses from the internet? Perhaps all phrases that, in the hearts of mindful guardians of delicate sensibilities, might make someone feel badly? Hell, yes, according to Valenti. Silence it all. Only ideas that promote the values that Valenti deems worthy should be aired.
Lest you think that Valenti, if she could view her demands through rational eyes, would be persuaded that she’s gone off the rails, unhinged, batshit crazy, and that she and those who join her call to silence expression she characterizes as harassment would change their minds and appreciate the damage she proposes, you would be quite wrong:
But the responsibility of dealing with online threats shouldn’t fall on the shoulders of the people who are being harassed. And it shouldn’t need to rise to being a question of constitutional law. If Twitter, Facebook or Google wanted to stop their users from receiving online harassment, they could do it tomorrow.
The logical disconnect of this slippery contention is stunning. There can be no person “being harassed” unless and until they are being harassed, which itself means someone has written something that hurts their feelings and they have chosen to read it and have their feelings hurt. Yet Valenti argues they should bear no responsibility for dealing with it; instead, the speech should be censored before it happens, which would, by definition, mean no one was harassed.
This argument falls squarely within the “victim blaming” trope, that no one who dons the mantle of victim bears any responsibility for taking actions to not be a victim. In some instances, this is true. In some, it’s nonsense. But when it reaches the point of being a trope, untethered from any logical rationale, it only appeals to true believers and the ignorant, those incapable of distinguishing between instances where it makes sense and makes no sense at all.
Twitter has created block lists, so that groups of people who might find the ideas some others express repugnant to their beliefs never have to see any idea that displeases them. If that’s how they choose to exist, so be it.
But when it reaches the next level, the banning upon demand of those who express ideas that displease an identitarian group, so that no one, even those who want to see their thoughts, is allowed to do so, it reaches the point of censorship of ideas that is intolerable in a free society.
Yet, even the banning upon demand, such as what happened to Loren Feldman, won’t satisfy them, and they demand that technology companies pre-emptively obliterate ideas lest they ever appear on the internet. If this seems too outrageous, too ridiculous to ever gain traction, consider this: even someone like Sarah Jeong, who questions the efficacy of algorithms to protect the delicate flowers from harsh thoughts, is fundamentally conflicted about silencing free speech:
The response of social media companies to the problem of harassment has been lackluster, and they are certainly capable of doing better, but doing better still doesn’t mean they can eliminate harassment tomorrow. It is tragic that they have prioritized intellectual property enforcement over user safety, but even its IP enforcement is considered unsatisfactory on many sides — whether for the content industry, for fair use advocates, or for users.
The problem is that the mechanisms proposed by censorship advocates like Valenti are, as of now at least, too blunt a weapon and too prone to error to be effective in the war against online harassment. If that could be fixed, then censor away and silence all those “harassers” like Feldman. And me. And you, perhaps. Actually, we’ll never know because our thoughts and ideas may never appear for anyone to decide whether they are of sufficient value to appear on the internet.
*Update: It would appear that Feldman was disappeared from twitter because he was dinged by Anil Dash, as more fully explained here. This just gets uglier.