You Can’t Handle The Truth (Update)

Early this morning, Brian Tannebaum started twitting angrily, which in itself is nothing particularly noteworthy.  This time, however, it was about the banning from twitter of @1938loren, the twitter account of Loren Feldman.  I never followed Feldman, but apparently he was banned under twitter’s harassment policy. Banned.

I have no clue what Feldman may have said or done to be banned, but it doesn’t change what this means.  For those who felt (note the word “felt,” rather than “thought”) that they were harassed by whatever it is that Feldman twitted, their actions in having him banned from the twitters mean that those who wanted to read Feldman’s twits are denied them. He has been silenced for all, not just those individuals who found his twits disturbing.*

This reflects the next extension of the purification of the interwebz, the delusion of Cyber Civil Rights feminism that not only demands the right never to hear or see thoughts that hurt their feelings, but demands that these thoughts be silenced, denied to everyone.  This is where it goes beyond their claimed right never to be subjected to unpleasant thoughts, to their claimed right to ban, for everyone, all ideas they find disagreeable.

At The Guardian, Jessica Valenti panders to the sensitivities of would-be censors by asserting that the technology companies that transmit speech, platforms like twitter, could end online harassment tomorrow if they wanted to.

When money is on the line, internet companies somehow magically find ways to remove content and block repeat offenders. For instance, YouTube already runs a sophisticated Content ID program dedicated to scanning uploaded videos for copyrighted material and taking them down quickly – just try to bootleg music videos or watch unofficial versions of Daily Show clips and see how quickly they get taken down. But a look at the comments under any video and it’s clear there’s no real screening system for even the most abusive language.

There are only two types of readers who won’t immediately furrow their brow at this false equivalency: those who, like Valenti, are hell-bent on eliminating speech that displeases them from the internet, and those who don’t comprehend how the magic of technology works.

At Techdirt, Mike Masnick explains the technological emptiness of Valenti’s claim, noting Sarah Jeong’s post about how Content ID, the YouTube algorithm designed to match uploaded videos against a database of copyrighted content, suffers endlessly from Type I and Type II errors, false positives and false negatives.  She goes on to explain the blunt use of algorithms in past efforts to address harassment, which were neither effective nor scalable.
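
To make Jeong’s point concrete, here is a minimal sketch of threshold-based fingerprint matching. Everything in it, the “fingerprints,” the clips, and the thresholds, is invented for illustration and bears no relation to Content ID’s actual internals; it shows only why any match-by-similarity scheme trades one type of error for the other.

```python
# Hypothetical fingerprints: sets of hashes extracted from clip segments.
def similarity(a: set, b: set) -> float:
    """Jaccard similarity between two sets of segment hashes."""
    return len(a & b) / len(a | b)

copyrighted   = {1, 2, 3, 4, 5, 6, 7, 8}        # the protected work
fair_use_clip = {1, 2, 3, 90, 91, 92, 93, 94}   # quotes a short excerpt, adds commentary
bootleg       = {1, 2, 4, 50, 51, 52, 53, 54}   # lossy re-encode; most hashes shifted

for threshold in (0.2, 0.3):
    for name, clip in (("fair use", fair_use_clip), ("bootleg", bootleg)):
        verdict = "flagged" if similarity(copyrighted, clip) >= threshold else "passes"
        print(f"threshold={threshold}: {name} {verdict}")

# Both clips happen to score identically (3 shared hashes out of 13, ~0.23),
# so the 0.2 threshold flags the fair-use clip (a Type I error) and the 0.3
# threshold passes the bootleg (a Type II error). No threshold gets both right.
```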

But more to the point, harassment isn’t the same animal to begin with.  Algorithms don’t think about what they’re doing. They’re a big blunt club that beats to death words and phrases that some programmer plugged into them.  They suffer from no doubt, no concern for overbreadth, no appreciation of sarcasm or satire, no context.  They neither feel nor think; they just do.
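
A toy sketch of that blunt club, assuming nothing more than a banned-word list (the list and the sample twits are hypothetical):

```python
# The entire "algorithm": does the text contain a word some programmer listed?
BANNED = {"idiot", "kill"}

def flagged(twit: str) -> bool:
    words = {w.strip(".,!?\"'").lower() for w in twit.split()}
    return not BANNED.isdisjoint(words)

print(flagged("You pathetic idiot."))                      # True: perhaps harassment
print(flagged("Calling someone an idiot isn't a crime.")ニ)  # True: commentary, clubbed anyway
print(flagged("This bill would kill net neutrality."))     # True: metaphor, clubbed anyway
print(flagged("I know where you live."))                   # False: menacing, but no listed word
```

It flags commentary and metaphor while waving through the actual veiled threat, which is precisely the problem.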

Should we eliminate all curses from the internet?  Perhaps all phrases that, in the hearts of mindful guardians of delicate sensibilities, might make someone feel bad?  Hell, yes, according to Valenti.  Silence it all.  Only ideas that promote the values that Valenti deems worthy should be aired.

Lest you think that Valenti, if she could view her demands through rational eyes, would be persuaded that she’s gone off the rails, unhinged, batshit crazy, or that she and those who join her call to silence expression she characterizes as harassment would change their minds and appreciate the damage she proposes, you would be quite wrong:

But the responsibility of dealing with online threats shouldn’t fall on the shoulders of the people who are being harassed. And it shouldn’t need to rise to being a question of constitutional law. If Twitter, Facebook or Google wanted to stop their users from receiving online harassment, they could do it tomorrow.

The logical disconnect of this slippery contention is stunning.  There can be no person “being harassed” unless and until they are being harassed, which itself means someone has written something that hurts their feelings and they have chosen to read it and have their feelings hurt.  Yet Valenti argues they should bear no responsibility for dealing with it; instead, the platforms must censor it before it happens, which would, by definition, mean no one was harassed.

This argument falls squarely within the “victim blaming” trope: that no one who dons the mantle of victim bears any responsibility for taking actions to not be a victim. In some instances, this is true. In some, it’s nonsense. But when it reaches the point of being a trope, untethered from any logical rationale, it appeals only to true believers and the ignorant, those incapable of distinguishing between the instances where it makes sense and those where it makes none at all.

Twitter has created block lists, collections of accounts that may express ideas some identitarian group finds repugnant to its beliefs, so that its members never have to see any idea that displeases them. If that’s how they choose to exist, so be it.

But when it reaches the next level, the banning upon demand of those who express ideas that displease an identitarian group, so that no one, not even those who want to see their thoughts, is allowed to do so, it reaches the point of censorship of ideas that is intolerable in a free society.

Yet even the banning upon demand, such as what happened to Loren Feldman, won’t satisfy them, and they demand that technology companies pre-emptively obliterate ideas lest they ever appear on the internet.  If this seems too outrageous, too ridiculous to ever gain traction, consider this: even someone like Sarah Jeong, who questions the efficacy of algorithms to protect the delicate flowers from harsh thoughts, is fundamentally conflicted about silencing free speech.

The response of social media companies to the problem of harassment has been lackluster, and they are certainly capable of doing better, but doing better still doesn’t mean they can eliminate harassment tomorrow. It is tragic that they have prioritized intellectual property enforcement over user safety, but even its IP enforcement is considered unsatisfactory on many sides — whether for the content industry, for fair use advocates, or for users.

The problem is that the mechanisms proposed by censorship advocates like Valenti are, as of now at least, too blunt a weapon and too prone to error to be effective in the war against online harassment. If that could be fixed, then censor away and silence all those “harassers” like Feldman. And me. And you, perhaps. Actually, we’ll never know, because our thoughts and ideas may never appear for anyone to decide whether they are of sufficient value to be seen on the internet.

*Update:  It would appear that Feldman was disappeared from twitter because he was dinged by Anil Dash, as more fully explained here.  This just gets uglier.

7 thoughts on “You Can’t Handle The Truth (Update)”

  1. Keith Lee

    Read this this morning and it’s as true as it ever was.

    RE: Algorithmic analysis that leads to false scrubbing/blocking has a specific name: the Scunthorpe Problem. I wrote about it a couple years ago. At some point computers will likely have the ability to deduce nuance in speech and text, but that time is probably a ways off. Until then, relying on algorithmic filtering is a crapshoot at best.
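
    For anyone who hasn’t watched it fail, a minimal sketch of the naive substring matching behind the Scunthorpe Problem (the blacklist is illustrative only):

    ```python
    # Naive substring matching: flags any text containing a blacklisted string,
    # including innocent words that merely happen to contain it.
    BLACKLIST = ["ass", "cunt"]

    def naive_filter(text: str) -> bool:
        lowered = text.lower()
        return any(bad in lowered for bad in BLACKLIST)

    for phrase in ("Scunthorpe United", "a classic assassin film", "the class assignment"):
        print(phrase, "->", "blocked" if naive_filter(phrase) else "ok")

    # All three are blocked: "Scunthorpe" contains "cunt", while "classic",
    # "assassin", "class", and "assignment" all contain "ass".
    ```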

    Instead of wishing for some farcical land of rainbows and ponies where no one is offended, people need to just put on their big boy/girl pants. But that’s probably my privilege talking.

    1. Ben

      We’ve come a long way since the mid-90s, when content filtering was basically just running a comment against a dictionary. Some websites still use simple word matching for simplicity, but a harassment filtering system would likely be much more complex. Computers still don’t “understand” context, but they can do a pretty good job of approximating it. For example, the predictive keyboard app on your cellphone can, given sufficient training, do a pretty good job of predicting future words by looking at how word usage tends to cluster.
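
      A toy sketch of that clustering idea, using a made-up corpus; a real predictive keyboard accumulates far richer statistics, but the principle is the same:

      ```python
      # A bigram model: count which words follow which, then suggest the most
      # frequent follower, roughly what a predictive keyboard does with your
      # typing history. The corpus here is obviously made up.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat . the cat ate the fish .".split()

      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def predict(word):
          """Suggest the word most often seen after `word`."""
          return following[word].most_common(1)[0][0]

      print(predict("the"))  # 'cat': seen twice after 'the', vs. once each for 'mat' and 'fish'
      ```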

      If Twitter were building an automatic harassment detection tool, what they’d probably do is use some form of machine learning. They would take a few thousand tweets and hand-sort them as either harassment or not harassment, then feed these into a machine learning algorithm, which would analyze various properties of the tweets and try to build a mathematical model, called a classifier function, that puts all the harassing tweets on one side and all the non-harassing tweets on the other. Machine learning approaches like this are widely used commercially to determine things like customer buying habits.
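
      A minimal sketch of that workflow, assuming scikit-learn; the four “tweets” and their labels are hypothetical stand-ins for the thousands of hand-sorted examples a real system would need:

      ```python
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      tweets = [
          "you are garbage and everyone hates you",    # labeled harassment
          "nobody wants you here, log off forever",    # labeled harassment
          "great post, thanks for sharing",            # labeled fine
          "strongly disagree with your take on this",  # labeled fine
      ]
      labels = [1, 1, 0, 0]  # 1 = harassment, 0 = not; a human decided each one

      # TF-IDF turns each tweet into a weighted bag of words; logistic regression
      # then learns a boundary between the classes: the "classifier function".
      model = make_pipeline(TfidfVectorizer(), LogisticRegression())
      model.fit(tweets, labels)

      print(model.predict(["you people are garbage"]))     # most likely [1]
      print(model.predict(["thanks, what a great post"]))  # most likely [0]
      ```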

      This isn’t to say we should build some kind of auto-harassment filter, since it would likely have a very high false-positive rate or be so narrowly tailored that it never caught anything, but we could certainly do better than simple word-list filtering.

      1. SHG Post author

        Define harassment. Now, define harassment in such a way that it can be determined as an objective value. Now, define harassment in such a way that it can be determined objectively in advance of its context. The problem won’t be the ability of technology to create a classifier function, but to determine what to classify.

        Remember the few thousand twits that would form the basis for the algorithm? Who would decide which, if any, are harassment, and upon what basis? The ability to create a classification algorithm first requires the ability to create an objective definition so that we can figure out where a line can be drawn. No one has as yet been capable of doing so.

  2. Fubar

    Translated from a recently discovered and highly disputed letter from Claude Chappe to a customer of his Paris-Lille tachygraph service who had complained that his messages were deleted for no good reason:

    Some complain our deletions are quirky,
    And our process is cloudy and murky.
    But, to quash the aberrant,
    Our means are inerrant:
    We employ a Mechanical Turkey!

  3. Timothy Knox

    Twitter, like SJ, requires the reader to opt in. The only people who will see my tweets are those who have chosen to follow me. So if you don’t like what I tweet, if it offends you, don’t follow me anymore. How difficult is that?

    1. SHG Post author

      Which belies the true motive; it’s not enough that they don’t see or hear whatever it is they prefer not to, but they don’t want anyone else to see or hear it either. That’s where they go from “victims” to censors.
