Google’s gone woke, at least a little bit.
In an email to developers on Thursday morning, seen by Business Insider, Google said it would no longer use “gendered labels” for its image tags. Instead, it will tag any images of people with “non-gendered” labels such as “person.”
Google said it had made the change because it was not possible to infer someone’s gender solely from their appearance. It also cited its own ethical rules on AI, stating that gendering photos could exacerbate unfair bias.
To say that it’s “not possible” seems an absurd exaggeration. In most cases, it’s not only possible but obvious. That there may be some cases too close to call doesn’t compel the claim of impossibility; that’s entirely Google’s choice, and one better explained by “exacerbate unfair bias,” a phrase that sounds nice to anyone with a fine-tuned ear for jargon but does little to explain what sort of unfair bias is exacerbated by stating the obvious.
On the other hand, this shift finds easier justification in the woke agenda.
Frederike Kaltheuner, a tech policy fellow at Mozilla with expertise on AI bias, told Business Insider that the update was “very positive.”
She said in an email: “Anytime you automatically classify people, whether that’s their gender, or their sexual orientation, you need to decide on which categories you use in the first place — and this comes with lots of assumptions.
“Classifying people as male or female assumes that gender is binary. Anyone who doesn’t fit it will automatically be misclassified and misgendered. So this is about more than just bias — a person’s gender cannot be inferred by appearance. Any AI system that tried to do that will inevitably misgender people.”
Certainly Google doesn’t want to misgender anyone. After all, it could get canceled, and that’s hardly a sound business plan. Besides, nobody at Google wants to hurt anyone’s feelings by letting facts intrude on their ideology.
But what consequences follow this enlightened change?
Google invited affected developers to comment on its discussion forums. Only one developer had commented at the time of writing, complaining that the change was down to “political correctness.”
“I don’t think political correctness has room in APIs,” the person wrote. “If I can 99% of the times identify if someone is a man or woman, then so can the algorithm. You don’t want to do it? Companies will go to other services.”
Consider the devs working in targeted advertising, no longer able to distinguish who gets the ads for penis pumps and who gets the ads for breast pumps. Okay, you don’t care, because AI advertising is already a hated nightmare. Then what about images of your kids? If you post a fabulous beach pic of your son romping in the surf, how is a genderless AI to distinguish it from child porn of some young girl whose chest is exposed? Your son isn’t likely to wear a bikini top when he swims, but his bare breasts are no longer male breasts, just people breasts, no different from any other people breasts.
Before, the answer would have been social convention, even if the norm wasn’t to your liking. Whether you want to “free the nipple” is one thing, but that’s not on the table at the moment, and not your choice in any event, as this is Google’s gig, so you don’t get a vote. With a genderless AI, it will be either both or neither, and given the neo-Victorian sensibility toward femininity promoted by such trusted prudential minds as revenge porn princess Mary Anne Franks, the consequences seem clear.
Google notes in its own AI principles that algorithms and datasets can reinforce bias: “We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.”
Who doesn’t trust Google, as our gateway Overlord to the interwebs, to distinguish the “unjust impacts” from, you know, facts?