Facebook’s task is unenviable. Two billion people, all yammering on about literally everything in the world. And hidden in that unending torrent are an unknown number of abhorrent, hateful utterances that would be better off unuttered.
But the method Facebook has applied to this problem, a tangled system of ethical arithmetic revealed in a report from ProPublica, seems unsuited to the task — even absurd.
I wrote back in 2013 that Facebook’s “categorial imperative,” by which the company assembles personas from political and social breadcrumbs in haphazard jigsaw style, fundamentally limits its understanding of users. As the social network has become more deeply embedded into our lives, this limitation has become more acute and more consequential.
This week’s consequence is a set of rules that together form a secret philosophical lens through which Facebook’s global team of content reviewers is instructed to view content. The rules are not simple (they run, reportedly, to about 15,000 words) because the topic is not simple. But just because something is complex doesn’t mean it can’t be simplistic.
Sure, it’s a noble idea, to create a universal guide to civilized human interaction. It’s just impractical. Not least because Facebook’s goals of accuracy and efficiency (or indeed automation) are at odds with each other.
The trouble starts immediately, with the attempt to build, from the ground up, a set of rules that determines which bucket speech goes in — “censor” or “allow.” Starting with what would appear to be strong pillars, like “promote free speech and discussion on every topic,” is destined for failure, because before long those pillars are chipped away at and built over by countless exceptions.
So it is with the “protected categories” set out in Facebook’s training. Race, religion, disability — it’s a great list of things that are frequently targets of hate speech or otherwise uncivil communication.
But things immediately start to go off the rails when they attempt to systematize exactly how to protect them — an equation where you put the information in one end and out the other comes an action, like any other data-driven application. The moral math they use is intended to make things perfectly clear, but instantly produces situations that are, on their face, incorrect.
For instance, as the slides show, the equations produce the guideline that “white men” are a protected category but “black children” aren’t — a distinction as clear as it is clearly wrong. Is it a national controversy that black children are killing innocent white men and getting away with it?
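To make the reported “moral math” concrete, here is a minimal sketch in Python, assuming the rule the leaked slides describe: a group is protected only if every attribute defining it is itself a protected category. The category names are paraphrased from the report; the code is mine, not Facebook’s.

```python
# Toy reconstruction of the reported moderation rule, not Facebook's code.
# Per the leaked training materials, a group is protected only if EVERY
# attribute that defines it is itself a protected category; mixing in a
# non-protected attribute (age, occupation, social class...) strips
# the protection entirely.

PROTECTED = {"sex", "race", "religion", "national_origin",
             "ethnicity", "sexual_orientation", "disability"}

def is_protected(attributes):
    """True only when all of the group's defining attributes are
    protected categories (a simple subset test)."""
    return set(attributes) <= PROTECTED

# "White men" = race + sex; both are protected, so the group is protected.
print(is_protected({"race", "sex"}))  # True

# "Black children" = race + age; age is not protected, so no protection.
print(is_protected({"race", "age"}))  # False
```

That single subset test is the whole “equation”: one non-protected attribute, like age, anywhere in the description, and the entire group falls out of protection.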
A system created with the sole purpose of detecting and preventing hate speech has accomplished the exact opposite: excluding a marginalized group from protection while definitively protecting a group that not only enjoys fundamental protections and privileges, but is arguably the group most responsible for the behavior being proscribed!
In practice, this looks like the system allowing a person in a position of power, like a white United States Representative, to call for the slaying of people of a particular religion, while a black woman who explains her view of systemic racism by saying that one must assume all white people are racist has her account suspended. (That happened, and we talked with Leslie Mac about it at TechCrunch’s recent Justice event.)
The context required to see that this is wrong is that there are inequalities in power that produce complex and shifting social dynamics, and it is when those dynamics are violated that we consider harm to have been done. The simple logic governing Facebook’s protected categories is unaware of these national and global conversations and their subtleties, and indeed is fundamentally incapable of accommodating them.
Instead, we have amazingly complex systems of exceptions. For example, migrants, despite the strong racial and religious connotations the word carries, are only a “quasi-protected category.” You can call them lazy and filthy, because those words are not “dehumanizing,” and you can accuse them of certain crimes but not others. You can claim the superiority of your country, but not the inferiority of theirs.
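A toy sketch of how that exception-stacking looks in code, with every tier and word list below invented to mirror the reported examples rather than taken from the actual rules:

```python
# Hypothetical illustration of exception piled on exception; these
# lists are made up to mirror the reported examples, not leaked rules.

QUASI_PROTECTED = {"migrants"}
DEHUMANIZING = {"subhuman", "vermin"}    # hypothetical banned terms
NOT_DEHUMANIZING = {"lazy", "filthy"}    # reportedly tolerated insults
UNACCUSABLE_CRIMES = {"murder"}          # hypothetical: can't allege these

def allowed(group, insult=None, crime=None):
    if group not in QUASI_PROTECTED:
        return True                      # the full rules would apply here
    if insult in DEHUMANIZING:
        return False
    if crime in UNACCUSABLE_CRIMES:
        return False
    return True                          # everything else slips through

print(allowed("migrants", insult="filthy"))  # True: not "dehumanizing"
print(allowed("migrants", crime="murder"))   # False
```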
No one is saying Facebook thinks white men are more important than black kids. That’s not what the rules are about. But it is an inescapable consequence of the way these rules are structured that white men are given protections that black children aren’t. The system is internally consistent, but does not reflect reality.
Of course, Asian transgender persons would be given protections that Spanish plumbers aren’t, too — sometimes the way the system orders things seems innocuous, but clearly it isn’t always. As a system that is meant to accomplish something fundamentally humanitarian, it’s deeply flawed because it is fundamentally inhuman.
What’s the alternative?
I don’t envy Facebook here. This is a hell of a hard problem, and I don’t want to make it sound like I don’t appreciate Facebook’s efforts in this direction. Nor am I going to pretend they’re sufficient when they clearly aren’t.
There are three basic problems that Facebook’s moderation system attempts to solve:
- Volume. Millions upon millions of comments and photos are posted every day, and an unknown proportion of them must be removed.
- Locality. The rules governing what posts will be removed must include context from the region and culture in which they are to be applied.
- Awareness. People need to understand what the rules are, why they are that way, and who made them.
The current system is focused on volume, with lip service to locality and awareness. That is why it fails: it doesn’t reflect the social dynamics in the context of which people already communicate, and the rules themselves are obscure — secret, even.
People are socially intelligent: They adjust their speech, personalities, and appearance to the situation or population they’re with. We know not to crack jokes at (most) funerals, to be polite with the S.O.’s parents, and to relax our moral standards around friends we trust. We’ll adjust likewise if Facebook becomes just another space where certain behaviors are expected and others prohibited.
But in order for that to happen, the space and its rules need to be defined. Unfortunately for Facebook, doing that at a global scale is a non-starter. While a few carefully worded rules might be a starting point for the U.S., China, Russia and Morocco alike, there are simply too many differences between them to share a single rulebook.
That means each of those places needs its own rulebook. Who has the time and capacity for that? Facebook, of course! Facebook is the most popular forum for public and semi-public discourse in the world. That is a position of great power, and incurs the great responsibility of administrating that forum in an ethical and reasonable way.
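What might that look like structurally? Perhaps a shared baseline with locally authored rulebooks layered on top. A toy sketch, with every locale key and rule string invented for illustration:

```python
# Toy shape of a "rulebook per region": a shared global baseline plus
# locally authored rules, each written by people who know the context.
# Locale keys and rule strings are invented for illustration.

BASELINE = [
    "no credible threats of violence",
    "no sexual exploitation of children",
]

LOCAL_RULEBOOKS = {
    "us":      BASELINE + ["rules drafted with U.S. reviewers and experts"],
    "morocco": BASELINE + ["rules drafted with Moroccan reviewers and experts"],
}

def rules_for(locale):
    # Fall back to the shared baseline where no local book exists yet,
    # instead of pretending one global rulebook fits everyone.
    return LOCAL_RULEBOOKS.get(locale, BASELINE)
```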
Right now I believe Facebook is avoiding the inevitable step of creating a much more comprehensive and locally informed set of rules, for both pragmatic and idealistic reasons. Pragmatic because doing so will be complicated and expensive. Idealistic because the idea is to build a global community, and the more they try, the more they find that’s not how things work. The best they can hope for is to build a global community of communities, each policing itself with a set of rules as flexible as the people they are meant to rein in.
The technological aspects of that are up to Facebook, but one thing it must not shirk is the human aspect. Having 7,500 moderators is better than 5,000, but is one for every quarter-million users enough? I don’t think it will be once a system is developed that meets the standards people deserve. That will necessitate numerous, permanent and highly skilled staff all over the world, not bulk eyeballs offering a bare-bones ground-truthing service.
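For scale, the arithmetic behind that ratio, using the two-billion-user figure from the top of this piece:

$$\frac{2{,}000{,}000{,}000\ \text{users}}{7{,}500\ \text{moderators}} \approx 267{,}000\ \text{users per moderator}$$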
When will Facebook hire social workers, activists, psychiatrists, grief counselors, local officials, religious leaders, and others with long histories of navigating ethical problems and communication barriers? If the goal is to engineer civility, it’s unavoidable that the company will need the people who engineer it in real life.
If Facebook really is serious about connecting the world, or whatever its new slogan is, this has to be a priority. The threat of hate speech, live-streamed murders, abuse and everything else is part and parcel of the grand vision of a universal communication platform.
The privilege of making the platform a safe and well-defined one for everyone is a task Facebook should be tackling with pride and passion in open air forums, not treating like a dark secret to be optimized by engineers drawing Venn diagrams behind closed doors.