Today’s Washington Post has an “insider” story on how difficult it is for Facebook to police “hate speech” and offensive imagery presented to its two billion monthly users. Facebook, having run into trouble when its new “Live” livestreaming feature began to show suicides, assaults, and even murders, is hiring another 3,000 people to help keep its offerings safe. Facebook says it deletes 288,000 hate-speech posts a month. Note that these figures include its vast overseas participation and do not represent the number of domestic incidents alone.
So Facebook has a “censorship manual” listing some 15,000 forbidden words. But most of its monitors appear to be overseas contractors, who may not understand American idioms or sensitivities. Minority groups now contend that the Facebook program is biased against them, flagging, for example, in-group uses of racial slurs meant in a friendly, intimate way.
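A toy example makes the difficulty concrete. The sketch below is hypothetical (the `BLOCKLIST` entries and the `flag_post` function are invented for illustration, not Facebook’s actual system); it shows how a plain word-match filter flags friendly idiom just as readily as abuse:

```python
# Hypothetical sketch of a blocklist filter like the "censorship manual"
# described above. NOT Facebook's code; invented purely for illustration.

BLOCKLIST = {"savage"}  # stand-in entry; the real manual reportedly has ~15,000 words

def flag_post(text: str) -> bool:
    """Return True if any blocklisted word appears in the post."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

# The filter cannot tell hostile use from friendly slang:
assert flag_post("That guitar solo was savage!")  # friendly idiom, still flagged
assert not flag_post("Have a nice day")
```

A human reviewer unfamiliar with the idiom faces the same problem the code does: the word alone carries no verdict.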
There is apparently even a new organization, the Dangerous Speech Project, that has identified five “factors” for determining whether speech is “dangerous”:
- Speaker: Did the message come from an influential speaker?
- Audience: Was the audience susceptible to an inflammatory message, e.g. because they were already fearful or resentful?
- Message: Does the speech carry hallmarks of Dangerous Speech? The hallmarks are:
  - Dehumanization. Describing other people in ways that deny or diminish their humanity, for example by comparing them to disgusting or deadly animals, insects, bacteria, or demons. Crucially, this makes violence seem acceptable.
  - ‘Accusation in a mirror.’ Asserting that the audience faces serious and often mortal threats from the target group – in other words, reversing reality by suggesting that the victims of a genocide will instead commit it. The term ‘accusation in a mirror’ was found in a guide for making propaganda, discovered in Rwanda after the 1994 genocide. Accusation in a mirror makes violence seem necessary by convincing people that they face a mortal threat, which they can fend off only with violence. This is a very powerful rhetorical move since it is the collective analogue of the one ironclad defense to murder: self-defense. If people feel violence is necessary for defending themselves, their group, and especially their children, it seems not only justified but virtuous.
  - Assertion of attack on women/girls. Suggesting that women or girls of the audience’s group have been threatened, harassed, or defiled by members of a target group. In many cases, the purity of a group’s women is symbolic of the purity of the group itself, or of its identity or way of life.
  - Coded language. Including phrases and words that have a special meaning, shared by the speaker and audience. The speaker is therefore capable of communicating two messages, one understood by those with knowledge of the coded language and one understood by everyone else. This can make the speech more dangerous in a few ways. For example, the coded language could be deeply rooted in the audience members’ sense of identity or shared history and therefore evoke disdain for an opposing group. It can also make the speech harder to identify and counter for those who are not familiar with it.
  - Impurity/contamination. Giving the impression that one or more members of a target group might damage the purity or integrity or cleanliness of the audience group. Members of target groups have been compared to rotten apples that can spoil a whole barrel of good apples, weeds that threaten crops, or stains on a dress.
- Context: Is there a social or historical context that has lowered the barriers to violence or made it more acceptable? Examples of this are competition between groups for resources and previous episodes of violence between the relevant groups.
- Medium: How influential is the medium by which the message is delivered? For example, is it the only or primary source of news for the relevant audience?
Dangerous Speech Project, “What is Dangerous Speech?”
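To see why this framework resists mechanical application, one can try encoding the five factors as inputs to a scoring rule. The sketch below is purely hypothetical (the Project publishes no such formula; the `DangerAssessment` class is invented here): every field still demands a subjective human judgment, and no threshold on the tally follows from the framework itself.

```python
from dataclasses import dataclass, fields

# Hypothetical encoding of the Dangerous Speech Project's five factors.
# The Project publishes no scoring rule; this sketch only shows that each
# input is itself a judgment call a human reviewer must make.

@dataclass
class DangerAssessment:
    influential_speaker: bool      # Speaker
    susceptible_audience: bool     # Audience
    hallmark_message: bool         # Message (dehumanization, coded language, etc.)
    violence_primed_context: bool  # Context
    dominant_medium: bool          # Medium

    def factor_count(self) -> int:
        """Tally how many factors are present; the framework gives no cutoff."""
        return sum(getattr(self, f.name) for f in fields(self))

a = DangerAssessment(True, True, False, True, False)
assert a.factor_count() == 3  # three of five — but is that "dangerous"?
```

The tally is trivial; the hard part, deciding each boolean, is exactly where the ambiguity the article describes lives.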
Unfortunately, even this five-part factor test, with its subparts, is insufficient for the organization’s purposes, so it provides a further “analytical framework” to help an observer decide more confidently which speech is “dangerous” and which is not. The “Guidelines” run five pages, with footnotes. Ambiguous terms such as “coded language” are illustrated with examples from genocidal history.
When speech is deemed an existential threat, as when “dangerous” speech is linked to genocide, the very act of defining terms becomes crucial. Yet this nonprofit organization cannot define them in a simple way, and neither can a massive corporation like Facebook.
How then could a judge, legislator or other government official?
That is one reason why the First Amendment is a simple “thou shalt not” type of rule. But even that may not be enough for some. Senator Dianne Feinstein (D-CA) and UCLA law professor Eugene Volokh discussed this topic at a June 20, 2017, Senate Judiciary Committee hearing. Feinstein and her colleague Dick Durbin (D-IL) suggested that merely “menacing” or racist words were enough to justify cutting off speech; Volokh pointed out that such restrictions would likely be counterproductive and probably unconstitutional. Video of the hearing is here.