Last week, Wall Street Journal reporters, relying on an internal whistleblower, released information showing that Facebook has created exemptions from its own content moderation processes. “Millions of VIP users,” the article claims, are “shielded from the platform’s normal enforcement process.”
According to the article, “the company’s automated systems [often] summarily delete or bury content suspected of rule violations without a human review.” A certain class of users, however, has been exempted from this system. These so-called “high profile” accounts belong to politicians, athletes, celebrities, and other public figures whose subjection to simplistic, algorithmic content moderation – which FB itself admits is wrong at least 10% of the time – would cause a public relations flurry. The documents reveal that, to avoid negative attention to Facebook and its content moderation policies,
“If Facebook’s systems conclude that one of those [high profile] accounts might have broken its rules, they don’t remove the content—at least not right away… They route the complaint into a separate system, staffed by better-trained, full-time employees, for additional layers of review.”
This revelation has created a predictable backlash, as if we hadn’t had enough techlash already. Cue the routinized outrage at big tech: Facebook is not just corroding our minds, it’s blatantly discriminatory!
The Facebook Oversight Board has agreed to take up the matter and issue a ruling. That ruling is practically a foregone conclusion, given the current political climate: Facebook was bad, XCheck exemptions should be eliminated, and everyone should be subjected to Facebook’s routine content moderation policies.
We beg to differ. Everyone seems to be missing the real point here: the problem is that there is too much content moderation on Facebook (and on other platforms), and discriminatory application and exemptions are inevitable. Content moderation is progressively expanding into more and more judgment calls and other areas where it cannot possibly do any good. XCheck is both a tacit recognition of this overreach and a welcome setting of limits on its scope.
The Oversight Board needs to examine carefully why Facebook initiated the program in the first place. Let’s begin with a fact that advocates of ever-expanding standards of content moderation like to avoid: content moderation suppresses the free exchange of ideas and opinions. It’s one thing to weed out calls for violence and instances of porn. It’s quite another to attempt to detect “misinformation” on controversial topics, or insults. The expansion of content moderation is an irritant to all of the transmitters of suppressed messages. It is often absurd and arbitrary, and it generally acts as a brake on people speaking their minds. As for the receivers, while suppression may make the people offended by a message happy, it is just as likely to make an equal number of potential recipients upset, either because they agree with the suppressed sentiment or because they welcome robust public discourse and think the action was mistaken or unnecessary.
When this interference happens to an ordinary schmuck, it is easy for Facebook to get away with it. It may be arbitrary, unfair, even an outright mistake, but who cares? The protests of an isolated Facebook user are an inaudible blip in the roar of social media. No one notices, and no damage is done to the platform.
But when this happens to high profile users, the costs of suppression are clear, and the action is highly visible. It is therefore perfectly rational for Facebook to exempt high profile users from the absurd and costly rituals of content moderation policies based on misguided premises. The closer Facebook gets to Chinese-style social engineering and public opinion management, the more outrage – and the more exemptions – it will generate. The cross-check program efficiently removes the toughest and most consequential cases from the routine process and applies more careful (and more costly) standards to them. It allows Facebook to defuse the reactions to its most visible acts of misaligned content moderation while continuing to rely on inaccurate, crude, and cheap automated systems for the rest of us.
Of course, it is utterly unfair to discriminate between users with large followings and those with low visibility and small circles of followers. We understand that. What this tells us, however, is not that high profile users should be subjected to the same overextended standards as everyone else. It tells us that content moderation policies are trying to do too much, and that we need to relax them. We need to drastically lower our expectations about the benefits and scope of suppressing social media messages, and we need to moderate our belief in the harms caused by freer expression.
If the Oversight Board is going to function as the enlightened check on FB content policies that it aspires to be, it needs to see the cross-check program not as an abuse, but as a warning signal that many of the premises underlying our approach to content moderation are wrong.