Last week, Wall Street Journal reporters, relying on an internal whistleblower, released information showing that Facebook has created some exemptions from its own content moderation processes. “Millions of VIP users,” the article claims, are “shielded from the platform’s normal enforcement process.”

According to the article, “the company’s automated systems [often] summarily delete or bury content suspected of rule violations without a human review.” A certain class of users, however, has been exempted from this system. These so-called “high profile” accounts belong to politicians, athletes, and celebrities whose subjugation to simplistic, algorithmic content moderation – which FB itself admits is wrong at least 10% of the time – would cause a public relations flap. To avoid negative attention to FB and its content moderation policies, the documents reveal, Facebook routes these accounts around its normal process:

“If Facebook’s systems conclude that one of those [high profile] accounts might have broken its rules, they don’t remove the content—at least not right away… They route the complaint into a separate system, staffed by better-trained, full-time employees, for additional layers of review.”

This revelation has created a predictable backlash, as if we hadn’t had enough techlash already. Cue the routinized outrage at big tech. Facebook is not just corroding our minds, it’s blatantly discriminatory!

The Facebook Oversight Board has agreed to take up the issue and deliver a ruling. That ruling is practically a foregone conclusion, given the current political climate: Facebook was bad, XCheck exemptions should be eliminated, and everyone should be subjected to Facebook’s routine content moderation policies.

We beg to differ. Everyone seems to be missing the real point here: the problem is that there is too much content moderation on Facebook (and on other platforms), and discriminatory application and exemptions are inevitable. Content mod is progressively expanding into more and more judgment calls and other areas where it cannot possibly do any good. XCheck is both a tacit recognition of its overreach and a welcome setting of limits on its scope.

The Oversight Board needs to examine carefully why Facebook initiated the program in the first place. Let’s begin with the fact, which advocates of ever-expanding standards of content moderation like to avoid, that content moderation suppresses the free exchange of ideas and opinions. It’s one thing to weed out calls for violence and instances of porn. It’s quite another to attempt to detect “misinformation” on controversial topics, or insults. The expansion of content mod is an irritant to all of the transmitters of suppressed messages. It is often absurd and arbitrary, and it generally acts as a brake on people speaking their minds. As for the receivers, while it may make the people offended by the transmission happy, it is just as likely to make an equal number of potential recipients upset, either because they agree with the suppressed sentiment or because they welcome robust public discourse and think the action was mistaken or unnecessary.

When this interference happens to an ordinary schmuck, it is easy for Facebook to get away with it. It may be arbitrary, unfair, even an outright mistake, but who cares? The protests of the isolated Facebook user are an inaudible blip in the roar of social media. No one notices, and no damage is done to the platform.

But when this happens to high profile users, the costs of suppression are clear, and the action is highly visible. Therefore, it is perfectly rational for Facebook to exempt high profile users from the absurd and costly ritual of content moderation policies based on misguided premises. The closer Facebook gets to Chinese-style social engineering and public opinion management, the more outrage and exemptions it will generate. The cross-check program efficiently removes the toughest and most consequential cases from the program and applies more careful (and more costly) standards to them. It allows Facebook to defuse the reactions to its most visible acts of misaligned content moderation, while continuing to rely on inaccurate, automated, or crudely cheap systems for the rest of us.

Of course, it is utterly unfair to discriminate between users with large followings and those with low visibility and small circles of followers. We understand that. What this tells us, however, is not that high profile users should be subjected to the same overextended standards as everyone else. What it tells us is that content moderation policies are trying to do too much, and we need to relax them. We need to drastically lower our expectations about the benefits and scope of suppressing social media messages, and we also need to moderate our belief in the harms caused by freer expression.

If the Oversight Board is going to function as the enlightened check on FB content policies that it aspires to be, it needs to see the cross-check program not as an abuse, but as a warning signal that many of the premises underlying our approach to content moderation are wrong.

3 thoughts on “A Modest Suggestion for Facebook’s Oversight Board”

  1. Whilst I agree with the premise that everyone should be subject to the same standards, I disagree with the suggestion that those standards should be significantly relaxed. For example, the spread of misinformation about the current pandemic and vaccines appears to have caused loss of life, and so social media platforms like Facebook are, in my view, correct to take steps to stem its dissemination.

    Couching this as a free speech issue is misguided, given that social media platforms are private rather than public spaces, with their own community standards. Let’s apply those standards consistently, but let’s not lower those standards, and thereby reduce efforts to minimise the considerable harm that can be done, simply because it is difficult to get them right.

    Surely it’s better to invest the time and effort to improve the standards? The alternative to Facebook doing this is that governments will step in and do so, given the reach of the platform and the potential for significant harm.

    1. Thanks for your input.
      First, let me clarify: by framing this as a free speech issue we do not mean to imply that platform content moderation policies are the equivalent of governmental censorship. We know that they are private actors, not state actors, and have the legal right to exercise editorial discretion. But it’s obvious that the platforms’ editorial policy can be more or less conducive to free, open and healthy discussion, and we think Facebook (and several other platforms) are being pushed too far in a suppressive direction. We are advocating for a more liberal policy. Their status as private actors enables them to do that; indeed, it would also make it perfectly legal for them to discriminate between high profile users and ordinary, low profile users, wouldn’t it? In fact, it is the people who object to discrimination between classes of users and want Facebook to be the arbiter of truth about controversial issues such as vaccines and masking who seem to be treating it like a state actor.
      On the subject of the harms of misinformation, I reject the implication that systematic suppression of expression – especially by an automated algorithm – is the best way to promote the truth. There are, to be sure, many people out there with blatantly false ideas about health issues (and a lot of other issues). The best way to counter those ideas is to expose them as false in open discussion. The notion that ideas deemed false by an authority must be suppressed seems like a very Chinese Communist Party approach to public discourse.
      Only a very small amount of what is commonly called misinformation is deliberately false and intentionally harmful – and that kind of misinfo is legally actionable (see, e.g., the lawsuits around the claims about manipulated voting machines). Most of it reflects confusion, misinterpretation, or genuine concerns that people have. It is better to confront it openly. And surely you must agree that a lot of the attempts to automate these decisions – which the scale of content makes unavoidable – produce many bad results.

      1. This conversation seems to assume that actors are indeed acting on their own and in good faith. What of a state actor or state sponsored actor who uses social media to deliberately spread false or incendiary information for malign purposes? Does the platform owner have an obligation or at least a role to play in dampening the impact of that activity?
