The past week saw a hurricane of activity around content moderation by platforms in the U.S. The Federal Communications Commission, the Justice Department, and a Supreme Court justice made it clear that they want to pull the plug on Section 230 of the Communications Act, the part of the law that immunizes platforms from liability for the expression of their users.

The social media platforms themselves heightened the tensions with several bold new steps in content moderation: they suppressed a New York Post story about Hunter Biden’s emails and introduced new measures to make QAnon, the Proud Boys, and some radical leftist groups invisible on social media.

The table below summarizes the law and policy activity underway.

| Item, date | Proposer | Description |
| --- | --- | --- |
| Biased Algorithm Deterrence Act, 2019 | Rep. Gohmert (R, Texas) | Removes Sec. 230 immunity from any platform using algorithms to filter content |
| Ending Support for Internet Censorship Act, 2019 | Sen. Hawley (R, Missouri) | Conditions Sec. 230 immunity on an FTC audit determining that content moderation policies are politically neutral |
| Social Media Addiction Reduction Technology Act, 2019 | Sen. Hawley (R, Missouri) | Micromanages social media sites in a way intended to prohibit them from using “addictive practices” |
| Eliminating Abusive Rampant Neglect of Interactive Technologies (EARN IT) Act, 2020 | Sens. Graham (R, South Carolina) & Blumenthal (D, Connecticut) | Conditions Sec. 230 immunity on the adoption of “best practices” recommended by a new National Commission on Online Child Sexual Exploitation Prevention that the law would establish |
| Executive Order: Preventing Online Censorship, May 2020 | President Trump | Claims that “Online platforms are engaging in selective censorship that is harming our national discourse” |
| Petition for an FCC Rulemaking, July 2020 | U.S. NTIA, Department of Commerce | Requests that the Federal Communications Commission (FCC) initiate a rulemaking to “clarify” the provisions of Section 230 |
| Justice Department Legislative Package, September 2020 | U.S. Justice Department | Legislative proposals intended to address “unclear and inconsistent moderation practices” and “the proliferation of illicit and harmful content online that leaves victims without any civil recourse” |
| Concurring Opinion in Malwarebytes Inc. v. Enigma Software Group USA LLC, October 2020 | Justice Clarence Thomas, SCOTUS | Urges the Court to entertain a broader challenge to Section 230 immunity |

Regardless of what happens to Section 230, we seem to be witnessing the end of the user-driven platform upon which that law was predicated. The platforms are being pushed into, or actively embracing, a role as “socially responsible” arbiters of speech, association and norms. They are no longer just intermediaries connecting us. Almost all sides in this debate, with the exception of a few legal experts and true liberals, seem to want the managers of platforms to actively police speech and association in ways designed to tilt political discourse one way or another. They just disagree about which way to tilt it. Neither side seems willing to embrace the full potential of the platform economy to facilitate diverse, robust expression.

The Platform Model

The social media platform was a business and media model in which the platform acted primarily as an intermediary for the exchange of information and ideas generated by its users. It not only unleashed exchanges of information and specialized channels, it allowed for the bottom-up development of social norms regarding expression, by crowd-sourced means such as user reporting of posts and user-selected blocking decisions. That model allowed for the free, undirected association of people into groups at unprecedented scale, and the ability of like-minded or curious people to find ideas and groups matching their preferences. These groups were segmented by region, language and culture, and by divergent values and interests, but not so much by national borders or legal jurisdictions. The platforms thus brought into existence a transnational public sphere.

The pinnacle of recognition came in 2006, when Time Magazine announced that “You” – meaning us – was its “Person of the Year.” The notion of user-generated content, the magazine’s editors claimed, meant that we were “seizing the reins of the global media,” “founding and framing the new digital democracy,” and “working for nothing and beating the pros at their own game.” The model was celebrated not just by trendy journalists but also by academics and the Organization for Economic Cooperation and Development (OECD), in their October 2007 report “Participative Web and User Generated Content.”

Section 230 as the facilitator

While we would not agree with Jeff Kosseff that the 26 words in Section 230 “created the Internet,” it is true that they created the legal underpinning for the social media platform.

The law allowed users to generate and share content without making intermediaries liable for lawsuits around defamation, slander, libel, false claims or any other problems that might arise from the postings of their users. Immunizing platforms from liability made them common carriers of multimedia content. Just as the phone company is not responsible for whatever criminal statements or plots are made using the telephone system (and could not function if it was), so the platforms could allow millions of individual users to express themselves online without fearing whether they would be sued for what their users did. But Section 230 also gave the platforms the right to remove, shape or suppress content without losing that immunity. In other words, platforms had both the content immunities of a telephone system and the editorial discretion of a newspaper.

Do the platforms use this discretion in a biased way? Perhaps, but legally, their biases are exercises of editorial discretion, the same editorial discretion protected by the First Amendment and upheld by the Supreme Court in Miami Herald v. Tornillo (a case in which a newspaper refused to print a reply to one of its editorials). Platforms, like newspapers, have a right to be selective, even biased, under the First Amendment.

In the early days, however, the platforms’ policy towards the suppression or selection of content was a limited one. Their main function was to match providers and seekers of content. Their exercises of editorial control merely eliminated material that fell so clearly outside the limits of what their community of users would accept that it would damage their reputation or diminish the utility of their platform. No one expected them to become arbiters of truth or private nanny-states.

The end of the platform

As an ever-larger portion of public discourse takes place via platforms, they have become victims of their own success. Like broadcast television, they are now gatekeepers to vast audiences. Their increasing economic, social and political value to a diverse ecosystem of stakeholders, many with sharply conflicting values and goals, has made both their decision to permit and their decisions to restrict content more consequential and controversial.

The Achilles heel of the platform-as-intermediary-for-user-generated-content turned out to be politics. Electoral politics – as well as ideological battles between warring political movements and geopolitical rivals – are on the verge of breaking the model down. The 2020 election is a perfect storm, with American society polarized around Trump, BLM protests sometimes erupting into riots, and the coronavirus pandemic. That combination of factors has heightened the stakes of the kinds of public communication in which platforms are increasingly central.

We are now immersed in a wholesale assault on the platforms’ facilitative, intermediary role. The cultural left and the cultural right, both anti-liberal by nature, have been unwitting allies in this act of destruction. What is needed now is to situate this controversy in a framework grounded in a consistent understanding of free expression and its role in liberal democracy.

The role of the cultural left

The cultural left was an early and eager proponent of using the power of the platform to suppress ideas and groups they considered to be politically incorrect. Originally confined to uniquely outrageous and destructive speech by the likes of Alex Jones, their demands for suppression progressively expanded. Anti-vaccination groups, conspiracy theorists, challenges to critical race theory, negative opinions about trans people – virtually any confrontation with conflicting values prompted demands for eliminating messages and deleting accounts.

Advocacy of aggressive content moderation then evolved into a more systematic targeting of political movements. It is a fact that rightwing movements, like any group marginalized by mainstream media, had been making active use of the new social media. But the cultural left came to see platforms as somehow responsible for the political life of these groups – as breeding grounds for reactionary, undesirable political movements. Facebook, they believed, was responsible for Trump’s election. YouTube’s algorithm and WhatsApp were, according to a New York Times feature, directly responsible for Bolsonaro’s election in Brazil.

Under increasing pressure from this kind of analysis, the platforms started to target not just specific acts of harmful or destructive speech, but the crucial linkage between public speech acts and collective aggregation and mobilization of social movements. In this effort, they were actively cheered on by progressive groups who were willing to abandon their commitment to free expression as long as it was their ideological enemies who were being targeted.

Even as the West complains about China’s digital authoritarianism and its social credit system, the advocates of aggressive content moderation are consciously pushing the platforms into related forms of social engineering. In explaining its suppression of the Proud Boys, Facebook said in 2018 “Our team continues to study trends in organized hate and hate speech and works with partners to better understand hate organizations as they evolve. We ban these organizations and individuals from our platforms and also remove all praise and support when we become aware of it.” So the target is an organization, a movement, and any expressions of support for it, not merely harmful or illegal acts.

Confirming this, Facebook’s recent announcement about its suppression of QAnon shows that it no longer suppresses them because they discuss or encourage violence: “Starting today, we will remove any Facebook Pages, Groups and Instagram accounts representing QAnon, even if they contain no violent content.” They are targeted because they are a disapproved social movement, full stop. Facebook says it is “work[ing] with external experts to address QAnon supporters using the issue of child safety to recruit and organize.” While we agree that QAnon’s theories are insane, it is likely that exposure of them in the public sphere will make this clear to most people. We are more disturbed by the way the free association and enablement of collective action facilitated by social media platforms will be systematically denied to segments of the population not approved by the mainstream.

These interventions have major effects. YouTube’s June 2019 revision of its hate speech policy, for example, led to a fivefold increase in the number of channels and videos removed. It swept off 107,000 videos, 2,600 channels and 43 million comments in the first quarter of 2020.

Later in this article we discuss the implications of this for liberal democracy.

The cultural right and the suppression of the radical left

While the right-wingers claim the platforms are biased, there is abundant evidence that they do, in fact, suppress radical left wing groups as well as rightwing groups. According to The Intercept, “Facebook [has] shut down the pages of numerous antifascist, anti-capitalist news, organizing, and information sites.” It reports that It’s Going Down, a media platform that publishes first-hand, sympathetic reports on angry BLM and anti-racist protests, was taken down, as was Crimethinc, which it called “a bastion of left-wing, anarchist publishing and thought since the 1990s.” Other Facebook groups organizing around and praising violent antiracist uprisings were also shut down, including the PNW Youth Liberation Front, a network of youth collectives in the Pacific Northwest committed to direct action. So here is another point of view that the platforms think they need to silence.

Reporting on this suppression of leftist speech, The Intercept complains about “indefensible false equivalences between organized, racist fascists, and the antifascists who vigorously oppose them.” This is a clear indication of the paralyzing internal contradictions within the cultural left regarding free speech. It is patently absurd to cry “false equivalence” when QAnon and the Proud Boys on one side, and groups associated with anarchist uprisings and urban riots on the other, are suppressed alike by the big platforms. Both are radical, ideology-driven movements (indeed, one could make a strong case that they feed off of each other). Both, however, have a right to free expression even if one disapproves of their tactics or ideology.

For people who want to live in a liberal democracy, the real issue is not whether one prefers fascists or left-anarchists (there are other, better choices), nor should we be debating which one of these groups platforms should suppress. The real policy choice is whether social media platforms are allowed to facilitate any and every form of legal speech and association, or whether they will be programmed to pre-emptively suppress social movements that some segment of the population finds to be threatening. If the second choice is made, both radical leftists and rightists are bound to be targeted. Expecting major commercial platforms to suppress militant rightwing challengers without drawing an equivalence with radical leftists betrays a total detachment from political and social reality.

While the mainstream platforms are suppressing both sides of the political spectrum, the Trumpian cultural right is also targeting the platform model. A new generation of Republican nationalist, quasi-fascist Senators such as Josh Hawley (R-Mo.) is leading the charge. Hawley’s Social Media Addiction Reduction Technology Act was based on the accusation that platforms’ business model is based on addictive engagement – mirroring arguments made by Shoshana Zuboff, Zeynep Tufekci, and other sages of the cultural left. Hawley’s Ending Support for Internet Censorship Act proposed to remove the immunity the large platforms receive under Section 230 unless they submit to an audit by the Federal Trade Commission that proves that their algorithms and content-removal practices are politically neutral. This is, in effect, a reinvention of the broadcast Fairness Doctrine for digital platforms. They are issuing other threats as well. In recent letters to Facebook and Twitter CEOs Mark Zuckerberg and Jack Dorsey, Hawley claimed that their editorial decisions about the New York Post’s Hunter Biden story violated campaign finance laws.

It should be clear that Section 230 immunity has little to do with the Trumpian right’s objections to the platforms. Their real concern is the way the platforms’ content moderation decisions affect them. Biased decisions, however, are not caused by the platforms’ legal immunity. Eliminating Section 230 and treating the platforms as publishers does not in any way alter their right, or their propensity, to exercise editorial judgment. Indeed, the more they are defined as publishers, the stronger their First Amendment right to make whatever content selections and deletions they like.

What’s really happening here is that the withdrawal of Section 230 immunity is being used by the Republican right as a threat, a stick with which to beat the platforms into submission. Section 230 is of immense economic significance to the platforms, due to the plethora of lawsuits to which they would be subjected and the massive amount of pre-publication monitoring of their users they would have to do if it went away. The threat of withdrawing it is being used as a means of regulating their behavior.

The cultural left has thrown up some resistance to the right’s attack on Section 230, but often it is a hollow defense of their own biases rather than a principled defense of free expression and the immunity of intermediaries. Referring to the right’s attack on Section 230, for example, Access Now’s Legislative Manager said: “This is yet another attempt to prevent social media platforms from removing harmful hate speech and disinformation off their sites…” So Access Now is defending Section 230 for the same reasons that the Republicans are attacking it. But if the platforms stopped suppressing the speech of the radical right or Republican campaigners, would Access Now’s support for Section 230 remain? Or would it be calling for legislative or policy interventions, too?

Platforms in a free society

People who justify deplatforming fascists betray a fear that the majority of the population really is fascist, and that unless organizing and expression of these ideas is actively suppressed, the fascists will win. But if fascism wins power and votes in a free society, and only top-down suppression prevents that, then one is stuck in an authoritarian cycle either way. You’re just struggling over which gangs win. Similar ideas were expressed by American conservatives during the anti-communist trends of the 1940s, ‘50s and ‘60s. Democracies are supposed to have freedom of expression, conservatives would admit, but free expression rights should not apply to movements that would abolish free expression and democracy if they attain power. Conservatives, too, tacitly believed that their enemy’s ideas were more appealing than theirs: communist movements and ideologies, left to compete in the public sphere, would win unless we suppressed them.

Call us naive and idealistic, but democracy and freedom only win if the public can discuss and debate conflicting ideas. The freedom to organize publicly also brings with it transparency, an ability to see what these different groups are advocating, and to whom.

One can easily see how fundamentally undemocratic it is to deplatform ideologies and social movements merely because we oppose their belief system. Quite apart from the individual right of the ideologue to express their views, it denies the rest of the population the right to be exposed to certain political ideologies. The suppressing agent does not trust the public to make a wise or informed choice. What this means in practice is that whoever happens to hold enough political and legal clout to get its opponents suppressed will use that power to block speakers deemed wrong or dangerous. That makes the social media platform – and indeed all the public media – a prime target for political competition for control of public discourse. And that is what is happening now.

Advocates of deplatforming always take a very short-term view of this power. They assume that because their norms are more prevalent in today’s culture, only their own disfavored bad guys will be suppressed. Having discarded the principle that it is possible and correct to disagree with every word another says but still defend to the death their right to say it, they expose themselves to suppression when the political winds change direction.