The European Commission recently concluded its “EU Internet Forum,” which produced a code of conduct for regulating speech on the internet. The code was negotiated between law enforcement agencies and the major global social media providers. It targets terrorist recruiting and so-called “hate speech,” and encourages the platform providers to suspend accounts, messages or postings that violate their terms of service, which are now being re-engineered to ban forms of expression the EC considers undesirable. The European Digital Rights Initiative (EDRI) and Access Now are protesting the Code because, in this arrangement, the contractual terms of service of the private platform providers have more bearing on what can be published than the law does. EDRI wrote:
…the “code of conduct” downgrades the law to a second-class status, behind the “leading role” of private companies that are being asked to arbitrarily implement their terms of service. This process, established outside an accountable democratic framework, exploits unclear liability rules for online companies. It also creates serious risks for freedom of expression, as legal — but controversial — content may well be deleted as a result of this voluntary and unaccountable take-down mechanism.
As advocates of freedom of expression, we have mixed feelings about the EDRI/Access protest. Their main complaint seems to be that the law should come first. Absent a clear prohibition on the suppression of free speech, such as the First Amendment to the U.S. Constitution, however, we believe that public law in Europe is as likely to suppress free speech as to protect it. We also believe that, as private actors, the platform providers do have the right to limit expression in ways that protect the value of their service and respond to their customers' demands for a certain kind of environment.
But when platform providers do this, they should be accountable to the public through market discipline. That means that people who do not like their policies can migrate to other platforms. Users should have alternatives; content moderation should be a bottom-up process subject to consumer choice. Services that deviate from socially expected norms should be allowed to exist, even if they remain on the margins of the market.
The exercise of private content moderation becomes much more problematic if the service provider has a monopoly, or if multiple providers are part of a government-orchestrated cartel that adopts a uniform policy reflecting the government's direction rather than the people's preferences. The use of intermediaries to regulate expression is especially objectionable when the government's preferred policy runs counter to fundamental legal rights that governments are, by law, supposed to uphold. Therefore, the EDRI/Access critique of the EU Internet Forum is basically correct.
We also support Access Now and EDRI in their protest against their exclusion from the negotiations. In general, we think that civil society participation in an illegitimate exercise is to be avoided rather than expected. But when governmental intervention makes a uniform code of conduct inevitable, it is better for the voices of individual rights advocates to be present in the negotiations. There is simply no excuse for the European Commission to put together a negotiation that excludes one of the most critical stakeholders affected by the process: those who uphold individual rights to freedom of expression. Such an exclusion makes it clear that this is an act of government policy, not a matter of voluntary cooperation. It also should make clear to naïve advocates of “the multistakeholder model” how stakeholder representation can be manipulated by states to get the results they want.
It is not surprising that the EU found it more effective to regulate expression on social media platforms indirectly, by working through major intermediaries, rather than by prosecuting people under the law. The distribution of content by these platform providers is practically global, whereas government prosecutors are hampered by jurisdictional limitations. While some analysts see the solution to this as improved cooperation among states, such cooperation could easily become a cure worse than the disease, because it may give local governments extraterritorial jurisdiction, and subject the platform providers to the most restrictive laws around. The controversy reveals, once again, how the transnational Internet clashes with traditional, territorial-based law and regulation.
The solution to this dilemma, however, is not for states to make illicit law by pressuring platform providers to collude on state-directed policies that are extra-legal or even illegal. The only liberal solution is for states to continue to confine themselves to action squarely within their own jurisdiction, and, when that is ineffective, to let the platforms and their users reach diverse, competing agreements about what kinds of expression they will refuse to tolerate.
The EC Internet Forum is part of a broader negative trend. Until recently, Western countries disagreed with the Russians, the Chinese and their partners in the Shanghai Cooperation Organization, who insisted that information content is a cybersecurity issue. The SCO's proposed “Draft International Code of Conduct for Information Security” was opposed by the West because it characterized “subversive” and “destabilizing” messages as a matter of cybersecurity. While all governments agree that certain kinds of content could be or should be illegal, the US and its allies in Internet governance have usually held the view that the term “cybersecurity” should be reserved for technical and operational breaches, such as DDoS attacks, phishing, spyware, exploits and other ways of breaking into or attacking networks and information systems. Cloaking content regulation as a cybersecurity issue opens the door to extensive, intergovernmentally sanctioned censorship. Under the SCO's code of conduct, any effective criticism of the legitimacy of a sovereign government, no matter how repressive that government, could be classified as a threat to cybersecurity and silenced.
With this EC initiative, however, and similar American initiatives, the West seems to be backing into a stance similar to that of the SCO. Western governments now view hate speech and Islamic radical speech as destabilizing and subversive, and thus subject to pre-emptive suppression. American and European governments have been pressuring social media companies to block and censor Twitter accounts, Facebook pages and other messages produced by people supporting ISIS. The argument is that ordinary people will be “radicalized” by exposure to Islamic State messages. As one especially ludicrous blog put it, “the EU Internet Forum seeks to prevent terrorist domination of social media platforms.” The idea that terrorists have enough support to “dominate” the major social media platforms is paranoid and reveals a complete lack of commitment to the principles of a free society. Do these people think that the beheadings, religious extremism and cultural barbarism of the Islamic fundamentalists are so attractive to all of us that we cannot resist or reject them? Do they think that there is no real answer to the terrorists and haters other than to silence them?