During the UN Internet Governance Forum (IGF), IGP organized a panel discussion on "The Interaction of Platform Content Moderation & Geopolitics." The panel focused on how inconsistency in content moderation can reinforce existing power disparities and turn platform content moderation policies into a proxy battle among conflicting interest groups. The following policy questions were examined:

  1. What role does content moderation by platforms play in reconfiguring responsibility and accountability in societies in the current moment?
  2. What case studies offer insight into the ways in which law, norms, and technology interact to shape enforcement of community standards on platforms?
  3. What kind of formal and informal arrangements have developed between digital platforms and governments to limit the proliferation of state-backed misinformation/disinformation, hate speech, and violent or terrorist content?
  4. What political and social responses are needed to address the growing power of platforms over the contemporary public sphere?
  5. What would more transparent and accountable content moderation look like, and what would it require, today?

Speakers included:

  • Pratik Sinha, Alt News
  • Varun Reddy, Facebook
  • Marianne Diaz, Derechos Digitales
  • Amelie Pia Heldt, Hans Bredow Institute
  • Tarleton Gillespie, Microsoft Research / Cornell University
  • Urvan Parfentyev, Russian Association of Electronic Communications

The panel discussants aired many issues, and the following areas of agreement emerged:

Platforms are in their adolescence

The social and legal norms around speech have evolved over many years and within different cultural contexts. Platforms, by contrast, are in their adolescence, and online speech is a recent development. The challenges platforms are grappling with are issues that other media fields took years to work through, or have yet to resolve. Most of the big platforms we are concerned with emerged from the US free speech tradition and initially adopted a hands-off approach to content moderation. They were unprepared for how this approach would play out in different political, legal and social systems, or in conflictual environments steeped in racial tensions. It has taken them a long time and several international missteps (Myanmar, US election manipulation) to realize that even in the best conditions a hands-off approach leads to problems, but so does intervening.

Existing arrangements for content moderation are flawed

As platforms come to terms with their global footprints, they are responding however they can, from rolling out new features and interventions to squash harmful content to complying with formal or informal demands from states. Increasingly, platforms are moving towards an industrial approach to content moderation, contracting thousands of moderators and deploying identification software. The front-line moderators and software are overseen by in-house policy teams hired to deal with rule changes, emerging problems, and public backlash. Even as platforms build these teams and policies, certain kinds of users (public figures, influential content producers) and content (highly sensitive, demonstrably problematic or controversial) still rise to a more ad hoc, direct assessment by in-house moderation teams. Over the years, the gap has grown between how platforms moderate the many users of their services and how they moderate the few who bring in millions of dollars or grab eyeballs with their charged commentary.

Existing arrangements raise questions about how the speech of the many is lost inside a large-scale and complex system where it can be difficult to be sensitive to culture, language and context. For example, what kind of resources do platforms allocate to moderating the speech of the many versus a public figure like Donald Trump? If platforms do prioritize certain users, what happens when the influential few engage in behaviour that qualifies as hate speech or incitement? While over-moderation and censorship are problems, the content that platforms decide to leave up is just as important. Platforms' failure to take action against content that violates their policies or community standards has been the cause of recent content moderation controversies. We are increasingly seeing these issues play out in the regulatory context as well, with platforms being hauled up in front of committees to explain political bias in their decision making and being threatened with changes to intermediary liability regimes.

The big, global platforms are still U.S. centric

Platform policies originate from the policy use cases in a platform's country of incorporation, without due consideration of the context in specific regions. Coming from the US meant that North American legal, political, cultural and social norms defined or shaped the development and enforcement of platforms' global policies. This explains why platforms chose to focus openly on Al Qaeda and ISIS rather than understanding the whole array of forms of political violence, terrorism and extremism that they encountered in other jurisdictions. We see this playing out when legal, friendly expression in local languages ends up being flagged and removed because it is considered a slur by the people who crafted the rules for speech on the platform.

Another case in point is the stark difference between platforms' thinking about their obligations around misinformation and hate speech in the US and their response in other markets. For example, users across five Latin American countries cannot flag local and international misinformation or hate speech because platforms have not invested in fact-checking capabilities in the region. Facebook's inclusion of caste as a vector of hate speech in India is viewed as an example of the localization that goes into platforms' global policies, but the fact that the company made these changes only after a decade of operating in India is also a shocking reminder that the worldviews from which these platforms start are incredibly persistent and take a long time to change.