Content moderation online is a hot topic, especially after the Christchurch Call, a joint New Zealand-France initiative to eradicate terrorist and violent extremist content online. At RightsCon this year, IGP will discuss the Christchurch Call and online content moderation during two sessions, both held on Wednesday 12 June:

The benevolent accomplices of authoritarian regimes on the Internet (10:30 to 11:45 AM)

Rafik Dammak and IGP co-organized this session on the benevolent accomplices of authoritarian regimes. The session will discuss how social media platforms can play a role in increasing censorship by taking down content and suspending suspicious accounts. Laws and regulations that obligate platforms to remove content without due process mechanisms in place also contribute to online censorship. In this way, platforms become accomplices of authoritarian regimes, wielding the same tool: censorship.

Censorship and online video streaming

A couple of examples make the issue clearer. YouTube deactivated Wael Abbas’s videos documenting police brutality in Egypt because the content was graphically violent; the videos were restored only after complaints were filed against YouTube.

Google took down a number of Iranian YouTube accounts last year because they were spreading misinformation and were tied to the Iranian government. The Iranian government’s approach to content moderation is more or less the same. Aparat, a popular Iranian online video platform similar to YouTube, has a high number of users because globally accessible platforms are filtered in Iran. When entities critical of the government create channels and accounts on Aparat, their accounts are suspended and deleted for a variety of reasons, one of which is “spreading lies”!

Cybersecurity and influence operations

Treating influence operations (accounts that spread propaganda on social media platforms) as a cybersecurity issue is another approach that legitimizes censorship to solve a problem that is not a cybersecurity problem. Cybersecurity attacks are technical attacks. The motivation behind an attack might indeed be political, but the attack itself is a technique, not threatening or violent content. The process of identifying influence operations is contaminated with political biases, and the techniques used to identify them are not always scientific. For example, the security firm FireEye has been helping Twitter identify 2,800 inauthentic accounts that it attributed to Iranian influence operations. We do not know FireEye’s methods or techniques, but errors in the form of unfair account deactivations have already been reported. Moreover, some civil society activists have applauded Twitter’s action and asked for more account suspensions.

Content takedown and account suspension by online platforms can:

  • Legitimize authoritarian countries’ similar methods of censorship
  • Make it easier for these countries to argue that online content is a matter of national security (a line of argument they had been repeating before their democratic allies joined them)
  • Bolster their case for creating their own national online platforms and internets, and for filtering globally accessible platforms, on the grounds that those platforms treat their users unfairly

If content takedown by intermediaries is not the solution for fighting terrorism, improving cybersecurity, or countering influence operations, then what is? We will be discussing this question during the session.

Free speech or hate speech (5:15 to 6:30 PM)

We are also involved in the session on free speech or hate speech: should online diligence change? Despite the session’s dichotomous title, the line between free speech and hate speech is never that clear cut. In this session, together with a number of businesses focused mainly on content delivery, hosting, and other technical tasks, we will discuss how to approach content takedown when it comes to hate speech. One of our most important messages is that in policymaking processes, and in pledges such as the Christchurch Call, it is essential to draw a clear distinction among types of service providers and not to require content takedown at the Internet infrastructure level. Businesses may have some practices of their own, especially domain name registrars. However, obligating registrars to remove domain names, or content delivery networks to monitor traffic and filter content, is a step too far.