“Censorship” and Social Media

A U.S. District Court has issued a landmark ruling that would block the federal government from encouraging or influencing the suppression of content by social media companies. Citing the First Amendment, the court issued a sweeping injunction that would stop agencies such as the White House, the Justice Department, the FBI, DHS and its Cybersecurity and Infrastructure Security Agency (CISA), and the State Department from “meeting with…emailing, calling, sending letters, texting, or engaging in any communication of any kind with social-media companies urging, encouraging, pressuring, or inducing in any manner for removal, deletion, suppression, or reduction of content containing protected free speech.”

The judge said, “the issue here is not whether the social-media platforms are government actors, but whether the government can be held responsible for the private platforms’ decisions.” After reviewing evidence of actions by the government to target specific speakers and ideas via private interactions with the platforms, Judge Terry Doughty concluded, “when the government has so involved itself in the private party’s conduct, it cannot claim the conduct occurred as a result of private choice….” He went on to uphold the plaintiffs’ more politicized claim that the federal agencies were “targeting conservative speech,” concluding that “the government was engaged in ‘viewpoint discrimination,’ to which strict scrutiny applies.” The defendants have filed an emergency petition with the 5th Circuit to stay the injunction. The injunction’s overly broad reach and some potentially self-contradictory aspects may lead to its reversal.

Even so, this ruling is an extremely healthy development for free speech in America. For one thing, it opens our eyes to the extensive involvement of the state in content moderation and lays to rest the myth that these activities are not politically biased. Governmental shaping of political discourse is inherently biased toward the policies and needs of whoever holds power. Biden administration apparatchiks like Rob Flaherty aggressively demanded suppression of dissenting views on social media. Michael Shellenberger’s documentation of what he calls a “censorship industrial complex,” presented in testimony before the House Select Committee, is an overdue critical examination of the systemic nature of governmental interventions in social media and how they reflect partisan political differences.

While the “censorship” label is overused and sometimes inaccurate (private editorial choices are not censorship), it is important for Americans (and the world) to better understand the extensive ties between the state and the social media environment. Those ties include not just direct pressure, but also financial support from intelligence and military agencies for media monitoring, and a growing number of “working groups” and “information sharing” arrangements involving social media and civilian government agencies. We already knew from the Twitter Files that this was going on, but Missouri v. Biden opens our eyes wider. Judge Doughty’s Memorandum Ruling documents aggressive emails and threats to alter Section 230; we see Facebook admit that, due to government pressure, it is actively “demoting” posts that “don’t violate our community standards.” We see the scope of the pressure sliding down a slippery slope: from terrorism and foreign election interference, to “misinformation” about “vaccine hesitancy,” then to “misinformation” about climate change, gender, and economic policy, and finally to demands to eliminate demeaning representations of Jill Biden. Governmental pressure seems to have played a major role in this progressive expansion of the scope of content suppression.

Predictably, many are reacting to the decision in partisan ways, with Democrats/progressives emphasizing that Judge Doughty is a conservative Trump appointee and conservatives (many of whom have also been promoting governmental interference in social media content) feeling vindicated. We are not conservatives and have little sympathy for many of the suppressed ideas and causes, but that’s not the point. The Missouri v. Biden case is just the tip of a very big iceberg, of which Musk’s takeover of Twitter is a part. Interactive social media redistributed power over public discourse in ways that opinion-leading elites found deeply threatening. Since 2016 there has been a systematic attempt to reassert control, and many on the left have abandoned their commitment to the role of free expression in democracy, claiming that social media was “destroying democracy.” Trump’s election scared them, and the pandemic and the terrible attempts to delegitimize the 2020 election put a heavy premium on controlling narratives about politics and public health, amplifying the stakes and divisions. Put this into an environment in which the US is both politically and culturally polarized, and in which the two “poles” have major imbalances in their access to and control of media, and it becomes inevitable that the side with a dominant position in media will try to suppress the other if the political tools are available. We need to break that pattern. Exposing government-induced suppression is the first step toward fully detaching the state from public discourse, as the First Amendment intended.

European Commission Says US Isn't Inadequate

…and the U.S. returned the favor. On July 10 the U.S. Attorney General designated the European Union and the European Economic Area as “qualifying states” as defined by President Biden’s October 2022 Executive Order strengthening privacy and civil liberties protections against U.S. signals intelligence activities. The Order creates an independent, binding mechanism enabling Europeans to seek redress by submitting a complaint about illegal collection of their personal data. The Framework’s safeguards apply only if the intelligence services of these European countries provide sufficient privacy safeguards for Americans. The same day, the European Commission issued its final decision that the U.S. provides “adequate” protection of privacy, making it lawful for companies and other organizations to send personal data from the EU to the United States. “The Commission has carefully analysed U.S. law and practice… and concludes that the United States ensures an adequate level of protection for personal data transferred under the EU-U.S. Data Privacy Framework (DPF) from a controller or a processor in the Union to certified organisations in the United States.”

Can we go back to a global Internet now? The agreements temporarily resolve legal uncertainty around the transfer of EU users’ personal data by thousands of U.S. companies, including large platforms like Meta and Google. But some analysts believe that unless the U.S. reforms the controversial Section 702 of FISA, which permits the government to conduct targeted surveillance of foreign persons located outside the United States, the new DPF remains vulnerable to legal nullification. Max Schrems has vowed to challenge the agreement again before the Court of Justice of the European Union, while the U.S.’s Peter Swire has argued that “the U.S. Framework provides stricter safeguards, in some ways, than the member states of the EU.”

Turkish Cybersecurity Researcher Joins IGP as Visiting Scholar

Esra Merve ÇALIŞKAN, a doctoral student in the Department of Political Science and International Relations, Faculty of Political Sciences, Istanbul University, arrived in Atlanta for a year-long Visiting Scholar position with the Internet Governance Project. She will conduct a comparative study of how cyber policy in Turkiye, the United States, and Russia evolved in reaction to specific cyber incidents, supervised by Dr. Milton Mueller.

WEIS 2023 

The 23rd edition of the Workshop on the Economics of Information Security (WEIS) took place in Geneva, Switzerland. Speakers emphasized the significance of digitization in shaping international relations, and digital sovereignty emerged as a theme of this year’s conference. IGP has focused on this issue for a long time, as the attempt to align the Internet with state sovereignty is one of the primary retrograde movements in Internet governance.

IGP PhD student Vagisha Srivastava was invited to a panel discussion on digital sovereignty, where it became evident that the dialogue on the topic still suffers from confusing two completely different definitions. Some participants identified digital sovereignty with national control over digital infrastructure and data flows, encompassing the protection of national interests and the state-led pursuit of technological independence and policy-making autonomy. Others highlighted personal and organizational control over data, hardware, and software. This second approach emphasized individual-level sovereignty and the need for digital access and literacy to empower vulnerable population groups. Bridging the digital divide and navigating the decentralized, multi-actor network were identified as other major challenges.

Participants also identified opportunities in fostering innovation ecosystems and upholding digital common goods. Controversy arose over whether digital sovereignty aligns with state protectionism and whether it is compatible with the network effects of digital platforms. WEIS papers also addressed controversies surrounding the Internet of Things (IoT), privacy, and mental health in the digital age, with the aim of contributing to solutions and guidelines for a more secure and ethical digital landscape.

The conference featured an engaging agenda that included 15 papers, five posters, four panels, and two rump sessions for presenting new ideas. Notably, a hackathon on digital sovereignty encouraged participants to innovate by addressing challenges in this domain. Sessions and panels delved into the dynamics of cyber risk and its impact on insurance, incentives for cyber resilience, the dynamics of online crime (including trust signals in cybercrime forums and the role of underground communities in the threat landscape), and a lively debate on the implications of digital sovereignty in various contexts.

Overall, the conference provided a platform for in-depth discussion of the many facets of digital sovereignty and for exploring how to advance a more secure and ethical digital future.

Global Threats To Encryption On The Rise  

Policymakers worldwide are contemplating legislation aimed at protecting children online. These bills seek to grant law enforcement access to encrypted services like WhatsApp and Signal.

In the UK, the Online Safety Bill introduces a new regulatory framework to combat illegal and harmful content on the internet, including obligations for tech companies to scan their platforms, including end-to-end encrypted messaging services, for child sexual abuse material (CSAM). A group of UK-based researchers and scientists specializing in information security, privacy, and cryptography has cautioned against the deployment of surveillance technologies under the guise of online safety. Apple, which attempted to introduce a similar client-side scanning feature in 2021 but ultimately ditched the plan, has joined the formal opposition to the bill. Despite growing opposition, the Online Safety Bill is moving ahead and is expected to become law soon.

Meanwhile, in the European Union, policymakers are considering legislation that would require tech companies to scan messages, videos, and photos on their platforms for illegal material, such as CSAM, and report suspicious activity to the police. The proposal has faced widespread criticism from cryptographers, technologists, and privacy advocates who fear it could eradicate end-to-end encrypted services in Europe. The EU Council’s legal service has cautioned that the proposed regulation significantly limits privacy and personal data rights, raising the possibility of it falling afoul of judicial review on multiple grounds. Recently leaked survey results from the Council indicate that the majority of governments involved in the discussion favor some form of scanning of encrypted messages, with Spain even suggesting legislative measures to prevent EU-based service providers from implementing end-to-end encryption.

In the United States, the controversial EARN IT Act, first introduced in 2020, has been introduced for the third time. Among other provisions, the act would push companies into surveilling their users by eliminating Section 230 protections for companies that unknowingly host illegal material, even when encryption is the reason they are unaware of it. The STOP CSAM Act follows a similar template, proposing the removal of Section 230 immunity for civil claims against internet intermediaries involving CSAM-related harm.

Encryption service providers argue that creating a backdoor exclusively for law enforcement is “magical thinking” and technically infeasible: such a backdoor would inevitably be exploited by cybercriminals, terrorists, and authoritarian regimes for intrusive surveillance of private conversations. Privacy advocates now face the task of ensuring that these legislative proposals, which threaten online free speech, include amendments and exemptions that safeguard encryption.