On March 9th, members of a U.S. congressional subcommittee clashed during the Hearing on the Weaponization of the Federal Government on the Twitter Files. The Twitter Files show that multiple government agencies, including the Federal Bureau of Investigation, the Department of Homeland Security, the Department of State, and the Central Intelligence Agency, had been in regular communication with Twitter since 2020, making suggestions about “problem” accounts, their violations of company policy, and what the agencies called disinformation. Essentially, the files show the government targeting speech through a private company, and perhaps gaining the ability to indirectly censor certain topics and discussions. This type of collaboration goes against the essence of free speech and adds fuel to the fire surrounding content moderation.
Both U.S. political parties have raised concerns about platform content governance, and both seem hostile to Section 230. Democrats have made combating disinformation a focal point of their party, creating a Task Force on Digital Citizenship, proposing a multi-agency digital democracy task force to combat disinformation and misinformation, and introducing legislation to create a “COVID-19 Misinformation & Disinformation Task Force.” However, they have failed to establish clear criteria for what counts as disinformation and have not been transparent about who gets to decide. Additionally, they seem to believe that in times of crisis it is legitimate to restrict certain controversial speech for the sake of maintaining a narrative deemed to be in the public interest.
One of the threads in the Twitter Files shows that both the Trump and Biden administrations requested the removal of content in 2020. Nevertheless, Republicans, accusing Democrats of pressuring social media companies to suppress content, have introduced what they call “anti-censorship” legislation. The Protecting Speech from Government Interference Act would prohibit federal employees from censoring the speech of others while acting in an official capacity. Censorship by federal agencies is already illegal under the First Amendment, however. Will this type of legislation eventually be extended to address the content moderation policies of private platforms? How will Republicans react if Twitter, now moderated by Elon Musk instead of woke liberals, suppresses content in a way more to their liking? Whether the target is disinformation or censorship, it seems that for our lawmakers the First Amendment is not sufficient to address these issues.
The increased political attention to content moderation has kept the spotlight on Section 230, which promotes free speech by immunizing platforms from liability for what users post while also allowing them to limit what people can say and do on their services. Reinforced by First Amendment protections of editorial discretion, platforms can decide what content they want to host without being held liable for content posted by their users. Since its passage in 1996, this bipartisan law has recognized that the value of promoting free speech online outweighs the potential harms. It enables open discussion without interference and, more importantly, allows the dissenting opinions that are crucial for converging on truth. The internet under Section 230 is the framework of an open democracy: it lets users of various communities discover each other and debate similar or opposing views. The United States’ approach to online content moderation allows diverse, competing private actors, subject to market discipline, to regulate speech online. This is radically different from countries like China, where the government directly controls content. It also differs from growing efforts in the UK and Europe to impose a “duty of care” on platforms. As the Twitter Files show, democratic societies are not immune to the political impulses that lead toward Chinese-style government intervention.
On the surface, our lawmakers are divided on what to do with Section 230. In practice, however, changes are already happening, and in some instances the parties are working together. President Joe Biden has called for Big Tech companies to take responsibility for the content they spread and for fundamental reform of Section 230 of the Communications Decency Act. The EARN IT Act, which had bipartisan support, aims to slow the spread of child sexual abuse material (CSAM) online by limiting Section 230 protections. The goal is to encourage platforms to scan all user-transmitted or stored messages, photos, and files, and to turn over to law enforcement any material obtained through these warrantless searches, even though such evidence is inadmissible in court. Platforms are already liable for violating federal criminal law, including bans on possessing and distributing CSAM; nonetheless, the bill rolls back Section 230 protections to allow state criminal and civil lawsuits against platforms over user-distributed CSAM.
Fortunately, the Supreme Court has shot down two of the most significant challenges to Section 230. Gonzalez v. Google alleged that Google is partially responsible for a terrorist attack because its subsidiary YouTube mistakenly recommended ISIS content to users. In a related case, Twitter v. Taamneh, relatives of ISIS victims invoked the Anti-Terrorism Act to claim that Google, Twitter, and Facebook “aided and abetted” terrorists by not taking down their content. Both cases are reminiscent of Fields v. Twitter, in which the plaintiffs claimed that Twitter provided material support to terrorists by providing accounts to users who discussed or promoted terrorism. In a brief filed with the federal appellate court, the Electronic Frontier Foundation argued that if online platforms lost Section 230 immunity, they would take aggressive action to screen their users, review and censor content, and potentially prohibit anonymous speech. The Fields v. Twitter case was dismissed in 2016, the court ruling that Section 230 barred the families’ claims.
Even without Section 230, many forms of targeted speech, such as hate speech or political and medical misinformation, are protected under the First Amendment. Section 230 is meant to protect the First Amendment online. Without it, platforms would become more risk-averse and take down any content they deem likely to end in a lawsuit, stifling speech on controversial but important topics. Are the Twitter Files an early indicator of what content moderation would look like without Section 230? The over-politicization of disinformation and censorship has put a fundamental right at risk and may end in a zero-sum game in which the biggest losers will be the American people. If we don’t pay attention to this bipartisan legislation, we risk falling into a state-run censorship-industrial complex.