Facing a hostile U.S. Congress, Facebook proposes “updated Internet regulations” that push more responsibility for content regulation decisions to the government. Europe is already trying to do just that, but civil liberties groups are campaigning against it. India considers a tech nationalist approach to cryptocurrency.

A taste of “updated Internet regulations” 

U.S. politicians are exploiting anti-Big Tech sentiment to target free speech on social media. Congressional hearings on “Social Media’s Role in Promoting Extremism and Misinformation” showed that there is a bipartisan consensus that social media companies can be blamed for almost everything that is wrong with American politics, and that unspecified forms of government regulation of platforms can fix it. In his opening statement at the March 25 hearings, Democrat Michael Doyle angrily exclaimed that the platforms “amplify extremism” and their business models encourage and profit from misinformation. Republicans are going along with the attack because, in the words of Ohio Republican Bob Latta, they dislike “big tech’s ever-increasing censorship of conservative voices and their commitment to serve the radical progressive agenda” by “cancelling any voices that are not considered woke.”

It’s now clear that Facebook’s campaign for “updated Internet regulations” is designed to defuse these political pressures by pushing more responsibility for controversial content regulation decisions to the government. Google’s testimony defended Section 230, stating that without it, “platforms would either over-filter content or not be able to filter content at all.” Twitter’s Jack Dorsey argued that neither government nor private companies should make the decision, but favored a “protocol approach” that would allow users to be in control. But Facebook’s Mark Zuckerberg called for “thoughtful change” in Section 230. “Instead of being granted immunity,” he said, “platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it.” Significantly, his testimony also asked Congress “to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal.” During questioning, he endorsed Congressman Peter Welch’s suggestion to create a new government agency modeled on the SEC or FTC to supervise and regulate decisions about content filtering.

It will be interesting to see what concrete measures, if any, emerge from this. Scapegoating Big Tech is easy; moderating or controlling digital content at massive scale without fostering political censorship is not. We have already documented how efforts to regulate disinformation and hate speech in Europe and Asia have led to internet censorship by the state. What would be the standard for determining whether “adequate systems” for detecting and suppressing illegal content are in place? And wouldn’t political oversight of processes for suppressing legal content be vulnerable to a constitutional challenge?

A European model?

Europe has already taken several steps down the path American lawmakers are contemplating. The latest is a proposed Regulation “on preventing the dissemination of terrorist content online.” The controversial regulation, proposed in 2018 after “lone wolf” terrorist attacks in Europe (though there is no conclusive evidence that online content caused them), has many similarities to Zuckerberg’s notion of “updated Internet regulations.” While it targets only terrorist content, it tries to square the circle of retaining platform immunity while, in fact, making platforms more responsible for detecting and taking down content. It makes the government (the “competent authority”) responsible for striking “a fair balance between public security needs and the affected interests and fundamental rights including in particular the freedom of expression and information, freedom to conduct a business, protection of personal data and privacy.” The Regulation tries to retain the aspect of the 2000 E-Commerce Directive that relieves platforms of a “general obligation to monitor,” but it also says “the decisions under this Regulation may exceptionally derogate from this principle” due to “the particularly grave risks associated with the dissemination of terrorist content.”

Pressure from liberal civil society and free speech advocates led to several improvements to the Regulation, but European digital rights groups are still campaigning against its passage. Their main concern is that it creates incentives for platforms to use upload filters and other forms of algorithmic content regulation that will restrict access to important and useful information.

India may block cryptocurrency IP addresses

In the latest manifestation of tech nationalism, the Indian government is considering blocking the internet protocol (IP) addresses of cryptocurrency exchanges and companies dealing in cryptocurrencies. The move is part of the government’s plan to introduce a bill that would ban private cryptocurrencies and create India’s own digital currency, in line with what China and other countries are doing. India is also amending its Companies Act, 2013 to require companies to disclose cryptocurrency holdings in their financial statements.