IGP at UN Internet Governance Forum

The UN Internet Governance Forum announced its draft schedule. The event will be held December 6–10; it is planned as a face-to-face meeting in Katowice, Poland, but, due to the evolving COVID-19 situation, many of the pre-events will be virtual and most sessions will be hybrid. The Internet Governance Project organized or co-organized three sessions, all of which were accepted: a workshop on “Multistakeholder initiatives in content governance;” a workshop on “Antitrust regulation of internet platforms;” and a Town Hall on “Beyond the hype: What does digital sovereignty actually mean?”

Fingering China: private and public roles in cyber attribution

On July 19, 2021, the United States government, along with other usual and unusual collaborators, publicly accused the Chinese government and its agents of conducting “irresponsible and destabilizing behavior” in cyberspace. This latest episode is notable for its expansion of the multilateral approach to public attribution of state-backed cyber activities. In addition to the Five Eyes (1, 2, 3, 4, 5) and NATO, the member states of the European Union and Japan all issued statements. However, there were important differences in who was blamed. Some statements referred to activities “by Chinese state-sponsored actors,” others to activities “which the Chinese government is behind,” and others, less accusingly, to activities “undertaken from the territory of China.” The collective nature of the attribution seems to have exacerbated China’s feeling that it is being picked on; articles in the Chinese press have questioned the accusations, with some responding angrily and others raising double-standard concerns.

Some statements focused on the exploitation of Microsoft Exchange vulnerabilities, but others also pointed to other incidents attributed to the threat actor “APT40,” including the unsealing of the May 2021 US DoJ indictment of four individuals. The indictment includes over 120 allegations, covering activity going back as far as 2009. While we don’t have access to the actual evidence, we do have a clear understanding of the types of data and how they were collected by both private and public actors. The indictment indicates a mix of technical indicator and technique data observable in cooperating organizations’ system logs, such as the installation of unique malware files and tools, the use of particular directories, IP address and domain name queries, and email addresses used in spear-phishing campaigns. The indictment weaves together private threat intel reporting, repeatedly stating how some (but importantly not all) of the data was “subsequently publicized by a cybersecurity company,” indicating how private threat intel companies selectively reveal data and make assessments about perpetrators, perhaps without knowing the entire picture. We see references to personal information, like email and hosting accounts and domain name registrations, presumably obtained via legal due process. There is also insight into how activities were linked between different victims/incidents (e.g., software similarity, reuse of domains and IP addresses), and to the Chinese intelligence apparatus, possibly through open source and other intelligence.
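The kind of cross-incident linking described above can be illustrated with a minimal sketch: given sets of technical indicators observed in each incident, find pairs of incidents that reuse the same domains, IP addresses, or malware hashes. All incident names and indicators here are invented for illustration; this is not the indictment's actual data or methodology.

```python
# Hypothetical sketch of indicator-overlap analysis: incidents that
# share a reused domain, IP address, or malware hash get linked.
# All data below is invented for illustration.
incidents = {
    "incident_a": {"mal.example.com", "198.51.100.7", "hash:44d8"},
    "incident_b": {"mal.example.com", "203.0.113.9"},
    "incident_c": {"hash:9f2c", "192.0.2.55"},
}

def shared_indicators(incidents):
    """Return each pair of incidents with the indicators they share."""
    names = sorted(incidents)
    links = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            common = incidents[a] & incidents[b]  # set intersection
            if common:
                links[(a, b)] = common
    return links

print(shared_indicators(incidents))
# incident_a and incident_b are linked by the reused domain
```

Real attribution work layers many more signals (code similarity, infrastructure registration data, operational patterns) on top of simple indicator reuse, but the underlying overlap logic is the same.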

Research emanating from our 2020 workshop on attribution shows that more than 30 organizations in at least nine countries (including China) have shared data about APT40’s alleged activities since 2016. These include private threat intel providers, threat info sharing platforms, media outlets, independent researchers, and government agencies. We are now looking at the extent to which the data from these various sources overlaps and how it contributed to the public attribution. Such analysis may give insight into how much the US government relies on private actors, how much and when they reveal data publicly, and what influence they have in public attribution.

Reframing Intermediary Liability 

Both the U.K. and the EU are working on updating their “safe harbours” in order to tighten control over online intermediaries. We have previously covered the EU’s draft Digital Services Act (the ‘DSA’), a regulation covering online intermediary services. The DSA retains safe harbours against liability but adds obligations that large intermediaries must meet in order to qualify for immunity. Very large platforms that serve a certain threshold of users will be required to appoint a compliance officer, implement codes of conduct and crisis response protocols, and open up their systems to audits. Additionally, such platforms are required to disclose the parameters of their decision-making on content removal and account restrictions, publish details of all ads posted on the platform and of the filtering/monitoring techniques used, and analyse systemic harm arising from the use of their platforms. Companies that do not comply with these obligations could end up coughing up 6% of their annual global income.

In May, the U.K. government published the Online Safety Bill, which establishes a “duty of care” for intermediaries to address illegal and “legal but harmful” online content. Duty of care as an approach to regulating intermediaries first made an appearance in the 2019 Online Harms White Paper. The new bill imposes an obligation on certain service providers to moderate user-generated content in a way that prevents users from being exposed to terrorist and child sexual exploitation and abuse (CSEA) content, hate speech, and fraud. The extent and reach of the duty of care will be set by the Office of Communications (OFCOM), based on its assessment of the prevalence and persistence of illegal and harmful content on a service. The bill does not mention intermediary liability or a “safe harbour” for platforms, and it is unclear if and how the “duty of care” obligations will alter the conditional immunity regime outlined under the EU’s E-Commerce Directive (eCD) and implemented in the U.K. during its membership in the EU.

While policymakers in both the EU and the U.K. seek to alter the existing liability framework that applies to intermediaries, they are going about it in different ways. The DSA retains the existing conditional immunity regime and sets forth specific, defined obligations for platforms. The Online Safety Bill places additional monitoring obligations on intermediaries that they must meet, but which do not qualify them for immunity from liability.

The DSA is currently under review by the European Parliament’s Internal Market and Consumer Protection Committee. The Online Safety Bill needs to pass through both houses of the U.K. Parliament. A Joint Committee has been established to scrutinise the draft bill and report its findings by 10 December 2021. Until 16 September 2021, the committee is seeking comments on how the Bill compares to online safety legislation in other countries and on whether it will achieve its aims.
