This time we analyze the SolarWinds intrusion and challenge some narratives being spun around it. We also look at the EU's whirlwind of legislative activity around cybersecurity and the digital economy. And oh, happy New Year.

It’s an idiot (solar) wind…

The biggest event shaping Internet governance discussions was the discovery that a popular piece of IT administration software, the SolarWinds Orion Platform, had been exploited by a sophisticated adversary. The pervasiveness of the Orion platform across multiple industry sectors and government agencies enabled extensive spying and data exfiltration once it had been compromised. The large scale and scope of the intrusion, which seems to have begun in October 2019, triggered a far-reaching debate about cyber(in)security and the appropriate response. It is notable that the discovery of this major threat did not come from the NSA, the Department of Homeland Security, Cyber Command, or any sectoral Information Sharing and Analysis Center (ISAC), but from a private sector cybersecurity firm, FireEye, which was itself compromised.

Reviewing the reaction to Solorigate (as Microsoft has dubbed it), we find it both heartening to see business and civil society generating rapid analysis and sophisticated discussion, and disappointing to see politicians and pundits pouncing on the incident to advance predetermined political agendas. Generally, the popular media has amplified superficial interpretations of the incident, repeatedly using military analogies of "an attack" to describe it. In our meta-analysis of the evidence below, we make the following points:

  • Framing it as an “attack” is misleading
  • We need to know more about how it happened
  • It is too early to say that “the Russians did it” 
  • Retaliation is not such a good idea; indeed the whole incident raises doubts about the deterrence and defend forward paradigm
  • The most appropriate immediate response is improved organization-level defenses
  • More centralization of cybersecurity information is not the answer

How did it happen?

This breach efficiently hit tens of thousands of victims by targeting a software supply chain rather than directly hacking each network. SolarWinds acknowledged in its SEC disclosure that it had evidence "that the vulnerability was inserted within [its] Orion products…as a result of a compromise of the Orion software build system and was not present in the source code repository of the Orion products." When customers installed a compromised version of the software, it created a backdoor that could be exploited (and has been, according to FireEye and others). Microsoft reported that the compromised software was digitally signed, which "suggests the attackers were able to access the company's software development or distribution pipeline." How did that happen? At this stage we just don't know enough about SolarWinds' supply chain, which can involve numerous components (both human and machine). There are reports of sloppy security practices, involving poor password management of update servers, and a security organization apparently without leadership. But that doesn't explain how the adversary inserted the vulnerability or gained access to the private key material used to digitally sign the compromised file. SolarWinds also revealed being "made aware of an attack vector that was used to compromise the Company's [Office 365] emails and may have provided access to other data contained in the Company's office productivity tools." Could this have been the initial entry point allowing for lateral movement to the supply chain? According to SolarWinds management, it is still unclear "over what period of time this compromise existed and whether this compromise is associated with the attack on its Orion software build system."
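One practical implication for defenders: a downstream integrity check cannot catch this kind of compromise, because the tampered build is exactly what the vendor produced, hashed and signed. Here is a minimal sketch of such a check; the file name and manifest are hypothetical illustrations, not actual SolarWinds artifacts.

```python
# Minimal sketch of a routine downstream integrity check on a vendor update.
# File name and manifest are hypothetical, not actual SolarWinds artifacts.
# In a build-system compromise, the hash (and code signature) the vendor
# publishes already corresponds to the tampered file, so this check passes.
import hashlib

VENDOR_MANIFEST = {
    "orion-update.dll": "<sha256 published by the vendor's own build pipeline>",
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path: str, name: str) -> bool:
    """True when the downloaded file matches the hash the vendor published."""
    return sha256_of(path) == VENDOR_MANIFEST.get(name)

# verify_update("downloads/orion-update.dll", "orion-update.dll") would succeed
# for a backdoored build just as it would for a clean one; the compromise has
# to be caught upstream, in the vendor's build and signing process.
```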

Who did it?

The prevailing presumption is that it was a Russian government intelligence agency. The sophisticated tradecraft, the stealthy persistence within its targets, and the presumed incentives indicate state espionage, which narrows it down to a few possibilities. But attribution to Russia is only a plausible assumption at this point. No one has adduced direct evidence linking it to Russia's foreign intelligence service (the SVR). We need to be more careful and scientific about attribution, especially since many people are talking about retaliation. Too many people want it to be the Russians for reasons that have more to do with ideology than cybersecurity. Those who say the Russians did it need to provide evidence, not just assertions. [Hours after we published this, Recorded Future made available a useful and careful analysis of SolarWinds attribution, entitled "Are we getting ahead of ourselves?"]

An intrusion or an attack?

It seems irresistible for observers to call this an “attack.” But such framing conjures up images of forcible, coercive actions, such as denial of service attacks or ballistic missiles. Those who see this as an “attack” are forgetting that the intrusions that enabled it started more than a year ago, the tainted updates happened in March or April, and they weren’t even noticed until mid-December. Although the update gave them potential access to thousands of organizations, the adversary was highly selective in making use of it, and took steps to remain unseen. More careful observers classify the incident as espionage. 

It is true, as Sergio Caltagirone points out, that the intrusion provided the adversary access that could, in theory, be used later to mount some kind of damaging attack. Indeed, in the case of NotPetya, a supply chain compromise first provided access and was later exploited to deliver a ransomware/wiper attack. But (as Caltagirone surely knows) NotPetya exploited the access provided by a compromised software update several days after the access was created. It has been 8-9 months since the SolarWinds compromise, and nothing but espionage and data exfiltration has occurred. If something damaging does happen using that access, we can indeed call that "an attack." Until then, we don't have an attack; we have an intrusion enabling espionage. This is not to downplay the scale of the risks created, but spying among national governments is essentially unregulated, and the U.S. does it at a similar scale and scope.

Is retaliation the answer? 

The superficial and psychologically/politically rewarding response is to damn the Russians and demand some form of payback. President-elect Biden has already promised to impose "substantial costs" on the responsible party. Calls for retaliation are based on a logic of deterrence: one punishes bad actors so that they will avoid doing it again. But if it is the Russians, there are serious doubts about the practicality of deterrence in this case. The U.S. and other countries have already imposed numerous sanctions and punishments on the Russians as a response to their invasion of Crimea, and for subsequent cyber operations. It's not as if they don't already know that we object to cyber-intrusions and are prepared to use whatever leverage we have to punish them. The Indian cyber-analyst Pukhraj Singh argues that the SolarWinds incident undermines the idea that "cyber operations more or less adhere to the law of armed conflict, thus bestowing legitimacy upon Western offensive counteractions."

So are we prepared to escalate retaliation, e.g., to a military attack? Such escalation, aside from increasing the risk to the U.S., would only increase the incentives of the foreign adversary to keep an eye on us and find and exploit whatever cyber vulnerabilities they can. What’s needed are actions to reduce tensions between the two countries.

Then there's the issue of whether retaliation is even justified. As Jack Goldsmith pointed out, "the United States regularly penetrates foreign governmental computer systems on a massive scale, often …with the unwitting assistance of the private sector, for purposes of spying. It is almost certainly the world's leader in this practice, probably by a lot." For all we know, the SolarWinds intrusion could be perceived by the adversary as retaliation for our own acts. And as frustrating as it may seem to not punish the bad actor, this is, so far, an intrusion, not an attack.

How to respond?

It is best to concentrate on fixing security flaws and improving defenses. Deterrence by denial, as they say. A common fallacy that pervades thinking about problems like this is that if the intrusion is attributable to a state adversary of the U.S. government, then the appropriate response must also be government-led, nationwide in scope and military in nature. In fact, the adversary in this case has exposed a software supply chain (and possibly other) weakness that is not confined to the United States and is best addressed transnationally at the organizational and industry level. The vulnerability can likely be identified, monitored and fixed, and its exposure in this case should prompt careful scrutiny of other software supply chains for related problems. The U.S. should recognize its real strength, which is that it has world-leading private cybersecurity capabilities with strong market incentives to identify and remediate such vulnerabilities.

Information sharing

Several well-known expert observers are saying that this incident strengthens the case for governmentally mandated information sharing. The facts about this incident, however, provide no support for that idea. On the contrary, it was a single private actor, FireEye, that uncovered the threat. It did so with its own resources, by carefully monitoring its own environment. After it discovered the intrusion, FireEye had no problem sharing relevant information with the rest of the world using a variety of formal and informal mechanisms. It is difficult to conceive of any way in which formalizing and bureaucratizing the sharing of threat intel in the hands of the state would have led to an earlier discovery. Indeed, the official agencies already designated with information sharing capabilities, such as the ISACs, DHS and the NSA, had no idea this was going on. We will write another article about why incentivized private actors and looser, networked forms of governance will likely perform better at generating data and putting it to use, but for now it is important to counter the attempt to bend the facts to support the centralized information sharing narrative.

Europe Churns Out Digital Laws 

In July 2019, Ursula von der Leyen, the President of the European Commission, vowed to make "Europe fit for the digital age" and "upgrade the Union's liability and safety rules for digital platforms, services and products." On 15 December 2020, the European Commission delivered on that promise and published drafts of the Digital Services Act (DSA) and the Digital Markets Act (DMA). Together, the DSA and the DMA form the Digital Services Act Package, a set of rules intended to create a new legal framework for the regulation of digital services, which have until now been largely governed by the e-Commerce Directive adopted in 2000. The legislative package is still just a proposal, but if adopted it will have huge repercussions for users and technology companies, not only in the EU but beyond.

On 16 December 2020, the European Commission also released a set of proposals related to cybersecurity, including the EU's Cybersecurity Strategy for the Digital Decade, a new directive on critical infrastructure, and a proposal to refresh the Directive on Security of Network and Information Systems (NIS Directive), the EU-wide cybersecurity legislation that came into force in 2016. NIS2 is only a proposed Directive – it will be some time before any new rules take effect. The proposal will now be subject to negotiations between the legislators (notably the Council of the EU and the EU Parliament), which can take time. Once it is agreed and adopted, Member States will have a further 18 months to transpose it into national law.

These proposals, along with other regulatory efforts like the General Data Protection Regulation (GDPR), are part of Europe's effort to tame Big Tech and assert sovereignty over data and digital technology. The Commission asserts that "European values are at the heart of both proposals," but the package is an unstable mix of promoting individual rights, protectionist impulses against American big tech companies, and an attempt to assert "technological sovereignty" in a globally interconnected world, all of which create spillover effects in other jurisdictions. We focus on the important elements of the DSA-DMA legislative package and the NIS2 Directive below.

What’s in the new DSA-DMA Package? 

Focus on very large online platforms 

The DSA applies to any intermediary, irrespective of its location, as long as it serves European customers, but it focuses on very large online platforms, i.e., those serving at least 10% of the Union population. The DMA applies to large companies classified as gatekeepers, i.e., those with a strong economic position, a significant intermediation role, or an entrenched and durable position in the market. Simply put, digital platforms with a large, lasting user base (over 45 million) in multiple EU countries that control at least one "core platform service" (search engines, social networking services, certain messaging services, operating systems and online intermediation services) qualify as gatekeepers.
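As a rough illustration of how those numeric thresholds interact (the figures come from the proposals; the functions below are our simplification of what are ultimately legal tests with qualitative criteria, not a numeric formula):

```python
# Illustrative sketch only: the DSA/DMA designations are legal tests with
# qualitative criteria. The thresholds and the list of core platform services
# are taken from the proposals as summarized above.
EU_POPULATION = 447_000_000           # approximate Union population
VLOP_THRESHOLD = EU_POPULATION // 10  # "at least 10% of the Union population" ~ 45 million

CORE_PLATFORM_SERVICES = {
    "search engine", "social networking service", "messaging service",
    "operating system", "online intermediation service",
}

def is_very_large_online_platform(monthly_active_users_eu: int) -> bool:
    """DSA: the heaviest obligations fall on platforms serving ~10%+ of the EU."""
    return monthly_active_users_eu >= VLOP_THRESHOLD

def looks_like_gatekeeper(monthly_active_users_eu: int, services: set) -> bool:
    """DMA (simplified): a large, lasting EU user base (over 45 million)
    plus control of at least one core platform service."""
    return (monthly_active_users_eu > 45_000_000
            and bool(services & CORE_PLATFORM_SERVICES))

# Example: a search provider with 60 million EU users meets both thresholds.
# print(looks_like_gatekeeper(60_000_000, {"search engine"}))  # True
```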

Obligations for intermediaries 

The DSA defines new due diligence obligations for online platforms such as social media and marketplaces. It requires them to establish a single point of contact, designate a legal representative within the EU, and implement mechanisms allowing third parties to flag illegal content. Very large platforms face further obligations, including reporting suspicions of serious criminal offences involving a threat to the life or safety of persons, establishing a complaint-handling system, adopting measures against misuse, enabling traders to comply with EU consumer and product safety laws, and increased transparency of online advertising and content recommender systems. The DSA also introduces "algorithmic accountability" for platforms and sets new obligations to assess the risks arising from the systems they use.

Under the DMA, gatekeepers must refrain from engaging in "clearly unfair" practices and also take on positive obligations: for example, they must allow business users to promote their offerings and conclude contracts with customers outside the gatekeeper's platform, and implement targeted measures to enable third-party software to function properly and interoperate with their own services.

Penalties

The DSA provides for penalties of up to 6% of the annual income or turnover of a provider of intermediary services in the event of infringement. Under the DMA, sanctions for non-compliance include fines of up to 10% of the gatekeeper's worldwide turnover. For recurrent infringers there are also structural remedies, such as obliging a gatekeeper to sell a business or parts of it (i.e., units, assets, intellectual property rights or brands).

Setting up European regulatory bodies 

The DSA establishes Digital Services Coordinators at the national level to receive complaints about breaches of the regulation, investigate, audit, carry out interviews and on-site inspections, and impose interim measures and fines. A European Board for Digital Services will be set up to supervise the Digital Services Coordinators and support the coordination of joint investigations.

Breaking down the NIS2 Cybersecurity Directive

Essential and important services 

The NIS2 Directive eliminates the distinction between operators of essential services (OES) and digital service providers (DSPs) in the current NIS regime. Instead, public and private "entities" are classified as "essential" or "important" based on their criticality for the economy and society. While the risk management frameworks for the two categories are similar, the duties and levels of penalties differ. The revised directive also introduces a clear size cap, meaning that all medium and large companies in selected sectors will be within scope, although it retains flexibility for Member States to include small, high-risk entities. The Commission's proposal expands the scope of the Directive to include critical digital infrastructures and service providers such as Internet Exchange Points (IXPs), DNS services, TLD name registries, cloud computing services, data centre services, content delivery networks, and public electronic communications networks (essential), as well as digital marketplaces, search engines, social networking platforms, and postal and courier delivery services (important).

Increased regulations for service providers

The NIS2 Directive establishes focused measures covering incident response and crisis management, vulnerability handling and disclosure, streamlined incident reporting requirements, and cybersecurity testing. Important and essential entities will be tasked with assessing the cybersecurity risks facing them and will need to submit breach notifications "within 24 hours after having become aware of the incident". The authorities may inform the public about such an incident where public awareness is needed to prevent or deal with it. The directive also makes company management accountable for non-compliance with cybersecurity risk-management measures.

Fence-straddling on encryption and law enforcement access

The Directive endorses end-to-end encryption, and even envisages making it mandatory in some cases, noting: "In order to safeguard the security of electronic communications networks and services, the use of encryption, and in particular end-to-end encryption, should be promoted and, where necessary, should be mandatory for providers of such services and networks in accordance with the principles of security and privacy by default and by design for the purposes of Article 18." Problematically, it also notes that the use of end-to-end encryption should be "reconciled" with Member States' security interests and should "permit the investigation, detection and prosecution of criminal offences in compliance with Union law." In a verbal squaring of the circle, the Directive says that "end-to-end encrypted communications should maintain the effectiveness of encryption in protecting privacy and security of communications, while providing an effective response to crime." The provision is likely to be controversial.

Admonitions for supply chain security 

The Directive does not go into details but notes, “Addressing cybersecurity risks stemming from an entity’s supply chain and its relationship with its suppliers is particularly important…” The Directive does little more than admonish entities to “take into account the overall quality of products and cybersecurity practices of their suppliers and service providers, including their secure development procedures.” 

Supervisory and enforcement measures

Authorities have investigative powers over essential and important entities, such as on-site inspections and random checks, and may take measures that are "effective, proportionate and dissuasive," including issuing warnings, binding instructions or orders, and administrative sanctions like fines. The law authorizes fines of "at least 10 million EUR or up to 2% of the total worldwide annual turnover of the undertaking to which the essential or important entity belongs in the preceding financial year, whichever is higher."
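As a quick worked example of the quoted ceiling (the turnover figures here are hypothetical), the cap is simply the larger of the 10 million EUR floor and 2% of worldwide annual turnover:

```python
# Quick illustration of the quoted NIS2 fine ceiling: "at least 10 million EUR
# or up to 2% of the total worldwide annual turnover ... whichever is higher".
# The turnover figures below are hypothetical.
def nis2_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    return max(10_000_000.0, 0.02 * worldwide_annual_turnover_eur)

print(nis2_fine_ceiling(2_000_000_000))  # 40000000.0 -- 2% of 2 billion EUR
print(nis2_fine_ceiling(100_000_000))    # 10000000.0 -- the 10 million EUR floor applies
```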

Starting a cyclone and a European vulnerability database

The Directive calls for enhancing the role of the Cooperation Group and establishing a European Cyber Crises Liaison Organisation Network (EU-CyCLONe). It also calls for the EU Agency for Cybersecurity (ENISA) to "establish a vulnerability registry" where essential or important entities and their suppliers, as well as other operators, may disclose vulnerabilities on a voluntary basis. The establishment of a European vulnerability database, given that the U.S. already operates one, is significant and highlights the way the pursuit of "sovereignty" leads to fragmentation of cybersecurity initiatives. As noted in the Directive, "Although similar vulnerability registries or databases do exist, these are hosted and maintained by entities which are not established in the Union. A European vulnerability registry maintained by ENISA would provide improved transparency regarding the publication process before the vulnerability is officially disclosed and resilience in cases of disruptions or interruptions on the provision of similar services."