The Internet Governance Project (IGP) at Georgia Tech’s School of Public Policy has maintained a consistent interest in addressing the challenges of attribution in cyberspace through transnational cooperation. This topic has been explored through IGP’s presentations on the need for an international attribution institution at RightsCon 2018, the North American Network Operators’ Group (NANOG), the Institute for Information Security & Privacy, and the Internet Governance Forum, as well as in our past blog posts on the subject. On May 14-15, 2020, IGP held its 5th Annual Workshop, devoted to the topic “Building transnational cyber-attribution.” The workshop brought together virtually more than two dozen international researchers and practitioners to explore making attributions based on facts and scientific methods rather than politics and strategy. This is the first of a three-part preliminary look at the workshop proceedings, outcomes, and next steps.
The need for neutral attribution capability
The workshop opened with comments from Georgia Tech’s Milton Mueller, ICT4Peace’s Serge Droz, and Stéphane Duguin of the CyberPeace Institute. Mueller’s comments emphasized the geopolitical implications of neutralizing attribution (his full comments were published here). The ability to point a finger at an adversary, to issue blame or cast aspersions, whether justified or not, is a form of power. A new form of transnational cooperation amongst non-state actors could limit the power of states to use attribution as a strategic tool. There is a need to move beyond unenforceable norms in cyberspace and establish a network to check, review and make cyber attributions as a more potent form of cyber accountability. Droz noted that this work cannot be led by a single organization; it is important that the people and organizations in a transnational attribution network work together to establish and maintain their independence from states. Attribution is necessary to ensure that stakeholders are held accountable for their actions or commitments, and it is part of the larger effort to establish the security, stability and resilience of the internet. Setting up an independent body would help bring stakeholders together to collectively build evidence-led attribution, based on rigorous analysis of operations and on methodologies and data drawn from diverse stakeholders. Duguin emphasized that creating a functional network will require creating groups that relay accurate information and have the ability to verify facts. Other considerations that need to be factored in when creating such a network include what to publish and when, whether to withhold information that is critical for disrupting operations, and transparency around the political processes that will shape disclosures and joint attributions.
Detailing the attribution process…and what it means for building transnational attribution
Timo Steffens, author of the forthcoming book Attribution of Advanced Persistent Threats, offered some conceptual and pragmatic input on the attribution process. Attribution as a discipline has developed over time, with two different approaches becoming established. The first is continuous attribution, practiced by private threat intelligence (TI) providers and governments’ intelligence agencies. In this approach, the object of attribution is a threat group, often with the goal of linking it to a country, but rarely to an individual. There are numerous groups, activities and data sources, but publications are opportunistic and attribution has lower impact. The other approach is case-driven attribution, typically undertaken by civilian law enforcement. The focus is on specific incidents, the goal is to name organizations and individuals, and publication of findings is politically relevant and intentional. There are relatively fewer cases to choose from, and data sources are more specific (e.g., the victim’s network) and usually confidential or sensitive.
Performing attribution occurs in four phases: 1) data collection, 2) clustering, 3) accusation or “charge”, and 4) communication. In practice, however, shortcuts are used that take existing conclusions for granted. For example, the WannaCry and NotPetya attributions took shortcuts by assigning the malware to existing threat groups (Lazarus and Telebots/Sandworm, respectively). Since these groups had already been attributed to countries, the accusation phase, i.e., finding evidence for the country-level attribution, was effectively omitted. Completing each phase requires different skill sets and data sources, which are highlighted by applying what Steffens calls the “MICTIC” framework. MICTIC stands for Malware, Infrastructure, Command and Control, Telemetry, Intelligence, Cui Bono. This information could be obtained by outsourcing or collected collaboratively. The skill sets required include experience with malware hunting and reversing, knowledge of infrastructure tools and services, command-and-control forensics, regional expertise and political/strategic analysis, familiarity with vendor-specific telemetry, and access to signals and human intelligence. Importantly, the various communities engaged in attribution work with different assumptions: for example, that espionage activity is state-sponsored, that threat actors operate in the country/region they work for, or that the involvement of government-employed individuals means a state is responsible. Science-based attribution should explicitly document its assumptions, thereby establishing its premises.
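To make the framework concrete, here is a minimal Python sketch (not taken from Steffens’ book or the workshop materials) of how the MICTIC aspects of a single incident, together with the analysts’ working assumptions, might be recorded as an explicit, reviewable evidence checklist. All class names, field names, and example values are hypothetical illustrations.

```python
# Minimal sketch: an explicit MICTIC evidence checklist for one incident.
# All names and example values are hypothetical, not from the workshop.
from dataclasses import dataclass, field


@dataclass
class MicticEvidence:
    malware: list[str] = field(default_factory=list)          # e.g. shared code, compiler timestamps
    infrastructure: list[str] = field(default_factory=list)   # e.g. domain registration patterns
    control_servers: list[str] = field(default_factory=list)  # e.g. C2 panel forensics
    telemetry: list[str] = field(default_factory=list)        # e.g. vendor sensor data
    intelligence: list[str] = field(default_factory=list)     # e.g. SIGINT/HUMINT (often unavailable)
    cui_bono: list[str] = field(default_factory=list)         # e.g. who benefits strategically
    assumptions: list[str] = field(default_factory=list)      # premises that must be documented

    def coverage(self) -> dict[str, bool]:
        """Report which MICTIC aspects are actually backed by evidence."""
        aspects = ("malware", "infrastructure", "control_servers",
                   "telemetry", "intelligence", "cui_bono")
        return {aspect: bool(getattr(self, aspect)) for aspect in aspects}


incident = MicticEvidence(
    malware=["custom loader shared with earlier campaign"],
    infrastructure=["reused registrant email across C2 domains"],
    cui_bono=["targeting aligns with one state's regional interests"],
    assumptions=["espionage activity is state-sponsored"],
)
print(incident.coverage())  # shows which aspects lack independent evidence
```

The point of such a checklist is the documentation discipline the talk calls for: each aspect either has evidence attached or is visibly empty, and the assumptions that bridge the gaps are written down rather than left implicit.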
Discussant Erisa Karafili (University of Southampton) raised several important questions, including whether similarities or disparities in clustering could throw attributions off track. Workshop participants suggested that improving the validity and reliability of clustering and harmonizing the methods used is an important area of focus (one example would be applying the MITRE ATT&CK framework). Replicating studies is another practice that could provide an opportunity for improving attribution. However, clustering relies largely on technical data, which is insufficient to ascertain motive (cui bono), especially in the transnational context.
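As an illustration of why harmonized clustering methods matter, the following Python sketch (hypothetical, not a workshop artifact) compares intrusion sets by the overlap of observed MITRE ATT&CK technique IDs using Jaccard similarity. The technique lists and the similarity threshold are invented; the choice of threshold is exactly the kind of methodological decision that, if not harmonized, can produce divergent clusterings across analysts.

```python
# Illustrative sketch: clustering threat activity by ATT&CK technique overlap.
# Technique sets and the threshold are made up for demonstration purposes.

def jaccard(a: set[str], b: set[str]) -> float:
    """Share of techniques two activity clusters have in common."""
    return len(a & b) / len(a | b) if a | b else 0.0

observed = {
    "cluster_A": {"T1566.001", "T1059.001", "T1071.001", "T1027"},
    "cluster_B": {"T1566.001", "T1059.001", "T1071.001", "T1105"},
    "cluster_C": {"T1190", "T1505.003", "T1003.001"},
}

THRESHOLD = 0.5  # assumed cut-off; different analysts may choose differently

for name_a, name_b in [("cluster_A", "cluster_B"), ("cluster_A", "cluster_C")]:
    score = jaccard(observed[name_a], observed[name_b])
    verdict = "candidate same group" if score >= THRESHOLD else "insufficient overlap"
    print(f"{name_a} vs {name_b}: {score:.2f} -> {verdict}")

# Note: even a perfect technical match says nothing about motive (cui bono).
```

Even with a shared vocabulary like ATT&CK, the sketch shows that thresholds and weighting choices remain judgment calls, which is why participants pointed to replication and method harmonization as priorities.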
Geopolitical and human rights implications of neutral attribution
This panel discussed the role of attribution in geopolitical conflict among states and its role in promoting accountability in cyberspace. The panelists weighed in on the use of attribution as a means of interstate rivalry as well as the implications of different attribution models for internet governance.
Kristen Eichensehr, Assistant Professor at UCLA Law, highlighted how the attribution of state-sponsored cyberattacks influences the development of international norms and international law. Inaccurate attributions, whether intentional or accidental, can corrupt the process of developing the primary norms that govern state behavior in cyberspace. False accusations are particularly problematic in cyberspace because they are hard to debunk or refute, and very few entities apart from victims can assess incidents or provide checking functions. Domestic law is insufficient as a governance mechanism, as states are under no obligation to choose an attribution mechanism that is governed by law at all, and standards vary across countries. There is a need for customary international law standards that would govern the evidence given to accompany attributions. Eichensehr argued that when cyberattacks are publicly attributed to a state, the attribution should be accompanied by sufficient evidence to enable cross-checking by other parties (public and private). Mapped onto US evidentiary standards, this would be analogous to a standard of verifiable preponderance of the evidence. In practice, this means that a state doing attribution would need to give enough evidence for others to either replicate or corroborate the attribution. A new independent entity could help bring clarity to attributions and perhaps provide a neutral trusted authority. Any new entity(ies) should supplement, not replace, the current decentralized system of attribution, because there are underappreciated virtues to decentralization: having decentralized, diverse bodies involved in attributions would be more suitable for making attributions credible to diverse audiences.
Brandon Valeriano, Professor at US Marine Corps University and a Senior Advisor to the Cyberspace Solarium Commission, discussed the challenges of engaging policymakers and the government. The US National Cyber Strategy fails to address attribution. Currently, attributions are assumed, are not collaborative, and there is no coordination internally or externally. The Cyberspace Solarium Commission settled on Layered Cyber Deterrence, which is designed to achieve enhanced resilience and enhanced attribution capability through a strategy of collective action by partners and by the international system. How is the USG trying to enhance attribution capabilities? One example is the Cyber Threat Intelligence Integration Center (CTIIC), created a few years ago. CTIIC currently does not play a critical role in generating attribution processes, so the Commission is advocating that CTIIC dive into the issue to provide analysis and coordination for rapid and accurate attribution. This can be achieved by 1) standardizing ODNI’s attribution guidelines and assessment timelines, 2) establishing an attribution analysis working group (not standing, but designated) that would include private-sector analysis and data to accelerate the federal government’s response, and 3) advancing analytic capabilities by applying emerging technologies and diversifying data sources to overcome evolving technical challenges.
Professor Hans Klein from Georgia Tech reflected on institutional considerations for building transnational cyber attribution. He highlighted that the act of composing attributions can be a form of attack or counterattack, or can be used in information operations to mobilize public opinion or frame narratives; these, in turn, can set the agenda for future actions by states. Given the high-stakes political environment of attributions, he made the case for a global foundational entity, elaborating on the considerations that would influence its decision-making authority and internal processes.
Go to Day 2 summary.