Guest Blog Post by Carter Yagemann, Cybersecurity Researcher and Graduate Student at the Institute for Information Security & Privacy (IISP), Georgia Tech.

A few weeks ago I had the privilege of attending the RSA conference as an RSAC Security Scholar. It was my first time at the conference, and the presentations by leading industry security experts made it well worth the trip.

Although the highlights of the event were the discussions on securing the internet of things (IoT) and the government’s role in regulating security, I want to tease out another topic that may have been overlooked due to its technical nature: cyber-attack attribution. This topic has gained significant attention in the media due to the allegations by the United States that the Russian government sponsored the hacking of the Democratic National Committee. In light of this event, it seems highly appropriate that two researchers from the Russian company Kaspersky Lab gave an RSAC talk on the false flags used by cyber-attackers to mislead researchers attempting to perform attribution. Their points are worth discussing in a more general context, because we must understand the credibility and limitations of this kind of forensics if we are to respond intelligently to such serious allegations.

I want to begin by reiterating the digital “fingerprints” that forensic researchers rely on to make sense of the aftermath of a cyber-attack. These fingerprints can be broken into two broad categories: data-based and intelligence-based.

On the data side we have the digital bits left behind by the malicious software (malware) and tools used by the attackers. These fingerprints can contain clues like when the malware was compiled, what language the authors were typing in, the addresses and geographical locations of the servers used by the attackers to control the malware, and more. On the surface, these are all valuable for narrowing down the list of possible suspects. However, there is a catch: data can be rewritten.
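To make the "data can be rewritten" point concrete, here is a minimal Python sketch of one of the clues mentioned above: the compile timestamp embedded in a Windows executable's PE header. The field layout (the DOS header's `e_lfanew` pointer at offset 0x3C, and the COFF `TimeDateStamp` 8 bytes past the `PE\0\0` signature) follows the real PE format, but the byte buffer here is a fabricated stand-in, not an actual executable. The sketch shows how trivially an attacker can overwrite this "fingerprint" with a misleading value.

```python
import struct

# Fabricated stand-in for the start of a PE file (not a real executable).
image = bytearray(0x100)
image[0:2] = b"MZ"                               # DOS magic
struct.pack_into("<I", image, 0x3C, 0x80)        # e_lfanew -> PE header at 0x80
image[0x80:0x84] = b"PE\x00\x00"                 # PE signature
struct.pack_into("<I", image, 0x88, 1484822400)  # genuine compile timestamp

def get_timestamp(buf):
    # Follow e_lfanew to the PE signature, then read the COFF TimeDateStamp.
    pe_off = struct.unpack_from("<I", buf, 0x3C)[0]
    assert buf[pe_off:pe_off + 4] == b"PE\x00\x00"
    return struct.unpack_from("<I", buf, pe_off + 8)[0]

def set_timestamp(buf, value):
    # Overwrite the same field -- exactly what a forger would do.
    pe_off = struct.unpack_from("<I", buf, 0x3C)[0]
    struct.pack_into("<I", buf, pe_off + 8, value)

print(get_timestamp(image))  # 1484822400 -- the genuine build time
set_timestamp(image, 0)      # attacker zeroes it (or plants a misleading date)
print(get_timestamp(image))  # 0
```

Real-world malware routinely ships with zeroed or deliberately falsified timestamps, which is why investigators treat this field as a hint rather than proof.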

In cyber-security research we like to divide complex computer systems into layers of privilege. Just as a town on the western frontier might appoint a sheriff to keep the townspeople law-abiding, in computer systems we rely on layers of higher privilege to enforce the integrity of lower layers. When we design our security models, we often refer to the layers we assume to be trustworthy as the trusted computing base (TCB). Just as we cannot trust a town with a corrupt sheriff to remain law-abiding, we cannot trust the integrity of the data on a system once its TCB is corrupted. Since malware often leverages privilege-escalating (TCB-corrupting) exploits, it is difficult to determine whether uncovered clues are genuine or false flags planted by the attacker to mislead investigators.
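A concrete illustration of why data on a compromised system cannot be trusted: file timestamps are a staple forensic clue, yet any process with sufficient privilege can set them to arbitrary values, a technique commonly called "timestomping." The short sketch below (using only Python's standard library on a temporary file) backdates a file's timestamps; the date chosen is an arbitrary example.

```python
import os
import tempfile

# Create a throwaway file whose timestamps we control.
fd, path = tempfile.mkstemp()
os.close(fd)

# "Timestomp": plant an arbitrary past date (here, Jan 1, 2010 UTC)
# as both the access and modification time.
fake = 1262304000
os.utime(path, (fake, fake))

st = os.stat(path)
print(int(st.st_mtime))  # 1262304000 -- the planted, not genuine, time

os.remove(path)
```

An investigator examining this file after the fact would see only the planted time; once the TCB is corrupted, the same trick extends to logs, registry entries, and every other on-disk artifact.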

This brings us to intelligence-based fingerprints. These are the clues traditional investigators are more familiar with: comparing the possible motives for an attack against the suspected motives of known parties, comparing the tools used to those believed to belong to various actors, and so on. While these clues can still help pinpoint a perpetrator, relying on them is a shift away from the certainty of deduction and towards the uncertainty of induction. The uncertainty grows when you consider that a lone political activist can have the same motives as a foreign nation state, and that malware tools are frequently traded on internet black markets.

This leads to the main point I want to make. Relying on intelligence and induction in the absence of trustworthy data is not inherently a bad thing. Induction can still be used to make intelligent decisions. The problem arises when conclusions inferred using intelligence are masqueraded as conclusions deduced from data in an attempt to give them unjustified credibility. When an entity like the United States government hires a team of technical experts from a company like CrowdStrike and then fails to produce satisfying technical indicators to support its allegations, instead choosing to keep most of its supposed evidence secret, we as a society have to exercise skepticism. The stakes are too high to tolerate the blurring of the line between hard science and soft speculation.

