By: Stephen H. Campbell
There are several reasons why attribution is so hard in cyberspace.
(1) Schneier cites the obvious one – it’s easy for actors to disguise their geographic origins by using VPNs, Tor, jump hosts etc. Many of the North Korean cyber actors, for example, mount their attacks from China.
(2) Another reason is the endemic use of proxies – organized cyber militias under tight control (China), private individual contractors on a loose leash (Iran), or organized criminal groups that can operate with impunity as long as they target state adversaries (Russia). The use of proxies or cutouts makes the case for attribution much tougher and enables plausible deniability.
(3) Another reason is the extensive use of false flag operations by moderately skilled to advanced actors. For a good analysis with examples, see Bartholomew and Guerrero-Saade’s paper at https://media.kasperskycontenthub.com/wp-content/uploads/sites/43/2017/10/20114955/Bartholomew-GuerreroSaade-VB2016.pdf and their presentation at RSA 2017 – https://www.rsaconference.com/writable/presentations/file_upload/ht-w11-hello-false-flags-the-art-of-deception-in-targeted-attack-attribution.pdf. It is easy to stage an attack so that it looks like the work of a hacktivist or terrorist group. And it is not that difficult to copy others’ code, pepper your own code with their handles or snippets of their language, or even incorporate their actual malware so that it invokes their C&C infrastructure. All of this can throw the forensic dogs off the scent.
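To see how cheap this kind of deception is, here is a minimal sketch of why string artifacts are weak evidence: any actor can embed another group’s handle or snippets of another language in a payload without affecting its behavior. All names and strings below are invented for illustration.

```python
def plant_false_flag(payload: bytes) -> bytes:
    """Append misleading 'attribution' strings to a payload (illustrative only)."""
    decoys = [
        b"coded by Fancy_Panda_77\x00",                 # invented operator handle
        "Ошибка соединения".encode("utf-8") + b"\x00",  # Russian "connection error"
    ]
    # Appending decoy strings does not change how the payload executes,
    # but a naive strings-based analysis will surface them as "evidence".
    return payload + b"\x00" + b"".join(decoys)

stub = b"\x90" * 16  # stand-in for a real code section
tainted = plant_false_flag(stub)
print(b"Fancy_Panda_77" in tainted)  # True
```

The point is not the code itself but the asymmetry: planting a misleading artifact takes one line, while disproving it takes careful corroboration across multiple independent sources of evidence.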
Denial and deception are easier in cyberspace than in physical space, especially deceptions of identity. Compared with face-to-face interactions, communication channels in cyberspace carry considerably less information – there is no body language, there are no voice inflections, and so on. For survival reasons the human brain evolved a huge visual component and the ability to fuse data from the other senses and formulate hypotheses remarkably quickly based on historical experience (sense making). Simulating this with artificial intelligence and machine learning is still in its infancy – objects in cyberspace simply cannot yet be examined as easily as objects in physical space. And there is little permanence in cyberspace – information can be quickly and easily changed. For all these reasons, ascertaining identity is tough.
(4) Still another reason, this one less technical, is the lack of training that cyber threat analysts receive in identifying and countering bias, and in using estimative language. Analysts are often pressured to draw conclusions when they have only a fraction of the evidence, or when their sources are speculative or sparse. In such cases they need to be trained to examine alternative hypotheses, to articulate reservations in their conclusions, to call a hunch a hunch, or simply to concede that they cannot draw any reasonable conclusion from the limited data they have.
(5) Sometimes analysts will draw conclusions about a cyber actor by examining its tasking, i.e. the targets it goes after, or by identifying specific toolkits (e.g. the use of Snake suggests Turla; the use of WildPositron suggests Lazarus). But similarities in tasking are not enough, and tools can easily be shared and replicated. So the closest that non-government analysts tend to get to accurate attribution is when the adversary has made a mistake or cut corners – left strings, debug paths or metadata in malware, reused the same command-and-control infrastructure as a previous campaign, used a piece of infrastructure registered by the same individual as a previous suspect, or reused the same passwords or encryption keys.
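The kinds of mistakes described above are typically hunted with string and metadata extraction over a malware sample. Here is a hedged sketch of that first triage step, assuming a raw byte sample; the file contents, PDB path, and domain below are all invented for illustration.

```python
import re

def extract_ascii_strings(data: bytes, min_len: int = 6):
    """Return printable ASCII runs of at least min_len bytes (like `strings`)."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

def find_artifacts(data: bytes):
    """Pull out two common attribution clues: PDB debug paths and hard-coded domains."""
    strings = extract_ascii_strings(data)
    return {
        # Debug paths left in by careless operators, e.g. C:\Users\...\payload.pdb
        "pdb_paths": [s for s in strings if s.lower().endswith(".pdb")],
        # Hard-coded hostnames that may match known C&C infrastructure
        "domains": [s for s in strings
                    if re.fullmatch(r"[a-z0-9.-]+\.(com|net|org|info)", s)],
    }

# Invented stand-in for a malware sample: junk bytes around two telltale strings.
sample = (b"\x00\x01MZ\x90"
          b"C:\\Users\\op7\\campaign\\payload.pdb\x00"
          b"update-cdn.example.com\x00\xff\xfe")
print(find_artifacts(sample))
```

Any artifact surfaced this way is only a lead, not proof: as noted under (3), exactly these strings are what a false-flag operator would plant, so they need corroboration against infrastructure records and prior campaigns.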
(6) Nation-state analysts working for well-funded intelligence agencies have the advantage that they can fuse cyber IOCs with intelligence gained through other means – specifically SIGINT and HUMINT. If these agencies are able to infiltrate their cyber adversaries and listen to their communications or interact with their programmers and/or operators directly then they can gather corroborating evidence that can be pretty conclusive. The problem is that, if they then use this intelligence they may give away their sources and methods: a judgment call that decision makers have to make depending on how high the stakes are in a potential name and shame situation.
(7) So how important is attribution, anyway? For nation-state actors who wish to create a deterrent effect through retaliation, it is very important. John Bolton recently pushed through a repeal of Obama-era restrictions on retaliation in cyberspace; see https://www.wsj.com/articles/trump-seeking-to-relax-rules-on-u-s-cyberattacks-reverses-obama-directive-1534378721. Let’s hope the bar for confidence in attribution is set high before retaliatory actions are taken. For law enforcement in the US, the evidence must typically meet the “probable cause” standard so that an indictment holds up in court.
For the typical commercial enterprise or non-profit organization, however, attribution to a specific actor, group, or nation-state agency is simply not that important, because the organization typically cannot do much with that information. In the majority of cases the actor is external and in a remote jurisdiction. The organization is better served spending its effort understanding the class or type of actor, so that it better understands its threat profile and can put appropriate defensive measures in place. It can, of course, bring in the FBI or local law enforcement and subsequently press charges if attribution succeeds, but in most cases attribution will not be possible or will point to a remote actor outside law enforcement’s jurisdiction. The exception is when the cyber crime is committed by an insider, in which case the organization is justified in pursuing attribution until it roots out the bad actor(s) in its midst.