The text below was the basis for Dr. Ford’s remarks to a symposium on “Controlling Cyber Conflict? Arms Control, International Norms, and Strategic Restraint,” held on June 21, 2011, at the National Press Club, and organized by the George C. Marshall Institute.
June 21, 2011
by Christopher Ford
Good morning everyone. Let me start by offering my thanks to the Marshall Institute for organizing this event and inviting me to take part. It's a pleasure to be back at the National Press Club, and an honor to join such a terrific group of experts in discussing cyberwar challenges. I would like to say a few words on the subject of arms control as a potential approach to controlling cyber threats, and to offer my thoughts on alternative approaches.
I. The Threat
Unfortunately, it's almost unnecessary today to describe the threat presented by cyber conflict. Increasingly sophisticated computer attacks against government systems, defense contractors, and corporate pillars of the Internet-connected world are the stuff of almost everyday headlines.
Last month, for instance, it was reported that a number of companies and institutions had suffered cyber intrusions – among them the defense giant Lockheed Martin, which was attacked by intruders who had prepared for the assault by earlier penetrating the computer security of a different company, one that manufactures user authentication software. Attacks were also reported against Northrop Grumman, L-3 Communications, Sony, Citibank, and even the International Monetary Fund.
Attribution of such attacks is notoriously difficult, but observers today speak increasingly of the so-called "Advanced Persistent Threat" (APT) faced by U.S. networks. Though we usually cannot definitively identify the originator, and though a number of countries – among them Russia, North Korea, Israel, and the United States – have reportedly used cyber tools to attack someone else in recent years, pretty much everyone understands this APT to come from the People's Republic of China.
So far, the APT seems to have been devoted to spying. The more dangerous possibility, however, is that the very access that is now used to steal information could be used – or has perhaps already been used – to implant malicious code for use later, potentially enabling an attacker to crash or manipulate key computer networks in time of crisis or war. That kind of information infiltration, as opposed to the mere exfiltration of data theft, is the Holy Grail of cyberwar at this point, and given our extraordinary degree of dependence upon computer systems in both the civilian economy and military operations, serious cyber assaults could be dangerous indeed.
II. The Seemingly Easy Answers
Some observers have suggested that the answer to controlling cyber threats lies in cyber arms control. And indeed the intuitive case for this seems strong, at least on a superficial level. A new technology has emerged that has the potential to wreak enormous destruction, not simply between military opponents but in ways that could cripple the civilian economy and cause vast suffering to innocents in belligerent countries and around the world. What could be more natural than trying to ban or regulate military applications of cyber technology?
Others, by contrast, seem to see the best answer as lying in the application of traditional deterrent approaches. They urge us to adopt a cyber war posture analogous to our use of nuclear deterrence to forestall adversary aggression during the Cold War. Yet both of these "easy" answers are problematic in different ways.
A. Cyber Arms Control?
I believe the "arms control answer" fails for several reasons. Conventional approaches to arms control require that it be clear what one is regulating, and that this "what" be meaningfully observable to others. Neither of these things is very easy to imagine in the cyber context. A cyber "weapon," after all, can be no more than lines of computer code – a pattern of ones and zeros that doesn't even "exist," in the usual physical sense, at all. This is about as protean a technology as one could imagine: one that resides in and propagates between billions of computer chip "locations" around the world, one that is used or relied upon everywhere (and by nearly everyone) for all sorts of things every day, one quite capable of autonomous replication, and one that can be detected in situ only by actually accessing and studying the memory of the individual system in which it resides. To describe cyberspace as a nightmare scenario for arms control verification, therefore, would be to understate the problem.
And indeed, there doesn't seem to be agreement on what sort of computer "weapons" should be controlled anyway. In the West, we tend to define cyber weaponry in technical terms, to mean software or techniques that target an adversary's computer systems themselves. But some other countries have a worryingly different view of what constitutes a cyber "weapon."
From what we think we know about Russian and Chinese cyber doctrine, for example, they agree with us about system-attack tools constituting cyber "weaponry." But this is only part of the picture as they understand it. Moscow and Beijing also seem to consider the category of cyber weapons to include the open, substantive content of electronic communications – that is, to include political ideas considered threatening to the government. Controlling cyber threats, in this context, means foreclosing the Internet's utility as a transmitter of political and social ideas.
In the mid-2000s, various capitals made efforts to promote cyber arms control and associated notions of Internet regulation based upon such theories, often building upon the idea that the present Internet regulatory system – which was set up by the United States in the 1990s and which prizes informational freedom – was both unfair and threatening. The Bush Administration pushed back hard against such proposals, a position so far mostly followed by the Obama Administration, despite its enthusiasms for other forms of arms control.
Such ideas have not disappeared, however, and indeed seem to be picking up again. Earlier this month, for instance, the president of China's Xinhua news agency publicly called for the creation of a "new world media order" through "[r]esetting rules and order in the international media industry" based upon "the theories of 'checking superpower' and 'maintaining equilibrium.'" Under this system – which he described as "something like a 'media U.N.'" – international media would be held to "rational and constructive rules so as to turn mass communication into an active force for promoting social progress" and a new mechanism would be established to "coordinate the global media industry."
Viewed through the Chinese prism – in which the most significant cyber threat is that Internet content-providers might permit Chinese citizens to learn and discuss things their government does not wish them to know or discuss – this substantive content regulation isn't just related to cyber arms control. It is the most important kind of cyber arms control.
For all these reasons, as I have written elsewhere, we should think long and hard before approaching cyber arms control as we have so often tried to approach the regulation of other destructive military technologies. It is unlikely to work, and some of our most important would-be interlocutors are undertaking the effort in order to accomplish things we cannot possibly allow ourselves to support.
B. Deterrence?
"Deterrence" approaches, however, also have their problems. The paradigmatic problem of cyber deterrence is well known: attack attribution. It can be extraordinarily hard – or in many instances impossible – conclusively to identify the source of a cyber assault. But if your would-be attacker knows you can't clearly identify him, how deterred is he likely to be by the prospect of your retaliation?
III. What to Do?
A. Best Practices
One possibility that deserves further attention is the idea of negotiating not cyber arms control "limits" on government activity but rather a collection of "best practices" – a recommended code of conduct, if you will – to encourage governments and private entities alike to work together on robust cyber-security, and in investigating attacks and coping with their consequences. In the transboundary environment of cyberspace, effective response requires effective international cooperation involving both governments and the private sector. Much more should be done to develop and improve such coordination.
After Estonia suffered debilitating cyber attacks from Russia in 2007, Estonia's NATO allies stepped in to work quite effectively with the government in Tallinn to contain and repair the damage. In fact, NATO maintains a "Cooperative Cyber Defense Centre of Excellence" in Tallinn, devoted to cyber-defense-related education, research and development, "lessons learned" analysis, and coordination within the Alliance.
Another example of inter-institutional cooperation against cyber threats can be seen in a pilot program that apparently began last month in the United States, pursuant to which the secretive National Security Agency (NSA) makes available to private Internet Service Providers (ISPs) the "fingerprints" of malicious code, or sequences of suspicious network behavior, associated with particular types of cyber attack. The ISPs then use this data to scan information headed through their systems to major U.S. defense contractors, looking for telltale signs of an assault in progress – hopefully thereupon being able to shut down such attacks before they penetrate the contractors' defenses. Since this is an approach that avoids the political sensitivities of having NSA itself accessing private data, it is perhaps both scalable and "saleable" for much broader application.
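The mechanics of such signature sharing can be illustrated in miniature. The sketch below is purely illustrative – the fingerprints, payloads, and function names are invented for this example, and real detection systems match far more sophisticated patterns, including sequences of network behavior rather than simple byte strings:

```python
# Toy illustration of signature-based traffic screening, in the spirit of the
# NSA/ISP pilot described above. The "fingerprints" here are invented
# hypothetical byte-pattern signatures, not real malware indicators.

MALWARE_FINGERPRINTS = {
    "deadbeef": "example-backdoor-beacon",
    "c0ffee00": "example-credential-stealer",
}

def scan_payload(payload_hex: str) -> list:
    """Return the names of any known signatures found in a traffic payload."""
    return [name for sig, name in MALWARE_FINGERPRINTS.items()
            if sig in payload_hex]

def filter_traffic(flows: list) -> tuple:
    """Split flows into clean and flagged, as an ISP-side filter might."""
    clean, flagged = [], []
    for payload in flows:
        (flagged if scan_payload(payload) else clean).append(payload)
    return clean, flagged
```

Even in this toy form, the pilot's key design point survives: only the signature list needs to be shared with the ISP, so the government never requires direct access to the private traffic itself.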
The point, at any rate, is that we should look for ways to improve cooperation and coordination in cyber security, attack analysis, and consequence management. Articulating and promulgating international "best practices" might help.
B. Old Rules, New Rules
Another important step, I think, would be to clarify the rules that presumably already apply to conflict in cyberspace – and the important degree to which cyberspace is not wholly different from other realms of potential conflict. Though much about cyberspace is indeed novel, the rules of international law, I would submit, are no less applicable there than elsewhere. If one faces a sufficiently grave threat, one is within one's rights to take action against one's adversary under the principle of military necessity, and by whatever means are most efficacious, consistent with the legal principles of proportionality and distinction related to the prevention of unnecessary suffering and the protection of noncombatants.
The cyber world will surely have its idiosyncrasies with regard to how such principles are to be applied, but this is not simply terra incognita. Already, in fact, cyber planners seem to be pushing in the direction of compliance with basic law-of-war principles that govern the use of force in other respects. As I discussed in a blog post last year, for instance, the coding of the "Stuxnet" worm, allegedly used by the United States and Israel against Iran's uranium enrichment program, seems to have been written with just such compliance in mind. Arguably, "Stuxnet" suggests the cyber world's gradual development toward weapons and tactics analogous to our military's use of precision-guided ordnance and pinpoint intelligence to direct "targeted killing"-type attacks against individual enemy combatants while minimizing civilian "collateral damage." Through this lens, in other words, cyber conflict may not really be all that much different from other forms of conflict, and international legal norms may already be shaping nations' approaches to the cyber "battlespace."
Nor do I think that deterrence-based strategies are entirely inapposite. True, attribution is a serious problem. But the difficulties of achieving "smoking gun" levels of proof are different only in degree, not in kind. Traditional wars are obvious enough, but the global spectrum of conflict has never been limited merely to "obvious" means of struggle. Instead, adversaries have engaged in conflict with each other by all manner of means – not just with open and official force but also through the use of proxy forces, covert operations, secret agents, commando raids, guerrilla insurgencies, privateering, and arms-supply and subsidy relationships with sympathetic actors – for as long, most likely, as there have been countries.
In this sense, therefore, the "attribution problem" is not new. It has just acquired a new focus in a new "battlespace" that favors stealthy and irregular attackers perhaps even more than do the slums of Fallujah or the mountainous wilds of the Hindu Kush.
This makes deterrence more difficult, but it does not make it impossible. Indeed, by comparison to some aspects of modern counter-insurgency, cyber-deterrence, at least with regard to state-sponsored cyber attacks, might even seem more promising. If indeed China is responsible for recent APT attacks upon U.S. computer systems, for example, it is more likely that threats of retaliation will be able to deter Beijing from undertaking truly catastrophic attacks than it is that prospective counterstrikes would deter a fanatical jihadist eager for martyrdom.
Information uncertainty, in the attribution context, may not be quite as crippling as some pundits seem to assume. "Plausible deniability" is a concept that came into our lexicon long before the age of cyber conflict, and it is used to describe situations where one really is pretty confident about the attacker, but cannot "prove" it in publicly-effective ways. A degree of uncertainty, however – and even a fairly significant amount of it – has not traditionally presented an absolute barrier to responsive action.
One always prefers certainty, of course, but if it is not available, leaders must – and do – make decisions on the basis of the best information available. Where the threat is sufficiently great or a provocation sufficiently heinous, leaders seem traditionally to have been quite willing to take action well short of "beyond a reasonable doubt" proof. It may be that many cyber attacks do not present evidence of attribution sufficient to justify retaliatory response at all, but this will surely not always be the case – and the would-be attacker naturally has to figure this into his calculations. For this reason, I applaud the fact that U.S. military doctrine now reportedly speaks increasingly of cyber attacks as something that could, in theory, provoke a conventional military response.
The Advanced Persistent Threat (APT) may offer a case in point. There does not seem, publicly at least, to be conclusive "proof" of Chinese government involvement in mounting all these cyber assaults. But this has not stopped most observers from assuming it, and for pretty good reasons. The pattern of attacks has targeted information that is of great interest to the Chinese regime – e.g., not just information on U.S. defense programs but on pro-democracy or human rights activists with ties to China, officials in the Tibetan diaspora, pro-independence Taiwanese, and members of the Falun Gong movement, as well as the personal e-mail accounts of U.S. and South Korean officials and scholars concerned with China policy. The attacks are also frequently of an extremely sophisticated nature that one would expect more from a state cyberwar program than from the stereotypical teenage hacker in his parents' basement.
Nor can one forget, of course, that the Chinese government is famous for the degree to which it works – apparently with great sophistication and tenacity, and with no small amount of success – to control the use of the Internet in China, preventing anything of which it disapproves. How likely is it that all these cyber campaigns could be organized in China without the government at least knowing and approving, and more likely actually sponsoring them? (Either way, moreover – as we saw with al Qaeda and the Taliban – as a campaign of hostile acts continues, the presence or absence of detailed operational oversight by the host government may become, beyond some point, irrelevant.)
Indeed, WikiLeaks cables indicate that the U.S. Government apparently has learned from intelligence informants and other sources that the Chinese government is behind attacks against Google, that it has cultivated ties to various "hacker" organizations in order to improve its cyber offensive capabilities, and that individuals linked to the People's Liberation Army's Third Department are behind various computer network exploitation (CNE) attacks on U.S. systems. This is entirely consistent, moreover, with the thrust of a great deal of Chinese writing stressing the importance of developing asymmetric tools – including cyberwar technologies – with which to challenge U.S. power. Most observers do blame China for the APT, and I think this is quite reasonable.
Such attribution is of course largely inferential reasoning, rather than direct "proof." Inference, however, is a perfectly good way of thinking – used by everyone in daily life as well as in policymaking – and if well-enough founded, it can appropriately justify dramatic policy choices. Could such inferential conclusions show Chinese government involvement strongly enough to support some kind of retaliation?
Clearly the case for Chinese responsibility hasn't yet been deemed strong enough to support any kind of overt U.S. retaliation. For all we know, however, covert tit-for-tat games may have been underway for some time, confronting Beijing with similar problems and challenges of its own. (After all, U.S. leaders have many equities to balance, and there are reasons to avoid direct conflict with China that have nothing to do with a lack of APT attribution!) Such clandestine sparring would certainly have some bearing on deterrent calculations: we may already be quietly teaching them to expect more from us if they do more to us.
Moreover, all that has so far been at issue with the APT – as far as we know – is sophisticated espionage. And espionage is, for better or worse, a pretty "normal" activity in peacetime. It is by no means a given that current levels of (principally inferential) knowledge about China's culpability for APT attacks, however, would be deemed insufficient if such intrusions suddenly led to critical system crashes or data-manipulation resulting in loss of life, military or civil paralysis, or huge economic costs.
Perhaps current levels of circumstantial evidence would still not be enough to drive us to retaliate in the face of such dramatic cyber mayhem, but I'm not so sure. Even if its operatives hide their cyber tracks well, in other words, Beijing could not entirely count on cyber impunity. And there, one might hope – at least as between major state players, and at least with respect to really destructive cyber operations – lie the seeds of deterrence.
In my view, cyberspace surely presents new policy challenges, but it may not be as different from other realms of conflict as is commonly supposed. It is messy and complicated, and it involves formidable risk-tradeoffs in an environment of tremendous informational uncertainty and outcome-unpredictability. In it, the rules of the road are at least somewhat ambiguous, or contested, and we face threats from a range of state and non-state actors, frequently operating in the shadows and not necessarily playing by the same "rules" to which we hold ourselves accountable anyway. If you think this is an entirely unprecedented situation, however, try picking up some recent literature on the law of armed conflict as applied to counter-terrorist and counter-insurgency operations. Welcome to modern conflict.
I look forward to our discussions this morning. Thank you.
Christopher A. Ford was formerly Senior Fellow and Director of the Center for Technology and Global Security at Hudson Institute.