Network Solutions

Cyberwar is too important to be left to the generals: an interview with Jason Healey

Jason Healey joined the faculty at Columbia University’s School of International and Public Affairs in 2015 as a senior research scholar, while continuing a fellowship at the Atlantic Council, where he founded the Cyber Statecraft Initiative. Healey began his career in the U.S. Air Force; his service included stints at the National Security Agency and at the White House as director of Cyber Infrastructure Protection (2003–05). He has also worked for Goldman Sachs as a manager for security and resilience. He is editor of A Fierce Domain: Conflict in Cyberspace, 1986 to 2012 (2013) and coauthor of Cyber Security Policy Guidebook (2012).

There is a distinct strategic thinness to many conversations about cyber power, almost as if the newness and the pervasiveness of the technology mean that the past is no longer much of a guide to the future and that established, pre-Internet ways of thinking about state relations, for example, no longer hold. How do you see this relationship between cyber technology and strategic thought?

There is currently a conflict among different communities in defining what cyber actually is. I remember when I was at the White House that the technologists would come in and talk about cyber conflict in terms of incidents they were seeing on the network. The things they saw were entirely true and factual: threats travel at the speed of light, accurate attribution of responsibility for cyber attacks is difficult, and the attacker has all of the advantages. Each of these technologists would also have his or her own vision of apocalyptic danger: a “cyber Pearl Harbor.”

On the other side you had the policy wonks, people with diplomatic, public-policy, or legal backgrounds. They had their own views, which were no less true than those of the technologists but were from a different perspective. They might talk of sanctions, for example, and say that attribution was really not that difficult, given that they were looking at a broader array of information about actors, and not just working their way back technically among the ones and zeroes on the network itself.

This divergence of views between geeks and wonks is wrongly viewed as a tragedy. It is an opportunity. The training and experience each group brings to bear are crucial to solving the major problems we face.

What we’ve found over the past 10, maybe 15 years is that we’ve been allowing the technical mindset to dominate, mainly because it is such a technical field. But there is also a strategic bias at work. If you listen to military people, they will talk about capabilities, usable capabilities, and much of the discussion will be about drones and special operations. I once lost track trying to count all the countries in which the United States is currently killing people. But each of these actions is conceptualized as discrete. And we’re seeing that in cyber as well. Rather than have a larger view of whether we’re winning or losing globally, we tend to focus in on “OK, what can we do to mess up ISIS? How can we use cyber to get at ISIS?” The United States has been reluctant to think in terms of broader strategies, especially with cyber.

We also had a head of Cyber Command and the National Security Agency, General Keith Alexander, who really advocated more use of cyber capabilities as a bloodless way for America to fight its wars or disrupt its adversaries, as in the attack on Iran’s nuclear enrichment program. Presidents Bush and Obama were both eager to assert American power abroad without it necessarily being traceable back to us, so you can imagine that they welcomed a commander who had a technically satisfying answer ready when a president wondered, “Who will rid me of this troublesome priest?” A president could take actions without there seeming to be likely major repercussions, as there would be if American lives were being risked. And a president could respond quickly using cyber means.

What was lost was the habit of thinking, “What will be the consequences of this choice five years from now? Or ten? Or two hundred?”

How might the things we are doing today fundamentally alter the Internet itself in ways that will not just affect us, but also our kids, grandkids, and grandkids’ grandkids?

Because if we mess up the Internet, such that our grandkids would have an Internet fundamentally different from and worse than the one we have now—for example, if each nation-state, in effect, has its own Internet, and information needs a passport to be allowed to cross borders—then we will have less productivity, less innovation, and less ability to communicate. We will have set humanity back not just for this generation, but for every generation to come.

There has until recently been a view that the Internet and broader technological changes are such a creature of the private sector that states would effectively be overwhelmed, and the self-interest of large technology corporations would compel them to protect the Internet from state control. Since Edward Snowden’s disclosures, however, there have been several trends in the other direction: companies are being more controlled by states or otherwise accommodating them.

In Washington, DC, we have a split: two sets of policies. We have Internet policies that focus on ensuring information can circulate freely and concern themselves with Internet governance, trying to ensure that the Internet remains relatively borderless, closing the digital divide, forming broadband policies, and fostering permissionless innovation—that is, ensuring that people can try their innovations out on the Internet platform without having to get permission from state or other regulators. All these envision a leading role for the private sector. Those policy efforts are all modestly funded. President Obama often says this is a priority.

There is another set of policies that are “cyber” policies, which tend to be security-related: FBI policies, cyber espionage, cyber crime, and cyber warfare.

Where Internet policies tend to be modestly funded, the cyber policies tend to be extremely well funded. Astoundingly well funded. So when there is a conflict between the Internet side and the cyber side, the cyber side wins. You see this now with the Apple locked-phone controversy. You saw it with the PRISM program exposed by Snowden: kind of coercing American companies to help the American espionage machine, and if they don’t help, working around them or just, apparently, hacking them.

So while American policymakers know that the real cyber power isn’t in Fort Meade, where the NSA is, but in Silicon Valley, that fact often gets lost in the effort to keep Americans safe from terrorism.

Should those priorities be rebalanced?

We’ve never had a national cyber strategy that would sort out this question of what our real priority is going to be.

If President Obama had said, “Secure the Internet first and foremost. We have to have a sustainable Internet, one that’s going to be secure and that’s going to be free, and that is our top priority,” then FBI Director [James] Comey would know what his marching orders are. He would know what the default position is, against which he can make his case: Apple doesn’t have to prove their case, I have to prove my case. But since Washington has never laid out that priority, it’s easy for Washington to forget that America’s true cyber power is Silicon Valley, it’s Route 128 in Boston; it’s not the FBI, it’s not the NSA.

The academic international-relations world, with respect to military technology, has its most extensive experience in thinking about nuclear conflict. To what degree is that corpus of work, including its explanatory metaphors, useful to understanding cyber? Or does it just get in the way?

Nuclear analogies usually hurt more than they help when we’re talking about cyber issues. Specifically, I think the preexisting understanding of “deterrence” has harmed our field of cybersecurity more than it has helped. Deterrence is about having the capacity to inflict great pain on an adversary. But cyber capabilities are very useful to everyone and everyone uses them all the time. Because there are such low barriers to entry, and because you can use many cyber capabilities clandestinely or covertly, nations are using them against each other all the time.

So when you try to think about these cyber capabilities in deterrence terms, you enter into a spiral of escalation, which is not helpful.

Remember, we’ve been talking about a cyber Pearl Harbor since June 1991. So for 25 years we’ve felt strategically vulnerable to our enemies, and presumably they’ve felt strategically vulnerable to us, yet no one has died from a cyber attack. So there is clearly some restraint, if not actual deterrence. But then it is also worth remembering that in the heyday of deterrence, many expected that it would mark the end of warfare itself, which proved not to be the case. Nations were more than willing to attack each other, just below a certain threshold. That’s what we see now also in cyber.

Deterrence was ultimately a doctrine of stability and nonuse of certain weapons. Current U.S. cyber thinking aims more at supremacy than stability, which pushes toward competition and escalation.

Modern societies have always been dependent on the private sector—Remington, Krupp, Kaiser—for technological innovation and production in the service of foreign policy, in particular war, though also espionage. Where are the continuities and discontinuities in this ongoing story with regard to cyber? For example, software engineers are essentially dual-use, and many militaries are in competition for them.

Even the German economy in the 1930s wasn’t built around weapons, whereas the U.S. economy is now dependent on IT companies. So we are militarizing the very area that is dominant for American innovation and growth, and the stakes are far higher; yet we’re still using things like AT&T giving access to cable traffic during World War One as meaningful precedents, somehow, for today.

But communication is no longer about ten-dollar trans-Atlantic telegrams. Our relationship to Facebook and other consumer platforms is intensely personal. Espionage used to be done overseas, and didn’t affect one’s normal life. But now, when you learn that spying involves Facebook and Apple, your attitude toward such espionage is likely to change.

I think we need to recognize that we need to make some choices here. One choice would be the hard-power, General Alexander path, where winning means dominating in espionage and all offensive operations, having the best arsenal of cyber capabilities, and collecting the whole haystack of worldwide ones and zeroes.

You could also go with an economic-power perspective, where winning means having the most agile Internet-enabled economy, the strongest tech companies, and the most trusted cybersecurity experts. That’s a different set of priorities from those of the hard-power perspective.

There is a third way as well, a soft-power perspective that would emphasize this once-in-a-century opportunity to win the hearts and minds of digital natives around the world, regardless of nationality or anything else, so they see America as representing their values and enriching their lives, and America gets concomitant influence by doing that.

Only a U.S. president would be in a position to choose among these and enforce the decision. But if such a decision remains unmade, we’ll just continue to stumble along, running programs like PRISM or forcing Apple to write software at the FBI’s request.

In terms of military use of computing technology, the close relationship between the academy and government dates back at least to the founding in 1916 of the National Research Council, part of the National Academy of Sciences, which President Wilson saw as very useful to the war effort. Harvard, MIT, Caltech, Penn, and others remained active in developing analog computing, then digital computing; Stanford came in very strongly after the Second World War. This was of course engineering departments rather than international relations, much less history or philosophy. But given the pervasiveness of Internet technology, how do you see the relationship today between the academy and innovative technologies that also have political relevance?

I think much of that will be determined by one key factor: whether or not governments succeed in their attempt to nationalize cyberspace and to make themselves the dominant players. Right now in the United States they go back and forth: “No, we don’t want to interrupt your world, but you do have to do what Director Comey says.” China and Russia, Pakistan and Iran, Malaysia, to some degree Indonesia—they’re not even trying to balance it: government is there to protect citizens so you have to do what it says. France is increasingly like that as well, after the Paris attacks.

If nationalization, in this sense, occurs, then I think you’ll see more researchers, more students, who want to please the government and will be competing for government contracts. That said, there will also be technologists who will want to outfox the government, and maybe the academy can help the larger society find a good balance, given these competing pressures.

Scott Malcomson is a Visiting Media Fellow at Carnegie Corporation of New York, specializing in international affairs.  He is the author of Splinternet: How Geopolitics and Commerce Are Fragmenting the World Wide Web. The opinions expressed here are those of the author and contributors, not necessarily of the Corporation. Twitter: @smalcomson