Our Machines, Our Selves, Our Killer Robots

Online, each of us is—in essence—an increasingly dense cluster of data points, linked to other person-clusters by an increasingly dense web of connections based on shared attributes or actions.

I.  Improving Our Services

“I’m never very excited to see those little notices that ask, ‘Will you allow us to use your data to help improve our services?’” Andrew W. Moore, dean of computer science at Carnegie Mellon University, makes this observation with a tiny smile. We are sitting in his department’s agreeably cluttered conference room. “Although that information does help companies improve their services, it is also often interpreted as an invitation to try out a bunch of other things. ‘Improve our services’ is such a vague term.”

It is hard to imagine, I suggest, what data would not, at least potentially, help improve a service.

A robot distributes promotional literature in London’s Parliament Square at the April 2013 launch of the Campaign to Stop Killer Robots, an international coalition working to preemptively ban fully autonomous weapons. (Photo: Oli Scarff/Getty Images)

Moore laughs. “Often we don’t know whether data is useful until it has been experimented with,” he points out. “So you need to have the data already, or at least a sample of it, before you can see if it is useful or not.”

We were talking at the end of a one-day conference in Pittsburgh titled “The Future of the Internet: Governance and Conflict.” This gathering was the second part of the two-part Carnegie Colloquium on Digital Governance and Security, cohosted in 2016 by the Carnegie Endowment for International Peace (CEIP) in Washington, D.C. (October) and Carnegie Mellon University in Pittsburgh (December), and supported by Carnegie Corporation of New York.

A former Googler, Moore has worked with large data sets of many kinds, including for Internet advertising and for detection and surveillance of terrorists. He has also specialized in machine learning and its cousin, artificial intelligence. Both involve statistical analysis of data sets as a complement to writing algorithms that use the data to do something: move a robot forward, for example, or find data patterns that identify a terrorist, or target ads based on patterns in an online consumer’s click-through behavior.

So Moore knows his data sets—classified, unclassified, commercially available, commercially confidential, and public. As Ben Scott of Stanford Law School and New America pointed out at the Washington conference, most people do not read, much less understand, the consent forms and permissions they are signing in order to “improve services,” and the commercial reflex is to ask permission for as much data as possible on the theory that if it isn’t useful today it might be useful tomorrow. Once these commercial data sets exist, governments, Scott emphasized, would find them “irresistible.”

Long, long ago (that is to say, as recently as the 1990s), a person could be anonymous online.

That is more or less where we are today. “The fact is,” Andrew Moore says, “that much of the information concerning which you would think you’d be giving permission, or withholding it, can probably be deduced from your behavioral information without your permission. In particular, once you’ve got click information and location data from a cell phone, you can work out pretty much everything about a person’s demographics, their activities, even their education level.”
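
To make Moore’s point concrete, here is a minimal sketch of that kind of inference, with every feature, profile, and number invented for illustration: a toy nearest-neighbor classifier that guesses a user’s education level from behavioral signals alone.

```python
# Toy illustration of Moore's point: demographics can be deduced from
# behavior alone. All features, profiles, and numbers here are invented.
from collections import Counter

# Hypothetical labeled profiles: (late-night clicks per day, news-site
# visits per day, distinct cell-phone locations per week) -> education.
PROFILES = [
    ((22, 1, 3), "secondary"),
    ((30, 2, 2), "secondary"),
    ((15, 6, 6), "undergraduate"),
    ((12, 7, 5), "undergraduate"),
    ((8, 14, 9), "graduate"),
    ((5, 11, 12), "graduate"),
]

def infer_education(behavior, k=3):
    """Guess education level by majority vote among the k most
    behaviorally similar labeled profiles (nearest neighbors)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(PROFILES, key=lambda p: dist(p[0], behavior))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# A user who never "gave permission" for anything:
print(infer_education((7, 12, 10)))  # -> graduate
```

No consent form is consulted anywhere in that loop; the label simply falls out of the pattern.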

Online, each of us is—in essence—an increasingly dense cluster of data points, linked to other person-clusters by an increasingly dense web of connections based on shared attributes or actions (being from the same village, buying the same book, sharing a Facebook friend). Long, long ago (that is to say, as recently as the 1990s), a person could be anonymous online. Indeed, as the New Yorker cartoon had it in 1993, “On the Internet, nobody knows you’re a dog.”

Peter Steiner / New Yorker; © Condé Nast
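
The person-cluster picture can be made concrete in a few lines. A minimal sketch, with invented names and attributes, that links any two people who share an attribute or action:

```python
# A minimal sketch of person-clusters: two people become linked whenever
# they share an attribute or action. All names and attributes invented.
from collections import defaultdict
from itertools import combinations

people = {
    "alice": {"village:Linz", "book:Dune", "friend:dana"},
    "bob":   {"village:Linz"},
    "carol": {"book:Dune", "friend:dana"},
}

# Invert to attribute -> people, then link every pair that shares one.
by_attribute = defaultdict(set)
for person, attrs in people.items():
    for attr in attrs:
        by_attribute[attr].add(person)

links = defaultdict(set)
for attr, group in by_attribute.items():
    for a, b in combinations(sorted(group), 2):
        links[a].add((b, attr))
        links[b].add((a, attr))

for person in sorted(links):
    print(person, "->", sorted(links[person]))
# alice links to bob (shared village) and to carol (shared book, friend)
```

The denser each attribute set grows, the harder it becomes for any node in the graph to remain unlinked.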

Your online self was a fiction that you created. But once commercial browsers became widely available (after 1995), followed by search engines, online marketplaces, social media, and other platforms that depended on advertising and purchases for their revenue, online life came to require a credit card or some other reliable account tied to a single person. As Moore notes, your identity “had to be linked to some payment credentials, and at that point your identity became singular, what was known as a Real Name. It would be very hard to have some dual set of payment credentials that weren’t linked.” The commercial Internet, not governments, gave each person a single identity. From then on, everyone knew you were a dog.

In principle, this makes us safer online: a lack of privacy for good actors is the price to pay for the ability to identify the bad ones. Since the counterfactual can’t be explored, it is hard to know whether a return to anonymity would be so bad, and the numberless frauds and data thefts online do make one wonder whether the effective outlawing of anonymity means that only outlaws can have it. But in any case the governmental guardians of privacy are themselves against anonymity, and commercial data holders base their business models on mapping online activity onto real bodies, objects, geography, and cash.

We remain participants in an online world where our understanding of even our own identities is increasingly shaky, and our control over them weak indeed.

At the Washington meeting, Carnegie Mellon engineering dean James H. Garrett asked a panel whether a person couldn’t have a controllable, private profile of some kind. Edward W. Felten, deputy U.S. chief technology officer, replied that there had been efforts at creating such a thing but they had come to nothing. Yuet Ming Tham, a partner in the Hong Kong offices of the law firm Sidley Austin, made the interesting point that in Asia, to her knowledge, no restrictions exist on the use of digitized information about dead people, which may offer whole new vistas for medical and other research. For those of us still living, we remain participants in an online world where our understanding of even our own identities is increasingly shaky, and our control over them weak indeed.

II.  Keeping the Nation-State in the Loop

The Internet was built on a physical infrastructure that belonged to universities, the U.S. government, and, eventually, a handful of private companies. What really enabled the Web to take off was agreement within the rather small Internet community on a set of protocols for sending and receiving electronic information. (The ubiquitous “http,” for example, stands for “hypertext transfer protocol.”) The design was deliberately simple, and that simplicity made it possible for the Internet to spread around the world.
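
The simplicity is easy to demonstrate. In the short sketch below, using Python’s standard socket library (example.com is a domain reserved for demonstrations like this), a complete HTTP request turns out to be a verb, a path, a version, and a header, all in plain text.

```python
# The protocol's simplicity, made literal: a complete HTTP request is a
# few lines of plain text pushed down a TCP socket.
import socket

HOST = "example.com"  # a domain reserved for demonstrations like this

with socket.create_connection((HOST, 80)) as sock:
    # A verb, a path, a version, one header, a blank line. That's it.
    sock.sendall(f"GET / HTTP/1.0\r\nHost: {HOST}\r\n\r\n".encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0].decode())  # e.g. "HTTP/1.0 200 OK"
```

Any machine that can speak those few lines of text can join the network, which is much of why the network became universal.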

As long as the domain names and numbers, and the protocols, were standardized, the Internet worked. The actual labor of debating and standardizing such things was done by several groups growing out of the engineering subculture that developed the Internet: the Internet Engineering Task Force, the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Assigned Numbers Authority (IANA), the Internet Society, and others. These groups, together with the growing population of regional Internet Service Providers and other Internet institutions, make up what is frequently, if vaguely, referred to as “the Internet community.”

While in 1998 one could imagine the Web as extra-territorial, today states insist on being in the loop.

U.S. government projects, mainly military, were the basis for the Internet, and eventually the defense and intelligence sectors took themselves off the growing network for security reasons and developed their own systems. The Internet came under the purview of the National Science Foundation (NSF) and then, once it was seen to be primarily a commercial platform, it passed to the U.S. Department of Commerce, which contracted IANA to ensure the system stayed robust. By the latter half of the 1990s the Internet was clearly on its way to becoming a global platform, and, accordingly, the U.S. committed to giving up that Commerce contract at some point to a body or bodies to be determined. The theory was that the Internet should not be controlled by a state or states, not even by the state that had invented it—and that, in fact, continued to maintain a significant portion of its root infrastructure.

It hasn’t worked out quite that way. It is true that most data is commercially derived and privately held (in the fullest sense the Internet is overwhelmingly a private enterprise). But states are taking a growing interest in it. On the one hand, for a state to opt out of the Internet is difficult and expensive and means cutting itself off from the most geopolitically and economically important innovation of our era. On the other hand, being on the Internet poses grave dangers to a state in terms of security and economic vulnerability. While in 1998 one could imagine the Web as extra-territorial, today states insist on being in the loop.

At the Pittsburgh meeting, Tim Maurer of CEIP laid out the three-tier structure of the Internet: pipes and plumbing (servers, cable) at the bottom; protocols (set and maintained by IANA, ICANN, and the Internet community, with some government input) in the middle; and the content layer on top. What really keeps the Internet universal is not the plumbing part—servers and cables can be built—but the protocol layer. In 2016 the Commerce contract finally expired and responsibility for the protocol layer passed to the Internet community in an exceedingly complicated set of bureaucratic arrangements designed to escape capture by any one player, whether governmental, nongovernmental, or intergovernmental.
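
Maurer’s three tiers can be held in the mind as a simple table. A sketch, with illustrative rather than exhaustive entries at each layer:

```python
# Maurer's three-tier model of the Internet as a simple data structure.
# Layer examples and governance notes are illustrative, not exhaustive.
INTERNET_LAYERS = [
    ("content",            ["websites", "apps", "media"],
     "platforms and, increasingly, national 'tech sovereignty'"),
    ("protocols",          ["DNS root zone", "IP address allocation"],
     "IANA/ICANN and the wider Internet community"),
    ("pipes and plumbing", ["servers", "undersea cables"],
     "carriers and infrastructure owners"),
]

for name, examples, governed_by in INTERNET_LAYERS:
    print(f"{name:>18} | {', '.join(examples):<32} | {governed_by}")
```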

In Pittsburgh several participants noted that the momentum for this fundamental change in Internet governance—it is known by the easygoing term “the IANA transition”—really took off in 2013, after the Edward Snowden revelations, when states questioned just how hands-off the U.S. actually was and China began to look at an “alternative root,” that is, an entirely separate Internet. Lawrence E. Strickling, who as the Commerce Department’s assistant secretary for communications and information was as central as anyone to the IANA transition, celebrated the fact that the U.S. had kept “a 20-year-old promise” to put the Internet out of direct U.S. control. Fadi Chehadé, until recently the president and CEO of ICANN, emphasized that Internet governance was now in accord with the “distributed polycentric platform” that the Internet itself had become, while Malcolm Johnson, deputy secretary-general of the International Telecommunication Union (ITU), spoke of the UN agency’s experience in regulating radio spectrum, which is the medium for mobile Internet access.

There was satisfaction that the Internet had come through a difficult period with its independence from nation-states preserved. At the same time, when Paul Timmers of the European Commission insisted in Washington that there had to be international governance of some sort to ensure cyber security, nobody was able to come up with a plausible body that could make that happen. States cannot leave a security vacuum of such significance unfilled. Former U.S. ambassador to the UN Human Rights Council Eileen Donahoe argued in Pittsburgh that the top “content layer” of the Internet is now being carved up by states as they assert their “tech sovereignty.” Fadi Chehadé acknowledged that, for this layer, there are no guarantees of open, secure, and free content. He suggested there need to be transnational protocols to protect content, and that private companies have to shoulder more responsibility for online content—they do, after all, control the vast majority of it. A long-time advocate of cyber freedoms and human rights, Donahoe pointed meaningfully to technical possibilities (such as the SCION and DONA projects, the latter being managed by the ITU) that could enable states to refashion the Internet or even sidestep it altogether.

The consensus was that nearly all states accept the Internet while simultaneously resenting their dependence on it. For them, it creates points of vulnerability. Schemes for national-sovereign “data localization” are one attempt at a solution (from a state perspective); China’s “Great Firewall” is another. All such maneuvers depend on governments being able to force private companies to do certain things with data: a “global data sovereignty” movement that, as Ben Scott put it, will become increasingly hard to resist.

III.  Keeping the Human in the Loop

Paradoxically, perhaps, the ultimate form of state competition—warfare—is becoming more and more the province of machines.

In the end, it is state competition and distrust that are dividing up the Web. All states have an interest in controlling their citizens, and at this point such vast numbers of citizens are active on the Web—each of us is, as Andrew Moore showed, a dense cluster of data points linked to other person-clusters of data points—that states are gradually extending their monopoly of violence from the physical realm to the virtual one.

Paradoxically, perhaps, the ultimate form of state competition—warfare—is becoming more and more the province of machines. As David Brumley, director of Carnegie Mellon’s CyLab Security and Privacy Institute, told the Washington conference, the U.S., Russia, Israel, China, and India are increasingly investing in autonomous technology, including robotized soldiers. For Brumley, the key question is whether or when to “delegate a decision” to an autonomous hardware/software system engaged in virtual or real battle. Brumley observed that autonomous systems are already playing a key role in the U.S. military’s efforts to create a “third offset”: a decisive technological advantage that would give the U.S. global military dominance. (The first offset was nuclear weaponry; the second offset was highly accurate guided munitions.)

Brumley noted that the chief architect of the third offset, Deputy Secretary of Defense Robert Work, has stated that while Russia and China are also developing autonomous systems, the U.S. will always keep a human in the decision loop rather than fully automating—that is, programming a machine to take a decision to kill. But to play the devil’s advocate: if a machine identifies a suicide bomber, should it wait for a human to tell it to kill? If a system sees 60 incoming missiles, should it immediately shoot them down? Or should it wait—with perhaps disastrous consequences—for the human in the loop to pull the trigger?

Vintage illustration of a man with an electronic circuit board brain, 1949. (Photo: GraphicaArtis/Getty Images)
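
Brumley’s question can be stated almost mechanically. A minimal sketch of the delegation dilemma, with invented thresholds and timings: under a human-in-the-loop policy, what should a system do when the time to impact is shorter than the time a human needs to answer?

```python
# A minimal sketch of Brumley's delegation question. Thresholds and
# timings are invented; real rules of engagement are vastly subtler.
from dataclasses import dataclass

HUMAN_RESPONSE_TIME = 8.0  # assumed seconds for an operator to decide

@dataclass
class Threat:
    kind: str
    count: int
    seconds_to_impact: float

def engagement_decision(threat: Threat, policy: str = "human_in_loop") -> str:
    if policy == "fully_autonomous":
        return "engage immediately"  # the machine takes the kill decision
    # Human-in-the-loop: wait for authorization -- unless waiting
    # guarantees failure, which is exactly the hard case in the text.
    if threat.seconds_to_impact < HUMAN_RESPONSE_TIME:
        return f"dilemma: {threat.count} {threat.kind} arrive before any human can answer"
    return "alert operator and await authorization"

print(engagement_decision(Threat("missiles", 60, 30.0)))  # human has time
print(engagement_decision(Threat("missiles", 60, 4.0)))   # human does not
```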

Mary Wareham, advocacy director of the arms division at Human Rights Watch and global coordinator of the Campaign to Stop Killer Robots, noted in Washington that these conundrums are not new. Robots used to be relegated to, for example, cleaning ships; today, they have been transmogrified into long-distance warplanes. Daniel Reisner, former head of the international law department of the Israel Defense Forces (IDF), told the same audience that landmines are probably the oldest autonomous weapons.

Much of modern computing, including the Internet and GPS, was developed to improve the speed of decision-making of commanders who have to determine when to fire and at what. Andrei Kolmogorov used his pioneering work on probability to make Soviet artillery fire in World War II more accurate (the Moscow University Department of Probability Theory, which Kolmogorov headed, compiled ballistics firing tables). Most of the great wartime figures in computing innovation in the U.S.—Norbert Wiener, Claude Shannon, George Stibitz—worked on what is called “fire control,” just like Kolmogorov. Machines made it possible for humans to fire, or return fire, with a speed and accuracy that could never have been achieved otherwise. One of the central pioneers of postwar computing and the Internet, J.C.R. Licklider, wrote in a famous 1960 essay (its research funded by the U.S. Air Force) of his hopes for “the development of man-computer symbiosis. . . . Men will set the goals and supply the motivations, of course, at least in the early years.”
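
What fire-control work actually produced can be glimpsed in miniature. A toy firing table, using the simplest possible (no-drag) ballistic model, with an invented muzzle velocity:

```python
# A toy descendant of the wartime "firing tables": range as a function of
# elevation for a fixed muzzle velocity, in the no-drag (vacuum) model.
# Real tables corrected for drag, wind, and shot-to-shot dispersion, the
# kind of correction where probability theory enters.
import math

MUZZLE_VELOCITY = 450.0  # m/s, an invented figure for illustration
G = 9.81                 # gravitational acceleration, m/s^2

def vacuum_range(elevation_deg: float) -> float:
    """Level-ground range of a projectile, ignoring air resistance."""
    return MUZZLE_VELOCITY ** 2 * math.sin(2 * math.radians(elevation_deg)) / G

for angle in range(15, 50, 5):
    print(f"elevation {angle:2d} deg -> range {vacuum_range(angle):7.0f} m")
```

A gunner reading such a table is already delegating part of the firing decision to a calculation made in advance by a machine, which is the thread that runs from ballistics to today’s autonomous systems.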

It may be reassuring that automated warfare, like artificial intelligence, always seems to be just over an ever-receding horizon.

In Washington, former IDF attorney Daniel Reisner said: “I don’t know where cyber stops and kinetic [warfare] begins anymore.” Still, it may be reassuring that automated warfare, like artificial intelligence, always seems to be just over an ever-receding horizon. As Mary Wareham noted, the AI community talks a lot about how artificial intelligence can be “beneficial to humanity,” but Silicon Valley continues to draw the line at warfare. Lieutenant General R. S. Panwar, former colonel commandant of the Indian Army Corps of Signals, told the Washington conference that, in the end, the general in the field is the one responsible for the effect of whatever weapon he is using, dumb or smart. Lieutenant General Robert Schmidle, USMC, who was the first deputy commander of United States Cyber Command, emphasized in Pittsburgh that “the key is the decision maker, not the tool.”

Nonetheless, General Panwar also stressed that America’s third-offset goal is military predominance. This creates haves and have-nots. The have-nots will wish to be haves. They might well feel justified in developing automated systems that would leave them less at the mercy of American technological superiority. All the more so given the context laid out by William J. Burns, president of CEIP and former U.S. deputy secretary of state, at the start of the Washington conference. Burns sees an international order beginning to crumble and a return of great-power rivalry. Western-led globalization is being rejected; a fortress-like nationalism is taking its place. For some time, technology had seemed on balance to be advancing liberal norms. Now it is posing the gravest challenges to those very norms.

In retrospect, the Washington and Pittsburgh conferences, which took place just prior to the U.S. presidential election and the presidential inauguration, respectively, might have marked a turning point. One week into the Trump administration, senior members of ICANN, an American creation and the Internet’s main governing body, found themselves unable to attend a board meeting in the U.S. because of an executive order banning visitors from certain countries. There was then immediate talk of having to hold governance meetings for the Internet outside the country that had invented it. Meanwhile, the White House drafted an executive order that emphasized “preserving the ability of the United States to decisively shape cyberspace relative to other international, state, and non-state actors.” The international and the national are inherently in tension, but for now, in cyberspace, as in the terrestrial world, the national appears to have the upper hand.