The tensions of securing cyberspace: the Internet, state power & the National Strategy to Secure Cyberspace by Michael T. Zimmer

The National Strategy to Secure Cyberspace exposes a growing tension between the nature of the Internet and the regulatory powers of the traditional nation–state. The National Strategy declares, with all the strength and authority of the United States government, the desire to secure a space many consider, by its very nature, chaotic and beyond the reach of any organized or central control. This paper will argue that both the structural architecture of the Internet and the substantive values codified within it clash with governmental efforts to "secure cyberspace."

Contents

Introduction
A brief history of the National Strategy to Secure Cyberspace
The architecture of the Internet
The structural tensions with state power
The substantive tensions with state power
Conclusion


Introduction

In October 2002, while millions of people all over the world were working, shopping and surfing online, the Internet sustained a crippling "denial of service" attack. For close to one hour, the thirteen root servers that manage the Internet’s addressing system were bombarded by millions of bogus requests for information, overwhelming them with data until the servers failed. Seven of the thirteen root servers failed that day, and two others failed intermittently during the attack. Thanks to the distributed nature of the Internet’s architecture, ordinary users experienced no slowdowns or outages. Such denial of service attacks are common and easy to perpetrate, but the size and scope of this event made it unique. While similar attacks in February 2000 disrupted Amazon.com, eBay, Yahoo and other e–commerce sites for several hours, the coordinated attack in October 2002 — on all the root servers at once — was a rarity. This coordinated attack took place — perhaps by coincidence, perhaps not — only one month after the U.S. Department of Homeland Security released its initial draft of the National Strategy to Secure Cyberspace.

Initial reactions to this policy document focused on the boldness of the intent expressed within its title: securing cyberspace. The primary goal of the National Strategy is not simply to secure computers in the U.S., or to secure the nation’s vital infrastructures, or even to secure the physical networks. Instead, the American government intends to build strategies and enact plans to secure cyberspace, the ephemeral space that exists only in relation to the medium of the Internet. This is more than just an exercise in semantics. By declaring, with all the strength and authority of the U.S. government, that it hopes to secure a space most of us consider, by its very nature, chaotic and beyond the reach of any organized or central control, the National Strategy exposes a growing tension between the nature of the Internet and the regulatory powers of the traditional nation–state. This paper will argue that both the structural architecture of the Internet and the substantive values codified within it clash with governmental efforts to "secure cyberspace."


A brief history of the National Strategy to Secure Cyberspace

As the Internet has become an increasingly important part of daily life, businesses, governments and individuals alike have begun to realize how it might be exploited to threaten their interests. As early as 1997, scientists and government officials recognized the growing threat, as reported in the Proceedings from the Carnegie Mellon Workshop on Network Security:

"The United States Information Infrastructure faces a continuous barrage of attacks from hackers with an assortment of tools available to them. ... A plethora of easily accessed "tools" puts the capability for sophisticated, computer–based information warfare into anyone’s hands, regardless of geographical location, nationality, or motivation. ... The rapid growth rate of the Internet increases the feasibility of computer–based attacks while decreasing the chance of detection." [1]

For many, this threat has been taken more seriously since the devastating events of 11 September 2001, which exacerbated fears that the Internet could be used as a weapon against American assets. As President Bush’s message in the opening pages of the National Strategy demonstrates, the threat of a future cyber–attack is taken seriously at the highest levels of the government: "The policy of the United States is to protect against the debilitating disruption of the operation of information systems for critical infrastructures and, thereby, help to protect the people, economy, and national security of the United States" [2]. Steeped in civic idealism and putting forth an ever–optimistic "call to action," the National Strategy provides a blueprint for the way that government, corporations and individuals need to view Internet security.

The National Strategy to Secure Cyberspace identifies three strategic objectives: (1) prevent cyber–attacks against America’s critical infrastructures; (2) reduce national vulnerability to cyber–attacks; and (3) minimize damage and recovery time from cyber–attacks that do occur.

To meet these objectives, the National Strategy outlines five national priorities. The first priority, the creation of a National Cyberspace Security Response System, focuses on improving the government’s response to cyberspace security incidents and reducing the potential damage from such events. The second, third, and fourth priorities (the development of a National Cyberspace Security Threat and Vulnerability Reduction Program, the creation of a National Cyberspace Security Awareness and Training Program, and the necessity of Securing Governments’ Cyberspace) aim to reduce threats from, and vulnerabilities to, cyber–attacks. The fifth priority, the establishment of a system of National Security and International Cyberspace Security Cooperation, intends to prevent cyber–attacks that could impact national security assets and to improve the international management of and response to such attacks.

Ultimately, the National Strategy encourages companies to regularly review their technology security plans, and individuals who use the Internet to add firewalls and anti–virus software to their systems. It calls for a single federal center to help detect, monitor and analyze attacks, and for expanded cyber–security research and improved government–industry cooperation.

A national strategy is certainly both necessary and appropriate to deal effectively with the many problems of computer network security. However, despite the apparent relevance of such a plan amid the current administration’s "war on terrorism," the National Strategy seems to have slipped in importance for both the Bush administration and the information technology industry. One obvious indication was the dramatic decrease in the visibility of the National Strategy. Original plans called for the final version of the National Strategy to be released on 19 September 2002, complete with a presidential signing ceremony at Stanford University amid technology luminaries like Microsoft chairman Bill Gates. Instead, the White House decided to hold back the final plan and released a draft to seek further comment from the industry. After a few months of scattered coverage and debate — mostly in industry publications — the U.S. Department of Homeland Security unceremoniously released the final version of the National Strategy on Valentine’s Day, 14 February 2003 (Krebs, 2003).

The non–controversial nature of the final National Strategy drew sharp and immediate criticism (Krim, 2003; Fitzgerald, 2003; Fisher, 2003; Lemos and McCullagh, 2002; Forno, 2002). Many information security experts shared the criticism of Richard Forno (2002), who noted that the National Strategy simply "‘addresses’ various security ‘issues’ instead of directing the ‘resolution’ of security ‘problems’ — tiptoeing around the problems instead of dealing with them head–on and demanding results." Rather than target specific industry segments and require, through tough new laws and regulations, that they secure themselves, the National Strategy instead recommends that industry and individuals simply take greater care. Unlike earlier drafts that asked the private sector to take concrete steps to protect their systems, the majority of the final document directs the government to lead by example by tightening the security of federal information systems. Omitted from the final plan were proposals to require technology companies to contribute to a security research fund and for Internet service providers to bundle firewall and other security technology with their service. Adding to the National Strategy’s perceived weaknesses, the White House cyber–security czar, Richard Clarke, resigned from his post only two weeks prior to its release (Krebs, 2003). Without the continued support of Clarke — or someone else with equivalent political clout and technical knowledge — the National Strategy very well may "languish as just another policy document with plenty of good ideas but few teeth" (Fisher, 2003).

Such critiques are reasonable. The National Strategy is short on regulations and long on recommendations. True, more rigorous steps could be taken. The government could take steps to transform the architecture of the Internet to make it more regulable, thereby increasing national security. The government could require that all forms of encryption have a "back–door" for government to enter and examine the data. It could force Internet service providers to install security technologies that would require the use of government–issued personal digital IDs, effectively preventing anonymous access to the Internet. More radically, the government could mimic efforts in China to restrict and funnel access to the global Internet through State–controlled nodes, effectively creating a national intranet (Deibert, 2002b; Kalathil and Boas, 2001). While critics of the current National Strategy to Secure Cyberspace likely are not suggesting the government strengthen the National Strategy by taking such drastic measures, similar steps could be justified in the name of "national security," and would likely provide increased protection for the nation’s vital infrastructures.

Nevertheless, such efforts — indeed, any effort to "secure cyberspace" — conflict with the prevailing "nature" of the Internet. What most critics of the National Strategy fail to recognize is the rising tension between the nature of the Internet and the controlling tendencies of State power. This tension has both a structural and a substantive element. Structurally, the Internet is a global, distributed system governed by open and interoperable protocols, resulting in a non–hierarchical, end–to–end and anarchic network. These structural features of the Internet are obstacles for the exertion of State power. Richard Clarke acknowledged this structural tension when the National Strategy was first announced: "The government cannot dictate. The government cannot mandate. The government cannot alone secure cyberspace" (Lemos and McCullagh, 2002). Thus, the National Strategy, much to the consternation of its critics, stresses that primary responsibility for Internet security must come from its community of users, rather than from the government.

The structural explanation for the tension between the nature of the Internet and the government is only half of the story. There also is a substantive tension, that is, a tension between the very essence of the Internet, its biases and values, and the predilection of the government to exert control. For many, the Internet embodies a new libertarian utopia where freedom from State control reigns. Building from the structural nature of the Internet, we discover that the architecture of the Internet codifies certain substantive values — values which clash with governmental efforts to "secure cyberspace."

My intent is not to debate the benefits to national security (or the threats to civil liberties) that would follow if the provisions of the National Strategy were implemented, nor to suggest whether stricter or looser recommendations are appropriate. Rather, by placing their critique of the National Strategy squarely on its lack of "teeth," its detractors overlook the underlying tensions between the architecture of the Internet and the fundamental nature of State power. In this paper, I aim to illuminate these structural and substantive tensions, and to reflect on the fact that any attempt to reconcile them impacts both the national security efforts of the government and the very nature of the Internet as we now know it. As a first step to understanding these tensions, however, an overview of the Internet’s architecture is required.


The architecture of the Internet

Before we can understand the tensions that exist between the core values of the Internet and the government’s attempt to secure cyberspace, we first have to understand the architecture of the Internet. In his essay on the architectural principles of the Internet, Carpenter wrote, "the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network" (Network Working Group, 1996a). This section will provide a foundation for understanding three key architectural principles of the Internet: its packet–switching protocols, its end–to–end design, and the decentralized standard–setting process within which Carpenter’s essay itself was published and distributed.

The Internet, of course, is not a thing; it is the interconnection of many things — the (potential) interconnection between any of millions of computers located around the world. Each of these computers is independently managed by persons who have chosen to adhere to common communications protocols, particularly a fundamental protocol suite known as TCP/IP, which makes it practical for computers to share data even if they are far apart and have no direct line of communication (Hall, 2000; Stevens, 1994). The TCP/IP protocol suite makes the Internet possible. Its most important feature is that it defines a packet–switching network, a method by which data can be broken up into standardized packets that are then routed to their destinations via an indeterminate number of intermediaries. Under TCP/IP, as each intermediary receives a packet intended for a party further away, the packet is forwarded along whatever route is most convenient at the nanosecond the data arrives. By analogy, rather than telephoning a friend, one would tape–record a message, cut it up into several pieces, and hand the pieces to people heading in the general direction of the intended recipient. Each time a person carrying tape met anyone going in the right direction, he or she could hand over as many pieces of tape as the recipient could comfortably carry. Eventually the message would get where it needed to go.
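The mechanics of this analogy can be made concrete with a short simulation. The following sketch is a toy illustration in Python, not an implementation of TCP/IP; the packet fields, node names, and routing logic are invented for the example. It shows a message broken into numbered packets, each forwarded independently along whatever path happens to be convenient, and reassembled in order at the destination.

    import random

    PACKET_SIZE = 8  # bytes of payload per packet; an arbitrary choice for this demo

    def packetize(message: bytes, dest: str) -> list:
        # Break a message into numbered packets, each carrying its own addressing.
        chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
        return [{"dest": dest, "seq": n, "total": len(chunks), "payload": c}
                for n, c in enumerate(chunks)]

    def route(packet: dict, hops: list) -> list:
        # Forward a packet along whatever path is convenient at this moment;
        # no two packets need follow the same route.
        path = random.sample(hops, k=random.randint(1, len(hops)))
        return path + [packet["dest"]]

    def reassemble(packets: list) -> bytes:
        # The receiving end restores the original order, whatever routes were taken.
        return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

    intermediaries = ["node-a", "node-b", "node-c", "node-d"]
    packets = packetize(b"Neither sender nor receiver cares about the route.", "host-z")
    for p in packets:
        print(f"packet {p['seq']}/{p['total']} travelled via {route(p, intermediaries)}")
    print(reassemble(packets))

Run twice, the simulation will almost certainly report different paths for the same message, which is precisely the point: the route is incidental, and only the reassembled whole matters.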

Neither sender nor receiver need know or care about the route that their data takes, and there is no particular reason to expect that data will follow the same route twice. More importantly from a technical standpoint, the computers in the network can all communicate without knowing anything about the network technology carrying their messages. As Carpenter outlines, "The Internet level protocol must be independent of the hardware medium and hardware addressing" [3]. Because of the way the protocols were designed, any computer on the network can talk to any other computer on the network, resulting in a robust, interoperable peer–to–peer relationship.

The second key aspect of the Internet’s architecture is a design principle called "end–to–end" (Saltzer et al., 1997). With end–to–end design, the network does not choose how the network itself will be used. Control, or intelligence, is placed at the "ends," the computers used to access the network. Computers within the network are only required to provide the most basic level of service — data transport via the TCP/IP protocols. The network itself is kept simple, incapable of discrimination. Without intelligence embedded in the network, all packets that conform to the protocol are transmitted, regardless of content, regardless of intent, and without any knowledge (or care) of what types of applications or people are utilizing the packets on the ends of the network.
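A second sketch, again in Python with an invented packet format, may help clarify the division of labor that end–to–end design implies. The interior of the network checks only that a packet is well formed and forwards it by address; any interpretation of the payload is left to the end hosts.

    def core_forward(packet: dict, routing_table: dict) -> str:
        # A network-interior node: transport only, no inspection of content.
        if "dest" not in packet or "payload" not in packet:
            raise ValueError("malformed packet")  # the only check the core makes
        return routing_table[packet["dest"]]      # next hop chosen by address alone

    def end_host_receive(packet: dict) -> None:
        # An endpoint: this is where the "intelligence" lives, deciding what
        # the bytes mean and whether to accept them.
        print(f"application-level handling of: {packet['payload']!r}")

    routing_table = {"host-z": "node-b"}
    web_request = {"dest": "host-z", "payload": b"GET / HTTP/1.1"}
    junk = {"dest": "host-z", "payload": b"\x00\xff arbitrary bytes"}

    # The core forwards both packets identically; it cannot, and does not,
    # discriminate between a web request and arbitrary junk.
    for pkt in (web_request, junk):
        print("next hop:", core_forward(pkt, routing_table))
        end_host_receive(pkt)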

The TCP/IP protocol suite and the end–to–end design of the Internet have become standard practices among the Internet community. The decentralized nature of the standard–setting process represents a third key element of the Internet’s architecture. The Internet is not controlled by a single company or agency. Instead, the Internet is administered, if that even is the word, by an international, unincorporated, non–governmental organization known as the Internet Engineering Task Force (IETF), which allows unlimited grassroots participation and operates under a large, open agenda (Network Working Group, 2001; Borsook, 1995). In marked contrast to more traditional standards organizations (ANSI, ISO), the IETF has no strict bylaws, no board of directors, not so much as an official "membership" — anyone can register for and attend any meeting. "The closest thing there is to being an IETF member is being on the IETF mailing lists" [4]. The culture of the IETF encourages open and democratic participation. As long–time IETF member and MIT professor Dave Clark remarked, "We reject: kings, presidents, and voting. We believe in: rough consensus and running code" (Clark in Borsook, 1995).

A primary activity of the IETF is Internet standard–setting. The Internet Standards Process is concerned with all protocols, procedures, and conventions that are used in or by the Internet, including the TCP/IP protocol suite (Network Working Group, 1996b). While the process is somewhat complex, it is "designed to be fair, open, and objective; to reflect existing (proven) practice; and to be flexible" [5]. The process is gradual, deliberate and negotiated; it provides ample opportunity for participation and comment by all interested parties (Galloway, 2004). At each stage of the standardization process, a specification is repeatedly discussed and its merits debated in open meetings and/or public electronic mailing lists, and it is made available for review via worldwide on–line directories. This is accomplished through the extensive use of "Request for Comments" (RFC) documents. RFCs cover a wide range of topics in addition to Internet standards, from early discussion of new research concepts to status memos about the Internet to philosophical and historical treatments of the Internet. RFCs have become the principal means of open expression in the computer networking community, the accepted way of recommending, reviewing and adopting new technical standards. As Galloway notes, the RFC process "is a peculiar type of anti–federalism through universalism — strange as it sounds — whereby universal techniques are levied in such a way as ultimately to revert much decision–making back to the local level" (Galloway, in press). The result is a working anarchy.

Combining these three features of the Internet — packet–switching protocols, end–to–end network design, and its standard–setting process — results in a distributed, and essentially un–intelligent, computer network rooted in an anarchic ethos. As Vaidhyanathan reminds us, this anarchy is not necessarily chaotic and dangerous: "Anarchy is organization through disorganization. ... anarchy is a process, a set of behaviors, and a mode of organization and communication" (Vaidhyanathan, in press). It is this sense of anarchy, what Vaidhyanathan labels "information anarchy," that becomes the core value of the architecture of the Internet. In his outline of the architectural principles of the Internet, Carpenter recognizes its anarchic nature outright: "In search for Internet architectural principles, we must remember that technical change is continuous in the information technology industry. The Internet reflects this. ... The principle of constant change is perhaps the only principle of the Internet that should survive indefinitely" [6]. This anarchic nature of the Internet inevitably — perhaps automatically — challenges any principle of stability or authority within its reach. Unsurprisingly, tensions emerge when this "information anarchy" is confronted with government attempts to "secure cyberspace."


The structural tensions with state power

The previous section detailed the architecture of the Internet, illustrating the common description of the Internet as a distributed, anarchic network. These structural features of the Internet are obstacles for the exertion of State power. As the network spreads and as communication flows become both more opaque and swift, three key structural tensions rise to the surface in relation to the government’s national security efforts.

First, the Internet lacks any central locus of control from which to monitor and/or block potentially harmful actions. By the very design of the Internet’s architecture, there are no central nodes through which all information passes [7]. Nor is there any single route through which particular messages travel, as the packet–switching protocols partition and distribute data across numerous independent trajectories along the network. The distributed nature of the Internet makes any form of central monitoring or control almost impossible, complicating any governmental attempt to monitor network traffic for potential threats.

Even if centralized monitoring were possible, however, the protocols and end–to–end design of the Internet present a second tension vis–à–vis governmental control: the network is indiscriminate as to its content. As noted above, data is routed throughout the Internet without any knowledge or prejudice of what is being transmitted. With the intelligence of the Internet residing at the ends, the network itself is unable to determine the content or purpose of any particular packet. The architecture of the Internet makes it impossible to distinguish between packets that have malicious intent and those that are benign. This constrains any desire by the government to identify potentially harmful packets as they pass through the network [8].
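The point can be illustrated with a small sketch. The header layout below is hypothetical (it is deliberately simpler than a real IP header), but it captures what a node inside the network actually reads: addresses and a length. Nothing in those fields distinguishes a benign payload from a malicious one.

    import struct

    # Hypothetical fixed header: 4-byte source, 4-byte destination, 2-byte length.
    HEADER = struct.Struct("!4s4sH")

    def router_view(raw: bytes) -> dict:
        # Everything a network-interior node sees: addresses and a length.
        # The payload is opaque bytes; "intent" is not a field in any header.
        src, dst, length = HEADER.unpack(raw[:HEADER.size])
        return {"src": src.hex(), "dst": dst.hex(), "length": length}

    benign = HEADER.pack(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", 13) + b"hello, world!"
    attack = HEADER.pack(b"\x0a\x00\x00\x03", b"\x0a\x00\x00\x02", 13) + b"exploit bytes"

    # From inside the network, the two packets are indistinguishable in kind.
    print(router_view(benign))
    print(router_view(attack))

Both calls print the same kinds of fields; only an end host, which actually interprets the payload, is in a position to judge what a packet is for.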

Finally, a third structural tension emerges from the supra–national character of the Internet. It becomes increasingly difficult for a State, restricted within its artificial borders, to enforce its rules and laws over a medium that is oblivious to geography. Regarded from the perspective of national security, the concept of national borders gradually becomes less clear with the worldwide expansion of networks. The Internet — with its end–to–end design and non–discriminatory protocols allowing unfettered international access and use — is a "new continent that knows neither borders nor treaties" [9]. In this way, the supra–national quality of the Internet poses not only a structural constraint on a State’s ability to govern the medium beyond its borders, but also substantively changes the very notion of territorial sovereignty.


The substantive tensions with state power

The rise of information technologies, including the Internet, impacts the way governance is organized and power is exercised in our society. As Castells notes, "Networks constitute the new social morphology of our societies, and the diffusion of networking logic substantially modifies the operation and outcomes in processes of production, experience, power and culture" [10]. This poses immense constraints on any government’s attempt to secure cyberspace. While the structural tensions noted above seem clear, more abstract constraints to State power lurk just below the surface, exposing deep substantive tensions. These include challenges to the hierarchical structures of the nation–state, the blurring of territorial boundaries, and general resistance to power in a society increasingly focused on control.

Information technology networks contribute to the departure from traditional hierarchical authoritative contexts privileging nation–states. As Arquilla and Ronfeldt explain, the rise of global information networks sets in motion forces that challenge the hierarchical design of many institutions:

"It disrupts and erodes the hierarchies around which institutions are normally designed. It diffuses and redistributes power, often to the benefit of what may be considered weaker, smaller actors. It crosses borders, and redraws the boundaries of offices and responsibilities. It expands the spatial and temporal horizons that actors should take into account. And thus, it generally compels closed systems to open up." [11]

As a consequence of the Internet’s capacity for anarchic global communication, new global institutions are being formed that are sustained predominantly by network rather than hierarchical structures — examples include peer–based networks such as Slashdot.org, or even the IETF itself. Such global, interconnected networks help to flatten hierarchies, often transforming them altogether into new types of spaces where traditional sovereign territoriality itself faces extinction.

The substantive tension between the traditional territoriality of the State and the push to control the supra–national Internet is immense. A State is conv
