By Donaldson Tan

Introduction

Timing has always been a critical element in strategy, so it is particularly relevant to ask why there now appears to be a globally coordinated effort by governments to encroach into cyberspace. This development also raises questions on whether the conditions of the Internet have changed such that it is now more governable, and whether the gap between real space and cyberspace is rapidly closing.

 

Global Momentum

Global momentum first began to take root when a joint Sino-Russian proposal (Anderson, 2011) titled the International Code of Conduct for Information Security was submitted to the United Nations (UN) General Assembly in 2011. While the proposal attracted much opposition, it galvanised many countries into action.

Although many countries could not agree on how the Internet should be governed (Couts, 2012), there remains consensus among governments to wrest the responsibility of Internet governance from the non-profit Internet Society and place the Internet directly into the hands of governments under the auspices of the International Telecommunication Union (ITU), an agency of the United Nations. In fact, the negotiation of the new ITU treaty (McDowell, 2012) to broaden its regulatory powers to cover the Internet is expected to continue until at least 2014.

 

Convergence of Sovereign Interests

So why now? For a long time, the Internet has been regarded as a lawless frontier because the borderless nature of the Internet makes it difficult for any government to project jurisdiction, as characterised (Bauml, 2011) by the 2000 landmark case Yahoo! Inc v. La Ligue Contre Le Racisme et L’Antisemitisme.

The French court (Bauml, 2011) had ruled that the American Internet firm Yahoo! violated French criminal law by allowing French citizens to access the sale of Nazi memorabilia on its website, although such an act may not be illegal in the United States. Yahoo! appealed to the American courts on First Amendment grounds to reverse the French judgement, but the American courts ultimately held that it was uncertain where the line should be drawn between the sovereignty of each country when it comes to regulating Internet content.

An international treaty would be required to resolve the uncertainty since this involves concession as well as adjusting sovereign boundaries between states. However, such a resolution would not be feasible unless states share other overarching interests concerning the Internet, besides arbitrating sovereignty in cyberspace. Two notable key interests are cybercrime and the militarisation of cyberspace.

Ronald Deibert, Director of the Citizen Lab at the University of Toronto (Deibert, 2012), noted that “the underworld of cybercrime has exploded worldwide by any estimate.” He cited a Norton study (Symantec Corp, 2011) which estimated that over 1 million people per day become victims of some form of cybercrime, and that the annual cost of cybercrime globally is US$114 billion. He further added that “there have been a series of high profile data breaches of major businesses, defence contractors, and government agencies worldwide, that appear to be continuing unabated”.

The militarisation of cyberspace is very real, and states have a legitimate interest in building up the capability to initiate or respond to cyber warfare. During the 2008 Russia-Georgia War in South Ossetia (Deibert, 2012), Georgian government ministries were subjected to a massive distributed denial of service (DDoS) attack. As a result, the Georgian government was unable to disseminate information to the public, and key infrastructure such as the financial sector was struck. Unfortunately, the Georgian government’s effort to mitigate the DDoS attack by filtering access to Russian news and information sources backfired: the lack of access to official Georgian information sources and Russian news resulted in an information blackout, planting fear and panic in the Georgian capital.

Another notable example of the militarisation of cyberspace is the deployment of the Stuxnet worm (Kushner, 2013) to cripple Iran’s uranium enrichment facility in 2010. Stuxnet is malicious code that targeted the Siemens Step7 industrial control software used to control the centrifuges at Iran’s uranium enrichment facility, spreading through infected Windows machines and USB drives rather than over the open Internet. Stuxnet caused the centrifuges to tear themselves apart by subverting their programmable logic controllers. Related malware, such as Duqu, Flame and Gauss, has since been discovered. Stuxnet is an example of cyber warfare used as an extension of a country’s foreign policy; in this case, the worm was used to advance the American and Israeli interest that the Islamic Republic of Iran should not proceed with its alleged nuclear weapons program.

The destructive potential of the Stuxnet worm should not be taken lightly. It raises the need for an international arms control regime to regulate weapon development in this arena. There may be certain no-go areas for weapon development. For example, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) prohibits the testing of nuclear weapons, constraining the development of new weapons by nuclear weapon states. At the same time, states may require international assistance in building capacity, as well as technical advice for emergency situations arising from cyber threats. Such assistance may be defined in an arms control regime. For example, the Organisation for the Prohibition of Chemical Weapons (OPCW) oversees the destruction of existing chemical weapons and their production facilities under the auspices of the Chemical Weapons Convention (CWC). The OPCW also provides specialised training to chemists and engineers in the safe management of chemicals and the implementation of the CWC.

 

Commerce: A Key Enabler

While cybercrime and the militarisation of cyberspace are sufficient causes for states to come to the discussion table, they are not the pre-conditions that enable governments to regulate the Internet. The regulability of the Internet depends on the government’s ability to answer the following question: who did what, where, and when?

At the beginning, this question was difficult to answer because the Internet was not built to address it. The Internet was built on the Transmission Control Protocol / Internet Protocol (TCP/IP) suite. The minimalist design of the TCP/IP suite (Lessig, 2006), although it enabled the suite to perform a wide range of functions, was never meant to identify and locate a person. On its own, its limited monitoring capability was too crude to be of any significant use for tracking down users. There was no identity meta-system, and IP addresses were organised logically, not geographically.

However, the explosion of commerce online changed all that (Lessig, 2006). In order to connect a customer to the Internet, the Internet Service Provider (ISP) must first identify him before assigning him an IP address at his point of connection, so that it can charge fees to the correct account. The ISP therefore knows both the customer’s identity and his location. When goods are bought and sold online, the identities of buyers and sellers are authenticated by intermediaries to assure that goods will be delivered and that payment will not bounce. Moreover, commercial technology now exists that maps IP addresses to geographic locations.
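The last point can be illustrated with a toy sketch: a commercial geolocation database is, at its core, a lookup table from IP address prefixes to locations. The prefixes and regions below are hypothetical illustrations, not entries from any real vendor’s database.

```python
import ipaddress

# Hypothetical prefix-to-region table; real commercial geolocation
# databases contain millions of such entries, sourced from ISP
# registrations and network measurements.
GEO_TABLE = {
    "203.0.113.0/24": "Singapore",
    "198.51.100.0/24": "Paris, France",
    "192.0.2.0/24": "California, USA",
}

def locate(ip: str) -> str:
    """Return the region of the longest matching prefix, or 'unknown'."""
    addr = ipaddress.ip_address(ip)
    best, best_len = "unknown", -1
    for prefix, region in GEO_TABLE.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best, best_len = region, net.prefixlen
    return best

print(locate("203.0.113.42"))  # → Singapore
print(locate("8.8.8.8"))       # → unknown
```

The point of the sketch is that once such a table exists, turning an IP address seen in a server log into an approximate physical location is a trivial lookup, which is exactly what makes the “where” in “who did what, where, and when” answerable.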

Today, electronic traceability is further enhanced as Internet companies (Bodle, 2011) have strong market incentives to reinforce the norms and attitudes that favour persistent user IDs, and to gather information on real entities for advertisers and other third-party businesses. Facebook, for example, uses algorithms to mine user information to serve relevant advertisements to its users. The same tracking capabilities (Bodle, 2013) also help governments identify and monitor people online and offline via cookies, social plugins and networks, cloud services, mobile applications and devices, and other intermediaries.
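The mechanics behind such cookie-based tracking are simple enough to sketch. The toy tracker below (all names are hypothetical, and real trackers are far more elaborate) shows how a single persistent identifier, set once and echoed back on every request, lets one intermediary link a user’s visits across otherwise unrelated sites:

```python
import uuid

# Toy tracker: one persistent ID cookie, assigned on first contact,
# ties together a user's visits across different sites that embed
# the same intermediary (e.g. via an advertising or social widget).
class Tracker:
    def __init__(self):
        self.profiles = {}  # user_id -> list of (site, page) visits

    def on_request(self, cookies: dict, site: str, page: str) -> dict:
        # Reuse the existing ID if the browser sent one, else mint one.
        user_id = cookies.get("uid") or str(uuid.uuid4())
        self.profiles.setdefault(user_id, []).append((site, page))
        return {"uid": user_id}  # cookie sent back with every response

tracker = Tracker()
cookies = {}
cookies = tracker.on_request(cookies, "news.example", "/politics")
cookies = tracker.on_request(cookies, "shop.example", "/cart")

# Both visits now sit under a single profile.
print(tracker.profiles[cookies["uid"]])
# [('news.example', '/politics'), ('shop.example', '/cart')]
```

Whoever holds the `profiles` table, whether an advertiser or a government with access to one, can reconstruct a user’s browsing history across every participating site.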

Certainly, Internet Service Providers are no longer the sole focal point of governments’ efforts to police the Internet. The enabling role of online commercial players has not gone unnoticed by lawmakers. Proposed trade agreements (Froomkin, 2008) such as the Anti-Counterfeiting Trade Agreement (ACTA) and intellectual property legislation such as the Stop Online Piracy Act (SOPA) would require Internet intermediaries to assist governments and others who seek to discover the identity of anonymous authors. There appears to be a trend of creating safe harbour laws to coerce and incentivise Internet intermediaries to cooperate with governments at the expense of end-users. Worse, both governments and Internet intermediaries stand to gain from compromising the privacy of end-users, so the tripartite relationship between end-users, governments and Internet intermediaries is decidedly lop-sided.

The key question now is whether a tipping point has been reached. Facebook founder Mark Zuckerberg (Schonfeld, 2010) proclaimed at Facebook’s inaugural f8 conference that the Internet is now social by default. This does not mean that every web service requires integration with social media, but rather that a user who does not participate in social media will find his Internet space significantly narrowed. The implication is that this tipping point creates a very strong incentive for users to adopt a social layer on top of their Internet experience, thus capturing the majority of users online. This social layer, although unintended, will aid governments in regulating the Internet because it is a key piece of infrastructure that helps answer the question: who did what, where, and when?

 

Going forward

In conclusion, the timing of the global momentum among governments to harmonise Internet policies is not coincidental. Two key conditions have matured, namely the convergence of sovereign interests and the emergence of commerce as an enabler for governments to regulate the Internet. No doubt the militarisation of cyberspace has brought governments to the negotiation table to deliberate on all shared interests related to policing the Internet. Moreover, as reflected by the Internet’s changing regulability, the software code of the Internet is central to gaining granular control over the vast majority of users.

Governments may not have developed the commercial software infrastructure that exerts such granular control, but it is pretty clear governments are demanding access to it. To borrow Joseph Nye’s chess analogy (Nye, 2011), the design of each chessboard is already determined for land, air and sea, so contestation is limited to changing the rules of the game. For the Internet, however, the gameboard itself can be changed by altering the infrastructure, protocol, or content layers to affect the actors and their actions.

 

Bibliography

Anderson, N. (2011, September 21). Russia, China, Tajikistan propose UN ‘Code of Conduct’ for Internet. Retrieved October 23, 2013, from Arstechnica: http://arstechnica.com/tech-policy/2011/09/russia-china-tajikistan-propose-un-code-of-conduct-for-the-net/

Bauml, J. E. (2011). It’s a Mad Mad Internet: Globalisation and the Challenges presented by Internet Censorship. Federal Communications Law Journal, 63, 697-732.

Bodle, R. (2013). Ethics of Online Anonymity. Computers & Society, 22-35.

Bodle, R. (2011). Regimes of sharing: Open APIs, interoperability, and Facebook. Information, Communication & Society, 14(3), 320-337.

Couts, A. (2012, December 14). Hooray! The UN didn’t take over the Internet after all. Retrieved October 23, 2013, from Digital Trends: http://www.digitaltrends.com/web/the-un-didnt-take-over-the-internet-afterall/

Deibert, R. J. (2012). Distributed Security as Cyber Strategy. Canadian Defence & Foreign Affairs Institute.

Froomkin, A. M. (2008). Anonymity and the Law in the United States. In LESSONS FROM THE IDENTITY TRAIL: ANONYMITY, PRIVACY AND IDENTITY IN A NETWORKED SOCIETY. New York: Oxford University Press.

Kushner, D. (2013, February 26). The Real Story of Stuxnet. Retrieved October 23, 2013, from IEEE Spectrum: http://spectrum.ieee.org/telecom/security/the-real-story-of-stuxnet

Lessig, L. (2006). Architectures of Control. In L. Lessig, Code 2.0 (pp. 38-60). Perseus Books Group.

McDowell, R. M. (2012, December 13). Commissioner McDowell’s Statement RE: Today’s Action at WCIT-12. Retrieved October 23, 2013, from Federal Communications Commission: http://www.fcc.gov/document/commissioner-mcdowells-statement-re-todays-action-wcit-12

Nye, J. (2011). The Future of Power. New York: Public Affairs.

Schonfeld, E. (2010, April 21). Zuckerberg: “We Are Building A Web Where The Default Is Social”. Retrieved November 1, 2013, from Tech Crunch: http://techcrunch.com/2010/04/21/zuckerbergs-buildin-web-default-social/

Symantec Corp. (2011, September 7). Norton Study Calculates Cost of Global Cybercrime: $114 Billion Annually. Retrieved October 23, 2013, from Symantec Corp: http://www.symantec.com/about/news/release/article.jsp?prid=20110907_02
