September 10, 2002


by Andy Oram
American Reporter Correspondent

CAMBRIDGE, MASS.—The attacks of September 11, one year ago, not only left thousands of bereaved families but evoked in everyone a fundamental human anxiety. They cut through all psychological denial and left us facing our terrible vulnerability. And immediately, the question arose of the vulnerability of the world’s computers, networks, and complex information systems.

Furthermore, while we grew alarmed over the critical role these electronic networks play in our life and society, we bristled at the realization that they are also part of the communications infrastructure used by those who want to destroy this life and society.

Attacks on information networks can be divided into two types. The first uses the network itself to cause damage; this type raises scary scenarios of electronic grids being overloaded, critical water or fuel systems disrupted, air control towers blacking out, and industrial processes sent out of control. But such attacks are actually quite hard to carry off. While a few intrusions into such systems have been detected, they have not resulted in any disruptions.

The second and more likely attack is disruption of an information network in tandem with a more conventional attack, in order to slow down effective responses to a medical outbreak, explosion, or other emergency.

Even though the potential for the first kind of attack is uncertain, the risk is great enough to warrant drastic protection—to wit, taking key systems entirely off of public networks. And this solution was suggested by Bruce Schneier, the computer field’s most prominent security expert, in my article Cyber-security: Uncle Sam Needs You a couple of months after September 11. (Bruce Schneier’s views on security in general appeared in The Atlantic Monthly’s September 2002 issue.)

However, a lot of critical systems remain connected to the Internet. Last month, a security firm testing military computers succeeded in getting access to sensitive information by such garden-variety techniques as guessing generals’ passwords.

Obviously, total isolation is not a solution for everything. In October 2001, Bush advisor Richard Clarke released a call for a secure government network that would be completely separate from the Internet. A gargantuan caricature of the principle of disconnection discussed here, it was quickly ridiculed and little heard from again. What kinds of applications do its planners expect to run, and how can they keep off-the-shelf software secure? How can hundreds of thousands of government employees be monitored for security? How can routine information be exchanged on a daily basis between the supposedly secure network and the rest of the world?

Let’s turn to systems that must stay connected. Computer security, like security across society as a whole, can be achieved on multiple levels. On the high level, experts check immigrants’ visas and infiltrate suspected movements; I will examine the network equivalents (data retention and wiretapping) later.

But since these high-level initiatives are costly, scatter-shot, and often abusive, security must also be achieved at a lower level. Many think that the front line in security is provided by the individual guard checking badges at the gate, the individual baggage screener, and the individual police officer. These people do not actively spy (unlike the distasteful TIPS campaign suggested by Attorney General Ashcroft a few months ago) but simply carry out their jobs in a businesslike, alert manner.

And this is the level where computers must also be secured, as I pointed out several years ago in my American Reporter article Cyber Hygiene, Not Cyber Fortress Protects Our Networks. For any system on a network to be secure, all systems must be secured—and that happens through the individual attention given by their users or administrators to setting proper access rights, installing updates to buggy software, checking logs for suspicious behavior, and so on.

Some businesses have taken up this challenge. Thus, an Australian newspaper, The Age, reports that a “recent AusCERT study stated that 70 per cent of Australian organisations surveyed had increased spending on information security in the past year.”

Another study (by PricewaterhouseCoopers in the U.K.) found that 73% of the businesses it polled consider computer security a major issue; a similar percentage had experienced an “extremely serious” security incident in the previous year. Both of these figures had risen considerably since the year before. On the positive side, 76% express confidence that they can prevent or detect such incidents. But considering that they tend to spend small amounts on security, it may be over-confidence they are expressing.

So it’s hard to establish any one trend in security. (One widely cited article whose opening appeared alarmist, Year After 9/11, Cyberspace Door Is Still Ajar in the New York Times two days ago, turned out to have more positive things to report further down.) But there is a consensus that we are moving far too slowly in this grassroots endeavor.

Perhaps the problem is that network intrusions have become familiar over time. There is nothing special about the quality of attacks during the past year, post-9/11. Still, the quantity is noticeably higher. Every recent study shows an enormous increase in the number of cyber-attacks in the past year, sometimes even a doubling.

Security can be enhanced at another level: that of producing more robust products with fewer bugs. The declaration last January by Bill Gates that Microsoft would start making security its highest priority is good news (despite the obvious hyperbole involved), but it will be a long while before we see whether that spirit suffuses the company.

Meanwhile, security flaws turn up in software of every variety, week after week—even the software from places with the best reputations; even software designed explicitly for security. The vast majority of such flaws are based on buffer overflows, a result of common programming errors. And nearly all software development—open source as well as proprietary—continues to use the C or C++ languages, which permit such buffer overflows, rather than newer languages that incorporate strong protections against such errors.

Furthermore, new and poorly understood technologies create new chinks in an armor that is not too strong to begin with. One oft-cited example is the wireless LAN, which requires more care to keep secure than a network where wires are packed away behind walls. Yet standards for wireless have been weak on security; the situation is improving only slowly.

The Office of Homeland Security has promised to release a large report on September 18 that will lay out guidelines for computer and network security. Anything useful it has to say will simply be a repeat of the advice published by security consultants over the years. It will probably be similar to (if more detailed than) the generic security guidelines adopted by the European Union and the Organization for Economic Co-operation and Development (OECD).

On the other hand, there is real value in the Network Operations Center announced by the agency. I don’t share the fears of some observers that it will turn into a spy outfit (we’ve already got as many of those as we know what to do with). Rather, it appears to be an information clearinghouse where private organizations can report attacks, ask for advice, and share best practices.

Proposals for a small program that makes it simple to secure a Windows system, and for test software that helps sites check their security, are also commendable. But it is up to the users to carry through the suggestions; security simply cannot be centrally administered.

Since defensive measures are difficult and far from being universally employed, what about the pro-active side of fighting terror? What can governments do to catch the culprits?

Success here is elusive, and usually comes less through high-tech brilliance than through old-fashioned luck and tip-offs. The embarrassing difficulties the FBI had in trying to retrieve email from a Hotmail account opened by September 11 suspect Zacarias Moussaoui show that the expanded powers being handed to law enforcement by legislatures may not be exploited competently.

Perhaps the missing Hotmail messages would be there if the U.S. had a data retention law, like those already passed in numerous European countries. These laws require network operators (both phone companies and Internet providers) to preserve user data for a number of months or years so that the police can look at it later and search for evidence of a crime.

Several checks and balances are built into these laws: the police must go through a standard routine to obtain a court order, as with any search; the data retained is limited to “traffic data” (which means data used to set up and manage the connection, rather than the content sent over the connection); and the data can be discarded after a fixed period of time. The latter two limitations are not only legal safeguards against abuse, but measures to lighten the enormous storage burden these laws place on network operators.

Yet several factors make these laws more ominous than earlier laws governing police authority to tap phones and look for evidence.

First, traffic data on the Internet contains much more information than the numbers used on traditional telecom networks to route phone calls: such data could include the subject lines of mail messages and the terms used during a Web search.

Second, the accumulation of so much information, covering every transaction by every network user, tempts both governments and independent criminals to break in and mine the data for malicious purposes.

Third, the very existence of what is, in effect, a continent-wide distributed database represents a step toward the much-feared form of social control that European residents remember from the Nazi era. Indeed, much of the second half of the twentieth century was spent building up laws to protect individuals and groups against the collection of information; but in the wake of September 11 the pendulum is swinging back.

Data retention was legally permitted by a directive of the European Parliament in May; now the European Union is discussing the codification of such laws across the entire membership. Canada is considering something along the same lines. Many countries around the world are tightening their surveillance of networks, as documented in a recent report by independent privacy organizations EPIC and Privacy International.

Proponents promise to respect privacy laws and believe they can be made compatible with data retention. Despite these key departures from the European tradition of privacy protection, proponents present the new laws as a way to maintain balance—to let law enforcement do its traditional job in the face of challenges from new technologies.

In framing the discussion this way, EU parliamentarians have taken a leaf from the propaganda campaign waged in the U.S. for many years by administrations aiming to pass the Communications Assistance for Law Enforcement Act (CALEA). This law facilitated wiretapping in digital phone networks. It was passed after years of debate in 1994, whereupon several more years of debate followed in the Federal Communications Commission concerning how much data should be surrendered, and with how much oversight from courts.

CALEA comes up, several years later, as a topic in this article because one of the powers requested by the FBI—and explicitly denied by Congress—was the tapping of Internet communications. Barred from doing this through CALEA, the FBI tried to institute the practice informally through the use of a device called Carnivore. The legality of installing Carnivore at Internet providers’ hubs was being debated when September 11 hit; now the FBI has been granted this right in the PATRIOT Act of 2001.

Much has been written about the draconian provisions—many of which appear clearly unconstitutional—of the PATRIOT Act. It is no consolation to know that Ashcroft and Bush were seeking yet more power in the original version of the act, and it is certainly not reassuring to read the rebuke that a federal court has given the Justice Department for misusing evidence in some 75 applications for search warrants and wiretaps. But here I will stay focused on the provisions related to computers and networks.

For the most part, PATRIOT allows police to do whatever snooping and tracking online they were doing before, but with less court oversight. The FBI can demand customer information from a telephone company or ISP at any time, without a warrant. Nor do they need a warrant to install their Carnivore equipment and pick up traffic data.

Streamlined procedures, unfortunately, are easily abused procedures. Critics of the cyber-espionage provisions fear that police will go on fishing expeditions, and that unscrupulous inheritors of J. Edgar Hoover’s legacy will use the information for nefarious projects unrelated to crime or national security.

It requires no stroke of brilliance to compare the past year both to the Allies’ mobilization for World War II (as those who praise the year’s developments do) and to the Cold War (as those who deplore them do).

In the case of World War II, a frightening situation led to heroic exertions and a sense of pulling together. Yet while some initial and superficial gestures were made in that direction after September 11, the general trend has been more like the Cold War. To wit: the government loses all discernment and ability to judge situations properly; ideologues draw hard-and-fast lines along which they condemn many innocent bystanders; schemers milk the panic for undeserved financial gain and power.

If the Bush administration were serious about increasing security, it would invest the necessary money in buying sensors for biological agents, securing nuclear power plants, and guarding reservoirs and other public works. Instead, it rejects these basic measures and concentrates on pursuing its Orwellian goal of permanent war against one country after another.

Reports of censorship in the name of fighting terrorism since September 11 have appeared, both in the United States and abroad. Meanwhile, information useful to public interest groups, such as data on environmental hazards, gets taken down from government sites on the basis that it might be exploited by terrorists.

Certain consequences for the computer field and the growth of the Internet follow immediately from this atmosphere of suspicion. For instance, take the FBI’s reaction to wireless local networks, which represent the most promising extension of high-speed networking and user choice in the past decade.

While other agencies (particularly the FCC) move cautiously to promote such networks, and while community activists open them to everyone, the FBI shrilly proclaims that it will treat unauthorized use of such a network as a serious crime.

To be sure, wireless security is unsatisfactory, a situation that standards bodies are moving to fix. But while organizations should be aware of the need to protect their data, the notion spread by the FBI that these networks are hotbeds of potential crime simply imposes a chill on useful innovation.

The chill is even worse in Europe, in the wake of the data retention laws mentioned earlier. Although they have not so far been applied to cafes and community networks that permit anonymous access, a strict interpretation of the law would definitely rule out this beneficent development.

Privacy expert Simon Davies, in the September 2002 issue of the leading academic computer journal, the Communications of the ACM, decries the “arbitrary distinction between conventional technologies (the motor vehicle, telephone, and fax) that enjoy the protection of technological neutrality, and new technologies” that governments feel the need to penetrate and control.

In the midst of his survey of assaults on privacy (biometrics, interception of email, and so on) Davies issues a warning that we should keep in mind in all times and places: “Governments and their agencies have traditionally viewed new technologies with suspicion...[Most companies and government agencies] resist their implementation and attempt to use legal mechanisms to frustrate access to such technologies and techniques.”

We have seen how businesses and agencies are slow to implement useful security measures; instead, the government hysterically institutes lifelong punishment for computer intrusions. Music and movie studios are now shoving their way to the trough and trying to reclassify copyright violations as serious security breaches.

In the first weeks after September 11, while the rest of the country was mourning and reeling from the shock, large copyright holders were calculating their positions on the legislative chess board and trying to win from Congress an exemption from anti-intrusion laws: they wanted the right to break into and sabotage the computer systems of people they suspected of sharing copyrighted files! While they did not win this oligarchic privilege in the PATRIOT Act, they are trying again now.

The counterposition of fear and innovation is illustrated best by the history of the peer-to-peer phenomenon, which for a moment in 2000 and 2001 seemed to offer a source of valuable innovations in computer applications: distributed file systems, powerful search capabilities, flexible collaboration systems, peer journalism. Fear may keep us from realizing the promise of putting power on end users’ computers and fostering communities.

In a sense, security may be enhanced by more openness rather than closure. Creative security researchers are pursuing clever distributed techniques for sharing information about anomalies (potential break-ins, viruses, and denials of service) so that cooperating organizations can react to them more quickly.

Examples of these innovative experiments include a system that classifies normal behavior throughout a large network and flags anything abnormal, and registries where thousands of system administrators around the world can store patterns revealing suspected break-ins.

One can only hope that the shock of September 11’s revelations will make organizations realize that their safety lies in overcoming the current temptation to pull inward, and once again embrace change.

This work is licensed under a Creative Commons Attribution 4.0 International License.

Editor, O’Reilly Media