June 2, 1998


by Andy Oram
American Reporter Correspondent

CAMBRIDGE, MASS.—You chose an obvious password, so it’s your fault. You chose to use your last name, or your cat’s name (forgetting that you mentioned it on your Web page), or a simple string like 1234. Now a malicious intruder is rifling with virtual fingers through the files on your organization’s server.

Or maybe it’s your system administrators’ fault. They didn’t enforce elementary precautions on the passwords their users chose. And they forgot to remove general user permissions from a tiny, tucked-away utility that writes critical system files. Now the intruder has total access to your server with system-administrator privileges.

Or could it be the computer manufacturer’s fault? Who made computers and networks so hard to administer anyway? And why didn’t they send out a fix for a well-publicized flaw that the intruder is now using to gain access to military sites where some of your organization’s users have accounts?

Finally, the fault may lie with organizations that rely so much on computers. Some tasks just call for a well-trained, well-trusted human hand. Maybe society has to learn not to place life-and-death matters so much on the (non-human) shoulders of computers.

Social scientists and computer experts have been warning about the security risks of high-tech for decades, and now there’s more chance society will respond. On May 22 the Clinton Administration announced a major initiative to spread information and pull together public efforts at securing computer networks, along with other parts of the nation’s infrastructure like transportation and water systems.

The President’s White Paper was timed well politically, coming right on the heels of Congressional hearings that retold (in somewhat exaggerated terms) the dangers of malicious damage to electronic networks. But the paper had its real roots in a report issued last October by a committee called the President’s Commission on Critical Infrastructure Protection.

Like the President’s Commission, the new White Paper recognizes that security requires intelligent education and coordination on many levels. Certainly, software companies can improve their products and organizations can set policies. But in addition, each individual is responsible for the security of the system where he or she has an account. Each system needs administrators with the time to install security patches and to follow up with frequent checks of log files and accounts.

Computer security, then, can be regarded as a kind of hygiene that depends on a combination of public policy and private care. Things don’t work so well if governments and large organizations see security as a cyberwar calling for top-down control.

Conceivably, a cyberwar could be launched by groups determined to wage destruction, but that doesn’t mean it can be successfully fought by pulling wagons into a circle. The whole point of networks is to allow sharing. Computers that don’t need to be open can simply be disconnected from the net, but all others must rely on their users to keep them safe.

Security experts have repeatedly insisted that the majority of intrusions come either from weak security measures taken by individuals or from deliberate inside sabotage. Following poorly chosen passwords, the next most common failures come from well-known security holes that could be easily fixed if the administrators had the time and inclination.
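The "elementary precautions" at issue are simple enough that they can be sketched in a few lines. What follows is a minimal illustration, not any real system's policy: the length threshold, the tiny stand-in wordlist, and the function name are all assumptions chosen for the example.

```python
# Illustrative sketch of a password-change check like the ones
# administrators were being urged to enforce. The rules and the
# wordlist below are assumptions for demonstration only.

COMMON_WORDS = {"password", "1234", "letmein", "qwerty"}  # stand-in for a real dictionary

def password_is_weak(password, user_info=()):
    """Return True if the password fails basic hygiene checks."""
    lowered = password.lower()
    if len(password) < 8:                 # too short
        return True
    if lowered in COMMON_WORDS:           # dictionary word
        return True
    if password.isdigit():                # simple strings like "1234"
        return True
    # Reject the user's own name, the cat's name, etc., if supplied.
    if any(info and info.lower() in lowered for info in user_info):
        return True
    return False

print(password_is_weak("1234"))                             # True (too short)
print(password_is_weak("Fluffy99xyz", user_info=["fluffy"]))  # True (pet's name)
print(password_is_weak("t7#Vq9!mZ2"))                       # False
```

A real deployment in 1998 would have hooked a check like this into the system's passwd program and used a full dictionary file rather than four words, but the principle — reject the obvious before the intruder tries it — is the same.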

For instance, in 1986, Clifford Stoll (who wrote the famous book “The Cuckoo’s Egg” about his work) uncovered an international espionage network that carried out its attacks by cracking common passwords and exploiting a bug in a text editor. Most sites these days know better how to protect their system-administrator accounts, but new services on the Internet (like CGI scripts that support Web forms and other forms of remote interaction) provide new chances for security lapses.

The “Internet worm” of 1988 was especially damning, exposing a festering rot in the accountability of computer companies. The worm, which did practically no damage but scared hundreds of network administrators into disconnecting temporarily, exploited a basic C library function that was widely known as a security risk, and which appeared in the sendmail utility (the software that used to run on almost every Unix computer to direct electronic mail).

The problem that enabled the worm was so well-known that hackers casually talked of “the sendmail bug.” Unix vendors could have fixed it with a one-line change in source code, but it remained on most computer systems years after its discovery. System administrators should remember this lapse as they indulge in their favorite pastime of bashing Microsoft for the security holes in Windows NT.

Windows NT simply hasn’t been out in the field very long, so its flaws have not had time to settle out as they have in older operating systems. Nevertheless, Microsoft is widely criticized because it doesn’t show the concern for fixing known bugs that would mark a company ready for a networked age. And while the President’s White Paper tries to summon companies to a greater effort, the government’s own computers are among the worst offenders in the security area.

The White Paper shows aspects of both hygiene and fortress. Just as we have to expect computer attacks by people ranging from the criminal to the merely curious, we should expect that the Administration and law enforcement groups will use the threat of attacks to strengthen their own hand at the expense of civil liberties. After all, the Clinton Administration has a surprisingly poor record among civil libertarians, despite its liberal rhetoric.

After the Oklahoma City bombing, which marked a historic turning point in Americans’ sense of security, Congress considered counterterrorism bills that called for severe intrusions into traditional civil rights. While the changes regarding wiretapping and other forms of surveillance dropped out along the way, the final bill signed in October 1996 still had troubling provisions. For instance, the President at his discretion can declare organizations anywhere in the world to be terrorist, and prosecute anyone who gives money to those organizations.

Previous Republican administrations, of course, have also contributed to the degradation in rights, as has the Supreme Court. Death row prisoners have a much harder time appealing their cases, even when they demonstrate gross miscarriages of justice in their original trials, and immigrants both legal and illegal have more trouble exerting their rights.

It is perfectly understandable that the White Paper brings the Department of Defense and the FBI into security procedures, along with creating some new agencies. But their participation could add weight to a fortress approach, such as attempts to require government access to individuals’ encryption keys.

I want law enforcement agencies and the courts to understand computer crime and prosecute it effectively. Attorney General Janet Reno has also suggested that private companies will be more willing to share problems in a closed session with law enforcement than in public discussion.

But measures that facilitate surveillance will dangerously weaken networks. Hopefully, the White Paper will fulfill its potential to provide everyone with an impetus to take much-needed measures, and not simply push a hidden agenda for social control.

As we read this past week of crackers breaking into user accounts at America Online—breaches caused by the most trivial of security lapses, in which customer service representatives forget to demand adequate proof of identity from people calling up over the phone—we should remember that we cannot depend on a government agency to protect our computer systems. The power lies, as it should in computer networks, at the grass roots.

This work is licensed under a Creative Commons Attribution 4.0 International License.
