Yes, Virginia, security affects usability
Peter Seebach (firstname.lastname@example.org), freelance writer
01 August 2003
Abstract: Peter gets cranky over incoherent or ill-considered advice about security. In this installment of The cranky user, he looks at how security affects usability and offers some background information on the threats computers face -- types of attacks, types of security holes, and how these problems affect users.
Know what two-word phrase immediately brings a sense of frustration and foreboding to my forebrain? "Critical updates."
My notebook's hard drive failed recently. As a result, I spent more than a month with a Unix-only laptop (yes, there are such things). After a month and a half, when I got the replacement drive, I restored my backups and decided to see whether Windows Update had anything new for me.
I found a few. In fact, five of them were considered critical updates; all were security patches. And while I was at it, I needed to update my anti-virus definitions.
As users, you and I spend a lot of time applying security patches and fixes of various sorts to our computers. Often, security features seem to exist primarily to disrupt the normal, convenient use of our computers. What's going on with this?
The term security implies that you need to secure your assets against something. In most cases, that's a person (or persons) who wants to do something illicit with your computer's resources. All over the news, you hear stories of hackers/crackers who use regular folks' machines to covertly perform some task. (The newsbits that grab the most headlines are the ones in which networks of porn spammers hijack susceptible machines to serve as tiny Web servers for their advertisements, making it harder to shut down the individual spam operations. Hey, sex sells.)
A program designed to hijack your computer and use it to send spam, or to use your bandwidth for someone else's Web site, is ripping off your resources. A program which deletes files or overloads a machine is designed to prevent you from using those resources.
The former kind of attack is not generally targeted at specific people or organizations. The attacker doesn't care who you are, as long as he can access your resources. The latter type of attack is generally targeted at a specific entity. The goal is to harm that entity.
For instance, a large-scale Distributed Denial of Service (DDoS) attack, the kind that occasionally makes the news, is usually not intended to accomplish anything other than shutting down the target systems. Curiously though, these attacks are often rooted in the first type of attack -- hijacking of other machines. An attacker will gain access to thousands of machines (which are then called zombies) and use them to launch large, coordinated attacks on a central target.
Most users are rarely the subjects of DoS attacks, but they may frequently be the subjects of attacks designed to take over their machines. A casual search through the Web logs on one of my machines turned up something like 30,000 attempts per month to hijack my personal Web server. In fact, the volume of such attacks can sometimes amount to an effective DoS attack on a small network connection.
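To get a sense of the scale on your own server, you can tally the obvious worm probes in your Web logs. Here's a minimal sketch in Python, assuming Apache-style access-log lines and a hand-picked list of probe signatures from the worms of this era; the pattern list and sample lines are illustrative, not exhaustive:

```python
import re

# Signatures of common automated probes seen in Web logs of this era:
# Code Red requested /default.ida, and Nimda hunted for cmd.exe and
# root.exe. This list is an illustrative assumption, not a complete one.
PROBE_PATTERNS = [
    re.compile(r"/default\.ida", re.IGNORECASE),
    re.compile(r"cmd\.exe", re.IGNORECASE),
    re.compile(r"root\.exe", re.IGNORECASE),
]

def count_probes(log_lines):
    """Count access-log lines that look like automated exploit probes."""
    hits = 0
    for line in log_lines:
        if any(pattern.search(line) for pattern in PROBE_PATTERNS):
            hits += 1
    return hits

# A few fabricated Apache-style log lines for demonstration:
sample = [
    '10.0.0.1 - - [01/Aug/2003] "GET /index.html HTTP/1.0" 200 1043',
    '10.0.0.2 - - [01/Aug/2003] "GET /default.ida?XXXX HTTP/1.0" 404 287',
    '10.0.0.3 - - [01/Aug/2003] "GET /scripts/../winnt/cmd.exe?/c+dir HTTP/1.0" 404 287',
]
print(count_probes(sample))  # 2 of the 3 requests are worm probes
```

Run against a real month of logs, a counter like this makes the "30,000 attempts" figure easy to verify for yourself.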
To recap, personal computer users can probably expect the first type of attack, in which the attacker doesn't care who you are and simply wants to use your machine's resources -- your bandwidth, disk space, and processing power -- for his own purposes.
Home users generally don't need to worry about the second type of attack, in which the attacker targets a specific person or organization and aims to deny the victim the use of those resources.
Businesses, organizations, agencies, and institutions can probably expect both kinds of attacks.
Let's look at some of the more common ways attackers get into a system.
Hackers can exploit several kinds of security holes. Some holes require that a malicious user have physical access to your machine. These kinds of security holes are generally the least harmful to most users, but they can nonetheless be devastating to a corporate network. Because such attacks require someone to be physically at the machine to breach system security, they're not generally a threat while you're using your computer; they're a threat when someone else uses it.
A second tier of attack depends partially on user action, but can be triggered unintentionally. The most obvious examples are modern macro-virus e-mail messages, most of which exploit security holes in specific applications (most often Outlook Express) to mail themselves to other machines. E-mail viruses come in a slew of different versions, and these attacks are often called "Trojan horses" because they depend on tricking a user into giving them access to a system. (Or, as in the case of Outlook viruses, on a very bad initial design.)
It's hard to issue a security patch for a program like this -- after all, it's not really a bug when the system allows the user to run a program he has legally purchased! A lot of spyware programs (designed to report your system usage back to the original software companies) are installed alongside other software and then act just like Trojan horses. If there is a distinction to be made, it is a subtle one.
Some attacks don't require the user to do anything -- they just require a computer to be on the network. Code Red was a famous example of this type of hole: it simply took advantage of a flaw in Microsoft's IIS server to break into any machine running IIS. It is crucial to obtain security patches for these bugs, because they can be exploited without any intervention on the part of the user.
First, and most obviously, your computer can be damaged by these attacks. But an insecure system is also a danger to others. Many of the more subtle viruses and worms have survived by being sufficiently invisible to the users affected -- the users simply don't notice. Their machines continue to do various harmful things without their owners' knowledge, and that makes the attacks hard to stop.
As a user, you are also affected because you probably have to download security updates and update anti-virus definition lists on a fairly regular basis. This is inconvenient and annoying, and often the solutions being pushed -- more frequent, automatic updating being the most common -- are only marginally more convenient than being attacked, and only marginally more effective than less frequent updates.
In many cases, a real fix is hard or impossible to implement in software. The guy who insists on opening every attachment sent to him, who always clicks "go ahead and allow this" when security software asks whether something should be allowed -- this guy cannot reasonably be protected by software.
Poor original design is the main culprit here. User interface design makes the problem harder to solve than it should be. Many programmers have made questionable, or just plain stupid, decisions about how their software handles incoming data.
One really bad design decision is letting programs run scripts by default (Microsoft's track record here isn't good, for example). Users are generally conditioned to think of data files as inactive and program files (executables) as active. Opening a data file should not affect your system or cause it to do anything on its own. Building in a feature that runs some macros automatically is a serious user interface problem. What's worse, the tremendous outcry about the resulting virus problems pushed the designers into an exceptionally poor solution.
Users were offered a choice: they could either open a file and run any associated macros, or not open the file at all. This is an exceptionally bad pair of choices -- in essence, it offered users the choice between infecting their systems and being unable to use the software they had paid for.
A better solution would have been to offer the user the option of viewing the data without running any macros. Better still would have been for the software not to allow automatic macros in the first place. Users who need to run a macro can easily do so, under their own control. If you look at all of the automatic invocations of macros in the history of Microsoft Word, I'd guess that viruses make up well over 90 percent of the instances.
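The difference between the two designs can be sketched in a few lines. This is a hypothetical illustration, not any real application's API: the bad design forces a run-everything-or-see-nothing choice, while the better design defaults to viewing the document with macros disabled.

```python
def run_macros(doc):
    """Execute every macro embedded in the document."""
    for macro in doc.get("macros", []):
        macro()

def open_document_badly(doc, user_allows_macros):
    """The two-choice design: run everything, or see nothing."""
    if not user_allows_macros:
        return None            # the user can't view the file at all
    run_macros(doc)            # any macro -- including a virus -- runs
    return doc["body"]

def open_document_safely(doc, user_explicitly_runs_macros=False):
    """The better design: viewing never runs macros automatically."""
    if user_explicitly_runs_macros:
        run_macros(doc)        # only under the user's own control
    return doc["body"]         # the data is always viewable

# A document carrying a (harmless, simulated) macro payload:
infected = {"body": "quarterly report",
            "macros": [lambda: print("macro ran!")]}

print(open_document_safely(infected))  # prints "quarterly report"; no macro runs
```

With the safe design, the payload only executes if the user deliberately passes `user_explicitly_runs_macros=True` -- exactly the "under their own control" behavior argued for above.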
A lot of security features in software are badly designed. Users are given confusing, semantically obtuse, or meaningless prompts. Furthermore, tests and checks are often so unreliable that users just go ahead and open files that might contain viruses or worms. The end result is that, when a real worm comes along, it runs instantly, because there's no way for the user to know that the anti-virus software that cried wolf isn't just crying wolf again.
No silver bullet is available yet. For now, users are stuck downloading security patches every time they turn around. Users can help substantially by taking a little time to educate themselves about the kinds of attacks to which their systems may be vulnerable.
In the long run though, security vendors (as well as any software maker that includes a security feature in its product) could do a lot of good by improving the user interfaces that security software provides.
This week's action item: Try to identify common elements between various famous e-mail worms, such as Melissa. Is this a good reason to argue for heterogeneous computing environments?
Peter Seebach thinks that commercial software programmers (and their bosses) should be fitted with electro-shock studs. Then, every time a user gets nailed by a virus simply because they didn't understand the security software's instructions, an e-mail would bang into a central database and issue a little shock to the responsible programmer (and marketing manager, and HR maven, and CEO). What a great illustration of how poor security programming affects usability. You can reach him at firstname.lastname@example.org.