Peter Seebach (firstname.lastname@example.org) Freelance writer 01 Nov 2004
Abstract: Usability studies tend to focus entirely on software, ignoring the impact of hardware design and features on a system's usability. In this first installment of a two-part miniseries, Peter takes a look at the interactions between hardware and usability.
It's pretty easy to identify the things that annoy you when using a computer. Since these annoyances are typically software-based, software developers tend to get most of the flak for unusable systems and applications. But some problems have little or nothing to do with the zeroes and ones. In fact, some of the worst computer snafus have to do with poorly designed and implemented hardware.
The next couple of installments of The cranky user look at the negative effects of lousy hardware choices on usability. I'll start this one with a look at how poorly designed hardware impacts one of the most important features of your computer system: reliability.
The everyday annoyance of computer crashes is a recurring theme in this column, so I'll take that on first. While most computer crashes can be blamed on badly written software, adding hardware problems to the mix compounds the issue. In many cases, a hardware reliability problem won't make just one program unusable; it will make lots of them unusable. Worse, the behavior is very likely to be unpredictable. With a hardware problem, you can't just avoid doing the thing that triggers the bug. Furthermore, hardware errors reduce the likelihood that error reports (if you get them) will mean anything. Something reported as a "corrupt file" error, for example, may actually be bad memory reporting the wrong bits.
The combination of bad software and failing hardware can be very challenging to detect, let alone actually debug. If you've become accustomed to having to reboot once or twice a day, you probably think blue screens are fairly normal. With so many users trained to meekly accept (and acclimate to) software failures, it can be hard to detect that a given machine actually has a failing processor; after all, it's just doing what all the other machines in the office do, if marginally more often.
In the old days, you could use hardware-based workarounds, such as parity memory. Parity memory stored 9 bits for every 8 bits of system data; the ninth bit held the parity of the other eight, so the system could tell whether the data had changed since the last write. That made it possible to detect single-bit errors. On the downside, most older systems simply froze when such an error occurred; but at least you knew what had gone wrong.
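The ninth-bit scheme is simple enough to sketch in a few lines. This is an illustrative model only (real parity hardware does this in silicon, per memory word); the function names are mine:

```python
def parity_bit(byte):
    """Even-parity bit for an 8-bit value: XOR of all data bits,
    so data plus parity always holds an even count of 1-bits."""
    bit = 0
    for i in range(8):
        bit ^= (byte >> i) & 1
    return bit

def check(byte, stored_parity):
    """True if the word read back is still consistent with the
    parity bit that was stored alongside it."""
    return parity_bit(byte) == stored_parity

word = 0b1011_0010
p = parity_bit(word)              # the "ninth bit", written with the data
corrupted = word ^ 0b0000_1000    # a single bit flips in failing memory
assert check(word, p)             # clean read passes
assert not check(corrupted, p)    # single-bit error is detected
```

Note that parity can only detect the error, not say which bit flipped, and a two-bit flip cancels out and passes unnoticed; hence the freeze-and-report behavior.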
Unfortunately, parity memory was a solution suited to the predominantly single-tasking systems of its time. A memory error necessarily hit the one program you were running, and there were no other programs whose work you might have wanted to save before the system halted. Yesterday's solutions don't apply so well to today's multitasking systems, which is why many current systems rely on ECC.
Error Checking and Correction or Error Correcting Code (ECC) memory uses the same number of extra bits, but it uses them in larger batches. Through a fairly clever algorithm, ECC memory can detect up to two-bit errors, and actually correct single-bit ones.
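The detect-two, correct-one behavior comes from a Hamming-style code. Here's a minimal sketch of the idea using the classic Hamming(7,4) code over 4-bit words; real ECC DIMMs use a wider SECDED variant over 64-bit words, and the function names here are my own invention:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits as a 7-bit Hamming codeword.
    Positions (1-based): 1, 2, 4 hold parity; 3, 5, 6, 7 hold data."""
    d = [(nibble >> i) & 1 for i in range(4)]
    code = [0] * 8                      # index 1..7 used, 0 ignored
    code[3], code[5], code[6], code[7] = d
    code[1] = code[3] ^ code[5] ^ code[7]
    code[2] = code[3] ^ code[6] ^ code[7]
    code[4] = code[5] ^ code[6] ^ code[7]
    return code[1:]

def hamming74_correct(bits):
    """Return (data nibble, position of the flipped bit, or 0 if clean).
    The three syndrome bits spell out the bad bit's position directly."""
    code = [0] + list(bits)
    s1 = code[1] ^ code[3] ^ code[5] ^ code[7]
    s2 = code[2] ^ code[3] ^ code[6] ^ code[7]
    s4 = code[4] ^ code[5] ^ code[6] ^ code[7]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:
        code[syndrome] ^= 1             # repair the single flipped bit
    data = code[3] | (code[5] << 1) | (code[6] << 2) | (code[7] << 3)
    return data, syndrome

cw = hamming74_encode(0b1011)
bad = list(cw)
bad[4] ^= 1                             # flip bit at position 5
data, pos = hamming74_correct(bad)
assert data == 0b1011 and pos == 5      # corrected, and we know where
```

This is why an ECC system can both keep running and log the event: the syndrome pinpoints (and fixes) the bad bit instead of just flagging that something, somewhere, went wrong.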
ECC memory makes a significant difference in reliability. One of my friends ran a system for a couple of months with a memory module known to be failing, because with ECC it was reliable enough to depend on. Unfortunately, systems with ECC also cost more than those without it. Memory prices are a bit higher (often 10 to 20 percent), but system boards that can do error correction are usually significantly more expensive than their less reliable brethren.
ECC systems not only keep running after small errors but they can also tell you about them. Rather than a program just crashing, you get a notification that your system had a memory problem. That's a nice feature and I wish it were more widely available. In particular, I'd probably pay a fair bit more for a laptop computer with ECC support.
Unfortunately, the high cost of ECC means that all users lose. If you do any kind of serious work, one or two memory errors can easily cost you more in lost time and data than ECC hardware would have cost up front. Too bad that most vendors don't even offer ECC as an option, just because it costs a few dollars more to build the board.
Another downside of ECC is the temptation it can pose to the unscrupulous: vendors have been known to misrepresent the ECC compatibility of their products, presumably just to get in on the profits. One board vendor proudly boasted "ECC compatibility" on a motherboard that didn't do any kind of error correction, but could merely boot with ECC memory installed. That was misleading at best.
The thing about hardware is that, unlike software wrinkles (which can often go undetected), its annoying and undesirable aspects often show up right on the surface. For example, I've spent a stunning amount of time fighting with recalcitrant mice -- let the hours I've spent actually cleaning out greasy, gummy trackballs serve as my evidence.
Optical mice and trackballs have made a huge improvement in the quality of the user experience; if you haven't tried one yet I strongly recommend it. There's a lot to be said for a trackball that can be cleaned in 2 minutes instead of 20!
Worse than the obvious failures, though, are the great ideas gone wrong. The energy-saving mode for monitors is one example: it's theoretically smart, but doesn't always work so well in practice. The biggest problem with this feature is that it's never really under user control. The energy saver tends to lead to all sorts of problems in special cases (see below), and a glitchy monitor can be really hard to work around.
For instance, most monitors go into a low-power mode when they no longer receive a video signal. It's a reasonable thing to do and it probably saves a lot of power. Unfortunately, if you're trying to debug a problem that occurs very early in your system's boot process -- like during the POST (Power-On Self Test) phase -- low-power mode might pose a real problem. Any error message you get is likely to appear while the monitor is still warming up out of low-power mode, which means you won't see it.
One solution is to use a console switch and switch the monitor over to a machine that is producing a video signal. Another is to start adjusting the monitor: if it has an on-screen display it will likely warm up to show you the contrast adjustment.
Power mode can be just as annoying when it doesn't work as when it does. I use a 19-inch monitor on a KVM (keyboard, video, mouse) switch. For reasons beyond my comprehension, the power-save mode doesn't work. I wrote the vendor (IOGEAR) to ask about this and got back the following classic response:
This behavior is by design. What happens is that the KVM has "VSE" technology that is constantly sending video signals so that the monitor will not shut off. Otherwise it will drop the video.
Well, I knew that. The problem is that it's impossible for my monitor to go into power-saving mode, ever. I have to actually turn the monitor off when I want it to go into sleep mode. If I forget to turn it off and trust the computer's normal screen-blanker to do this (which works with every other KVM switch I've used), it will stay on, in full-power mode, chewing up electricity like it's free.
What I particularly liked about this vendor's response, though, was that the company actually came up with an acronym for "doesn't do what you want or what any of the specs say it should." While it doesn't exactly help, at least I know what's annoying me: "VSE" is annoying me. Now, why can't I turn it off? My query went unanswered.
No discussion of hardware nuisances is complete without acknowledging the countless injuries I've sustained from computer maintenance work. Surprisingly enough, most of them have come from the unexpectedly sharp edges on my hardware. The marginal cost of de-burring metal edges to make them a little less likely to cut the user is apparently too high for some cheap vendors to bother with. What's more surprising is that some vendors who otherwise provide high-quality parts have done the same thing; for instance, almost all old Macintosh computers have sharp edges.
Proper (and user-safe) hardware design is one of those things that ought to be taken for granted. There's no real reason for a computer's metal parts not to be de-burred; in fact, the razor-sharp edges I've seen on some cheap cases (and on one of my old Power Macs) suggest an almost deliberate process of honing!
Usability discussions tend to focus entirely on software, maybe because for many users it's the unknown, uncontrollable factor that defines their (bad) experience. But good hardware design is important too.
In this installment of The cranky user, I've looked at the impact of hardware on some of the most important areas of computer system design: reliability and error checking. I've also talked about the ongoing annoyance of features that don't work like they should (and vendors who won't 'fess up) and the most surprising "Ouch!" of all: externals that cut you.
The next installment on this topic will feature more reasons to think harder about hardware, as well as a checklist of hardware maintenance tricks that might significantly improve your user experience.
This week's action item: Write a vendor and ask for a way to disable an obnoxious feature or work around it. Who knows -- you might get an answer!
Peter Seebach has been using computers for years and is gradually becoming acclimated. He still doesn't know why mice need to be cleaned so often, though. You can contact Peter at email@example.com.