Friday, August 14, 2015

Medical Usability: How to Kill Patients Through Bad Design (nngroup)

Usability is often a matter of life or death. In a fighter plane's user interface, for example, shaving a second off the time required to operate targeting-and-firing systems gives pilots a dramatic edge in dogfights.

The most striking example of how bad design can kill comes from in-car user interfaces: thousands of deaths per year are related to drivers being distracted by overly complex designs. Conversely, good automotive design can save lives. As an example, take my new Lexus LS430's slightly nagging navigation system, which tells you far in advance whether the freeway exit you need will be to the left or the right. This feature gives you plenty of time to change lanes, rather than having to wait until the last moment, which is when you typically spot the road sign. (The number of people killed due to poor sign usability must be astounding.)

Medical systems have also provided many well-documented killer designs, such as the radiation machines that fried six patients because of complex and misleading operator consoles. What's less known is that usability problems in the medical sector's good old-fashioned office automation systems can harm patients just as seriously as machines used for treatment.
Field Study in a Hospital

In a recent Journal of the American Medical Association paper, Ross Koppel and colleagues reported on a field study of a hospital's order-entry system, which physicians use to specify patient medications. The study identified twenty-two ways in which the system caused patients to get the wrong medicine. Most of these issues are usability problems. I'll briefly discuss the ones of general interest here.

Misleading Default Values. The system screens listed dosages based on the medication units available through the hospital's pharmacy. When hospital staff members prescribed infrequently used medications, they often relied on the listed unit as being a typical dose, even though that's not the true meaning of the numbers. If a medication is usually prescribed in 20 or 30 mg doses, for example, the pharmacy might stock 10 mg pills so it can cover both dosage needs and avoid overstocking a rare medication. In this case, users might prescribe 10 mg, even though 20 or 30 mg would be more appropriate. The solution here is simple: each screen should list the typical prescription as guidance. Years of usability studies in many domains have shown that users tend to assume that default or example values apply to their own situations.
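As an illustration of that fix (my own sketch, not the hospital system's actual design), the idea is to store the pharmacy's stocked unit separately from a typical prescribed dose and present the latter as the on-screen guidance. The Python below assumes hypothetical field names and a made-up medication:

    # Minimal sketch: keep the pharmacy's stocked unit distinct from the typical
    # prescribed dose, and show the typical dose as guidance instead of letting
    # the stocked unit masquerade as a suggested prescription.
    from dataclasses import dataclass

    @dataclass
    class Medication:
        name: str
        stocked_unit_mg: int   # what the pharmacy keeps on the shelf
        typical_dose_mg: int   # what is usually prescribed

    def dose_prompt(med: Medication) -> str:
        # The stocked unit appears only as logistical detail; the typical dose
        # is what the prescriber actually sees as guidance.
        return (f"{med.name}: typical dose {med.typical_dose_mg} mg "
                f"(dispensed as {med.stocked_unit_mg} mg units)")

    # Hypothetical rare medication stocked in 10 mg units but usually given as 30 mg
    print(dose_prompt(Medication("Rarelyzumab", stocked_unit_mg=10, typical_dose_mg=30)))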

New Commands Not Checked Against Previous Ones. When doctors changed the dosage of a patient's medication, they often entered the new dose without canceling the old one. As a result, patients received the sum of the old and new doses. This common type of user error is the banking-website equivalent of specifying payment of the same amount to the same recipient twice in one day. Many bank websites catch such errors and ask you to double-check so you don't pay the same bill twice. In general, if users are doing something they've already done, the system should ask whether both operations should remain in effect or whether the new command should overrule the old one.
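To make that check concrete, here is a minimal Python sketch of such a safeguard; it is my illustration under assumed data structures (simple dicts keyed by patient and drug), not the order-entry system's real logic:

    # Minimal sketch: before accepting a new order, look for an active order for
    # the same patient and medication, and ask whether the new dose should
    # replace the old one or be added on top of it.
    def place_order(active_orders, new_order, confirm):
        """active_orders: list of dicts with 'patient', 'drug', and 'dose_mg'.
        confirm: callback that asks the user a yes/no question."""
        duplicates = [o for o in active_orders
                      if o["patient"] == new_order["patient"]
                      and o["drug"] == new_order["drug"]]
        for old in duplicates:
            if confirm(f"{new_order['patient']} already has an active order for "
                       f"{old['dose_mg']} mg of {old['drug']}. "
                       f"Replace it with {new_order['dose_mg']} mg?"):
                active_orders.remove(old)   # the new dose supersedes the old one
            # otherwise both orders stay in effect, but only after explicit confirmation
        active_orders.append(new_order)
        return active_orders

    orders = [{"patient": "Doe, J.", "drug": "heparin", "dose_mg": 5000}]
    place_order(orders, {"patient": "Doe, J.", "drug": "heparin", "dose_mg": 7500},
                confirm=lambda question: True)   # auto-confirm for the example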

Poor Readability. Because patient names appeared in a small font that was difficult to read, it was easy for users to select the wrong patient. The problem was compounded by the fact that names were listed alphabetically rather than grouped by hospital areas, which meant that users looking for a specific patient saw many similar names. Also, in individual patient records, the patient's name didn't appear on all screens, reducing the probability that users would discover the error before reaching a critical point in the interaction.

Memory Overload. At times, users had to review up to twenty screens to see all of a patient's medications. The well-known limits on human short-term memory make it impossible to remember everything across that many screens. In a survey, 72% of staff reported that they were often uncertain about medications and dosages because of the difficulties in reviewing a patient's total medications. Humans are notoriously poor at remembering exact information, and minimizing users' memory load has long been one of computing's top-ten usability heuristics. Facts should be restated when and where they're needed rather than requiring users to remember things from one screen to the next (let alone twenty screens down the road).

Date Description Errors. The interface let users specify medications for "tomorrow." When surgery ran late in the day and such orders were entered after midnight, "tomorrow" had already rolled over to the following calendar day, so patients missed a full day's medication.
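A simple defensive fix (again my own sketch, not the paper's recommendation) is to resolve a relative term like "tomorrow" into an explicit calendar date at order time and echo it back for confirmation, so an order typed in just after midnight doesn't silently slip a day:

    # Minimal sketch: translate "tomorrow" into a concrete date and show it back
    # to the user, so a post-midnight entry doesn't skip a day's medication.
    from datetime import date, datetime, timedelta

    def resolve_start_date(term: str, now: datetime) -> date:
        if term == "today":
            return now.date()
        if term == "tomorrow":
            return now.date() + timedelta(days=1)
        raise ValueError(f"unsupported date term: {term}")

    now = datetime(2005, 3, 9, 0, 30)            # just after midnight
    start = resolve_start_date("tomorrow", now)
    print(f"First dose scheduled for {start.isoformat()} -- confirm?")
    # Echoing the explicit date makes it obvious that "tomorrow" now means
    # March 10, one day later than the prescriber may have intended.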

Overly Complicated Workflow. Many aspects of the system required users to go through numerous screens in sequences that conflicted with the hospital's actual workflow. As a result, the system wasn't always used as intended. Nurses, for example, kept a separate set of paper records that they entered into the system at the end of their shift. This both increased the risk of errors and prevented the system from reflecting real-time information about the medications each patient had received. In general, whenever you see users resorting to sticky notes or other paper-based workarounds, you know you have a failed UI.
Methodology Weaknesses

To supplement their field observation of actual user behavior, the researchers administered a survey that asked hospital staff how often various errors had occurred during the previous three months. Unfortunately, the paper relies too heavily on this self-reported data in estimating the impact of the usability problems. It's well known that people have a hard time remembering what they do with computers. Valid data comes from what people do, not what they say.

When it comes to user errors caused by bad design, there's a further problem as well: If the interface fails to provide adequate feedback, users might not even realize that they've committed an error. With medication errors in particular, it's also quite possible that hospital staff might tend to minimize the extent to which patients get the wrong medication -- even when a survey guarantees anonymity.

I would have much preferred error-frequency estimates based on actual observations, rather than fallible human memory and possibly biased survey answers. Still, the survey indicated that many of the errors reportedly occurred at least weekly. If anything, the true error rate is probably higher than the self-reported estimates in the survey.

It's great to see usability branching out beyond its origins and being researched in a clinical epidemiology department. It's less great to observe methodological weaknesses that stem from studying usability issues without the benefit of the last twenty-five years' experience with usability research. Of the paper's sixty references, 92% are from medical journals and the like. Only five of the sixty references are from the human factors literature. And, despite the fact that the study related to software design, none of the five references are from leading journals, conferences, books, or thinkers in human-computer interaction.

Hospital systems offer just one example of the usability problems that proliferate in domain-specific systems. Such systems rarely get as much public exposure and analysis as websites do. Vendors often think that having domain experts on staff means that their software will work in the field. But the way people are supposed to work in theory never matches reality. The more specialized the system, the more you need user research to ensure success. From physicians to firefighters, if you don't observe real users and test your designs with them, you are guaranteed a plethora of usability problems.
Locating the Paper: More Usability Trials

I'm not a regular reader of the Journal of the American Medical Association; I discovered the study through an article in The New York Times. Unfortunately, getting from the newspaper to the paper it referenced was a trying ordeal.

I never cease to be surprised at the miserable usability of university websites. The Web was invented to disseminate academic papers, but it's almost impossible to find research results on academic websites.

In this case, I didn't know the paper's title, as it wasn't reported in the newspaper. I did have the lead author's name, so I searched for it and was promptly led to a faculty homepage at the University of Pennsylvania. Unfortunately, this page was useless, as are most faculty members' homepages. The most recent entry on the "selected publications" list was from 2002. The professor's main research interest was presented in colored text, creating a strong perceived affordance of clickability; nevertheless, it was not a link. The biography page offered no further information about the professor's research either. It did link to his full curriculum vitae (in PDF, oh woe), but the CV hadn't been updated since March 2003 and also contained no links.

Looking for the author failed to produce any information about the research. What about the academic institution responsible for the project? The newspaper handily provided the department's full name, making for an easy search. The top search result was the correct one, but the page title -- CCEB -- had almost no information scent. Further probing revealed that CCEB stands for "Center for Clinical Epidemiology and Biostatistics." With an entire line available to spell out their names, you'd think organizations would want to help poor outside users by doing so.

But this was far from the worst problem. Sadly, the university has almost no idea of how to use the Web for PR. On a day when the "CCEB" was featured on the front page of The New York Times' business section, the department's latest News page entry was ten months old.

Where the University of Pennsylvania failed miserably, the American Medical Association performed wonderfully: a search for Journal of the American Medical Association retrieved the journal's website as the first hit. The JAMA homepage offered a direct link to the article I wanted, in keeping with the homepage guideline to feature high-priority content.

JAMA's main navbar also had a prominent link to Past issues (unfortunately presented in low-contrast colors and ALL CAPS text). This link led to an archive that included the current issue. This is quite helpful for users -- like myself -- who don't realize their target content is actually in the current issue.

All that said, JAMA's website had plenty of usability problems, including a proliferation of undifferentiated More links that simultaneously hurt the homepage's scannability for sighted users, reduced accessibility for blind users, and prevented search engines from associating destination pages with meaningful keywords from the anchor text.

Mainly, though, JAMA did its job on the Web. Once I changed my strategy and searched for the paper's publisher, rather than the author and his academic institution, it took me about a minute to go from a major search engine to retrieving the paper's full text on the JAMA site.

The fact that academic websites are so miserable to use is surely a contributing factor to the isolating and narrowing effect of current research practices. If outsiders could more easily connect with research results in other disciplines -- where they don't know the scientists personally -- we might see more cross-fertilization and growth in our shared knowledge base. Indeed, a unified, worldwide hypertext system was the Web's founding motivation.

