BLOG@CACM
HCI

Designing Effective Warnings

Posted by Carnegie Mellon Associate Professor Jason Hong

In my last blog entry, I gave an overview of some of the issues in designing usable interfaces for security. Here, I will look at more of the nuts and bolts of designing and evaluating effective user interfaces.

Now, there have been whole web sites, courses, and books devoted to how to design, prototype, and evaluate user interfaces. The core ideas all still apply: observing and understanding your users’ needs, rapid prototyping, iterative design, fostering a clear mental model of how the system works, and getting feedback from users through both formal and informal user studies.

However, there are also several challenges that are unique to designing interfaces dealing with security and privacy. Let’s look at one common design issue with security, namely security warnings.

Computer security warnings are something we all see every day. Sometimes these warnings require active participation from the user, for example, dialog boxes that ask the user if they want to run a piece of downloaded software or store a password. Other times they are passive notifications that require no specific action by the user, for example, letting users know that they are connected to WiFi or that the web browser is now using a secure connection.

Now, if you are like most people I’ve observed, you are either hopelessly confused by these warnings and just take your best guess, or you pretty much ignore most of them. And sometimes (perhaps too often) both of these situations apply.

There are at least three different design issues in play here. The first is whether the warning is active or passive. Active warnings interrupt a person’s primary task, forcing them to take some kind of action before continuing. In contrast, passive warnings provide a notification that something has happened but do not require any special actions from users. So far, research has suggested that passive warnings are not effective for alerting people to potentially serious consequences, such as phishing attacks. However, bombarding people with active warnings is not a viable solution either, since people will quickly become annoyed with being interrupted all of the time.
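To make the active/passive distinction concrete, here is a minimal sketch of how a system might route a security event to one style of warning or the other based on severity. The severity scale, threshold, and function names are all illustrative assumptions, not taken from any real browser or operating system.

```python
# Route a security event to an active (blocking) or passive (non-blocking)
# warning based on its severity. The 0-10 scale and cutoff are hypothetical.

ACTIVE_THRESHOLD = 7  # assumed cutoff: above this, interrupt the user

def choose_warning_style(severity: int) -> str:
    """Return 'active' for high-risk events that should interrupt the
    user's primary task, and 'passive' for low-risk status updates."""
    return "active" if severity >= ACTIVE_THRESHOLD else "passive"

# A suspected phishing page is high risk: interrupt the user.
print(choose_warning_style(9))   # active
# Switching to an encrypted connection is informational: notify passively.
print(choose_warning_style(2))   # passive
```

The design tension the research points to lives in that threshold: set it too low and users drown in interruptions; set it too high and serious threats like phishing get only a passive notice that most people never see.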

The second design issue here is habituation. If people see a warning repeatedly, they will become used to it, and it will lose its power to warn them effectively. Worse, people will come to expect the warning and simply swat it away, even when that was not their intended action. I know I’ve accidentally deleted files after confirming the action, only to realize a few seconds later that I had made a mistake.

A related problem here is that these warnings have an emergent effect. People have been trained over time to hit "OK" on most warnings just so that they can continue. In other words, while people might not be habituated to your warnings specifically, they have slowly become habituated to warnings in general.

The third design issue here is defaults. In many cases, you as the system designer will know more about what users should be doing and what the safer action is. As such, warning interfaces need to guide users towards making better decisions. One strategy here is providing good defaults that make the likely case easy (e.g. no, you probably don’t want to go to that phishing site) while making it possible, but not necessarily easy, to override.
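The safe-default strategy can be sketched as a dialog where the low-effort responses (pressing Enter, leaving the field blank, a typo) all fall through to the safe path, and only a deliberate, typed phrase overrides it. The function name, override phrase, and action labels below are assumptions for illustration only.

```python
# Sketch of a warning dialog with a safe default. Any response other than
# the explicit override phrase takes the safe path, so habituation-driven
# "swatting" of the dialog does the safe thing by default.

def confirm_risky_action(user_input: str) -> str:
    """Return the action to take for a suspected-phishing warning.
    Only a deliberately typed override phrase proceeds to the site."""
    if user_input.strip().lower() == "proceed anyway":
        return "visit-site"   # explicit, effortful override
    return "go-back"          # safe default: Enter, empty input, typos

print(confirm_risky_action(""))                # go-back
print(confirm_risky_action("proceed anyway"))  # visit-site
```

The point of the asymmetry is that a habituated user who reflexively dismisses the dialog lands on the safe action, while the rare user who genuinely needs the risky path can still take it.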
