Human-Computer Etiquette: Managing Expectations with Intentional Agents

Introduction


When I hit my thumb with a hammer, I get mad at myself, at the person making me do the work, and at the cruel and malignant universe, but I don’t get mad at the hammer. By contrast, when my computer crashes, or fails to give me a file that I know is there, or tells me that I failed to shut it down properly when it was the one that crashed, I get mad at it. True, if I stop to think about it, I may get mad at the people who designed, programmed, or last updated it, but my immediate reaction is more personal. Even my language, as I write this, is illustrative: I hit myself with the hammer, while my computer does things to me.

Why should that be? The computer is, at some level, just a hunk of silicon and plastic—every bit as inert as the hammer. Part of the answer has to do with its complexity and autonomy, and with its accompanying unpredictability in use. Somewhere between the hammer and the computer lies a threshold of complexity and capacity for autonomous action that, when surpassed, tempts us to ascribe agent-like qualities such as awareness and intent.

We interact with agents on a social level, and increasingly so as the complexity of behavior and interaction the agent affords increases [2, 8]. As Brown and Levinson [1] point out, social interactions are full of ambiguities, especially in understanding another’s beliefs, intents, and mental processes. Since we can’t predict or understand all of an agent’s actions, we develop codes and expectations about how agents should behave in various roles and contexts. These codes enable us to infer what specific behaviors mean in terms of the agent’s intentions and beliefs. Not surprisingly, we have evolved a wealth of such expectations for interactions with the most complex and unpredictable agents with whom we regularly work and live—other humans. Some of these codes are quite explicit—such as the communication protocols in which pilots and air traffic controllers are trained [3], or the rituals of a religious service—but others are subtle and largely implicit. These patterns or rules of expectation are what we call “etiquette.”

Etiquette has two related definitions in common usage [5]. First, etiquette is a (frequently implicit) set of prescribed and proscribed behaviors that permits meaning and intent to be ascribed to actions, thus facilitating group identification, streamlining communication, and generating expectations. Second, etiquette encodes “thoughtful consideration for others”; that is, etiquette operates (when obeyed) to make social interactions more pleasant, polite, and cooperative and (when violated) to make them insulting, exploitative, and unpleasant. In both senses, etiquette defines behaviors expected of (or prohibited to) agents in specific contexts and roles (see Figure 1). Etiquette allows me to make predictions about what those around me will do (for example, thank me if I give them a gift, acknowledge my presence if I enter a room) and to ascribe meaning to their behaviors or the lack of them (for example, they didn’t like the gift; they didn’t notice me).
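
The first definition can be read as a simple data structure: expectations indexed by role and context. The following minimal sketch (in Python; the rule names, the gift scenario, and the `interpret` function are illustrative assumptions of mine, not drawn from Figure 1 or [5]) shows how such a rule set both generates predictions and lets an observer ascribe meaning to a behavior, or to its absence:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EtiquetteRule:
    """A behavior prescribed (expected) or proscribed (prohibited)
    for an agent filling a given role in a given context."""
    role: str          # e.g., "recipient"
    context: str       # e.g., "receiving_gift"
    behavior: str      # e.g., "thank_giver"
    prescribed: bool   # True = expected of the agent; False = prohibited

def interpret(rules, role, context, observed):
    """Ascribe etiquette meaning to an observed behavior (or omission)."""
    expected = {r.behavior for r in rules
                if r.role == role and r.context == context and r.prescribed}
    prohibited = {r.behavior for r in rules
                  if r.role == role and r.context == context and not r.prescribed}
    if observed in prohibited:
        return f"{observed!r} violates the etiquette for a {role}"
    if observed in expected:
        return f"{observed!r} conforms to expectations"
    if expected:
        # The absence of a prescribed behavior is itself meaningful
        # (for example: "they didn't like the gift").
        return f"expected {sorted(expected)}, saw {observed!r}: a notable omission"
    return f"no expectations defined; {observed!r} carries little etiquette meaning"

# Hypothetical rules echoing the gift example in the text.
rules = [
    EtiquetteRule("recipient", "receiving_gift", "thank_giver", True),
    EtiquetteRule("recipient", "receiving_gift", "regift_immediately", False),
]
print(interpret(rules, "recipient", "receiving_gift", "shrug"))
```

Real etiquette is, of course, largely implicit and graded rather than an explicit rule table; the sketch only illustrates how role and context jointly determine which behaviors carry meaning.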

Computers long ago passed the “agentification barrier” and regularly elicit expectations and responses on our part as if they were human actors. In an extensive series of experiments, Reeves and Nass [9] demonstrated that humans frequently behave toward computers as they do toward other humans. Such behaviors include attraction to agents (whether computer or human) whose characteristics are similar to their own, being less critical of an agent when addressing it directly than “behind its back,” and being more willing to accept and believe flattery than criticism from an agent.

Reeves and Nass did not need to modify a basic windows-and-mouse interface to encourage perception of the computer as human: people readily generalize their expectations from human-human interaction to human-computer interaction, whether or not system designers intend them to.

Nevertheless, comparatively little attention has been paid to understanding and manipulating this dimension of human-computer interaction. Since a computer system will be perceived in light of the etiquette behaviors it adheres to or violates, it behooves designers to consider what etiquette our systems should follow or flout to elicit appropriate perceptions. For most home uses, this might mean politeness, subservience, helpfulness, and “the sensitivity of an intuitive, courteous butler” [4], but those behaviors might be inappropriate to exhibit to a pilot or a power plant operator. The study of Human-Computer Etiquette (HCE) should embrace both how to make computers more polite or human-like and how to avoid such effects when they are inappropriate.

My own journey along this path began with work on cockpit-aiding systems for fighter and rotorcraft pilots—not a population typically interested in polite or considerate behavior. Nevertheless, we noted that human pilots in dual-crew cockpits spent as much as a third of their time in intercrew coordination activities, that is, in “meta-communication” about their intents and plans. In designing a Cockpit Information Manager able to determine the pilots’ needs and intents and to reconfigure cockpit displays to support those activities [7], we believed such a system would need to participate in that meta-communication, taking instruction and declaring its intent and its understanding of the pilots’ intent. We designed and implemented a simple interface that provided these capabilities in a minimal fashion (see Figure 2). Introducing this interface improved joint human-machine system performance and contributed to gains in user acceptance [6]. In hindsight, I believe these improvements came from fitting into the existing etiquette pilots expected of any new actor in the domain. The interface we implemented did not follow etiquette in the sense of politeness, but it did behave according to the established conventions for any agent filling that functional role.
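
For illustration only, the meta-communication loop just described (the system declares its inferred activity; pilots confirm or override via button presses) might be caricatured as follows. Every name here, including the class, method names, and activity labels, is a hypothetical simplification of mine, not the actual Rotorcraft Pilot’s Associate design [7]:

```python
class CockpitInformationManagerSketch:
    """Toy model of the declare-and-override meta-communication loop;
    a deliberate caricature, not the real Cockpit Information Manager [7]."""

    ACTIVITIES = ("ingress", "attack", "egress")  # hypothetical task names

    def __init__(self):
        self.current_activity = None

    def infer_activity(self, evidence):
        """Pick the best-supported high-level crew activity from scored evidence."""
        self.current_activity = max(evidence, key=evidence.get)
        return self.current_activity

    def declare(self):
        """Declare the system's understanding so the crew can confirm or correct it."""
        return f"CIM: configuring displays for {self.current_activity!r}"

    def override(self, activity):
        """Model a pilot button press that corrects the system's inference."""
        if activity not in self.ACTIVITIES:
            raise ValueError(f"unknown activity: {activity!r}")
        self.current_activity = activity
        return f"CIM: acknowledged, reconfiguring for {activity!r}"

cim = CockpitInformationManagerSketch()
cim.infer_activity({"ingress": 0.2, "attack": 0.7, "egress": 0.1})
print(cim.declare())            # the system states its intent and understanding
print(cim.override("egress"))   # the pilot corrects it with a button press
```

The etiquette point lies in the loop, not the code: by declaring its inference before acting and accepting correction, the agent observes the same conventions a new human crew member would be expected to follow.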

Taking the etiquette perspective in design means acknowledging that complex, semiautonomous technologies will be regarded as agents and that human interactions with them will be colored by expectations from human-human etiquette. It forces us to consider aspects of human-computer relationships that traditional approaches do not. By placing the system to be designed in the role of a well-behaved, human-like collaborator, we gain insights into how users might prefer or expect a system to act. We can also infer how system actions (and failures to act) might be interpreted by users. Such insights rarely come from other design approaches (with the possible exception of usability reviews of systems already designed and implemented). I find it instructive to ask two questions of potentially agentified technologies. First: if this system were replaced by an ideal human assistant, albeit one constrained to act through the interface modalities available to the system, how would that assistant behave? Second: if a human assistant were to provide this interaction in this way, how would he or she be perceived by colleagues? To pick (unfairly, I acknowledge) on a well-known example: how would I regard a human office assistant who, several times a day, interrupted my work to offer me help writing a letter?


This collection of articles illustrates the beginnings of research that overtly considers etiquette in the design and evaluation of human-computer interactions. Some research explicitly manipulates etiquette behaviors to achieve a desired effect; other research considers the effect of etiquette in the analysis of existing or prototyped systems. Clifford Nass begins with a review of his seminal work demonstrating that, in many cases, humans apply the same etiquette to human-computer interactions that they do to human-human interactions. He speculates on which dimensions of behavior in an interaction are prone to activate our expectations of human-like etiquette behaviors (and our willingness to provide them).

Timothy Bickmore reports recent work on Embodied Conversational Agents (ECAs)—computer systems with an explicit face and body that enable them to exhibit very complex and subtle etiquette behaviors we associate with body language, facial expressions, and so on. By virtue of their sophisticated and human-like physical embodiment, ECAs represent the high (or at least, most complex) end of the spectrum of computer agents that exhibit etiquette. Bickmore summarizes the range of conversational functions that etiquette plays and provides examples of several ECA systems striving to achieve these functions.

Punya Mishra and Kathryn Hershey discuss the role of etiquette in pedagogical systems, where issues of motivation, interest, and the establishment of roles between student and teacher make the use of etiquette behaviors critical. They review experiments that explicitly test the implications of Nass’s paradigm—that humans frequently react to computers as if they were other humans in social settings—in pedagogical applications, showing some of the strengths and some of the limitations of this approach.

If the ECAs Bickmore describes represent the high end of systems striving for a wide range of human-like etiquette behaviors, Raja Parasuraman reports on work generally found at the other end of that spectrum—human interaction with high-criticality automation in domains such as aviation, power generation, and military systems. Here the consequences of human misunderstanding of automation capabilities and behaviors can be catastrophic, and consequently there has been extreme skepticism about machines that exhibit subtle, human-like behaviors, much less politeness. Nevertheless, Parasuraman summarizes the factors that produce and tune human trust in complex automation, then reports an experiment demonstrating that etiquette (whether good or bad) must be added to that list, since its effects can be as significant as a 20% variation in automation reliability.

While the other authors examine direct human-computer interaction, Jennifer Preece is more interested in the effects the computer can introduce into computer-mediated human-human interaction—how computer technology can enhance or disrupt the etiquette of face-to-face interaction by introducing artifacts of its own. She reports survey data on what users perceive as etiquette violations in Internet interactions, analyzes the causes of these perceived violations, and discusses the efficacy of potential solutions: explicit “netiquette” rules, and appeals to role models (moderators and early adopters) for smoothing the adoption of new etiquettes in different online settings.

As computers become more complex, smarter, and more capable, and as we allow them to take autonomous or semiautonomous control of more critical aspects of our lives and society, it becomes increasingly important to define styles, norms, roles, and even mores of human-computer relationships that each side can live with. The rules that govern such relationships are etiquette rules; here we argue that those who design and analyze human-computer interaction must become aware of those rules and learn how to incorporate their effects. The articles in this section illustrate a range of methods and outcomes that result from taking the etiquette perspective on human-computer interaction and thus provide a guide to the terrain we must explore more fully in the future.


Figures

Figure 1. Illustration of etiquette as prescribed and proscribed behaviors (collectively and separately) by role.

Figure 2. The Crew Coordination and Task Awareness Display of the Rotorcraft Pilot’s Associate. The system reports its inferences about high-level mission activities as task names; pilots can override via button presses.

References

    1. Brown, P. and Levinson, S. Politeness: Some Universals in Language Usage. Cambridge University Press, Cambridge, UK, 1987.

    2. Dennett, D.C. Brainstorms: Philosophical Essays on Mind and Psychology. MIT Press, Cambridge, MA, 1978.

    3. Foushee, H.C. and Helmreich, R.L. Group interaction and flight crew performance. Human Factors in Aviation. Academic Press, San Diego, CA, 1988.

    4. Horvitz, E. Principles of mixed-initiative user interfaces. In Proceedings of ACM SIGCHI Conference on Human Factors in Computing Systems. (Pittsburgh, PA, May 1999).

    5. Miller, C.A. Definitions and dimensions of etiquette. Working notes of the AAAI Fall Symposium on Etiquette for Human-Computer Work (2002). Technical Report FS-02-02. AAAI, Menlo Park, CA, 1–7.

    6. Miller, C.A. and Funk, H. Associates with etiquette: Meta-communication to make human-automation interaction more natural, productive and polite. In Proceedings of the 8th European Conference on Cognitive Science Approaches to Process Control. (Munich, Sept. 24–26, 2001), 329–338.

    7. Miller, C.A. and Hannen, M. The Rotorcraft Pilot's Associate: Design and evaluation of an intelligent user interface for a Cockpit Information Manager. Knowledge-Based Systems 12 (1999), 443–456.

    8. Pickering, J. Agents and artefacts. Social Analysis 41, 1 (1997), 45–62.

    9. Reeves, B. and Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press/CSLI, New York, 1996.
