Promoting Personal Responsibility For Internet Safety

Online safety is everyone's responsibility—a concept much easier to preach than to practice.

How can we encourage Internet users to assume more responsibility for protecting themselves online? Four-fifths of all home computers lack one or more core protections against virus, hacker, and spyware threats [6], and security threats in the workplace are shifting to the desktop [7], making user education interventions a priority for IT security professionals. It is logical, then, to make users the first line of defense [10, 12]. But how?

Here, we present a framework for motivating safe online behavior that draws on prior research, and we use it to evaluate some current nonprofit online safety education efforts. We also describe some of our own (i-Safety) findings [4] from a research project funded by the National Science Foundation (see Table 1).


Threat Appraisal: The Fear Factor

The most obvious safety message is fear. This strategy is found at all online safety sites in Table 1. Sometimes it works. Among students enrolled in business and computer science courses, awareness of the dangers of spyware was a direct predictor of intentions to take protective measures [2].

More formally, threat appraisal is the process by which users assess threats to themselves, including the severity of the threats and their own susceptibility to them. Examples of these and the other user education strategies discussed here, along with the names of related variables found in prior research and the empirical evidence supporting them, are shown in Table 2. The subheadings in the table are organized around the headings in this article and reflect key concepts in Protection Motivation Theory (threat appraisal, coping appraisal, rewards and costs), the Elaboration Likelihood Model (involvement), and Social Cognitive Theory (self-regulation). The interested reader will find an overview of these theories in [1].

It is unfortunate that communication about risk is surprisingly, well, risky. It often fails to motivate safe behavior or has weak effects. And there can be “boomerang effects,” named for the shape of the nonlinear relationships sometimes found between safe behavior and fear [8, 11]. Moderate amounts of fear encourage safe behavior. Low amounts of fear diminish safety, because the threat is not seen as important enough to address. However, intense fear can also inhibit safe behavior, perhaps because people suppress their fear rather than cope with the danger. In our own research involving students from social science courses (who were probably not as knowledgeable as students depicted in [2], but perhaps closer to typical users), we found a boomerang pointing in the opposite direction. Moderate levels of threat susceptibility were the least related to safe behaviors like updating security patches and scanning for spyware, while users with both high and low levels of perceived threat were more likely to act safely. The point is, without knowing the level of risk perceived by each individual, threatening messages have the potential to discourage safe behavior.
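The boomerang pattern has a methodological implication: a purely linear analysis will miss it. As a minimal illustration, the following Python sketch (using simulated data; the variable names and effect sizes are invented for illustration, not drawn from our study) tests for a curvilinear threat-behavior relationship by adding a quadratic term to an ordinary regression.

    # Illustration only: simulate a U-shaped relation between perceived
    # threat and safe behavior, then test for it with a quadratic term.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    threat = rng.uniform(1, 7, 500)          # 7-point threat-perception index
    behavior = 0.3 * (threat - 4) ** 2 + rng.normal(0, 1, 500)

    df = pd.DataFrame({"threat": threat, "behavior": behavior})
    fit = smf.ols("behavior ~ threat + I(threat ** 2)", data=df).fit()
    # A significant coefficient on I(threat ** 2) signals a boomerang
    # (curvilinear) relationship that a linear model alone would miss.
    print(fit.params)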


Coping Appraisal: Building Confidence

Users also evaluate their ability to respond to threats by performing a coping appraisal. Building self-efficacy, or confidence in one’s abilities and in the safety measures used, is perhaps the most effective education strategy. Self-efficacy is the belief in one’s own ability to carry out an action in pursuit of a valued goal, such as online safety. Perceived behavioral control is a related concept that builds on the notions of controllability and perceived ease of use to predict intentions to enact safety protections [2]. Self-efficacy is distinguishable from “actual” skill levels in that we may feel confident tackling situations we have not encountered before and, conversely, may not feel confident enacting behavior we mastered only during a visit from the IT person months ago.

Beliefs about the efficacy of safety measures are also important. We call this response efficacy in the present framework, although others have identified it as the relative advantage of online protections [5]. Our confidence in our computer’s capability to handle advanced protective measures (computing capacity in [5]) is another response efficacy issue. Self-efficacy and response efficacy have the most consistent impact on safe behavior across many safety issues, and we [4] and others [2, 5] have verified their importance in the online safety domain.

Efficacy has a direct impact on safe behavior, but it also interacts with risk perceptions. Fear is most likely to work when threat information is coupled with information about how to cope with the threat, since the coping information raises self-efficacy. When messages arouse fear but offer no rational means of dealing with it, people are likely to deny that the danger exists or that it is likely to affect them [11]. In Internet terms, that defines the “newbie.”

Not all user education sites include self-efficacy messages, and some that do set unrealistic expectations: “You can easily keep yourself safe if you just perform these two dozen simple network security tasks daily.” Persuasion attempts are a proven approach to building self-efficacy, and anxiety reduction is another, but both can backfire if safety measures are complex, are perceived to be ineffective, or could make matters worse. The most effective approach is to help users master progressively more difficult self-protection tasks.

Mismatches among threat perceptions, self-efficacy, and response efficacy could explain why so many users fail to enact simple spyware protections [2, 9], as well as the inconsistent findings of previous research. Some users may not perceive the seriousness of the threat; novice users (such as those surveyed in [9]) may lack the self-efficacy required to download software “solutions”; others may doubt the effectiveness of the protection. In a sample composed mainly of industry professionals [5], a self-efficacy variable (perceived ease of use) did not predict intentions to enact spyware protections, but perceptions of response efficacy (relative advantage) did. Possibly the industry professionals had uniformly high levels of self-efficacy but divergent views on the effectiveness of spyware protections, so only the latter mattered.


Rewards and Costs: The Pros and Cons of Safety

Users perform a mental calculus of the rewards and costs associated with both safe and unsafe behavior. The advantages of safe behavior are not always self-evident, and there are negative outcomes (the cons) associated with it: safety demands the time and expense of obtaining protective software and keeping it updated. These negatives must be countered so that fearful users do not invoke them as rationalizations for doing nothing. We can also encourage safety by disparaging the rewards of unsafe behavior, such as those touted by parties who make unscrupulous promises if we just “click here.”

Another tactic is to stress the positive outcomes of good, that is, safe, behavior. Eliminating malware is in itself a positive outcome, but the secondary personal benefits of more efficient computer operation, reduced repairs, and increased productivity also deserve attention. In one study [5], a status outcome, enhancing one’s self-image as a technical or moral leader, was an important predictor of safe behavior. The ability to observe the successful safety behavior of others (visibility in [5]) or to try protective measures for ourselves on a trial basis (trialability in [5]) also encourages safety.



Involvement: Central or Peripheral Persuasion?

When users are deeply involved in the subject of online safety, they are likely to carefully consider the pluses and minuses of arguments made for and against online safety practices. Personal relevance is an indicator of involvement. In the research described here, 44% of participants said that online safety was highly relevant to them, while the other 56% reported lower levels of involvement; indeed, 11% of our sample did not find online safety relevant at all. Although safety involvement was related to self-efficacy (a significant positive correlation, r = 0.25) and to response efficacy (r = 0.40), involvement is conceptually and empirically distinct from both.

Involvement matters. Along with our ability to process information free from distraction or confusion, involvement determines the types of arguments likely to succeed. Here, we argue that even minor deficiencies in involvement make a difference in response to online safety education. When involvement or our ability to process information is low, individuals are likely to take mental shortcuts (heuristics), such as relying on the credibility of a Web site rather than reading its privacy policy. That is when the boomerang effects we mentioned earlier can happen. The fear shuts down rational thinking about the threat to the point that users may deny the importance of the threat and choose unsafe actions [11]. When involvement is high users are likely to elaborate: They are likely to think arguments through, provided they are presented with clear information and are not distracted from reflection. This is known as the Elaboration Likelihood Model (ELM) [8].

“Phishcatchers” exploit ELM. The fear-inducing news that one’s account has been compromised can overwhelm careful thinking even among the highly safety conscious. Spoofed URLs and trusted logos provide peripheral cues that convince users to “just click here,” an action that requires little or no self-efficacy and, they promise, will be an entirely effective response. IT professionals tacitly enlist the peripheral processing route of ELM when they broadcast dire warnings about current network security threats through trusted email addresses.

However, what if the message from the IT department is itself a spoof? How can threats that attack individual desktops and escape the notice of network security professionals be countered? Next, we argue for an approach that promotes user involvement along with personal responsibility and that builds user self-efficacy.


Self-Regulation: Taking Responsibility

Behavioral theories change as unexpected new problems are encountered. A news story about our project prompted a letter criticizing “the professors” for assuming that online safety was the user’s problem, which led us to examine the role of personal responsibility. There is evidence that collective moral responsibility encourages safe online behavior [5], but no comparable evidence for personal responsibility. Personal responsibility is theoretically an indicator of involvement [8], but we found the two were only weakly correlated (r = 0.20), so a different conceptual approach was required. We realized that personal responsibility is a form of self-regulation in Social Cognitive Theory: users act safely when their personal standards of responsibility instruct them to.

In our surveys those who agree that “online safety is my personal responsibility” are significantly more likely to protect themselves than those who do not agree (Table 3). The likelihood of taking many commonly recommended safety measures is related to feeling personally responsible, with large “responsibility gaps” noted for perhaps the most daunting safety measure, firewall protection, and also the easiest, erasing cookies. However, surveys alone cannot establish the direction of causation. It could be that personal responsibility is a post hoc rationalization after users acquire self-efficacy and safe surfing habits, and does not itself cause safe behavior.

So, we investigated personal responsibility in a controlled experiment involving 206 college students from an introductory mass communication class. Using pretest responses, we split the sample into high- and low-self-efficacy groups at the median value of a multi-item index, and likewise into high- and low-involvement groups based on a second multi-item index. As we noted earlier, about half our sample was highly involved (that is, stated that online safety was highly relevant), so splitting the group at the median separated the “safety fanatics” from the rest. This yielded four groups: high involvement/high self-efficacy (n = 41), low involvement/low self-efficacy (n = 38), high involvement/low self-efficacy (n = 64), and low involvement/high self-efficacy (n = 63).
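For concreteness, the median-split procedure just described might look like the following sketch; the file and column names are hypothetical, not our actual instrument.

    # Hypothetical sketch of the median-split design described above.
    import pandas as pd

    df = pd.read_csv("pretest_scores.csv")  # assumed columns: self_efficacy, involvement

    df["efficacy_group"] = (df["self_efficacy"] >= df["self_efficacy"].median()).map(
        {True: "high", False: "low"})
    df["involvement_group"] = (df["involvement"] >= df["involvement"].median()).map(
        {True: "high", False: "low"})

    # Four cells: high/high, high/low, low/high, and low/low
    print(df.groupby(["involvement_group", "efficacy_group"]).size())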

Prior to taking the posttest, half the respondents in each of the four groups were randomly selected to visit a Web page with online safety tips from Consumer Reports, with the heading “Online Safety is Everyone’s Job!” and a brief paragraph arguing that it was the readers’ responsibility to protect themselves. That was the personal responsibility treatment condition. The other half of the sample was randomly assigned to a Web page headed “Online Safety isn’t My Job!” and arguing that online protection was somebody else’s job, not the reader’s. That was the irresponsibility treatment.

The results are shown in the accompanying figure. The vertical axes indicate average scores on an eight-item index of preventive safety behaviors, such as intentions to read privacy policies before downloading software and restricting instant messenger connections. After controlling for pretest scores, the personal responsibility treatment caused increases in online safety intentions in all conditions except one: Those with low self-efficacy and low safety involvement had lower safety intentions when told that safety was their personal responsibility than when they were told it was not (the lower line on the graph to the right). Thus, those who are not highly involved in online safety and who are not confident they can protect themselves—a description likely to fit many newer Internet users—were evidently discouraged to learn that safety was their responsibility. The positive effect of the personal responsibility manipulation was greatest in the high involvement, high self-efficacy condition and high involvement users (the left-hand graph) exhibited more protective behavior than users with less involvement (the right-hand graph).
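Readers wishing to run a similar analysis could estimate the treatment effects while controlling for pretest scores along these lines; the column names are hypothetical, and this illustrates the general approach rather than our actual analysis code.

    # Hypothetical ANCOVA-style model: posttest safety intentions as a
    # function of the responsibility treatment and the two measured groups,
    # controlling for pretest scores.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("experiment.csv")  # assumed columns: posttest, pretest,
                                        # treatment, involvement_group, efficacy_group
    fit = smf.ols(
        "posttest ~ pretest + C(treatment) * C(involvement_group) * C(efficacy_group)",
        data=df,
    ).fit()
    print(fit.summary())  # interaction terms capture reversals like those in the figure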

When safety maintenance behaviors (for example, updating virus and anti-spyware protections) were examined, the pattern for the low safety involvement group reversed. There, the argument for personal responsibility made those with high self-efficacy less likely to intend routine maintenance than the argument against personal responsibility did. We speculate that those who are confident but not highly involved in online safety reacted by resolving to fix problems after the fact rather than incur the burden of regular maintenance. The other groups showed the expected improvements in safety maintenance intentions with the personal responsibility message, although there was very little difference between treatments for the low involvement/low self-efficacy group, perhaps because they felt unable to carry out basic maintenance tasks.

Thus, it is possible to improve safety behavior by emphasizing the user’s personal responsibility. However, the strategy can backfire when addressed to those who are perhaps most vulnerable; namely, those who are uninterested in safety and who lack the self-confidence to implement protection. The personal responsibility message can also backfire when directed to bold (or perhaps, foolhardy) users, those who think they can recover from security breaches but who are not involved enough to apply routine maintenance.




In the present research, safety involvement was a measured variable rather than a manipulated one. However, safety involvement might also be manipulated by linking it to a more personally relevant issue: privacy. This is substantiated by the high correlation (r = 0.72) we found between privacy involvement and safety involvement. Privacy is often conceived as a social policy or information management issue [3], but safety threats affect privacy, too, by releasing personal information or producing unwanted intrusions. Within an organization, the privacy of the firm might be linked to personal involvement through employee evaluation policies that either encourage safe practices or punish safety breaches.

Among all of the factors we have discussed, personal responsibility, self-efficacy, and response efficacy were the ones most related to intentions to engage in safe online behavior in our research [4]. Intentions are directly related to actual behavior, and self-efficacy has a direct impact on behavior over and above its effect on good intentions [2, 5]. Still, other factors intervene between intentions and behavior, especially when protective measures are relatively burdensome and require attention over long periods of time, as is the case for online safety.

Other sources of self-regulation can also be tapped. Social norms affect safety intentions [5]: if we believe that our spouses and co-workers wish we would be safer online, we are more likely to intend to be. Having a personal action plan helps, as does a consistent context for carrying out the safe behavior; both build habit strength. Another stratagem is offering ourselves incentives for executing our safety plan (for example, a donut break after the daily protection update). That is action control [1], and it has proven effective in managing long-term health risks that are analogous to the network security problem.

Personalized interventions are critical. Seemingly obvious but undifferentiated communication strategies, such as alerting users to spyware (found in [2, 5]), could have unwelcome effects. While there are differences by gender and age [5], our experimental data suggest that a more refined audience segmentation approach is required. User education Web sites could screen visitors with “i-safety IQ” quizzes that route them to appropriate content. Instead of serving as one-shot repositories of safety tips, online interventions might encourage repeat visits to build self-efficacy and maintain action control. User-side applications that detect problem conditions, alert users to the risks and potential protective measures, and walk them through implementation would also help.
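To sketch how such screening might work, the following routine routes a visitor to content matched to their segment; the thresholds and content labels are invented for illustration, though the segments mirror the groups in our experiment.

    # Hypothetical "i-safety IQ" router; scores are 1-10 quiz indexes.
    def route_visitor(self_efficacy: int, involvement: int) -> str:
        high_eff = self_efficacy >= 6
        high_inv = involvement >= 6
        if high_inv and high_eff:
            return "advanced-tips"           # reinforce already-good practice
        if high_inv:
            return "step-by-step-tutorials"  # build self-efficacy through mastery
        if high_eff:
            return "maintenance-reminders"   # stress routine upkeep, not after-the-fact fixes
        return "gentle-basics"               # avoid fear and blame; start small

    print(route_visitor(self_efficacy=3, involvement=8))  # -> step-by-step-tutorials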

We conclude that the average user can be induced to take a more active role in online safety. Progress has been made in uncovering the “pressure points” for effective user education. Here, we have attempted to fit these into a logical and consistent framework. Still, much work needs to be done to better understand online safety behavior, including experimental studies that can validate the causes of both safe and unsafe behavior. More diverse populations must also be studied since much of the currently available research has focused either on uncharacteristically naïve [9] or savvy [2] groups. Our experimental findings suggest that relatively modest, if carefully targeted, interventions can be effective in promoting online safety. Thus, improving user responsibility for overall online safety is a desirable and achievable goal.


Figures

Figure. Experimental results for safety prevention intentions.


Tables

Table 1. Online safety user education sites.

Table 2. Framework for motivational user education strategies.

Table 3. Personal responsibility and online safety precautions.

References

    1. Abraham, C., Sheeran, P., and Johnson, M. From health beliefs to self-regulation: Theoretical advances in the psychology of action control. Psychology and Health 13 (1998), 569–591.

    2. Hu, Q. and Dinev, T. Is spyware an Internet nuisance or public menace? Commun. ACM 48, 8 (Aug. 2005), 61–65.

    3. Karat, C.-M., Brodie, C., and Karat, J. Usable privacy and security for personal information management. Commun. ACM 49, 1 (Jan. 2006), 56–57.

    4. LaRose, R., Rifon, N., Liu, X., and Lee, D. Understanding online safety behavior: A multivariate model. Presented at the International Communication Association Annual Conference (New York, May 27–30, 2005).

    5. Lee, Y. and Kozar, K.A. Investigating factors affecting the adoption of anti-spyware systems. Commun. ACM 48, 8 (Aug. 2005), 72–77.

    6. National Cyber Security Alliance. AOL/NCSA Online Safety Study, 2005; www.staysafeonline.info/pdf/safety_study_2005.pdf.

    7. National Cyber Security Alliance. Emerging Internet Threat List, 2006; www.staysafeonline.info/basics/Internetthreatlist06.html.

    8. Petty, R. and Cacioppo, J. Communication and Persuasion: Central and Peripheral Routes to Attitude Change. Springer-Verlag, New York, 1986.

    9. Poston, R., Stafford, T.F., and Hennington, A. Spyware: A view from the (online) street. Commun. ACM 48, 8 (Aug. 2005), 96–99.

    10. Thompson, R. Why spyware poses multiple threats to security. Commun. ACM 48, 8 (Aug. 2005), 41–43.

    11. Witte, K. Putting the fear back into fear appeals: The Extended Parallel Process Model. Communication Monographs 59, 4 (1992), 329–349.

    12. Zhang, X. What do consumers really know about spyware? Commun. ACM 48, 8 (Aug. 2005), 44–48.

    DOI: http://doi.acm.org/10.1145/1325555.1325569
