Trusting Technology: Introduction

How can technology be engineered to inspire user trust? How can businesses, consumers, and individuals employ that trust—without destroying it—for the sake of their human relationships?

When I fly, I decline e-tickets in favor of hard copy to ensure I can actually board the plane. Who’s to say I did or didn’t buy them when the plane’s at the gate? Like millions of other consumers, I still worry about who I’m really dealing with when I type my credit card number in an online order form. I wonder what they really want. Will I get what I’m paying for? What did I just buy? I sometimes wonder too whether I ever click around anymore without leaving a residue of personal and financial information that might be bought and sold over and over for who-knows-what purpose. Such questions, especially those involving money, privacy, and children, have made millions of consumers hesitate to spend or respond online, even to email.

Then again, millions of total strangers around the world have learned to trust each other enough that they now do billions of dollars in e-business and routinely establish personal relationships, however fleeting, on the Net.

Online interactions represent a complex blend of human actors and technology. In light of this complexity, Batya Friedman, Peter H. Kahn, Jr., and Daniel C. Howe in this section pose the basic question: With what or whom can we speak of building trust relationships? The system? Its developers? Web site designers? The people employing them? Online organizations? Other users?

How can those who create and maintain the technological infrastructure help establish the essential climate of trust that businesses and consumers alike need to interact and engage in e-commerce? As Ben Shneiderman points out, shallow commitments and broken promises are dangerously explosive. In online relationships, they risk a spectacular fraud that captures the public's attention, demands years of effort to rebuild the lost trust, and may well provoke government regulation.

The problem, write Friedman et al., is how to establish trust online in light of the enormous uncertainty about both the magnitude and the frequency of potential harm. We have trouble (sometimes to the point of futility) assessing the intent of others online and calibrating our expectations of machine performance. How can the technology allow participants to trust one another and get on with their interactions? Informed consent by individual users, who are explicitly asked to consent or decline to participate, or to opt in or opt out, is an effective way to determine what online organizations should be allowed to do with customer and surfer profiles, as well as with their raw personal data.

Judith S. Olson and Gary M. Olson ask whether trust still depends on face-to-face meetings. Can trust also be established through, say, videoconferencing, rapid response to chats and email, and other online media? Which of them are most likely to allow trust to develop among individuals? Ultimately, they say, trust and trustworthiness depend on how a medium conveys the social experience, including participants' cultural and personal cues, which may be invisible to all parties, as in text-based exchanges.

Paul Resnick, Richard Zeckhauser, Eric Friedman, and Ko Kuwabara explain why explicit reputation systems, like the one at eBay (called Feedback Forum), are so good at fostering trust among strangers. These systems collect, distribute, and aggregate feedback about buyers’ and sellers’ past behavior. Though few of these people know each other, the system helps them decide whom to trust, encourages trustworthy behavior, and deters unskilled or dishonest participants.
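At their core, such systems are feedback aggregators. The Python sketch below is purely illustrative and is not eBay's actual Feedback Forum logic; it assumes each completed transaction yields a +1, 0, or -1 rating and summarizes a trader by a net score and the percentage of positive ratings, one common way such summaries are presented.

```python
from collections import defaultdict

class ReputationSystem:
    """Toy reputation aggregator: collects per-transaction ratings
    (+1 positive, 0 neutral, -1 negative) and summarizes each trader.
    Illustrative only; not eBay's actual Feedback Forum implementation."""

    def __init__(self):
        self.ratings = defaultdict(list)  # trader id -> list of ratings

    def leave_feedback(self, trader, rating):
        if rating not in (-1, 0, 1):
            raise ValueError("rating must be -1, 0, or +1")
        self.ratings[trader].append(rating)

    def summary(self, trader):
        r = self.ratings[trader]
        positives = sum(1 for x in r if x == 1)
        negatives = sum(1 for x in r if x == -1)
        rated = positives + negatives
        return {
            "net": positives - negatives,  # overall "feedback score"
            "percent_positive": 100.0 * positives / rated if rated else None,
            "total": len(r),
        }

# Example: a prospective buyer checks a seller's summary before bidding.
rep = ReputationSystem()
for rating in (1, 1, 1, 0, -1):
    rep.leave_feedback("seller42", rating)
print(rep.summary("seller42"))  # {'net': 2, 'percent_positive': 75.0, 'total': 5}
```

Even this toy version hints at why such summaries encourage trustworthy behavior: a negative rating permanently lowers a visible score, so cheating carries a lasting cost.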

The interface alone can inspire users to trust and engage the system. For example, Justine Cassell and Timothy Bickmore find that when "embodied conversational agents" use gestures, gaze, intonation, posture, and speech, users respond with their own trusting behavior. The human-computer interaction proceeds as if it were a trusting interaction between humans, with self-disclosure and without hesitation. The system represented by the agent thus incrementally builds evidence of its own good will and credibility.

In the interest of inspiring trust online, Shneiderman spells out design principles and related guidelines that make explicit the contract-like nature of trust between people and organizations. Designers of online services can use them to encourage cooperative online behavior and win customer loyalty by offering assurances, references, third-party certifications, and guarantees of privacy and security, and by backing them up with financial compensation.

Finally, is heavy online activity the same as heavy television watching? Do they both keep people inside their homes, isolated from the civic organizations and social connections that historically have generated trust? Like television, might the Net also lead people to imagine that the real world is as mean and violent as the programs (and Web sites) they see, making them less likely to trust and interact with strangers? Eric Uslaner finds little evidence that the Internet is creating new communities to make up for the decline in civic engagement in U.S. society over the past four decades. He finds even less evidence that the Internet is pushing people further away from traditional social ties or making them less trusting. Ultimately, he says, the Internet neither destroys nor creates social capital, including group membership and informal social ties. In online relationships, sometimes trust matters and sometimes it doesn't.

As the authors suggest, answers to questions about technology-mediated trust might ultimately depend on changing human attitudes and behavior—a challenge certainly greater than any of those involving technology alone. In order to determine whether a system is trustworthy, we still have to ask whether we trust the people behind the technology.
