An undergrad recently sent me the following email: "I was thinking today that I would like to learn more about what HCI research involves. Can you recommend any papers for me to read?"
I decided to follow Matt Might's advice and write a public blog post about this topic rather than just replying privately to this student.
(Disclaimer: HCI is a very diverse field, so I obviously don't claim to speak for all HCI researchers. If you asked ten randomly-selected HCI researchers to write this post, you would get ten different answers.)
What Is HCI Research?
To me, research in HCI (Human-Computer Interaction) involves
- understanding how humans interact with computers, and
- creating new and effective ways for humans to interact with computers.
Here, the term "computer" can refer to a desktop machine, laptop, tablet, mobile phone, digital eyewear, or an assortment of other electronic devices; it can also refer to both software and hardware running on these devices.
Some HCI research involves doing science (i.e., understanding), while other research is more focused on engineering (i.e., creating).
Two Examples of HCI Research
There is no way that I can do justice to the entire world of HCI in one blog post, so instead I will present two papers that exemplify some typical characteristics of modern HCI research.
The lead author on both papers is my colleague Joel Brandt, who performed this work while he was a Ph.D. student in the Stanford Computer Science Department. At the time, Joel's focus within HCI was on how programmers (humans!) interact with computer software used throughout the programming process (e.g., IDEs, debuggers, Web browsers).
Paper 1: Understanding how programmers use Web resources
I'll first discuss Two Studies of Opportunistic Programming: Interleaving Web Foraging, Learning, and Writing Code (Brandt et al., CHI 2009). This paper was published at CHI, a notable academic conference for HCI research.
The research described by this paper is an example of "understanding how humans interact with computers." Specifically, Joel and his colleagues sought to understand how programmers interact with digital resources found on the Web.
To do so, the research team performed two studies:
- Lab study: They invited 20 programmers into a computer lab one at a time, gave each subject a two-hour-long programming task, and watched how the subject used Web resources while programming. Drawing from direct observations of these 20 subjects in a controlled lab setting, the team observed three main forms of interaction with Web resources -- learning, clarification, and reminder -- and described the unique aspects of each form in their paper.
- Query log analysis: The team wanted to validate whether these observations generalize beyond their small and relatively homogeneous population of 20 lab subjects, who were all Stanford students. Working with industry colleagues at Adobe, they obtained a data set containing over 100,000 queries made by over 24,000 programmers to a custom search engine for Adobe programming tools. They parsed and analyzed the data to discover insights that supported observations from their prior lab study.
These two studies complement and reinforce one another: The first provides a high level of detail (direct human observation) but a small sample size (N=20). The second provides little detail (search queries) but a large sample size (N=24,000). By reading both studies in the paper, you can understand the relative strengths and weaknesses of each approach.
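To make the query-log side of such a study concrete, here is a minimal sketch of what that kind of analysis looks like. Everything here is hypothetical: the Adobe data set and its format are not public, so the log lines and the classification heuristic are made up purely for illustration, loosely mirroring the paper's learning/clarification/reminder categories.

```python
from collections import Counter

# Hypothetical query log: one tab-separated (user_id, query) pair per line.
# This is NOT the real Adobe data; it is invented for illustration.
log_lines = [
    "u1\thow to parse xml actionscript",  # broad how-to query
    "u2\tArray.sortOn example",           # specific API-usage query
    "u3\tstring substr",                  # terse syntax lookup
    "u1\tflex datagrid tutorial",
]

def classify(query: str) -> str:
    """Toy heuristic echoing the learning/clarification/reminder split."""
    words = query.split()
    if "example" in words:
        return "clarification"   # looking up how to use a known API
    if "tutorial" in words or "how" in words:
        return "learning"        # learning an unfamiliar concept
    if len(words) <= 2:
        return "reminder"        # quick reminder of forgotten syntax
    return "clarification"

counts = Counter()
users = set()
for line in log_lines:
    user, query = line.split("\t", 1)
    users.add(user)
    counts[classify(query)] += 1

print(dict(counts))   # distribution over the three interaction types
print(len(users))     # number of distinct programmers in the log
```

The real analysis is, of course, far more careful, but the basic shape is the same: parse each query, bucket it, and see whether the resulting distribution is consistent with what was observed in the lab.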
The findings presented by HCI studies such as the ones in this paper serve two roles: They contribute to the body of scientific knowledge about a form of human-computer interaction (e.g., Web usage during programming). And they inspire researchers to create new kinds of tools to improve such interactions.
For example, the findings in this paper suggest ways that existing IDEs can be augmented to help programmers better leverage Web resources. These findings directly inspired Joel's next research project, which led to ...
Paper 2: Creating a better way for programmers to leverage example code from the Web
A year after his prior paper, Joel published Example-Centric Programming: Integrating Web Search into the Development Environment (Brandt et al., CHI 2010).
The research described by this paper is an example of "creating new and effective ways for humans to interact with computers." Here, Joel and his colleagues sought to create a new and better way for programmers to use snippets of example code they find on the Web.
To do so, Joel spent a summer internship at Adobe building a plug-in for Adobe Flash Builder, which embeds a domain-specific search engine within the IDE.
This system, called Blueprint, combines an IDE plug-in and a custom search engine to enable new kinds of user interactions such as
- instant Web search without leaving the IDE's code editor,
- search-result browsing in an automatically generated "code-centric" format, which is more useful to programmers than plain Web pages,
- fast copy-and-paste of retrieved example code snippets into the user's code base,
- and links between the copied code and its source, to support notifications if the source gets updated.
The first half of this paper describes how Joel used insights from the studies in his prior paper to design the Blueprint system. The second half describes two studies that the team ran to show that Blueprint was effective:
- User study: They recruited 20 professional programmers at Adobe to perform a series of programming tasks in a controlled lab setting. They let half the participants use the Blueprint system (treatment group) and the other half use an ordinary Web browser (control group). They then compared the performance of participants in both groups on metrics such as time to complete each task and resulting code quality.
- Longitudinal study: To understand how Blueprint would be used in real-world settings, the team deployed the system to more than 2,000 users over a three-month span. They recorded 17,000 queries made by these users and analyzed the contents of those queries to discover insights that complemented their user study findings.
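The user study above is a classic between-subjects experiment: each participant contributes a measurement (e.g., task completion time), and the two groups' distributions are compared. Here is a minimal sketch of that comparison using Welch's t statistic. The numbers are made up; the paper's actual measurements and statistical tests are not reproduced here.

```python
import statistics

# Hypothetical task-completion times in minutes -- invented for
# illustration, NOT the paper's actual data.
blueprint_times = [12.1, 14.3, 11.8, 13.5, 12.9,
                   15.0, 11.2, 13.8, 12.4, 14.1]  # treatment group
browser_times = [16.4, 15.2, 17.8, 14.9, 18.1,
                 16.0, 15.7, 17.2, 16.8, 15.5]    # control group

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (unequal variances allowed)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

t = welch_t(blueprint_times, browser_times)
print(f"mean(treatment) = {statistics.mean(blueprint_times):.1f} min")
print(f"mean(control)   = {statistics.mean(browser_times):.1f} min")
print(f"Welch t = {t:.2f}")  # negative t here means treatment was faster
```

In a real paper, the t statistic would be converted into a p-value (or, increasingly, reported with effect sizes and confidence intervals), but the underlying logic is exactly this: quantify the difference between the two groups relative to the noise within each group.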
Finally, a customary way to end these sorts of papers is to discuss current limitations of the system and some ideas for future work.
Conclusion and Further Reading
These two papers formed the bulk of Joel's 2010 Ph.D. dissertation. His research started in a university lab at Stanford, continued during summer internships at Adobe, and eventually turned into a feature within a commercial software product (Blueprint) that thousands of people use on a daily basis. I like presenting this work because it's a good example of how HCI research can be done in both academia and industry, and can range from scientific studies to the development of practical tools.
Joel's work is just the tip of the iceberg, though. Besides studying how individual humans interact with computers, much HCI research explores how humans interact with one another via computers. For example, projects might involve
- understanding how programmers interact with one another on the StackOverflow Q&A site (Mamykina et al., CHI 2011), and
- creating a mobile phone app called VizWiz that lets blind users quickly and effectively solicit help from strangers on the Internet (Bigham et al., UIST 2010).
Reading the four papers mentioned in this blog post will give you a sense of how HCI papers are structured. Enjoy!