
Mobile Web 2.0 With Multi-Display Buttons

  1. Introduction
  2. Conventional UI for Mobile UGC Services: Hierarchical Structure and Single Display
  3. Web 2.0 Technology for Mobile UGC Services: Tags and Tag Clouds
  4. A Visualization Technique for Mobile UGC Services: Multi-Display Buttons
  5. A New UI for Mobile UGC Services: Tags and Multi-Display Buttons
  6. Empirical Investigation
  7. Conclusion
  8. References
  9. Authors
  10. Footnotes
  11. Figures
  12. Tables

User-Generated Content (UGC) is a burgeoning social phenomenon11 being watched with keen interest in today’s world. UGC is online new-media content created by users rather than by conventional media such as broadcasters.9 A typical example is Flickr, an online photo-sharing site with 37 million images, to which its 1.2 million members add up to 200,000 images daily.2 UGC is shifting the paradigm of Internet use away from the one-way propagation of media content by companies and towards the creation and sharing of media content by and among ordinary users.

The mobile phone is an especially important means of promoting user generation and exchange of media content. Many mobile phones now have built-in digital cameras and inherent network connectivity, features that have greatly facilitated the creation and sharing of media content. For instance, users can immediately upload photos to Flickr from their mobile phones, as well as access Flickr to browse other people’s media content.

However, the constraints of a typical mobile phone, namely its small display and limited number of buttons, make using mobile UGC services challenging.8 Only a few studies have investigated hardware or software alternatives that address these problems. This article presents a new user interface (UI) for mobile phones that makes using UGC services easier and more efficient.

The new interface has two key characteristics: one pertaining to content structure, the other to content visualization. More specifically, the new UI employs two major mobile Web 2.0 technologies, the tag and the tag cloud, together with multi-display buttons that increase the display size and flexibility of individual buttons. The interface is designed specifically to support exploratory browsing within mobile UGC services, because users of such services are likely to focus on exploratory browsing and serendipitous discovery and to be inclined more toward entertainment than utility.10 Here, we describe the new interface and investigate whether it enhances exploratory browsing within mobile UGC services.


Conventional UI for Mobile UGC Services: Hierarchical Structure and Single Display

The conventional UI for mobile phones uses a folder-based hierarchical structure and a single display. Suppose a user wants to open a photo or video. When the user connects to the main page of the mobile UGC service, several folders appear on the main LCD (see (A) in Figure 1) that represent a predetermined set of categories. Selecting a folder causes its subfolders to be displayed on the main LCD (see (B) in Figure 1). When the user selects a subfolder that holds content rather than further subfolders, the main LCD displays thumbnails of all the photos or videos that subfolder contains (see (C) in Figure 1). When the user selects the thumbnail of a specific photo or video, the full-size version is shown on the main LCD (see (D) in Figure 1).
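
For illustration, the following is a minimal sketch of such a folder-based hierarchy and the fixed navigation path it imposes; the folder names, items, and the browse helper are hypothetical and not part of any real phone UI.

```python
# A minimal sketch of the conventional folder-based hierarchy (names are illustrative).
# Each node is either a dict of subfolders or a list of content items (photos/videos).
CATEGORIES = {
    "Travel": {
        "Europe": ["paris_tower.jpg", "rome_forum.mp4"],
        "Asia": ["seoul_night.jpg"],
    },
    "Friends": {
        "Birthday": ["cake.jpg", "party.mp4"],
    },
}

def browse(path):
    """Descend a fixed path of folder names and return the node reached.

    In the conventional UI the user must follow one rigid route:
    main page -> folder -> subfolder -> thumbnails -> full-size item.
    """
    node = CATEGORIES
    for folder in path:
        node = node[folder]   # a wrong turn means backing out and descending again
    return node

print(browse(["Travel", "Europe"]))   # thumbnails for exactly one predetermined category
```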

The problem with the conventional mobile phone UI is that it does not facilitate exploratory browsing, that is, navigational behavior that depends on serendipity or chance and is guided by context rather than by well-defined goals.4 The conventional UI offers fewer opportunities for serendipitous discovery of content because users within a folder-based hierarchy are forced to navigate a fixed, rigid structure. Photos in the same folder, for instance, belong to the same category, so users can foresee what pictures the category contains. Furthermore, content on the conventional UI is limited to the main LCD, which prevents users from simultaneously viewing other content or contextual information about the current content (such as thumbnails of all photos in a subfolder, or photo metadata). In sum, exploratory browsing is not well supported by the conventional mobile phone UI. A different approach to organizing and representing media content would likely enhance exploratory browsing for mobile UGC service users.


Web 2.0 Technology for Mobile UGC Services: Tags and Tag Clouds

In terms of content structure, this study proposes a tag-based network structure that may facilitate users’ exploratory browsing within mobile UGC services. The tag is a core technology of Web 2.0 and a popular navigation method in Web 2.0 services. A tag is defined as a user-generated descriptor or label that refers to an aspect of content.6 Flickr, for example, allows users to assign as many tags as they want when they upload their photos to the website. Users can also sort and selectively display other users’ photos based on tags. On UGC websites such as Flickr, tags are often visualized in a tag cloud, a visual depiction of frequently used tags.6 Selecting a single tag within a tag cloud generally leads to an array of content associated with that specific tag.

The tag-based structure of Web 2.0 is a considerable departure from the folder-based structure depicted in Figure 1. The new structure forms a network in which tags are inter-linked and each photo can be placed in multiple groups, as opposed to the conventional hierarchical structure, which places each photo in just one specific folder or category. Prior research has suggested that exploratory browsing is best supported by network structures.3 The multiple tags attached to content in a tag-based structure may stimulate user curiosity and encourage exploration; as users follow interesting tags, the structure increases the chances of serendipitously discovering meaningful content.12 Such discoveries are possible only if users can jump freely among different tags and roam across as many tags as they like, for as long as they want.
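
To make the contrast with the folder hierarchy concrete, here is a minimal sketch of a tag-based index, assuming an inverted mapping from tags to content items; the tags, file names, and helper function are illustrative only.

```python
from collections import defaultdict

# Each item carries several user-generated tags; the same item is reachable from every tag.
ITEM_TAGS = {
    "sunset_beach.jpg": ["sunset", "beach", "vacation"],
    "city_night.mp4":   ["night", "city", "vacation"],
    "dog_park.jpg":     ["dog", "park"],
}

# Inverted index (tag -> items): the network structure a tag cloud navigates.
TAG_INDEX = defaultdict(set)
for item, tags in ITEM_TAGS.items():
    for tag in tags:
        TAG_INDEX[tag].add(item)

def related_tags(tag):
    """Tags co-occurring with `tag`: the links a user can follow serendipitously."""
    return {t for item in TAG_INDEX[tag] for t in ITEM_TAGS[item]} - {tag}

print(sorted(TAG_INDEX["vacation"]))    # ['city_night.mp4', 'sunset_beach.jpg']
print(sorted(related_tags("vacation"))) # ['beach', 'city', 'night', 'sunset']
```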

With the mobile Web 2.0 UI proposed here, media content acquires more vivid and plentiful tags, because users can attach semantic tags to content as soon as it is captured on a mobile phone.1 When content is captured, contextual tags can also be added automatically on the basis of user name, time, date, and GPS-derived location. This may go some way toward solving a major problem of the tag system, namely the scarcity of user-generated tags: users do not usually input multiple tags because of the extra effort needed to create tags manually. With mobile Web 2.0 services, more tags can be generated, both actively by users and automatically, increasing the number of tags available to UGC services. We therefore expect that a tag-based structure will enhance exploratory browsing within UGC services, especially in the mobile environment.
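
A hedged sketch of how such contextual tags might be generated at capture time, assuming the capture event exposes a user name, timestamp, and GPS coordinates; the reverse-geocoding lookup table and all values below are hypothetical placeholders.

```python
from datetime import datetime

# Hypothetical reverse-geocoding table; a real phone would query a location service.
PLACES = {(37.57, 126.98): "seoul"}

def contextual_tags(user, when, lat, lon, user_tags=()):
    """Combine user-entered tags with tags derived automatically from capture context."""
    auto = [
        user,                            # who captured the content
        when.strftime("%Y-%m-%d"),       # date tag
        when.strftime("%Hh"),            # hour-of-day tag
        PLACES.get((round(lat, 2), round(lon, 2)), "unknown-place"),  # GPS-derived place tag
    ]
    return list(user_tags) + auto

print(contextual_tags("alice", datetime(2009, 11, 1, 18, 30), 37.5670, 126.9784,
                      user_tags=["sunset"]))
# ['sunset', 'alice', '2009-11-01', '18h', 'seoul']
```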


A Visualization Technique for Mobile UGC Services: Multi-Display Buttons

Research in information visualization aims to design interfaces that amplify human cognition. One way to do so is to increase the amount of information that can be accessed quickly.7 To increase the amount of information displayed on a single screen, prior research in human-computer interaction (HCI) has proposed visualization techniques for the desktop computer such as “overview + detail” and “focus + context.” Both techniques allow users to focus on selected content (the detail or focus view) while maintaining a representation of the surrounding context (the overview), that is, their location within the whole. However, the small screen size of mobile phones limits the amount of displayable information, making it difficult to apply visualization techniques developed for desktop computers. If the “focus + context” technique were applied to current mobile phones, an adequate rendering of the focal content would leave little or no room for the context view.

To overcome this problem, we have developed a multi-display button phone (D-button phone). This phone has a main TFT LCD (240 × 320 pixels) and, instead of the traditional number pad, 12 display buttons (64 × 48 pixels each) arranged in a 3 × 4 grid (see Figure 2). The system’s key innovation is that each display button can individually present content or contextual information and can also be physically pressed.a

As shown in Figure 2, each of the 12 display buttons is a lever button with a transparent window and one fixed edge below the window, so that the buttons are vertically flexible. A switch substrate, disposed below the lever buttons, has openings corresponding to each of the windows and generates a switch signal in response to motion of any of the lever buttons. Also disposed below the lever buttons is a single display panel that displays content or contextual information in the areas corresponding to the 12 windows.
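
A minimal sketch of how such a shared panel could be addressed as a 3 × 4 array of 64 × 48 pixel button regions; the panel.blit drawing call and the handler names are hypothetical, and the actual firmware (written in C on Qualcomm’s platform, see footnote a) is not reproduced here.

```python
# Geometry from the article: 12 display buttons in a 3 x 4 grid, 64 x 48 pixels each,
# backed by one shared display panel rather than twelve separate LCDs.
COLS, ROWS = 3, 4
BTN_W, BTN_H = 64, 48

def button_region(index):
    """Pixel rectangle (x, y, w, h) of display button `index` (0..11) on the shared panel."""
    col, row = index % COLS, index // COLS
    return (col * BTN_W, row * BTN_H, BTN_W, BTN_H)

def render_thumbnails(panel, thumbnails):
    """Draw up to 12 thumbnails, one per button window (panel.blit is a hypothetical call)."""
    for i, thumb in enumerate(thumbnails[:COLS * ROWS]):
        x, y, w, h = button_region(i)
        panel.blit(thumb, x, y, w, h)

def on_switch_signal(index, thumbnails, show_full_size):
    """The switch substrate reports a press on button `index`; open that item full-size."""
    if index < len(thumbnails):
        show_full_size(thumbnails[index])
```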

The D-button phone has the advantage of combining the characteristics of a keypad with a display function. Because the LCD is built into the keypad, users experience good tactility when physically pressing the keys while benefiting from the multi-display at the same time. This also distinguishes the D-button phone from devices with touch-screen LCDs. Touch-screen devices such as the iPhone and Nintendo DS are similar to the D-button phone but offer less tactility when the screen is used as a keypad. The D-button phone, which instead replaces the number keypad with display buttons that can be physically pressed, retains the convenience of familiar key buttons, and its button-like tactility may help reduce errors during input compared with touch-screen LCDs. The D-button phone can also be manufactured efficiently: rather than twelve small LCD-screen buttons, it uses a single LCD panel whose display is split into 12 individual regions beneath the button windows.

The D-button phone enhances exploratory browsing within mobile UGC services in several ways. First, because the phone can present content or contextual information on the display buttons as well as on the main LCD, users can see more simultaneously, thus amplifying cognition.7 Second, because the phone can provide preview thumbnails of media content while detailed content is presented on the main LCD, it can simultaneously display both a view of the context surrounding the content and a view focused on the content itself. Since exploratory browsing depends on chance5 and is affected by context rather than by well-defined goals,4 such browsing within mobile UGC services is likely to be well supported by the D-button phone.


A New UI for Mobile UGC Services: Tags and Multi-Display Buttons

We developed an experimental UI intended to enhance exploratory browsing within mobile UGC services, along with an experimental mobile UGC service with which to test it. The new UI combines inter-linked tags and multi-display buttons, as explained earlier. A user connecting to the main page of the experimental service sees a tag cloud on the main LCD (see (A) in Figure 3). As the user navigates within the tag cloud, the display buttons show thumbnails of the photos or videos associated with the currently selected tag (see (A) and (B) in Figure 3). When the user presses a button containing the thumbnail of a specific photo or video, the full-size version of that content appears on the main LCD, and the buttons then display the tags associated with that content (see (C) in Figure 3). If the user then presses the button displaying one of those tags, the main LCD returns to the tag cloud, with the chosen tag automatically selected in its position in the cloud (see (D) in Figure 3). The buttons then show thumbnails of photos or videos for the newly selected tag.
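
The interaction loop in Figure 3 can be summarized as a small state machine. The sketch below is illustrative only; it reuses the shape of the tag-index example given earlier, and the data, class, and method names are hypothetical.

```python
# Illustrative data, same shape as the earlier tag-index sketch.
ITEM_TAGS = {"sunset_beach.jpg": ["sunset", "vacation"],
             "city_night.mp4":   ["night", "vacation"]}
TAG_INDEX = {"sunset":   ["sunset_beach.jpg"],
             "night":    ["city_night.mp4"],
             "vacation": ["city_night.mp4", "sunset_beach.jpg"]}

class TagBrowser:
    """Sketch of the Figure 3 loop: the main LCD shows either the tag cloud or one item,
    and the 12 display buttons show either thumbnails or the current item's tags."""

    def __init__(self, start_tag):
        self.select_tag(start_tag)                   # (A) start in the tag cloud

    def select_tag(self, tag):
        self.main_lcd = ("tag_cloud", tag)           # (A)/(D) cloud with `tag` highlighted
        self.buttons = sorted(TAG_INDEX[tag])        # (B) thumbnails on the buttons

    def press_button(self, index):
        label = self.buttons[index]
        if self.main_lcd[0] == "tag_cloud":          # a thumbnail button was pressed
            self.main_lcd = ("item", label)          # (C) full-size item on the main LCD
            self.buttons = sorted(ITEM_TAGS[label])  #     its tags move onto the buttons
        else:                                        # a tag button was pressed
            self.select_tag(label)                   # (D) back to the cloud, new tag chosen

ui = TagBrowser("vacation")
ui.press_button(1)   # open "sunset_beach.jpg" full-size; buttons now show its tags
ui.press_button(0)   # press the "sunset" tag; the cloud returns with "sunset" selected
```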


Empirical Investigation

Our main assertion is that exploratory browsing within mobile UGC services is better supported by the new UI suggested here (tag-based structure + multi-display button interface) than by the conventional UI (folder-based hierarchical structure + fixed-button interface). This claim was tested in an experiment with 33 volunteer mobile phone users. Of the 33 participants, 17 (51.5%) were male and 16 (48.5%) were female. Participants’ ages ranged from 20 to 28 (M=23.6, SD=2.3), and 87.9% were undergraduate college students. Over seventy percent of participants had more than five years of experience using mobile phones (M=5.6, SD=1.4) at the time of the experiment. Although undergraduate students may not represent the entire mobile phone user population, they are the main users who actively create and share content via mobile phone; thus, they represent well the users on whom this study focuses. Participants were randomly assigned to one of two interfaces: the new UI (17 participants) or the conventional UI (16).

In the experimental sessions, participants were given a task related to exploratory browsing: “Browse photos and videos on this mobile UGC service, with no time limit, and then select the three photos or videos you prefer, without restriction.” Participants subsequently completed a questionnaire regarding their browsing experience using their assigned interface. The questionnaire covered five subjective measures—perceived usefulness, perceived ease of use, perceived enjoyment, satisfaction, and behavioral intention—which were measured using multi-item scales drawn from previously validated instruments. Participants answered each question on a seven-point Likert scale. Table 1 gives sample questions.

Data were analyzed using independent t-tests. Consistent with our assertion, the new UI was perceived to be more useful, more enjoyable, and more satisfying, and it produced a stronger intention to use the service in the future (see Table 2). On every measure except perceived ease of use, the experimental interface was preferred over the conventional interface for exploratory browsing within mobile UGC services.
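
For reference, a minimal sketch of this kind of analysis, assuming one mean questionnaire score per participant for a given measure and using SciPy’s independent-samples t-test; the score arrays below are placeholders, not the study’s data.

```python
from scipy import stats

# Placeholder arrays: one mean questionnaire score per participant (not the actual data).
new_ui_scores          = [5.8, 6.1, 5.5, 6.3, 5.9, 6.0, 5.7]
conventional_ui_scores = [4.9, 5.2, 4.7, 5.0, 5.1, 4.8]

# Independent-samples t-test comparing the two randomly assigned groups on one measure;
# the same test is repeated for each of the five subjective measures.
t_stat, p_value = stats.ttest_ind(new_ui_scores, conventional_ui_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```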


Conclusion

UGC has become popular among Internet users for creating and sharing new media content. Mobile UGC services, with the technological advantages they possess and the convenience they offer in capturing new media content and adding tags, are likely to become the main driver of the UGC paradigm. Conventional mobile phone interfaces, however, do not support the exploratory browsing behavior typical of mobile UGC.

In this study, we designed a new UI specifically for exploratory browsing, combining a tag-based structure with a multi-display button interface, and we empirically investigated user perceptions of the new UI. The results indicate that the new UI enhances exploratory browsing within mobile UGC services in terms of usefulness, enjoyment, satisfaction, and intention to use the system again. Interestingly, there was no statistical difference between the mean scores for perceived ease of use of the two UIs. A likely explanation is that participants were more familiar with the folder-based hierarchical structure and fixed-button interface. The results suggest, however, that the new UI overcame this unfamiliarity, offsetting the effects of participants’ long-term experience with the conventional UI.

An interesting future study would compare the D-button phone with other innovative devices such as Apple’s iPhone, which replaces the physical keypad with a full-scale touch screen that serves simultaneously as the main display and the main input device. As mentioned earlier, the D-button phone instead replaces the number keypad with display buttons that can be physically pressed, retaining the convenience and familiarity of key buttons. There are, however, some drawbacks to the button display. For example, a broken button would also mean a broken region of the display LCD, doubling the inconvenience. The relative advantages and disadvantages of retaining the physical mechanism remain to be investigated.

The new UI proposed in this study can be used with content other than mobile UGC. For instance, to browse SMS messages, users could view tags automatically generated from sent or received messages alongside the main content of a message. The combination of tag-based Web 2.0 and multi-display buttons, one concerned with content structure and the other with content visualization, yields many opportunities for enhancing exploratory browsing. The new UI may offer a solution to generic problems of mobile devices and take a leading role in the Internet’s paradigm shift toward user-generated content.


Figures

F1 Figure 1. Conventional Interface

F2 Figure 2. D-Button Phone

F3 Figure 3. New Interface


Tables

T1 Table 1. Sample Questions

T2 Table 2. Results of Empirical Investigation


    1. Ames, M. and Naaman, M. Why we Tag: Motivations for annotation in mobile and online media. In Proceedings of the 2007 ACM SIGCHI Conference on Human Factors in Computing Systems, (San Jose, CA, 2007), 971–980.

    2. Andrews, R. Flickr fans to Yahoo: Flick off! Wired News, (2005); http://www.wired.com/techbiz/media/news/2005/08/68654

    3. de Vries, E. and de Jong, T. Using information systems while performing complex tasks: An example from architectural design. International Journal of Human-Computer Studies 46, 1 (1997), 31–54.

    4. Hong, W., Thong, J.Y.L., and Tam, K.Y. The effects of information format and shopping task on consumers' online shopping behavior: A cognitive fit perspective. Journal of Management Information Systems 21, 3 (2004), 149–184.

    5. Marchionini, G. and Shneiderman, B. Finding facts vs. browsing knowledge in hypertext systems. IEEE Computer 21, 1 (1988), 70–79.

    6. Murugesan, S. Understanding Web 2.0. IT Professional 9, 4 (2007), 34–41.

    7. Pirolli, P., Card, S.K., and van der Wege, M.M. The effects of information scent on visual search in the hyperbolic tree browser. ACM Transactions on Computer-Human Interaction 10, 1 (2003), 20–53.

    8. Sarvas, R., Viikari, M., Pesonen, J., and Nevanlinna, H. MobShare: Controlled and immediate sharing of mobile images. In Proceedings of the 12th Annual ACM International Conference on Multimedia (MULTIMEDIA '04), (New York, NY, 2004).

    9. Schweiger, W. and Quiring, O. User-generated content on mass media web sites: Just a kind of interactivity or something completely different? In Proceedings of the ICA, NY, 2005.

    10. Shneiderman, B., Bederson, B.B., and Drucker, S.M. Find photo!: Interface strategies to annotate, browse, and share. Comm. ACM 49, 4 (Apr. 2006), 69–71.

    11. St. Arnaud, B. Most Significant Economic Challenge to the Future of the Internet. In Proceedings of the NSF/OECD Workshop Social & Economic Factors Shaping the Future of the Internet, (Washington, D.C., 2007).

    12. Toms, E.G. Understanding and facilitating the browsing of electronic text. International Journal of Human-Computer Studies 52, 3 (2000), 423–452.

    a. The D-button phone was programmed in C, uses the Qualcomm MSM6500 (CDMA2000 1x) as its main processor, and runs on Qualcomm's RTOS and BREW 2.0.

    This work is supported by Samsung Advanced Institute of Technology (SAIT).

    DOI: http://doi.acm.org/10.1145/1629175.1629208
