
Exploring the Black Box of Task-Technology Fit


As professionals such as knowledge workers and managers increasingly perform tasks outside of traditional office environments, mobile technology often provides critical support, in particular to mid-level executives, project managers, company and sales representatives, and field service workers.4 Nevertheless, the requirements for the development and use of mobile information systems to support mobile professionals are not fully understood.6 According to the theory of task-technology fit, an adequate match between information systems and the organizational tasks to be supported or automated is an important precursor to system success, as indicated by use and subsequent performance impacts.3 The theory of task-technology fit, however, provides little guidance on how to operationalize fit for particular combinations of tasks and technologies.

In the current study, we apply the concept of task-technology fit to mobile information systems, considering the idiosyncrasies of the use context and of evolving technological developments.2 Our research model is depicted in Figure 1.

Overall Technology Evaluation. Goodhue3 empirically demonstrated the ability of users to correctly assess task-technology fit. In the current research study, we suggest that a situation of good fit between task, technology, and use context should be reflected in an overall high rating and positive evaluation of the technology by the user. In other words, user-perceived “overall technology evaluation” is viewed as a general indicator of fit that we hope to relate to three additional constructs, namely task-related fit, use context-related fit, and technology performance (all user-perceived).

Tasks and Task-Related Fit. Prior research suggests that task characteristics drive the requirements for appropriate technology support. For example, highly non-routine tasks are expected to be best supported by technology with a high level of media richness, such as interpersonal communication,1 whereas tasks of high interdependence may require more complex methods of coordination than tasks of low interdependence.7 In our research study, we assess task-related fit as the result of appropriate combinations of specific task characteristics and technology performance. We furthermore expect users who appreciate the support that the technology provides for their tasks to also rate the technology highly.

Use Context and Use Context-Related Fit. To distinguish the mobile information systems context from the more traditional stationary context, we include the use context in our research model. Use conditions in mobile and non-mobile environments differ in aspects such as access to power, network connectivity, distraction, and the ability to carry equipment, resulting in varying use requirements. Again, we assess use context-related fit as the result of appropriate combinations of specific use context characteristics and technology performance. We further expect users who appreciate the support that the technology provides for a particular use context to also rate the technology highly.

User-Perceived Technology Performance. Technology performance and technology maturity have been included in research studies on e-commerce business platforms and have been found to affect the overall user evaluation of the technology.5 For our purpose of exploring the antecedents of the overall evaluation of mobile information systems (the fit), we expect technology performance to play an essential role. Specifically, we expect technology performance to be related to the two concepts of task-related fit and use context-related fit, as well as to the overall evaluation of the technology by users.

To explore the black box of task-technology fit for mobile information systems, we applied an inductive research approach. We performed a content analysis of online user reviews of mobile technology products posted on www.cnet.com, an online media website. The site provided a large amount of relevant, readily available data, as well as a homogeneous publishing environment. Since the user reviews were essentially unsolicited, we assume that the comments are particularly helpful for an inductive study and for identifying issues that are important to users.

To obtain a broad overview of issues relevant to the task-technology fit of mobile information systems, we selected user reviews of four technology products: a smart cell phone, two competing personal digital assistant (PDA) devices, and an ultra-light laptop. The devices were selected based on their popularity in the CNET user community, as indicated by the number of posted user reviews, the number of site visitors who indicated a review to be useful, and the number of comments on a review and replies to comments, and based on the relevance of the device for supporting mobile professionals, as stated in non-user technology reviews published in the trade press and online. To ensure comparability of the technologies, we focused on devices introduced to the market during 2005 and on reviews posted in 2005 to early 2006. For each device, between 19 and 44 user reviews were analyzed in the order in which they were listed on the website, which by default is according to the number of visitors who indicated they found the respective review useful. The number of visitors who found a particular review useful (in some cases over 100) may help offset the self-selection limitation inherent in the current research setup. By relying on the comments of users who chose to voice their opinions online, we can only capture issues that are of importance to that particular user group, and we may miss issues that are important to users who chose not to share their opinions online. We note the need to address the issue of self-selection in subsequent research studies.

The content analysis resulted in a list of comment categories that reflect issues of importance to the reviewers. In addition, the analysis provided information about the extent to which the individual issues were successfully addressed by the mobile technology products. Table 1 provides examples of user comments, comment categories, and ratings on a 5-point Likert scale ranging from strongly negative to strongly positive.

The comment categories were derived through an iterative process of interpretation and frequent interaction between two researchers who interpreted the dataset independently. Regular discussions throughout the coding process served to uncover ambiguities in the category descriptions and coding guidelines, and to assess the completeness of the categories in relation to the user reviews. For the remaining differences in interpretation, an inclusive approach was chosen: comments identified by only one coder were included, and a comment was allowed to appear in more than one category if the two coders determined so. In the case of differences in ratings, the average rating of the two coders was used for further data analysis. The interpretations of the two coders have a 0.788 correlation (p<.001) for instances where both coders agreed about the relevance of a comment for a particular comment category. The correlation between the two coders' interpretations is lower (.676) yet still highly significant (p<.001) when including instances where only one or no coder determined a comment to be relevant for a particular comment category.a The difference in correlations reflects the high degree of freedom associated with the initial interpretation of the reviews and highlights the need for concise category descriptions. At the same time, the coding results also reflect considerable agreement with respect to the individual ratings once a comment has been determined to be relevant for a particular comment category.
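As an illustration of this agreement check, the following Python sketch computes both correlations. The data and column names are hypothetical; per footnote a, the inclusive version replaces missing ratings with 0.

```python
# A minimal sketch (hypothetical data) of the inter-coder agreement check.
# Each row is one (review, comment category) pair; NaN means the coder did
# not judge the comment relevant to that category.
import pandas as pd
from scipy.stats import pearsonr

ratings = pd.DataFrame({
    "coder1": [4, 2, None, 5, 1, 3, 4, 2],
    "coder2": [5, 2, 3, None, 1, 3, 4, 1],
})

# Strict view: only pairs where both coders deemed the comment relevant.
both = ratings.dropna()
r_both, p_both = pearsonr(both["coder1"], both["coder2"])

# Inclusive view: missing ratings replaced by 0 (footnote a).
filled = ratings.fillna(0)
r_all, p_all = pearsonr(filled["coder1"], filled["coder2"])

print(f"both coded:           r={r_both:.3f}, p={p_both:.3g}")
print(f"missing counted as 0: r={r_all:.3f}, p={p_all:.3g}")
```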

The interpretive coding process resulted in 49 items that can be grouped into four of the nine conceptual categories of our research model (Figure 1): overall technology evaluation, task-related fit, use context-related fit, and technology performance. It is important to note that the derived list of items primarily reflects the underlying dataset and cannot be claimed to be exhaustive. In fact, we found little useful information on several concepts of our research model, including underlying task characteristics, technology specifications, context characteristics, extent of use, and performance impacts. We suggest that such information might be more readily available from online user forums, through explicit surveys, and from the manufacturers and retailers of the devices. Table 2 summarizes the results, indicating the percentage of reviews that mentioned an issue and the average rating for each comment category. All ratings range from 1 to 5, except for the Overall Rating, which ranges from 1 to 10 and is the only category that was not derived interpretively. In addition to the results for each device, Table 2 also lists the average across the four devices.

The majority of user comments referred to various aspects of technology performance, led by comments on form factors, input elements, and output elements. For comments associated with task-related fit, the issues mentioned most frequently include fit with the need for voice communication, messaging communication, and information and data access. The concept of use context-related fit received a relatively smaller yet sizable number of comments, with the following issues mentioned most frequently: fit with the need to carry limited equipment, adaptability and customizability, and travel. Additional analyses of user comments found in device-specific user forums suggest further evidence for the general relevance of use context; however, given the difficulties associated with interpretive coding as a result of extensive degrees of freedom, these comments were not included in the current analysis.

The results vary for all four individual devices, for which we offer two possible explanations. First, the variation may reflect a need to treat each of the four devices as a distinct technology, with distinct abilities to achieve task-technology fit. Second, our analysis may have uncovered a number of product-specific issues of immediate concern to reviewers, such as design (cell phone), speed (laptop), stability (PDA 2), and backlight of screen and keyboard (PDA 1). We suggest that product-specific differences can be identified in particular by comparing the two competing PDA-devices where we note differences with respect to speed, stability, fit with need to use entertainment applications, and fit with need for personal productivity.b

In general, we noted that the mention of an issue (such as form factors) in a review typically indicated that the issue was important to the reviewer. In other words, it was quite rare for a reviewer to discuss an issue (such as the quality of the camera) and indicate explicitly or implicitly that the issue was of little importance to him or her. As a result, we interpret frequency (in particular when averaged across all devices) as an indicator of user-perceived importance. Using technology performance across all devices as an example, Figure 2 graphically combines importance (based on frequency) with the averaged evaluations, providing an indication of the relative quality of fit.

Across all devices, we identify particular needs to improve form factors, input components, and customer service, whereas reviewers seem satisfied with the ease of use of the devices. Additional insights could be derived from interpreting the variance of ratings (not depicted in Figure 2) to identify issues that are particularly controversial (high frequency/high variance). Graphs similar to Figure 2 can be constructed for all devices and for all item categories; the sketch below illustrates one way to build such a chart.
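For illustration, the following Python sketch produces a Figure 2-style chart. All category names and numbers are invented placeholders, not the study's data.

```python
# A minimal sketch of a Figure 2-style chart: comment frequency (a proxy for
# importance) plotted against the average rating of each technology
# performance category. Values below are placeholders, not study data.
import matplotlib.pyplot as plt

categories = ["form factors", "input", "output", "ease of use", "customer service"]
frequency = [0.55, 0.40, 0.35, 0.25, 0.15]  # share of reviews mentioning the issue
avg_rating = [2.8, 2.9, 3.6, 4.2, 2.5]      # mean rating on the 1-5 scale

fig, ax = plt.subplots()
ax.scatter(frequency, avg_rating)
for label, x, y in zip(categories, frequency, avg_rating):
    ax.annotate(label, (x, y), textcoords="offset points", xytext=(5, 5))
ax.set_xlabel("comment frequency (importance)")
ax.set_ylabel("average rating (1 = strongly negative, 5 = strongly positive)")
ax.set_title("Importance vs. evaluation (all devices)")
plt.show()
```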

To further improve our understanding of the relationships between the identified items and categories, we performed an exploratory factor analysis for all items that were mentioned in more than 10% of reviews (indicated by * in Table 2). The analysis yielded five factors, as depicted in the left-most column of Table 3.c We interpret factor 1 to focus on voice communication, factor 2 on the mobile office, factor 3 on knowledge work (including internet access and messaging communication), factor 4 on productivity support, versatility, and design, and factor 5 on wireless features and stability. While all factors include items related to technology performance (tech), factors 1 to 4 also include items related to task-related fit (task), and factors 2 and 3 include items related to use context-related fit (use context), thus supporting the suggested relevance of all three fit-related concepts.
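Readers who wish to replicate this step may find the following Python sketch useful. It mirrors the procedure described in note c (principal component extraction, Varimax rotation, and the KMO and Bartlett checks) using the open-source factor_analyzer package; the input file and its column layout are assumptions made for illustration.

```python
# A sketch of the exploratory factor analysis per note c. Assumed input:
# one row per review, one column per frequently mentioned item (ratings 1-5).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("coded_review_items.csv")  # hypothetical coded dataset

# Sampling adequacy and sphericity checks reported in note c.
chi2, p = calculate_bartlett_sphericity(items)
_, kmo_overall = calculate_kmo(items)
print(f"Bartlett: chi2={chi2:.1f}, p={p:.4f}; KMO={kmo_overall:.3f}")

# Principal component extraction with Varimax rotation, five factors.
fa = FactorAnalyzer(n_factors=5, rotation="varimax", method="principal")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=[f"factor{i}" for i in range(1, 6)])
print(loadings.round(2))
print("cumulative variance explained:", fa.get_factor_variance()[2][-1])
```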

For each user review, we determined an average score for each factor, which we standardized across cases to a mean of zero and a standard deviation of one (Z score). The average standardized factor scores per device indicate the extent to which each factor is best and least supported by the individual devices (middle columns in Table 3). Again, we find distinct differences between all four devices, including the two competing PDAs. Finally, the results of a multiple regression analysis indicate that four of the five factors are significant predictors of overall technology evaluation (far-right column of Table 3).
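A minimal sketch of these final steps, again with assumed file and column names, might look as follows.

```python
# Standardize per-review factor scores to Z scores, profile the devices, and
# regress overall technology evaluation on the five factors (as in Table 3).
import pandas as pd
import statsmodels.api as sm
from scipy.stats import zscore

scores = pd.read_csv("review_factor_scores.csv")  # hypothetical per-review data
factors = [f"factor{i}" for i in range(1, 6)]

Z = scores[factors].apply(zscore)  # mean 0, standard deviation 1 across reviews

# Device-level profiles (middle columns of Table 3): mean Z score per device.
print(Z.join(scores["device"]).groupby("device").mean().round(2))

# Multiple regression (far-right column of Table 3).
model = sm.OLS(scores["overall_rating"], sm.add_constant(Z)).fit()
print(model.summary())
```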

Our look inside the black box of task-technology fit for mobile information systems has allowed us to identify a number of issues that are less abstract than those proposed by earlier research studies of task-technology fit. We found five factors that were of importance to user reviewers, each combining different aspects of technology performance, task-related fit, and use context-related fit. We also found that, in the eyes of users, individual devices may perform well with respect to one or two factors but less so with respect to the remaining factors, indicating user-perceived specialization of devices. Four of the five factors were significantly related to users' overall technology evaluation, which we suggest reflects an overall fit between the technology and factors related to user tasks and use context. We conclude that task-technology fit for mobile information systems can practically be assessed only within a narrow (perhaps even product-based) domain of technology. Currently, true versatility of mobile information systems appears elusive, which may be in line with user preferences and probably also reflects technology maturity. Several limitations of our study, related for example to the relatively small sample size, user self-selection in the sampling method, and the large degree of interpretive freedom during data analysis, point to the need for additional research studies. We are utilizing the results of the current analysis as input for a larger survey-based research study.


Figures

Figure 1. Research model (user reviews that were analyzed for this study provided evidence for the shaded components)

Figure 2. Comment frequency (importance) vs. technology performance (all devices)


Tables

Table 1. Example user comments, ratings, and categorizations (source: user technology reviews published on www.cnet.com)

Table 2. Analysis of online user reviews: Frequency (in percent) and average ratings (scale 1 to 5, unless indicated)

Table 3. Results of factor analysis and multiple regression


    1. Daft, R.L. and Lengel, R.H. Information richness: A new approach to managerial behavior and organization design. Research in Organizational Behavior 6 (1984), 191–233.

    2. Gebauer, J. and Shaw, M. Success factors and benefits of mobile business applications: Results from a mobile e-procurement study. International Journal of Electronic Commerce 8, 3 (2004), 19–41.

    3. Goodhue, D.L. and Thompson, R.L. Task-technology fit and individual performance. MIS Quarterly 19, 2 (1995), 213–236.

    4. Kalakota, R. and Robinson, M. M-Business: The Race to Mobility. McGraw-Hill, NY, 2002.

    5. Kishore, R., Agrawal, M., and Rao, H.R. Determinants of sourcing during technology growth and maturity: An empirical study of e-commerce sourcing. Journal of Management Information Systems 21, 3 (2004–5), 47–82.

    6. Sørensen, C., Yoo, Y., Lyytinen, K., and De Gross, J. Designing Ubiquitous Information Environments: Socio-Technical Issues and Challenges. Springer, London, UK, 2005.

    7. Thompson, J.D. Organizations in Action. McGraw-Hill, NY, 1967.

    a. To obtain the correlation between the ratings of the two coders, missing values were replaced by 0.

    b. Given the small sample sizes, the statistical differences have to be treated with caution, yet are supported by the qualitative interpretation of the user reviews upon which the statistical data are based.

    c. For the sample of 144 reviews, we used a principal component analysis with Varimax rotation to explore the structure underlying the items. After dropping five items (alerts, battery, storage, operations, camera/video recorder), the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was .639. Bartlett's measure of sphericity indicated significance of the results at the p<.001 level. Cumulative variance explained by five factors: 43%.

    DOI: http://doi.acm.org/10.1145/1435417.1435447
