A Mixed Methods Approach to Technology Acceptance Research
Philip Fei Wu
The Surrey Business School
University of Surrey, United Kingdom
[email protected]
Introduction
User acceptance of technology has been a central theme in information systems (IS) research.
While there are quite a few established theories about technology acceptance, the technology
acceptance model (TAM) is perhaps the most popular. Based on the theory of reasoned action
(TRA) (Ajzen and Fishbein, 1980), TAM posits that an individual's intention to use a
technology is jointly determined by his or her perception of the technology's usefulness
(Perceived Usefulness, PU) and his or her perception of its ease of use (Perceived Ease of Use,
PEoU). Over the course of two decades, numerous studies have been conducted to
validate, extend and apply TAM in various research settings. At the time of this writing, Google
Scholar shows that Fred Davis' (1989) seminal paper in MIS Quarterly and Davis et al.'s (1989)
paper in Management Science have received 10,036 and 5,816 citations, respectively.
The popularity of TAM may result from its theoretical simplicity and the robustness of its
standardized measurement. Prior acceptance studies confirm that the model consistently explains
more than 50% of variance in acceptance (Dillon, 2001; Venkatesh et al., 2003; King and He,
2006). Moreover, IS scholars appreciate such a parsimonious model because it provides not only
an initial road map for planning empirical studies, but also a common discourse with which
scholarly dialogues and meaningful comparisons across different studies may be conducted.
Nevertheless, parsimony is also an “Achilles’ heel” of TAM, insofar as generic constructs such
as PU and PEoU in TAM have “seduced researchers into overlooking the fallacy of simplicity”
(Bagozzi 2007, p. 244). As Benbasat and Barki (2007) stated, “study after study has reiterated
the importance of PU, with very little research effort going into investigating what actually
makes a system useful. In other words, PU and PEoU have largely been treated as black boxes
that very few have tried to pry open” (p. 212).
Through a critical review of the literature and by reference to recently completed empirical work,
this paper aims to showcase the importance of methodological pluralism in "opening the black
boxes" of TAM. More specifically, by criticizing the IS field's overreliance on the quantitative
survey method, it advocates a mixed methods approach to deepening our understanding of
technology acceptance.
Two key considerations in combining methods are 1) the time ordering (concurrent or sequential)
of the qualitative and quantitative phases, and 2) the degree of dominance of either quantitative
or qualitative methods. Johnson and Onwuegbuzie (2004) provided a matrix for illustrating the
nine possible combinations of the mixture:
In the matrix figure above, "quan" stands for quantitative and "qual" for qualitative. Capital
letters denote high priority or weight. The sign "+" stands for concurrent, and "→" stands for
sequential. In the same vein, Creswell (2003) described six mixed methods designs: 1) sequential
explanatory design ("QUAN → qual"); 2) sequential exploratory design ("QUAL → quan"); 3)
sequential transformative design (move between qualitative and quantitative without clear
priority); 4) concurrent triangulation strategy ("QUAN + QUAL"); 5) concurrent nested strategy
(qualitative embedded in quantitative, or vice versa); and 6) concurrent transformative strategy
(qualitative and quantitative methods used concurrently without clear priority). Certainly, a
researcher should choose a single combination that best suits his or her research needs in a study.
No matter what design a researcher adopts, the purpose of the mixture is either to examine the
same phenomenon through a different lens with each method, bringing out distinctive insights, or
to use one method to develop and validate the constructs used in another method, or both of
these. The case study [1] described in the following section serves as an example of mixed methods
research on technology acceptance.
[1] In the IS literature, it is not uncommon for "case study" to be viewed as a synonym of "qualitative research", or at
least as a typical qualitative method (e.g., Gable, 1994). However, the case study is a research strategy whose
method can be either qualitative or quantitative in nature, or a mixture of both. For detailed discussions of
methodological paradigms and the case study approach, see Lee (1989), Myers (2009), and Yin (2003).
The empirical study examined the acceptance of "Campus Alerts", an SMS-based emergency
notification system currently employed at Eastcoast University [2]. The study illustrates how
different methods may be integrated into one study in order to facilitate a deep understanding of
"usefulness" and "ease of use". Following a sequential design, the study consisted of three phases,
and a total of four different data collection methods were used. The three phases roughly mirror
the "three levels of understanding" proposed by Lee (1991).
[2] Both "Campus Alerts" and "Eastcoast University" are pseudonyms.
Phase 2 (Qualitative interviewing): Conducting in-depth interviews with users and non-users of
the technology system. Level of understanding: 2nd level (interpretive). Data collection:
individual interviews; focus group.
Phase 3 (Quantitative survey): Collecting quantitative data from a large sample of the user
population. Level of understanding: 3rd level (positivist). Data collection: questionnaire survey.
subjective understanding. In order to resolve the "apparent absurdity" and advance to the second-
level interpretive understanding of the phenomenon, a series of qualitative interviews was
conducted to further explore users' (students') perceptions of and experiences with the alert
system.
Using purposive sampling, a total of 13 students with "maximum variation" (Patton, 1987) were
recruited for interviews. The strategy of "maximum variation" attempts to cut across participant
variation so that a great deal of information can be obtained from a limited number of
participants. The sample included both users and non-users of Campus Alerts, female and male,
and undergraduate and graduate students from a variety of departments. Of the 13 participants,
nine were interviewed individually and four participated in a focus group. The interviews and the
focus group were semi-structured, with open-ended questions. The purpose of the interviews was
three-fold. First, qualitative interviews provide a holistic view of the alert technology as it is
perceived by its users or potential users. A holistic picture needs to be drawn before one can
proceed to select interesting theoretical constructs on which to focus the study. Second, the codes
and themes developed from qualitative data analysis inform the design of the questionnaire
drawn up for the subsequent quantitative data collection. Finally, qualitative data collected from
interviews can be used to cross-validate, explain and enrich data obtained through other methods,
as such “triangulation between methods” is able to cancel out the bias inherent in one particular
method and give us a “convergence upon truth” (Losee, 2003, p.98).
Some key interview questions included:
• Why did you sign up for Campus Alerts?
• Why haven’t you signed up for Campus Alerts?
• Based on what you know and what you’ve learned about Campus Alerts, what do you
think about this service?
It should be noted that the interview instrument was used more as a guide for conversation than
as a rigid questioning protocol. In fact, the protocol was refined continually as the interviews
accumulated. This type of open-ended inquiry allowed me to
elicit responses in a non-leading, natural manner (Kvale, 1996; Rubin and Rubin, 2005). The
main points covered in each interview were the same, but the wording and order of questions
were spontaneous in order to accommodate the flow of the conversation. The length of
interviews ranged from 30 minutes to 90 minutes, with an average of 45 minutes. All the
interview and focus-group transcripts were imported into the NVivo 7 software program for
coding and analysis (Bazeley, 2007). Segments of transcripts were labeled with keywords
(codes), and these codes were then categorized and integrated into the evolving coding scheme.
If the integration failed, the coding scheme would be revised to accommodate the new codes.
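To make the iterative coding procedure concrete, the following sketch (written in Python purely for illustration; the actual analysis was carried out in NVivo) shows how segment codes might be checked against an evolving scheme of themes and how a failed integration triggers a revision of the scheme. The segments, codes, and theme names below are hypothetical.

# Illustrative sketch only: transcript segments are tagged with keyword codes,
# codes are grouped under higher-level themes in an evolving scheme, and a code
# that cannot be integrated triggers a revision of the scheme.

coded_segments = [
    ("Everyone has a cell phone basically.", "instant access"),
    ("I don't want a text message about ice on this day.", "message relevance"),
    ("Set up a system where you can go and customize it.", "user control"),
]

coding_scheme: dict[str, set[str]] = {
    "perceived usefulness": {"instant access", "message relevance"},
    "perceived ease of use": set(),
}

def fits_existing_theme(code: str, scheme: dict[str, set[str]]) -> bool:
    """Return True if the code is already accommodated by a theme in the scheme."""
    return any(code in codes for codes in scheme.values())

for segment, code in coded_segments:
    if not fits_existing_theme(code, coding_scheme):
        # Integration failed: revise the scheme, here by opening a new candidate theme.
        coding_scheme[f"candidate theme: {code}"] = {code}

print(coding_scheme)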
The interviews confirmed that "usefulness" and "ease of use" are still the main factors affecting
people's intention to use a technology. However, the interviews clarified what exactly these
broad terms meant in the use context. Briefly, the thematic analysis of the interview transcripts
suggested that a "useful" alert system should be accessible "anytime anywhere" and deliver
timely and relevant information in the right amount. The following excerpts from the
interview transcripts are illustrative:
Subject #1: Now they employed the text message thing so they can send it out really quickly
to alert people. I think it’s good. I think it works. It’s instant access to the students, right
away. Everyone has a cell phone basically. … I mean, even if they send emails, it gets a
little faster I think. People are always by their phones, word would spread faster.
Subject #5: Some people don’t want to be alerted for certain things. … If there is a tornado
coming through my neighborhood, I’d like to know about it. But I don’t want to get, you
know, a text message telling me that we’re having ice on this day. I personally don’t need
it, I don’t have a car.
While all interviewees stated that using Campus Alerts was "easy", they desired a certain degree
of control over aspects such as when they received alert messages and what type of messages
they received. The following quote is illustrative:
Subject #4: When it comes to a point though, you’re getting a lot of messages but you are
right by your computer and you’re connected anyway, and if you could like reply “Stop”
[through SMS on your cell phone], let’s say. … So, set up a system where you can go and
customize it. You can say – of course, you don’t have to do that – alert me to natural
disasters, alert me to guns. You can pick which one.
In a nutshell, this phase of the study gave the researcher an improved interpretive understanding
of what motivated students to adopt the technology or prevented them from adopting it. The
emerging interpretive understanding encompasses two aspects: 1) the usefulness of the system
depends not just on a vague, general perception of “enhanced safety”, but also on the individual
user’s or non-user’s perception of the timeliness, relevance and amount of safety information
provided; 2) “ease of use” depends not just on familiarity with SMS technology itself, but on the
extent to which the user has control over the system’s behavior. Consequently, the qualitative
data gave rise to a new set of important constructs that might not have been discovered through
using “standardized” TAM survey instruments. In other words, the researcher’s interpretive
understanding of the interviewees’ subjective understanding of technology use forced him to
firmly situate the two core constructs, PU and PEoU, in the use context rather than rushing to
utilize any existing instruments of measurement.
A principal component analysis (PCA) was performed on the questionnaire items to identify
orthogonal factors that appear to represent the underlying latent variables. The dependent
variables were excluded from the PCA (Straub et al., 2004). The PCA resulted in six factors
using the default Guttman-Kaiser criterion (i.e., eigenvalue > 1.0) and an examination of the
scree plot. The resulting scale for each of the six constructs was then examined for internal
consistency using the criterion of Cronbach's alpha greater than .70 (Nunnally, 1978). In
accordance with this criterion, only factors 1, 2, and 3 were retained in subsequent analyses. For
these three factors, each variable loaded highly (greater than .70) on its assigned factor and low
(less than .40) on the other factors, indicating convergent and discriminant validity of the constructs.
Upon examining the items that loaded together, the three constructs were identified as “perceived
utility”, “perceived controllability”, and “subjective norm”. The retained questionnaire items and
their relationships to the previous phases of research are presented in Table 2 below.
"Receiving Campus Alerts messages can be costly." (New item; interpretive understanding of PU
in Phase 2.)
Subjective Norm: "My friends think I should use Campus Alerts." (Adapted from TAM2.)
Subjective Norm: "Other people who are important to me think I should use Campus Alerts."
(Adapted from TAM2.)
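As an aside on the scale-purification steps just described, the following Python sketch illustrates, under simplifying assumptions, how the Guttman-Kaiser criterion, component loadings, and Cronbach's alpha can be computed for a set of Likert-scored items. It is not the analysis actually run for the study, and the rotation step typically applied in such analyses is omitted; the data frame and item names are hypothetical.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def cronbach_alpha(scale_items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
    k = scale_items.shape[1]
    item_variances = scale_items.var(axis=0, ddof=1).sum()
    total_variance = scale_items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical respondents-by-items matrix of Likert-scored questionnaire responses,
# with the dependent variables already excluded.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 8, size=(200, 12)),
                     columns=[f"q{i}" for i in range(1, 13)])

# Principal component analysis on the standardized items.
z = StandardScaler().fit_transform(items)
pca = PCA().fit(z)

# Guttman-Kaiser criterion: retain components with eigenvalue greater than 1.0
# (in practice a scree plot would be inspected as well).
eigenvalues = pca.explained_variance_
n_retained = int((eigenvalues > 1.0).sum())
print("Components with eigenvalue > 1.0:", n_retained)

# Item loadings on the retained components; loadings above .70 on the assigned
# factor and below .40 elsewhere suggest convergent and discriminant validity.
loadings = pca.components_.T * np.sqrt(eigenvalues)
print(pd.DataFrame(loadings[:, :n_retained], index=items.columns).round(2))

# Internal consistency of one candidate scale (items grouped under one factor),
# checked against the .70 threshold.
print("Cronbach's alpha (q1-q4):", round(cronbach_alpha(items[["q1", "q2", "q3", "q4"]]), 2))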
To further test the validity of these constructs, a revised survey instrument was distributed to six
randomly selected undergraduate classes, yielding 207 usable responses. A confirmatory
factor analysis (CFA) of the data in AMOS indicated goodness-of-fit of the measurement model
(CFI = .91, GFI = .95, RMSEA = .06). Finally, two sets of regression analyses were performed
to determine how well the factors were able to predict the user acceptance intention and
behavior. The first analysis was a logistic regression test in which the independent variables
were the three factor scales that were found to have adequate validity and consistency, and the
dependent variable was the acceptance behavior (a dummy variable). The second analysis was an
OLS linear regression with the same set of independent variables and a dependent variable
acceptance intention (a 7-point Likert scale measuring the likelihood of joining the alert service).
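For readers who wish to see the two analyses in concrete form, a minimal sketch follows, using Python's statsmodels library as a stand-in for the software actually used in the study. The data frame and column names are hypothetical placeholders for the three factor scales and the two dependent variables.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical respondent-level data: three factor scales (mean item scores on a
# 1-7 range) and the two dependent variables described in the text.
rng = np.random.default_rng(0)
n = 207
df = pd.DataFrame({
    "perceived_utility": rng.uniform(1, 7, n),
    "perceived_controllability": rng.uniform(1, 7, n),
    "subjective_norm": rng.uniform(1, 7, n),
    "acceptance_behavior": rng.integers(0, 2, n),   # dummy variable: signed up or not
    "acceptance_intention": rng.integers(1, 8, n),  # 7-point Likert item
})

predictors = sm.add_constant(
    df[["perceived_utility", "perceived_controllability", "subjective_norm"]]
)

# Logistic regression: acceptance behavior on the three factor scales.
behavior_model = sm.Logit(df["acceptance_behavior"], predictors).fit(disp=False)
print(behavior_model.summary())

# OLS linear regression: acceptance intention on the same predictors.
intention_model = sm.OLS(df["acceptance_intention"], predictors).fit()
print(intention_model.summary())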
The analysis results [3] showed that "perceived controllability" (p < 0.01) was a significant
predictor of acceptance behavior, while “perceived utility” (p = .181) was not; on the other hand,
“perceived utility” was significantly associated with the intention of acceptance (p < 0.01),
whereas “perceived controllability” was not (p = .337). The research model and the hypothesis
testing results are illustrated in Figure 3. In summary, the positivist understanding obtained
from the survey results was that “perceived utility” (a PU construct) affects the acceptance
intention and “perceived controllability” (a PEoU construct) affects the acceptance behavior. The
following section of this paper offers possible explanations for these results in the light of
method triangulation.
[3] The associations between subjective norm and the dependent variables were weak and are therefore excluded from
the discussion here.
Methodological Implications of the Empirical Study
The sequence, priority and integration of the three phases of research are illustrated in Figure 4.
The design is adapted from the “sequential exploratory design” described by Creswell (2003),
except that Creswell’s original model places priority on the initial qualitative data collection. The
sequential exploratory design is characterized by the collection and analysis of qualitative data
followed by the collection and analysis of quantitative data. In this study, priority was given to
the quantitative element and the main purpose of the qualitative element was to assist in forming
hypotheses and in triangulating the survey results. The analyses from the three phases were
integrated at the stage of result interpretation and discussion.
Figure 4. Sequential Exploratory Mixed Methods Design (Emphasis on the Quantitative Phase)
As previously described, each of the three phases offered a unique perspective for viewing the
acceptance problem and the researcher’s understanding of the issue progressed as different
methods brought out different types of data. Using Lee’s (1991) terminology and framework
depicted in Figure 2, the empirical study can be described as follows:
Arrow 5: From the perspective of a TAM-based positivist understanding, the researcher develops
predictions of what to expect in the subjects’ acceptance behavior in terms of PU (perceived
enhancement of safety) and PEoU (perceived ease of using SMS).
Arrow 6: However, the low acceptance rate of the alert system – a manifestation of the
subjective understanding – failed to confirm the researcher’s predictions of what to expect in
subjects’ acceptance behavior in terms of PU (perceived enhancement of safety) and PEoU
(perceived ease of using SMS).
Arrow 4: The lack of confirmation of the positivist understanding then called for a revision of the
antecedent interpretive understanding.
Arrow 2: In order to improve the researcher’s interpretive understanding of acceptance behavior,
a fresh reading of users’ subjective understanding was obtained through qualitative interviews
with both users and non-users.
Arrow 1: The new reading of the user’s subjective understanding then provided the basis for
formulating a fresh interpretive understanding.
Arrow 3: The fresh interpretive understanding included an improved interpretation of PU (the
individual user’s and non-user’s perception of the timeliness, relevance and amount of safety
information provided by the system) and PEoU (the individual user’s and non-user’s perception
of the controllability of the system), which in turn provided the basis for an improved positivist
understanding.
Arrow 5: The improved positivist understanding guided the survey instrument design, and the
quantitative survey solicited the predicted subjective understanding of the alert system’s
“usefulness” (perceived utility) and “ease of use” (perceived controllability).
Arrow 1: The researcher interpreted the subjective understanding observed in the survey
responses.
Arrow 3: The researcher’s interpretive understanding of the survey results gave rise to a
rethinking of the original TAM constructs.
Through the iterations described above, the "apparent absurdities" of reality were gradually
explained by interview findings and then by the survey results. In the absence of interview data,
the survey study might have led to a confusing view of PU and PEoU in predicting intention of
and behavior in adopting the alert system. However, by triangulating the qualitative
interpretations with the survey data analysis, we were able to reconcile the contradictions and
provide a new interpretation of PU and PEoU.
Specifically, the quantitative study triangulates with the qualitative phase in several ways. First,
the factor analysis confirms that technology acceptance centers on PU and PEoU, although the
meanings of these concepts are more specific in this context. Second, the qualitative data help to
explain the seemingly confusing results from regression analysis: on the one hand, non-users
generally believed that a system like Campus Alerts might be “useful” in terms of improving the
University’s emergency preparedness (hence a significant predictor for the intention of
acceptance); on the other hand, existing users had doubts about Campus Alerts based on their
usage experiences (relevance, accessibility, etc.) of the system (hence the insignificance of
association between PU and behavior). Third, system controllability was a factor identified in
both qualitative and quantitative phases, but the quantitative study highlights the critical
importance of this factor as a strong predictor of acceptance behavior.
The empirical findings prompted us to rethink the meaning of PU and PEoU in the context of
emergency alert systems. Since the “usefulness” of an emergency alert technology is usually
assumed but not tried (unless a real emergency strikes), the PU of the technology is inevitably
vague (Rogers, 2002). In fact, the lack of "trialability" reveals a limitation inherent in many
current emergency response systems: the implementation of systems is still grounded in the
traditional Command & Control model of crisis management and only functions when there is
“chaos” (Wu et al., 2008). Such systems are intended to deal with “chaos” and completely ignore
the importance of continuity in emergency response (Dynes, 1994).
Hence, emergency response systems such as Campus Alerts should integrate more peripheral
functions so that continuous use of the system can be guaranteed. As Helsloot and Ruitenberg
(2004) suggested, existing systems used in daily life are more effective in emergencies than
“artificial” response systems. For example, Campus Alerts can be used to notify students about
unusual events such as school closure and icy road conditions. A system that only deals with
future emergencies may be perceived as “useful”, but this future utility might not be a strong
motivator for potential adopters. PU, therefore, refers not only to the central and intended utility,
but also to the perceived utilities in dealing with peripheral or even remotely related tasks.
The multi-faceted usefulness of emergency alert technologies also links with the technologies'
multi-level ease of use. Although the notion of a "controllable user interface" (Shneiderman, 1997)
is now widely accepted in interaction design, users of emergency alert systems are hardly viewed as
active agents with a desire to be in control. In many situations it is true that average citizens have
common needs when an emergency strikes; nevertheless, for emergency notification systems
deployed in a community with a large number of users, information needs may vary depending
on the nature of the emergency and on contextual factors related to the user. In the case of
Campus Alerts, PEoU is an important factor that goes beyond the superficial conceptualization
of technical experience or skills. The results of the case study suggest that there are higher-level
usability issues for information technologies that need to be considered when evaluating
ease of use.
Conclusions
This paper advocates a mixed methods approach to technology acceptance research and
describes how such an approach was used in an empirical study of emergency alert system
adoption. It illustrates four methodological points: 1) the need to advance technology acceptance
research by changing the methodological dominance of the survey study; 2) the value of a mixed
methods approach in technology acceptance research; 3) the need for evaluation of TAM
constructs in both positivist and interpretive paradigms; 4) the importance of method
triangulation. The case study highlights the iterative nature of a mixed methods design in which
different methodological techniques were called upon to confirm or disconfirm the three levels
of understanding.
Aside from the sequential exploratory design presented in this paper, there are other research
designs in which quantitative and qualitative phases are given different weights and/or temporal
orders. For example, in the sequential explanatory design (Creswell, 2003) (illustrated in Figure
5), researchers collect and analyze quantitative data first, and then use qualitative methods to
probe, explain, or triangulate the quantitative results.
Figure 5. Sequential Explanatory Mixed Methods Design (Emphasis on the Qualitative Phase)
The selection of a particular research design deserves careful thought and is usually driven by
the research aim and the availability of resources. A creative and well-designed mixed methods
study will produce findings from each set of data that complement each other in addressing the
research problem. If findings are corroborated across different methods, then greater confidence
can be placed in the conclusions; if the findings conflict, then the complexity of the phenomenon
may be appreciated and our understanding of the problem advanced.
It must be stressed that the researcher recognizes the legitimacy of using only quantitative or
only qualitative methods and makes no claim that the mixed methods approach proposed in this
paper is the best or the only one that can be employed in IS studies. The empirical study merely
provides a demonstration of the feasibility of integrating multiple methods in order to further the
theories and understanding of user acceptance of technology. It is also hoped that, by introducing
mixed methods into TAM-based acceptance research, researchers will be encouraged to revisit
the constructs of PU and PEoU in greater depth so that “actional advice” (Benbasat and Barki,
2007) may be offered to information system managers and system designers.
Acknowledgements
The author thanks the anonymous reviewer for his/her insightful comments that greatly improved
this paper. The author is also grateful to Dr. Jenny Preece and Dr. Yan Qu for their advice and
support in conducting the empirical study.
References
Ajzen, I., and Fishbein, M. Understanding Attitudes and Predicting Social Behavior, Prentice-
Hall, Englewood Cliffs, NJ, 1980.
Bagozzi, R.P. "The Legacy of the Technology Acceptance Model and a Proposal for a Paradigm
Shift," Journal of the Association for Information Systems (8:4), April 2007, pp. 243-254.
Bazeley, P. Qualitative Data Analysis with NVivo, Sage Publications, London, 2007.
Benbasat, I., and Barki, H. "Quo vadis, TAM?," Journal of the Association for Information
Systems (8:4), April 2007, pp. 211-218.
Chen, W., and Hirschheim, R. "A Paradigmatic and Methodological Examination of Information
Systems Research from 1991 to 2001," Information Systems Journal (14:1), January 2004,
pp. 197-235.
Converse, J.M., and Presser, S. Survey Questions: Handcrafting the Standardized Questionnaire,
Sage Publications, Thousand Oaks, CA, 1986.
Creswell, J.W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches
(2nd ed.), Sage Publications, Thousand Oaks, CA, 2003.
Creswell, J.W., and Tashakkori, A. "Editorial: Developing Publishable Mixed Methods
Manuscripts," Journal of Mixed Methods Research (1:2), April 2007, pp. 107-111.
Davis, F.D. "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information
Technology," MIS Quarterly (13:3), September 1989, pp. 319-340.
Davis, F.D., Bagozzi, R.P., and Warshaw, P.R. "Extrinsic and Intrinsic Motivation to Use
Computers in the Workplace," Journal of Applied Social Psychology (22:14), August 1992,
pp. 1111-1132.
Davis, F.D., Bagozzi, R.P., and Warshaw, P.R. "User Acceptance of Computer technology: A
Comparison of Two Theoretical Models", Management Science (35:8), August 1989, pp.
982–1003.
Denzin, N.K., and Lincoln, Y.S. The SAGE Handbook of Qualitative Research (3rd ed.), Sage
Publications, Thousand Oaks, CA, 2005.
Dillon, A. "User Acceptance of Information Technology," in: Encyclopedia of Human Factors
and Ergonomics, W. Karwowski (Ed.), Taylor & Francis, London, 2001, pp. 1-10.
Dynes, R.R. "Community Emergency Planning: False Assumptions and Inappropriate
Analogies," International Journal of Mass Emergencies and Disasters (12:2), August 1994,
pp. 141-158.
Helsloot, I., and Ruitenberg, A. "Citizen Response to Disasters: A Survey of Literature and Some
Practical Implications," Journal of Contingencies and Crisis Management (12:3), September
2004, pp. 98-111.
Johnson, R.B., and Onwuegbuzie, A.J. "Mixed Methods Research: A Research Paradigm Whose
Time Has Come," Educational Researcher (33:7), October 2004, pp. 14-26.
Kaplan, B., and Duchon, D. "Combining Qualitative and Quantitative Methods in Information
Systems Research: A Case Study," MIS Quarterly (12:4), December 1988, pp. 571-585.
King, W.R., and He, J. "A Meta-Analysis of the Technology Acceptance Model," Information &
Management (43:6), September 2006, pp. 740-755.
Kraemer, K.L., and Dutton, W.H. "Survey Research in the Study of Management Information
Systems," in: The information Systems Research Challenge: Survey Research Methods, K.L.
Kraemer (Ed.), Harvard Business School Press, Boston, MA, 1991, pp. 3-57.
Kuhn, T. The Essential Tension: Selected Studies in Scientific Tradition and Change, Chicago
University Press, Chicago, IL, 1977.
Kvale, S. InterViews: An Introduction to Qualitative Research Interviewing, Sage Publications,
Thousand Oaks, CA, 1996.
Lee, A.S. "A Scientific Methodology for MIS Case Studies," MIS Quarterly (13:1), March 1989,
pp. 33-50.
Lee, A.S. "Integrating Positivist and Interpretive Approaches to Organizational Research,"
Organization Science (2:4), November 1991, pp. 342-365.
Lee, Y., Kozar, K.A., and Larsen, K.R.T. "The Technology Acceptance Model: Past, Present,
and Future," The Communications of the Association for Information Systems (12:1), 2003,
pp. 752-780.
Losee, J. Theories of Scientific Progress: An Introduction, Routledge, New York & London,
2003.
Manfredo, M.J., and Shelby, B. "The Effect of Using Self-Report Measures in Tests of Attitude-
Behavior Relationships," The Journal of General Psychology (128:6), December 1988, pp.
731-743.
Meehl, P.E. "Theoretical Risks and Tabular Asterisks: Sir Karl, Sir Ronald, and the Slow
Progress of Soft Psychology," Journal of Consulting and Clinical Psychology (46:4), August
1978, pp. 806-834.
Mingers, J. "Combining IS Research Methods: Towards a Pluralist Methodology," Information
Systems Research (12:3), September 2001, pp. 240-259.
Myers, M.D. Qualitative Research in Business & Management, Sage Publications, London,
2009.
Nunnally, J. C. Psychometric Theory (2nd ed.), McGraw-Hill, New York, NY, 1978.
Orlikowski, W.J., and Baroudi, J.J. "Studying Information Technology in Organizations:
Research Approaches and Assumptions," Information Systems Research (2:1), March 1991,
pp. 1-28.
Palvia, P., Mao, E., Salam, A.F., and Soliman, K.S. "Management Information Systems
Research: What's There in a Methodology," Communications of the Association for
Information Systems (11:1), 2003, pp. 289–309.
Patton, M.Q. How to Use Qualitative Methods in Evaluation, Sage Publications, Newbury Park,
CA, 1987.
Rogers, E.M. "Diffusion of Preventive Innovations,” Addictive Behaviors (27), 2002, pp. 989-
993.
Rubin, H.J., and Rubin, I.S. Qualitative Interviewing (2nd ed.), Sage Publications, Thousand
Oaks, CA, 2005.
Scandura, T.A., and Williams, E.A. "Research Methodology in Management: Current Practices,
Trends, and Implications for Future Research," Academy of Management Journal (43:6),
December 2000, pp. 1248-1264.
Sharma, R., Yetton, P., and Crawford, J. "The Relationship between Perceived Usefulness and
Use: The Effect of Common Method Bias," Diffusion Interest Group in Information
Technology Conference (DIGIT /SIGADIT), Washington, DC, 2004.
Shneiderman, B. "Direct Manipulation for Comprehensible, Predictable and Controllable User
Interfaces," in Proceedings of the 2nd International Conference on Intelligent User
Interfaces, Moore, J., Edmonds, E., and Puerta, A. (Eds.), Orlando, Florida, January 1997,
pp. 33-39.
Straub, D., Boudreau, M.-C., and Gefen, D. "Validation Guidelines for IS Positivist Research,"
Communications of AIS (13), Article 24, 2004, pp. 380-427.
Straub, D., and Burton-Jones, A. "Veni, Vidi, Vici: Breaking the TAM Logjam," Journal of the
Association for Information Systems (8:4), April 2007, pp. 223-229.
Tashakkori, A., and Teddlie, C. Handbook of Mixed Methods in Social & Behavioral Research,
Sage Publications, Thousand Oaks, CA, 2003.
Trauth, E.M., and Jessup, L.M. "Understanding Computer-Mediated Discussions: Positivist and
Interpretive Analyses of Group Support System Use," MIS Quarterly (24:1), March 2000, pp.
43-79.
Venkatesh, V., and Davis, F.D. "A Model of the Antecedents of Perceived Ease of Use:
Development and Test," Decision Sciences (27:3), Summer 1996, pp. 451-481.
Venkatesh, V., and Davis, F.D. "A Theoretical Extension of the Technology Acceptance Model:
Four Longitudinal Field Studies," Management Science (46:2), February 2000, pp. 186-204.
Venkatesh, V., Morris, M.G., Davis, G.B., and Davis, F.D. "User Acceptance of Information
Technology: Toward a Unified View," MIS Quarterly (27:3), September 2003, pp. 425-478.
Wu, P. F., Qu, Y., Preece, J., Fleischmann, K., Golbeck, J., Jaeger, P., and Shneiderman, B.
"Community Response Grid (CRG) for a University Campus: Design Requirements and
Implications", in Proceedings of the 5th International Conference on Information Systems for
Crisis Response and Management (ISCRAM), Fiedrich, F., and Van de Walle, B.A. (Eds.),
Washington, DC, 2008.
Yin, R.K. Case Study Research: Design and Methods, (3rd ed.) Sage Publications, Thousand
Oaks, CA, 2003.
Yuan, L., Dade, C., and Prada, P. "Texting When There's Trouble," The Wall Street Journal, April 18,
2007, p. B1.