
Measuring the Performance of Information Systems: A Functional Scorecard

JERRY CHA-JAN CHANG AND WILLIAM R. KING


JERRY CHA-JAN CHANG is an Assistant Professor in the Department of MIS in the College of Business, University of Nevada, Las Vegas. He has a B.S. in Oceanography from National Ocean University, Taiwan, an M.S. in Computer Science from Central Michigan University, an MBA from Texas A&M University, and an M.S. in MoIS and a Ph.D. in MIS from the University of Pittsburgh. His research interests include performance measurement, IS strategy, management of IS, group support systems, human-computer interaction, organizational learning, and strategic planning. His work has appeared in Information & Management, Decision Support Systems, DATABASE, Communications of the ACM, and Journal of Computer Information Systems, as well as several major IS conference proceedings.

WILLIAM R. KING holds the title University Professor in the Katz Graduate School of Business at the University of Pittsburgh. He has published more than 300 papers and 15 books in the areas of Information Systems, Management Science, and Strategic Planning. He has served as Founding President of the Association for Information Systems (AIS), President of TIMS (now INFORMS), and Editor-in-Chief of MIS Quarterly. He was instrumental in the creation of INFORMS and of the journal Information Systems Research. He recently received the LEO Lifetime Exceptional Achievement Award from AIS.

ABSTRACT: This study develops an instrument that may be used as an information systems (IS) functional scorecard (ISFS). It is based on a theoretical input-output model of the IS function's role in supporting business process effectiveness and organizational performance. The research model consists of three system output dimensions: systems performance, information effectiveness, and service performance. The "updated paradigm" for instrument development was followed to develop and validate the ISFS instrument. Construct validation of the instrument was conducted using responses from 346 systems users in 149 organizations by a combination of exploratory factor analysis and structural equation modeling using LISREL. The process resulted in an instrument that measures 18 unidimensional factors within the three ISFS dimensions. Moreover, a sample of 120 matched-pair responses of separate CIO and user responses was used for nomological validation. The results showed that the ISFS measure reflected by the instrument was positively related to improvements in business process effectiveness and organizational performance. Consequently, the instrument may be used for assessing IS performance, for guiding information technology investment and sourcing decisions, and as a basis for further research and instrument development.

KEY WORDS AND PHRASES: functional scorecard, information systems performance measurement, instrument development, structural equation modeling.

Journal of Management Information Systems / Summer 2005, Vol. 22, No. 1, pp. 85-115. © 2005 M.E. Sharpe, Inc. 0742-1222 / 2005 $9.50 + 0.00.


ASSESSING THE INFORMATION SYSTEMS (IS) function's performance has long been an important issue to IS executives. This interest is evident from the prominence of this issue in the various IS "issue" studies [12, 13, 34, 49, 72] as well as the popularity of annual publications such as ComputerWorld Premier 100 and InformationWeek 500, which involve the use of surrogate metrics to assess overall IS functional performance (ISFP).

Executives routinely seek evidence of returns on information technology (IT) investments and sourcing decisions, both types of choices that have become more substantial and a competitive necessity. As the unit that has major responsibilities for these decisions, the IS function is usually believed to be an integral part of achieving organizational success. Yet the overall performance of the IS function has proved to be difficult to conceptualize and to measure.

As the outsourcing of IS subfunctional areas such as data centers and "help desks" has grown into the outsourcing of the entire IS function, there is an ever-growing need for formal performance assessment [61]. This will permit the establishment of baseline measures to use in judging outsourcing success. So, the issue of an overall IS functional metric, which is, and has been, high on IS executives' priorities, is becoming even more important.

Although there has been a good deal of research on IS efficiency, effectiveness, and success at various levels of analysis, overall functional-level performance is one of the least discussed and studied. According to Seddon et al. [88], only 24 out of 186 studies between 1988 and 1996 can be classified as focusing on the IS functional level. Nelson and Cooprider's [70] work epitomizes this need. Moreover, while there exist metrics and instruments to assess specific IS subfunctions and specific IS subareas, such as data center performance, productivity, and data quality, typically these measures cannot be aggregated in any meaningful way. This limits their usefulness as the bases for identifying the sources of overall performance improvements or degradations. As an anonymous reviewer of an earlier version of this paper said, "The critical issue is that the performance of the IS function is now under the microscope and decisions to insource/outsource and spend/not spend must be made in a structured context."

The objective of this research is to develop such an instrument, a "scorecard" for evaluating overall ISFP.

The Theoretical Bases for the Study


THE DEFINITION OF THE "IS FUNCTION" that is used here includes "all IS groups and departments within the organization" [84]. This definition is broad enough to include various structures for the IS function, from centralized to distributed, yet specific enough to include only the formal IS function that can be readily identified.

Figure 1 shows the modified input-output (I/O) model that is the theoretical basis for the study. The model in Figure 1 has been utilized as a basis for other IS research studies [63, 113].


[Figure 1 diagram: IS function resources (hardware, software, human resources, and integrated managerial and technical capabilities) feed IS function outputs (systems, information, services), which drive business process effectiveness and organizational performance.]

Figure 1. Theoretical Input-Output Performance Model

The model incorporates a simple input-output structure wherein the IS function uses resources to produce IS performance, which in turn influences both business process effectiveness and organizational performance. The resources utilized by the IS function are shown in Figure 1 to be hardware, software, human resources, and integrated managerial and technical capabilities [14, 15, 36]. The IS function is shown to produce systems, information, and services [56, 92], which collectively affect the organization in a fashion that is termed IS functional performance (ISFP), which is to be assessed through an IS functional scorecard (ISFS), the development of which is the objective of this study.

In the theoretical model, IS outputs are also shown as significant enablers and drivers of business process effectiveness, since IS are often the basis for business process operations and redesign [94, 113]. ISFP also is shown to influence business process effectiveness, and both influence overall organizational performance [113]. Although it is not the primary purpose of this study to directly address the business process effectiveness and organizational performance elements of Figure 1, data were collected on these elements of the model for purposes of nomological validation of the "scorecard" that is being developed.

The model of Figure 1 is based on streams of research in IS capabilities, IS effectiveness/success, IS service quality, IS functional evaluation, and IS subfunctional assessment.

IS Capabilities
IS capabilities are integrated sets of hardware, software, human skills, and management processes that serve to translate financial investments in IS into IS performance [17, 23, 42, 83, 99, 111, 113]. For instance, an IS strategic planning capability might consist of well-trained planners, computer-based planning models, knowledgeable


technical people, adequate planning budgets, and a well-formulated and specified planning process.

IS Effectiveness/Success
DeLone and McLean [30] categorized over 100 IS "dependent variables" into six categories and developed an IS success model to describe the relationships between the categories. They concluded that IS success should be a multidimensional measure and recommended additional research to validate the model. Other researchers have since tested and expanded their model [7, 46, 79]. DeLone and McLean [31] have updated the model based on a review of research stemming from their original work. They concluded that their original model was valid and suggested that "service quality" be incorporated as an important dimension of IS success.

IS Service Quality
Recognizing the importance of the services provided by the IS function, the SERVQUAL measure, originally developed in marketing [74], has been adapted to measure IS service quality [75, 110]. However, the controversy over SERVQUAL in marketing [27] has carried over into IS [52, 104], suggesting that more research needs to be conducted to measure IS service quality. Proponents of this measure sometimes advocate its use as a proxy for ISFP. However, as depicted in Figure 1, it is directly applicable only to one of the three major outputs of the IS function.

IS Functional Evaluation
Only a few studies directly address the comprehensive evaluation of the performance of the IS function. No one has developed a validated metric. Wells [112] studied existing and recommended performance measures for the IS function and identified six important goals/issues. Saunders and Jones [84] developed and validated 11 IS function performance dimensions through a three-round Delphi study. They proposed an IS function performance evaluation model to help organizations select and prioritize IS performance dimensions and to determine assessments for each dimension. Both studies focused on top management's perspective of ISFP and did not offer any specific measures.

IS Subfunctional Assessment
Measuring IS subfunctional performance has been important to IS practitioners and academics, and such measures have been developed at a variety of levels using a number of different perspectives. For instance, measurements have been made of the effects of IS on users (e.g., [105]), learning outcomes (e.g., [1]), service (e.g., [52]), e-business (e.g., [96]), and other contexts using economic approaches (e.g., [16]), a financial perspective (e.g., [10]), a social science perspective (e.g., [80]), an "IT value" approach [24], a business process viewpoint (e.g., [98]), and probably others.


Table 1. Implementation of Cameron and Whetton's [19] Guidelines

1. From whose perspective is effectiveness being assessed?
   Implementation: Organizational users of IS services and systems.
2. On what domain of activity is the assessment focused?
   Implementation: Products and services provided by the IS function.
3. What level of analysis is being used?
   Implementation: The IS function [84].
4. What is the purpose for judging effectiveness?
   Implementation: Identify strengths and weaknesses; track overall effectiveness.
5. What time frame is being employed?
   Implementation: Periodically, ranging from quarterly to annually.
6. What type of data are being used for judgments of effectiveness?
   Implementation: Subjective; perceptual data from individuals.
7. What is the referent against which effectiveness is judged?
   Implementation: Past performance measures.

So, there has been no paucity of interest in IS assessment, or in the development of measures. However, there is a great need for a comprehensive measure of IS performance that will provide a configural, or "Gestalt" [41], view of an organization's formal IS activities and facilitate decision making and functional improvement.

The Methodological Basis for the Study


TO ENSURE THE APPROPRIATENESS OF THE STUDY at the IS functional level, it was designed according to guidelines from the organizational effectiveness literature. These guidelines were developed in response to problems plaguing organizational effectiveness research as described by Steers [95]. Cameron and Whetton [19] developed seven basic guidelines, which are listed in Table 1. Cameron [18] later demonstrated the usefulness of these guidelines in a study of 29 organizations. These guidelines have also been adopted by IS researchers to clarify conceptual developments in examining IS functional effectiveness [69, 88]. The implementations of Cameron and Whetton's [19] guidelines for this study are also shown in Table 1.

Thus, the ISFS developed here is defined as organizational IS users' perception of the performance of all of the aspects of the IS function that they have personally experienced. Organizational users of IS services and systems are the primary stakeholders for the IS function [92]. Although there are many other stakeholders for the IS function, users represent the largest group, and their efficacy in utilizing IS products and services directly affects the organization's bottom line. Therefore, the aggregated evaluation of individual users' assessments forms a quite comprehensive picture of the ISFP.


Despite its focus on users, this approach is different from the popular "user satisfaction" measures [6, 8, 35], because it is designed to assess people's perceptions of the overall IS function rather than to capture users' attitudes toward a specific system.

The Domain and Operationalization of the IS Performance Construct


USERS' PERCEPTIONS OF IS ACTIVITIES derive from their use of the IS "products" and the services provided by the IS function. IS research has traditionally separated the effects of systems and information as two distinct constructs [30]. However, system and information quality are "attributes of applications, not of IS departments" [87, p. 244]. Therefore, they are not sufficient to reflect the effectiveness of the entire IS function.

Domain of the ISFS Construct


The domain of ISFP used in this study reflects the theory of Figure 1 and the models suggested by Pitt et al. [75] and DeLone and McLean [31]. The definitions of the three basic output-related dimensions are given below. A model of the ISFS construct, using LISREL notation, is presented in Figure 2.

Systems performance: Assesses the quality aspects of systems, such as reliability, response time, ease of use, and so on, and the various impacts that systems have on the user's work. "Systems" encompass all IS applications that the user regularly uses.

Information effectiveness: Assesses the quality of information in terms of the design, operation, use, and value [108] provided by information, as well as the effects of the information on the user's job. The information can be generated from any of the systems that the user makes use of.

Service performance: Assesses the user's experience with services provided by the IS function in terms of quality and flexibility [38]. The services provided by the IS function include activities ranging from systems development to help desk to consulting.

In order to develop a measurement instrument with good psychometric properties, the "updated paradigm," which emphasizes establishing the unidimensionality of measurement scales [40, 89], was followed. A cross-sectional mail survey is appropriate to obtain a large sample for analysis and to ensure the generalizability of the resulting instrument.

Operationalization of Constructs
Two sets of constructs were operationalized in this study: the three-dimensional ISFS construct and the constructs related to the consequences of ISFP.


[Figure 2 diagram: the three-dimensional ISFS construct, comprising Systems Performance (SYSP), Information Effectiveness (INFOE), and Service Performance (SERVP).]

Figure 2. Three-Dimensional Model of ISFS

These "consequences" constructs (business process effectiveness and organizational performance), as shown in Figure 1, were used to assess nomological validity. Whenever possible, previously developed items that had been empirically tested were used or adapted to enhance the validity and reliability of the instrument under development. Some new measures were also developed from reviews of both practitioner and research literatures to reflect developments that have occurred subsequent to the development of the measures from which most items were obtained (e.g., e-commerce, enterprise resource planning [ERP], etc.). The three output dimensions of Figure 1 are the basis for the three ISFS dimensions.

Systems Performance

Measures of systems performance assess the quality aspects of systems and the various effects that IS have on the user's work. Empirical studies listed under the categories "system quality" and "individual impact" in DeLone and McLean's [30] IS Success Model were reviewed to collect the measures used in those studies. In addition, instruments developed by Baroudi and Orlikowski [8], Doll and Torkzadeh [35], Davis [29], Kraemer et al. [59], Mirani and King [67], Goodhue and Thompson [43], Ryker and Nath [81], Saarinen [82], and Torkzadeh and Doll [103] were also reviewed and


included for more updated measures published subsequent to DeLone and McLean's original review.

Information Effectiveness

Measures of information effectiveness assess the quality of the information provided by IS as well as the effects of the information on the user's job. Although DeLone and McLean's [30] "information quality" category provided a good source for existing measures, Wang and Strong [109] developed a more comprehensive instrument that encompasses all measures mentioned in DeLone and McLean's review. Therefore, the 118 measures developed by Wang and Strong make up the majority of items in this dimension. However, since the focus of their instrument is on quality of information, some new items were developed to ensure coverage of measures on the effects of information on the user's job.

Service Performance

Measures of service performance assess each user's experience with the services provided by the IS function in terms of the quality and flexibility of the services. The entire IS-SERVQUAL instrument is included in this dimension for comprehensiveness. New measures were also incorporated to augment the IS-SERVQUAL items based on the more comprehensive view of service performance proposed by Fitzgerald et al. [38]. In addition, literature on three areas of IS functional services that were not explicitly covered by the service quality literature (training [60, 65, 71], information centers [11, 44, 47, 66], and help desks [20]) was also reviewed and included to ensure the comprehensiveness of measures for this dimension.

In addition to utilizing existing items to measure these constructs, the emergence of innovations that have come into use since most of the prior instruments were developed prompted the inclusion of new items to measure the IS function's performance in seven new areas: ERP [51], knowledge management [45, 64, 97], electronic business [9, 55], customer relationship management [39], supply chain management [37], electronic commerce [22, 33, 102], and organizational learning [86, 100]. In total, 31 new items gleaned from the practitioner and research literatures to reflect potential user assessments of the IS function's contribution to those areas in terms of systems, information, and services were incorporated to expand the item pools for each dimension.

Instrument Development
A total of 378 items were initially generated. Multiple rounds of Q-sorting and item categorization were conducted [29, 68] to reduce the number of items and to ensure the content validity of the ISFS instrument. The last round of Q-sorting resulted in the identification of subconstructs with multiple items for each of the three dimensions. Table 2 shows the subconstructs for each dimension that resulted from the Q-sort process.


Table 2. Sub-ISFS Constructs from Q-Sort

Systems performance
  Effect on job
  Effect on external constituencies
  Effect on internal processes
  Effect on knowledge and learning
  Systems features
  Ease of use

Information effectiveness
  Intrinsic quality of information
  Contextual quality of information
  Presentational quality of information
  Accessibility of information
  Reliability of information
  Flexibility of information
  Usefulness of information

Service performance
  Responsiveness
  Reliability
  Service provider quality
  Empathy
  Training
  Flexibility of services
  Cost/benefit of services

The Q-sorts resulted in an ISFS instrument that consists of 42, 36, and 32 items for the three dimensions, respectively. All items were measured using a Likert-type scale ranging from 1 (hardly at all) to 5 (to a great extent), with 0 denoting "not applicable." The final version of the instrument is in the Appendix.

Survey Design and Execution


A sample of 2,100 medium-to-large companies with annual sales over $250 million was randomly selected from Hoover's Online (www.hoovers.com) and the InformationWeek 500. To avoid common-source bias, data were collected from two types of respondents in each of the sampled organizations. Data for the ISFS instrument were collected from IS users, and organizational CIOs were asked to respond to a "Consequences of ISFP" survey, which was used as a basis for establishing nomological validity.

A packet consisting of one "Consequences of ISFP" instrument and three ISFS instruments was sent to the CIOs of these companies. The CIO was asked to respond to the "Consequences of ISFP" survey and to forward the ISFS instruments to three IS users. The characteristics of desirable user-respondents, in terms of various functional areas, familiarity with IS, and so on, were specified. The CIO is deemed to be suitable for receiving the packet because the topic of this research would be of great interest to him or her, thereby increasing the potential for participation. The CIO is also an appropriate respondent to the "Consequences of ISFP" survey because he or she is at a high-enough position to provide meaningful responses concerning consequences. Although it is possible that the CIO might distribute the ISFS survey to "friendly" users and potentially bias the responses, it is unlikely that the users would be able to consciously bias the results, due to the focus of the analysis (variance explanation) and the length and complexity of the ISFS instrument.


Two rounds of reminders were sent to initial nonrespondents to improve the response rate. In addition, where appropriate, letters were sent to the CIOs who returned the CIO survey soliciting additional user participation. At the conclusion of data collection in 2001, 346 usable ISFS instruments and 130 "Consequences of ISFP" surveys had been received, with 120 companies having responses from at least one IS user and the CIO. This resulted in a response rate of 7.2 percent for the CIO survey, 5.6 percent for the ISFS questionnaire, and 6.1 percent for matched-pair responses.

Two analyses were conducted to assess possible nonresponse bias. T-tests of company size in terms of revenue, net income, and number of employees between responding and nonresponding companies showed no significant differences. T-tests of 30 items randomly selected from the three ISFS dimensions (10 items each) between the early (first third) and late (last third) respondents [4, 62] also showed no significant differences. Therefore, it can be concluded that there was no nonresponse bias in the sample and that the relatively low percentage response rate does not degrade the generalizability of the ISFS instrument [57].
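For readers who wish to replicate the extrapolation test, the following is a minimal sketch of the early/late-respondent comparison described above (Armstrong and Overton [4]), assuming item responses sit in a pandas DataFrame ordered by arrival; the file and column names are hypothetical stand-ins, not the authors' dataset.

```python
# Welch t-tests comparing the first and last thirds of respondents on
# sampled items; nonsignificant results suggest no nonresponse bias.
import pandas as pd
from scipy import stats

responses = pd.read_csv("isfs_responses.csv")  # assumed ordered by arrival

n = len(responses)
early = responses.iloc[: n // 3]   # first third of returns
late = responses.iloc[-(n // 3):]  # last third of returns

for item in ["sys_01", "info_04", "serv_07"]:  # illustrative item codes
    t, p = stats.ttest_ind(
        early[item].dropna(), late[item].dropna(), equal_var=False
    )
    print(f"{item}: t = {t:.2f}, p = {p:.3f}")  # p >= 0.05: no bias detected
```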

Sample Demographics
The participating companies represent more than 20 industries, with nearly a quarter of the companies in manufacturing (24.6 percent), followed by wholesale/retail (13.8 percent), banking/finance (10.8 percent), and medicine/health (7.7 percent). Annual sales ranged between $253 million and $45.352 billion, with an average of $4 billion for the sample.

For the "Consequences of ISFP" surveys, 46.9 percent of the respondents hold the title of CIO. More than 80 percent of the respondents have titles at the upper-management level, indicating that the returned surveys were completed by individuals at the desired level. For the ISFS instrument, 47.7 percent of the respondents are at the upper-management level and 39.9 percent are at the middle-management level. The respondents are distributed across all functional areas, with accounting and finance, sales and marketing, and manufacturing and operations being the top three.

Instrument Validation
INSTRUMENT VALIDATION REQUIRES THE EVALUATION of content validity, reliability, construct validity, and nomological validity. Following Segars's [89] process for instrument validation, we first used exploratory factor analysis to determine the number of factors, then used confirmatory factor analysis iteratively to eliminate items that loaded on multiple factors, in order to establish unidimensionality.

Content Validity
Content validity refers to the extent to which the measurement items represent and cover the domain of the construct [54]. It is established by showing that the "items are


a sample of a universe" of the investigator's interest and by "defining a universe of items and sampling systematically within this universe" [26, p. 58]. Churchill [25] recommended specifying the domain of the construct and then generating a sample of items as the first two steps in instrument development to ensure content validity. Domain development should be based on existing theories, and sample items should come from existing instruments, with new items developed when necessary. In this study, domain development was guided by theories in both organizational effectiveness and IS research. Items from existing instruments formed the overwhelming majority of the item pool. The initial items were refined through a series of Q-sorts and a pilot test. These development procedures ensured the content validity of the instrument.

Unidimensionality and Convergent Validity


Unidimensionality requires that only a single trait or construct is being measured by a set of measures and "is the most critical and basic assumption of measurement theory" [50, p. 49]. Gerbing and Anderson suggest that "confirmatory factor analysis affords a stricter interpretation of unidimensionality" [40, p. 186] than other commonly used methods. Although the subconstructs of the three basic dimensions described earlier were identified during the Q-sort, those factors needed to be empirically tested. Therefore, exploratory factor analyses were first conducted for items within each dimension to determine the factors. This is acceptable, since the items for each dimension were clearly separated in the instrument into sections with opening statements describing the nature of the items in the sections.

Three separate exploratory factor analyses were conducted using principal components with varimax rotation as the extraction method. There were seven, seven, and five factors with eigenvalues greater than 1.0, explaining 70.8 percent, 68.6 percent, and 69.6 percent of the variance for systems performance, information effectiveness, and service performance, respectively. Review of the items showed that most factors loaded very closely to the subconstructs identified by the Q-sort.

To establish unidimensionality, the items that loaded on the same factor were then analyzed with confirmatory factor analysis using LISREL, with two exceptions. One factor in "systems performance" had only one item. Since it is one of the original "ease-of-use" items from Davis [29], it was included in the factor that contains the rest of the "ease-of-use" items. Another factor in "information effectiveness" had only three items. It would be "just identified" for confirmatory factor analysis and was therefore analyzed only in conjunction with other factors in the same dimension. This process resulted in six factors for systems performance, six for information effectiveness, and five for service performance.
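As an illustration of the factor-count decision in the exploratory step above, the following sketch applies the eigenvalue-greater-than-1.0 (Kaiser) rule to an item correlation matrix; the data matrix here is a random placeholder, not the study's 346 user responses.

```python
# Kaiser criterion: retain as many factors as there are eigenvalues of the
# item correlation matrix that exceed 1.0.
import numpy as np

def kaiser_factor_count(items: np.ndarray) -> tuple[int, float]:
    corr = np.corrcoef(items, rowvar=False)   # item correlation matrix
    eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, descending
    n_factors = int(np.sum(eigvals > 1.0))    # Kaiser criterion
    variance_explained = eigvals[:n_factors].sum() / eigvals.sum()
    return n_factors, variance_explained

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(346, 42)).astype(float)  # placeholder Likert data
k, ve = kaiser_factor_count(demo)
print(f"{k} factors retained, explaining {ve:.1%} of variance")
```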


Segars and Grover suggest that "measured factors be modeled in isolation, then in pairs, and then as a collective network" [91, p. 148]. This method of analysis provides the fullest evidence of measurement efficiency and avoids problems caused by excessive error in measurement [2, 3, 53, 90]. In total, 17 measurement models were analyzed. Each model went through an iterative modification process to improve its model fit. First, items with standardized factor loadings below 0.45 were eliminated [78], one at a time. Second, error terms between pairs of items were allowed to correlate based on a modification index. However, this modification was implemented only when theory suggested that the two items should be correlated. This process was conducted iteratively, making one modification at a time, until either good model fit was achieved or no modification was suggested.

Following Segars and Grover's [90] procedure, after every measurement model completed its modification process, pairs of models within each dimension were tested iteratively to identify and eliminate items with cross-loadings. With all cross-loading items eliminated, all factors within the same dimension were tested in a full measurement model. Again, items with cross-loadings in the full model were dropped. After the full measurement models were purified, second-order models that reflect the subconstructs within each ISFS dimension were tested. The final, second-order measurement models for the three ISFS dimensions are presented in Figures 3, 4, and 5. The chi-square and significant factor loadings provide direct statistical evidence of both convergent validity and unidimensionality [91].

With each of the three ISFS dimensions properly tested independently, all three dimensions were combined and tested for model fit (Figure 6). The complete ISFS model, shown in Figure 6, showed remarkably good fit for such high complexity.

Reliability
In assessing measures using confirmatory factor analysis, a composite reliability for each factor can be calculated [5, 93]. This composite reliability is "a measure of internal consistency of the construct indicators, depicting the degree to which they 'indicate' the common latent (unobserved) construct" [48, p. 612]. Another measure of reliability is the average variance extracted (AVE), which reflects the overall amount of variance that is captured by the construct in relation to the amount of variance due to measurement error [48, 89]. The value of AVE should exceed 0.5 to indicate that the variance explained by the construct is larger than the measurement error.

The construct reliability and AVE of all dimensions and subconstructs are presented in Table 3. Table 3 indicates that all subconstructs showed good composite reliability except the IS training scale. However, there are some scales with an AVE below 0.50. This suggests that even though all scales (except one) were reliable in measuring their respective constructs, some of them were less capable of providing good measures of their own construct. Despite the low AVE, those scales were retained to ensure the comprehensiveness of the ISFS instrument.
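Both statistics can be computed directly from standardized loadings. The sketch below uses the usual formulas, CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ²/n; the loadings are illustrative values, not estimates from the paper.

```python
# Composite reliability and average variance extracted from standardized
# CFA loadings; AVE should exceed 0.5.
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    errors = 1.0 - loadings**2  # error variance of standardized indicators
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    return float(np.mean(loadings**2))  # mean squared loading

lam = np.array([0.78, 0.82, 0.74, 0.69])  # hypothetical factor loadings
print(f"CR  = {composite_reliability(lam):.2f}")
print(f"AVE = {average_variance_extracted(lam):.2f}")
```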

Discriminant Validity
Discriminant validity refers to the ability of the items in a factor to differentiate themselves from items that are measuring other factors.


[Figure 3 diagram: second-order measurement model for systems performance, showing standardized loadings for the items of each subconstruct.]

Figure 3. Full Second-Order Measurement Model for Systems Performance
Notes: chi-square = 618.62; d.f. = 411; p = 0.00; RMSEA (root mean square error of approximation) = 0.038; GFI (goodness-of-fit index) = 0.90; AGFI (adjusted goodness-of-fit index) = 0.87.

In structural equation modeling (SEM), discriminant validity can be established by comparing the model fit of an unconstrained model, which estimates the correlation between a pair of constructs, with that of a constrained model, which fixes the correlation between the constructs to unity. Discriminant validity is demonstrated when the unconstrained model has a significantly better fit than the constrained model. The difference in model fit is evaluated by the chi-square difference (with one degree of freedom) between the models. Tests of all possible pairs of subconstructs within each dimension were conducted.


[Figure 4 diagram: second-order measurement model for information effectiveness, showing standardized loadings for the items of each subconstruct.]

Figure 4. Full Second-Order Measurement Model for Information Effectiveness
Notes: chi-square = 216.20; d.f. = 156; p = 0.00; RMSEA = 0.033; GFI = 0.94; AGFI = 0.92.

The results are presented in Table 4. As shown, all chi-square differences are significant at p < 0.001, indicating that each scale captures a construct that is significantly unique and independent of the other constructs. This provides evidence of discriminant validity.
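The following is a sketch of how one such pairwise test can be evaluated, given the chi-square fit statistics of the constrained and unconstrained LISREL runs; the fit values below are placeholders, not entries from Table 4.

```python
# Chi-square difference test with one degree of freedom: a significant
# difference means the unconstrained model fits better, supporting
# discriminant validity for the pair of subconstructs.
from scipy.stats import chi2

chisq_constrained = 212.4    # hypothetical fit of the constrained model
chisq_unconstrained = 172.8  # hypothetical fit of the unconstrained model

diff = chisq_constrained - chisq_unconstrained
p_value = chi2.sf(diff, df=1)  # right-tail probability of the difference

print(f"chi-square difference = {diff:.2f}, p = {p_value:.2e}")
```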

Nomological Validity
A nomological network that specifies "probable (hypothetical) linkages between the construct of interest and measures of other constructs" [85, p. 14] further clarifies the ISFS construct and provides an additional basis for construct validation. An operationalization of the theoretical model of Figure 1 that considers the two important consequences of ISFP was used.


[Figure 5 diagram: second-order measurement model for service performance, showing standardized loadings for the items of each subconstruct.]

Figure 5. Full Second-Order Measurement Model for Service Performance
Notes: chi-square = 139.09; d.f. = 94; p = 0.00; RMSEA = 0.037; GFI = 0.95; AGFI = 0.93.

This model consists of the rightmost portions of Figure 1, which relate ISFP to business process effectiveness and to organizational performance.

Organizational Performance

Although a positive relationship between IS effectiveness and business performance has been suggested, the evidence of such an effect has proved to be elusive [58]. Using "user information satisfaction" and "strategic impact of IS" as surrogate IS effectiveness measures and several perceptual measures of business performance, Chan et al. [21] empirically showed a significant positive relationship between IS and business performance. Since the ISFS is posited as a more comprehensive measure of IS performance, the positive relationship should hold in this study.


[Figure 6 diagram: the complete ISFS model, linking the 18 subconstructs to their three dimensions (systems performance, information effectiveness, and service performance).]

Figure 6. The Complete ISFS Model
Notes: chi-square = 3,164.90; d.f. = 2,094; p = 0.00; RMSEA = 0.039; GFI = 0.79; AGFI = 0.77.

This construct captures the IS function's contribution to the overall performance of the organization. The literature has focused on assessing the extent to which IS improves the organization's return on investment (ROI), market share, operational efficiency, sales revenue, customer satisfaction, competitiveness, and customer relations [21, 77, 101, 106, 113]. Since subjective measures of those variables have been


Table 3. Reliability of Measurement Factors in ISFS Dimensions

Factor name                                    Reliability   AVE
Systems performance                                0.92      0.66
  Impact on job                                    0.95      0.68
  Impact on external constituencies                0.88      0.56
  Impact on internal processes                     0.89      0.80
  Impact on knowledge and learning                 0.89      0.62
  Systems usage characteristics                    0.85      0.49
  Intrinsic systems quality                        0.79      0.56
Information effectiveness                          0.92      0.63
  Intrinsic quality of information                 0.73      0.48
  Reliability of information                       0.79      0.66
  Contextual quality of information                0.85      0.75
  Presentational quality of information            0.87      0.77
  Accessibility of information                     0.80      0.57
  Flexibility of information                       0.81      0.58
  Usefulness of information                        0.91      0.66
Service performance                                0.89      0.63
  Responsiveness of services                       0.88      0.79
  Intrinsic quality of service provider            0.84      0.56
  Interpersonal quality of service provider        0.93      0.82
  IS training                                      0.59      0.33
  Flexibility of services                          0.69      0.37

considered acceptable in the literature [32, 107], seven items that assess the CIO's perception of IS's contribution to improving the organization's performance in those areas were used.

Business Processes Effectiveness

Aside from directly affecting organizational performance, the IS function should also have an effect on organizational performance through its impact on the effectiveness of business processes, as shown in Figure 1. IS have traditionally been implemented to improve the efficiency of internal operations. This use of IT has more recently been applied in redesigning both intra- and interfunctional business processes [94]. Improvements to value-chain activities through IT are captured in this construct. Based on Porter and Millar [76] and Davenport [28], Xia [113] developed a 39-item instrument to assess executives' perceptions of the extent to which IT improved the effectiveness of six value-chain activities. Data analysis resulted in six factors: production operations, product development, supplier relations, marketing services, management processes, and customer relations. Items representing those six factors were generated for this construct.

Validity and Reliability of the Measures Used in Nomological Analysis


Table 4. Chi-Square Differences Between Factors

Systems performance
          Factor 1   Factor 2   Factor 3   Factor 4   Factor 5
Factor 2  39.61***
Factor 3  31.33***   37.00***
Factor 4  26.31***   38.67***   28.55***
Factor 5  66.45***   78.42***   59.97***   65.97***
Factor 6  49.47***   64.66***   55.23***   58.68***   70.76***

Information effectiveness
          Factor 1   Factor 2   Factor 3   Factor 4   Factor 5   Factor 6
Factor 2  63.61***
Factor 3  80.96***   73.22***
Factor 4  64.16***   48.21***   75.03***
Factor 5  65.65***   40.60***   65.60***   47.34***
Factor 6  62.68***   52.31***   78.87***   60.04***   48.66***
Factor 7  67.31***   47.84***   67.80***   57.82***   47.27***   47.36***

Service performance
          Factor 1   Factor 2   Factor 3   Factor 4
Factor 2  23.84***
Factor 3  51.52***   52.93***
Factor 4  46.31***   48.52***   73.42***
Factor 5  30.69***   46.89***   75.73***   53.93***

*** p < 0.001.

Although all scales in the "Consequences of ISFP" survey were drawn from previously tested instruments, since a different sample was used, tests were conducted to ensure the reliability and validity of those constructs. Reliability was evaluated using Cronbach's alpha. Items with low corrected item-total correlations, indicating low internal consistency, were dropped. Construct validity was assessed by exploratory factor analysis using principal components with oblique rotation as the extraction method. Table 5 presents the results of the analyses. Although both constructs had two factors extracted, the two factors were significantly correlated in both cases. Therefore, all items within each construct were retained and used to create an overall score for the construct.

Items in the final measurement models were used to create an overall score for the ISFS construct. The average of all items for each construct was used to avoid problems that might occur due to differences in measurement scales. Table 6 shows the correlations among the constructs. As shown in Table 6, there were significant positive correlations between the ISFS construct and the two consequence constructs. There was also a significant positive correlation between business processes effectiveness and organizational performance. Although correlation is not sufficient to establish causal relationships, the purpose here is to demonstrate the expected association between the constructs. Therefore, as shown in Tables 5 and 6, the nomological network was supported.


Table 5. Reliability and Validity of Nomological Constructs

Construct                          Factors     Items      Variance           Cronbach's
                                   extracted   retained   explained (%)      alpha
Business processes effectiveness   2           6          63.48              0.759
Organizational performance         2           7          70.04              0.860

Table 6. Correlation of Constructs in the Nomological Network

                             Business processes    Organizational
Construct                    effectiveness         performance
Organizational performance   0.750**
ISFS                         0.214**               0.205**

** Correlation is significant at the 0.01 level (two-tailed).
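As a sketch of the scale checks and construct correlations behind Tables 5 and 6, the following computes Cronbach's alpha per consequence scale and then Pearson correlations between averaged construct scores; the file, column, and item names are hypothetical, not the study's actual coding.

```python
# Cronbach's alpha per scale, then nomological correlations between
# averaged construct scores from the matched-pair data.
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale total
    return (k / (k - 1)) * (1 - item_var / total_var)

df = pd.read_csv("matched_pairs.csv")  # hypothetical matched-pair data
bpe_items = df[[f"bpe_{i}" for i in range(1, 7)]]    # 6 retained items
orgp_items = df[[f"orgp_{i}" for i in range(1, 8)]]  # 7 retained items

print(f"alpha (business processes) = {cronbach_alpha(bpe_items):.3f}")
print(f"alpha (org. performance)   = {cronbach_alpha(orgp_items):.3f}")

# Overall scores are item averages; correlate them with the overall ISFS score.
bpe, orgp = bpe_items.mean(axis=1), orgp_items.mean(axis=1)
for name, score in [("business processes", bpe), ("org. performance", orgp)]:
    r, p = stats.pearsonr(df["isfs_overall"], score)
    print(f"ISFS vs. {name}: r = {r:.3f}, p = {p:.3f}")
```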

Results, Limitations, and Managerial Uses


THE ISFS INSTRUMENT IS A COMPREHENSIVE ONE that has been designed to measure the performance of the entire IS function. The instrument consists of three major dimensions: systems performance, information effectiveness, and service performance. Each dimension contains several unidimensional subconstructs, each of which is measured by at least two items. All scales have high reliability. Evidence from the discriminant validity analyses showed that each scale measures a construct that is distinct from the other constructs.

Of course, some limitations of the instrument need to be pointed out. The sample size, while large, especially for "matched-pair" survey studies, is "borderline" for the number of variables relative to the number of observations involved in the SEM analysis. Thus, some caution should be taken until the instrument is revalidated. The nonresponse bias analysis was conducted by comparing early responders to late responders and by comparing responders and nonresponders on organizational size-related variables. Although this is common practice, other variables might have been analyzed [57]. We also note that two subconstructs, "IS training" and "flexibility of services," were borderline with respect to reliability. Despite this, these items were retained for comprehensiveness or theoretical soundness. Further studies will need to explore and improve these items. The ISFS may therefore be thought of as a preliminary step that can guide future research and enhance practice in a significant, but limited, way.

The ISFS integrates aspects of various philosophical approaches that have been taken to developing IT metrics (e.g., [10, 16, 24, 80, 98]) as well as various subfunctional "levels" that have previously been measured (e.g., [31, 75, 84]). The comprehensiveness of the ISFS instrument was demonstrated by its consideration of "all"


IS activities, as reflected in the hierarchical structure of 18 unidimensional subconstructs within the three dimensions. This allows IS managers to use the instrument to assess their strengths and weaknesses on those subconstructs and dimensions. Of course, since the dimensions are not independent, the fact that one dimension may be "low" does not tell the whole story. Thus, such an indication must be further assessed in light of the overall "Gestalt" of dimensions. In this sense, the ISFS also allows the IS function to pinpoint specific areas that need improvement and to track both these areas and overall performance over time, thus providing the basis for "continuous improvement" [73].

When used in large organizations with decentralized IS functions, the ISFS instrument can offer valuable insights for internal benchmarking. Comparing the results of ISFS instruments from different divisions or areas would help identify areas of IS excellence and facilitate the transfer of knowledge to other areas. It has already been used in each of these ways in a number of organizations that participated in the study.

In addition, with the intensifying scrutiny of IT investment, analysis, and outsourcing, the ISFS instrument can be very useful for establishing a baseline on the current status of ISFP. Comparing post-investment or post-outsourcing IS performance to the baseline would provide a more objective evaluation of the efficacy of the actions taken. This, in turn, will allow the organization to develop follow-up actions to maximize IS performance and, ultimately, to improve organizational performance.

Thus, the instrument can be used in various ways: as an overall evaluative tool, as a "Gestalt" of areas that may be tracked over time, or in evaluating specific subareas. At this latter level, it also provides a means of identifying the specific performance areas, as represented by the subconstructs, that may need improvement. Because the instrument was validated with data from a cross-sectional field survey, the ISFS instrument is applicable to a variety of industries.

When used within an organization, the instrument should be administered to a range of systems users, in terms of both functional areas and organizational levels. This would ensure appropriate representation of the diverse users in the organization. The average scores for each subconstruct or dimension are the indicators of the IS function's performance for the specific subarea or dimension. To be effective, the ISFS instrument should be administered repeatedly at a fixed interval, between quarterly and annually. The results of later assessments should be compared to earlier evaluations to detect changes that would indicate improvement or degradation in the IS function's performance and in specific performance areas, as illustrated in the sketch below.
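The following is a minimal sketch of how an administered questionnaire might be scored in this way: 0 ("not applicable") is treated as missing, and each subconstruct score is the mean of its items. The item-to-subconstruct mapping shown is an invented excerpt; the full instrument is given in the paper's Appendix.

```python
# Subconstruct scoring for one administration of the ISFS instrument.
import pandas as pd

answers = pd.read_csv("isfs_wave.csv")  # hypothetical survey export
answers = answers.mask(answers == 0)    # 0 denotes "not applicable"

subconstructs = {  # illustrative subset of the 18 subconstructs
    "impact_on_job": ["sys_01", "sys_02", "sys_03"],
    "reliability_of_information": ["info_10", "info_11"],
    "is_training": ["serv_20", "serv_21"],
}

scores = pd.DataFrame({
    name: answers[cols].mean(axis=1)  # per-respondent average, NaNs skipped
    for name, cols in subconstructs.items()
})

# Function-level indicators: average across respondents, to be compared
# against earlier administrations of the instrument.
print(scores.mean(axis=0).round(2))
```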
One additional caveat may be useful. The nomological validation was performed in terms of the overall ISFS score. As a result, there is no assurance that the "subscores" have the same degree of nomological validity. Since such an analysis is beyond the scope of this study, we leave it to others who may wish to concern themselves with this issue.

Overall, the goal of developing a measure to assess the performance of the IS function was successfully achieved in this study. The resulting instrument is not only comprehensive enough to cover all aspects of ISFP but also sensitive enough to pinpoint specific areas that need attention. The ISFS instrument should be a useful tool for organizations to use in continuously monitoring the performance of their IS function


and for researchers to use in studies that require ISFP as a dependent or independent construct, as well as in studies that seek to complement the ISFS through other analyses.

REFERENCES

1. Alavi, M.; Marakas, G.M.; and Yoo, Y. A comparative study of distributed learning environments on learning outcomes. Information Systems Research, 13, 4 (December 2002), 404-415.
2. Anderson, J.C. An approach for confirmatory measurement and structural equation modeling of organizational properties. Management Science, 33, 4 (April 1987), 525-541.
3. Anderson, J.C., and Gerbing, D.W. Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 3 (May 1988), 411-423.
4. Armstrong, J.S., and Overton, T.S. Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14, 3 (August 1977), 396-402.
5. Bagozzi, R.P. An examination of the validity of two models of attitude. Multivariate Behavioral Research, 16, 3 (July 1981), 323-359.
6. Bailey, J.E., and Pearson, S.W. Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29, 5 (May 1983), 530-545.
7. Ballantine, J.; Bonner, M.; Levy, M.; Martin, A.; Monro, I.; and Powell, P.L. Developing a 3-D model of information systems success. In E.J. Garrity and G.L. Sanders (eds.), Information Systems Success Measurement. Hershey, PA: Idea Group, 1998, pp. 46-59.
8. Baroudi, J.J., and Orlikowski, W.J. A short-form measure of user information satisfaction: A psychometric evaluation and notes on use. Journal of Management Information Systems, 4, 4 (Spring 1988), 44-59.
9. Basu, A., and Kumar, A. Workflow management issues in e-business. Information Systems Research, 13, 1 (March 2002), 1-14.
10. Benaroch, M. Managing information technology investment risk: A real options perspective. Journal of Management Information Systems, 19, 2 (Fall 2002), 43-84.
11. Bergeron, F.; Rivard, S.; and De Serre, L. Investigating the support role of the information center. MIS Quarterly, 14, 3 (September 1990), 247-260.
12. Brancheau, J.C., and Wetherbe, J.C. Key issues in information systems management. MIS Quarterly, 11, 1 (March 1987), 23-45.
13. Brancheau, J.C.; Janz, B.D.; and Wetherbe, J.C. Key issues in information systems management: 1994-95 SIM Delphi results. MIS Quarterly, 20, 2 (June 1996), 225-242.
14. Broadbent, M., and Weill, P. Management by maxim: How business and IT managers can create IT infrastructures. Sloan Management Review, 38, 3 (Spring 1997), 77-92.
15. Broadbent, M.; Weill, P.; O'Brien, T.; and Neo, B.S. Firm context and patterns of IT infrastructure capability. In J.I. DeGross, S.L. Jarvenpaa, and A. Srinivasan (eds.), Proceedings of the Seventeenth International Conference on Information Systems. Atlanta: Association for Information Systems, 1996, pp. 174-194.
16. Brynjolfsson, E. The productivity paradox of information technology. Communications of the ACM, 36, 12 (December 1993), 67-77.
17. Byrd, T.A., and Turner, D.E. Measuring the flexibility of information technology infrastructure: Exploratory analysis of a construct. Journal of Management Information Systems, 17, 1 (Summer 2000), 167-208.
18. Cameron, K.S. A study of organizational effectiveness and its predictors. Management Science, 32, 1 (January 1986), 87-112.
19. Cameron, K.S., and Whetton, D.A. Some conclusions about organizational effectiveness. In K.S. Cameron and D.A. Whetton (eds.), Organizational Effectiveness: A Comparison of Multiple Models. New York: Academic Press, 1983, pp. 261-277.
20. Carr, C.L. Managing service quality at the IS help desk: Toward the development and testing of TECH-QUAL, a model of IS technical support service quality. Ph.D. dissertation, University of Minnesota, Minneapolis, 1999.


21. Chan, Y.E.; Huff, S.L.; Barclay, D.W.; and Copeland, D.G. Business strategic orientation, information systems strategic orientation, and strategic alignment. Information Systems Research, 8, 2 (June 1997), 125-150. 22. Chatterjee, D.; Grewal, R.; and Sambamurthy, V. Shaping up for e-commerce: Institutional enablers of the organizational assimilation of Web technologies. MIS Quarterly, 26, 2 (June 2002), 65-90. 23. Chatterjee, D.; Pacini, C ; and Sambamurthy, V. The shareholder-wealth and tradingvolume effects of information technology infrastructure investments. Journal of Management Information Systems, 19, 2 (Fall 2002), 7 ^ 2 . 24. Chircu, A.M., and Kauffman, R.J. Limits to value in electronic commerce-related IT investments. Journal of Management Information Systems, 17,2 (Fall 2000), 59-80. 25. Churchill, G.A. A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16, 1 (February 1979), 64-73. 26. Cronbach, L.J., and Meehl, P.E. Construct validity in psychological tests. In D.M. Jackson and S. Messick (eds.). Problems in Human Assessment. New York: McGraw-Hill, 1967, pp. 57-77. 27. Cronin, J.J.J., and Taylor, S.A. SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurement of service quality. Journal of Marketing, 58, 1 (January 1994), 125-131. 28. Davenport, T.H. Process Innovation: Reengineering Work Through Information Technology. Boston: Harvard Business School Press, 1993. 29. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13, 3 (September 1989), 319-340. 30. DeLone, W.H., and McLean, E.R. Information systems success: The quest for the dependent variable. Information Systems Research, 3, 1 (March 1992), 60-95. 31. DeLone, W.H., and McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19, 4 (Spring 2003), 9-30. 32. Dess, G.G., and Robinson, R.B.J. Measuring organizational performance in the absence of objective measures: The case of the privately-held firm and conglomerate business unit. Strategic Management Journal, 5, 3 (July-September 1984), 265-273. 33. Devaraj, S.; Fan, M.; and Kohli, R. Antecedents of B2C channel satisfaction and preference: Validating e-commerce metrics. Information Systems Research, 13, 3 (September 2002), 316-333. 34. Dickson, G.W.; Leitheiser, R.L.; Nechis, M.; and Wetherbe, J.C. Key information systems issues for the 1980s. MIS Quarterly, 8, 3 (September 1984), 135-148. 35. Doll, W.J., and Torkzadeh, G. The measurement of end-user computing satisfaction. MIS Quarterly, 12, 2 (June 1988), 259-274. 36. Duncan, N.B. Capturing flexibility of information technology infrastructure: A study of resource characteristics and their measure. Journal of Management Information Systems 12 2 (Fall 1995), 37-57. 37. Fan, M.; Stallaert, J.; and Whinston, A.B. Decentralized mechanism design for supply chain organizations using an auction market. Information Systems Research, 14, 1 (March 2003), 1-22. 38. Fitzgerald, L.; Johnston, R.; Brignall, S.; Silvestro, R.; and Voss, C. Performance Measurement in Service Businesses. London: Chartered Institute of Management Accountants, 1993. 39. Gefen, D., and Ridings, CM. Implementation team responsiveness and user evaluation of customer relationship management: A quasi-experimental design study of social exchange theory. Journal of Management Information Systems, 19, 1 (Summer 2002), 47-70. 
40. Gerbing, D.W., and Anderson, J.C. An updated paradigm for scale development incorporating unidimensionality and its assessment. Journal of Marketing Research, 25, 2 (May 1988), 186-192.
41. Glazer, R. Measuring the knower: Towards a theory of knowledge equity. California Management Review, 40, 3 (Spring 1998), 175-194.
42. Gold, A.H.; Malhotra, A.; and Segars, A.H. Knowledge management: An organizational capabilities perspective. Journal of Management Information Systems, 18, 1 (Summer 2001), 185-214.

43. Goodhue, D.L., and Thompson, R.L. Task-technology fit and individual performance. MIS Quarterly, 19, 2 (June 1995), 213-236.
44. Govindarajulu, C., and Reithel, B.J. Beyond the information center: An instrument to measure end-user computing support from multiple sources. Information & Management, 33, 5 (May 1998), 241-250.
45. Grover, V., and Davenport, T.H. General perspectives on knowledge management: Fostering a research agenda. Journal of Management Information Systems, 18, 1 (Summer 2001), 5-22.
46. Grover, V.; Jeong, S.R.; and Segars, A.H. Information systems effectiveness: The construct space and patterns of application. Information & Management, 31, 4 (December 1996), 177-191.
47. Guimaraes, T., and Igbaria, M. Exploring the relationship between IC success and company performance. Information & Management, 26, 3 (March 1994), 133-141.
48. Hair, J.F.J.; Anderson, R.E.; Tatham, R.L.; and Black, W.C. Multivariate Data Analysis, 5th ed. Upper Saddle River, NJ: Prentice Hall, 1998.
49. Hartog, C., and Herbert, M. 1985 opinion survey of MIS managers: Key issues. MIS Quarterly, 10, 4 (December 1986), 351-361.
50. Hattie, J. Methodology review: Assessing unidimensionality of tests and items. Applied Psychological Measurement, 9, 2 (June 1985), 139-164.
51. Hitt, L.M.; Wu, D.J.; and Zhou, X. Investment in enterprise resource planning: Business impact and productivity measures. Journal of Management Information Systems, 19, 1 (Summer 2002), 71-98.
52. Jiang, J.J.; Klein, G.; and Carr, C.L. Measuring information system service quality: SERVQUAL from the other side. MIS Quarterly, 26, 2 (June 2002), 145-166.
53. Joreskog, K.G. Testing structural equation models. In K.A. Bollen and J.S. Long (eds.), Testing Structural Equation Models. Newbury Park, CA: Sage, 1993, pp. 294-316.
54. Kerlinger, F.N. Foundations of Behavioral Research. New York: McGraw-Hill, 1978.
55. Kim, J.; Lee, J.; Han, K.; and Lee, M. Businesses as buildings: Metrics for the architectural quality of Internet businesses. Information Systems Research, 13, 3 (September 2002), 239-254.
56. King, W.R. Management information systems. In H. Bidgoli (ed.), Encyclopedia of Management Information Systems, vol. 3. New York: Academic Press, 2003.
57. King, W.R., and He, J. External validity, coverage and nonresponse errors in IS survey research. Katz Graduate School of Business, University of Pittsburgh, 2004.
58. Kohli, R., and Devaraj, S. Measuring information technology payoff: A meta-analysis of structural variables in firm-level empirical research. Information Systems Research, 14, 2 (June 2003), 127-145.
59. Kraemer, K.L.; Danziger, J.N.; Dunkle, D.E.; and King, J.L. The usefulness of computer-based information to public managers. MIS Quarterly, 17, 2 (June 1993), 129-148.
60. Kraut, R.; Dumais, S.; and Koch, S. Computerization, productivity, and quality of work-life. Communications of the ACM, 32, 2 (February 1989), 220-238.
61. Lacity, M., and Willcocks, L. Global Information Technology Outsourcing. Chichester, UK: Wiley, 2001.
62. Lambert, D.M., and Harrington, T.C. Measuring nonresponse bias in customer service mail surveys. Journal of Business Logistics, 11, 2 (1990), 5-25.
63. Larsen, K.R.T. A taxonomy of antecedents of information systems success: Variable analysis studies. Journal of Management Information Systems, 20, 2 (Fall 2003), 169-246.
64. Lee, H. Knowledge management enablers, processes, and organizational performance: An integrative view and empirical examination. Journal of Management Information Systems, 20, 1 (Summer 2003), 179-228.
65. Lee, H.; Kwak, W.; and Han, I. Developing a business performance evaluation system: An analytical hierarchical model. Engineering Economist, 40, 4 (Summer 1995), 343-357.
66. Magal, S.R.; Carr, H.H.; and Watson, H.J. Critical success factors for information center managers. MIS Quarterly, 12, 3 (September 1988), 413-425.
67. Mirani, R., and King, W.R. The development of a measure for end-user computing support. Decision Sciences, 25, 4 (July-August 1994), 481-498.
68. Moore, G.C., and Benbasat, I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2, 3 (September 1991), 192-222.
69. Myers, B.L.; Kappelman, L.A.; and Prybutok, V.R. A comprehensive model for assessing the quality and productivity of the information systems function: Toward a theory for information systems assessment. In E.J. Garrity and G.L. Sanders (eds.), Information Systems Success Measurement. Hershey, PA: Idea Group, 1998, pp. 94-121.
70. Nelson, K.M., and Cooprider, J.G. The contribution of shared knowledge to IS group performance. MIS Quarterly, 20, 4 (December 1996), 409-432.
71. Nelson, R.R., and Cheney, P.H. Training end users: An exploratory study. MIS Quarterly, 11, 4 (December 1987), 547-559.
72. Niederman, F.; Brancheau, J.C.; and Wetherbe, J.C. Information systems management issues for the 1990s. MIS Quarterly, 15, 4 (December 1991), 475-500.
73. Olian, J.D., and Rynes, S.L. Making total quality work: Aligning organizational processes, performance measures, and stakeholders. Human Resource Management, 30, 3 (Fall 1991), 303-333.
74. Parasuraman, A.; Zeithaml, V.A.; and Berry, L.L. Refinement and reassessment of the SERVQUAL scale. Journal of Retailing, 67, 4 (Winter 1991), 420-450.
75. Pitt, L.F.; Watson, R.T.; and Kavan, C.B. Service quality: A measure of information systems effectiveness. MIS Quarterly, 19, 2 (June 1995), 173-185.
76. Porter, M.E., and Millar, V.E. How information gives you competitive advantage. Harvard Business Review, 63, 4 (July-August 1985), 149-160.
77. Premkumar, G. Evaluation of Strategic Information Systems Planning: Empirical Validation of a Conceptual Model. Pittsburgh: University of Pittsburgh, 1989.
78. Raghunathan, B.; Raghunathan, T.S.; and Tu, Q. Dimensionality of the strategic grid framework: The construct and its measurement. Information Systems Research, 10, 4 (December 1999), 343-355.
79. Rai, A.; Lang, S.S.; and Welker, R.B. Assessing the validity of IS success models: An empirical test and theoretical analysis. Information Systems Research, 13, 1 (March 2002), 50-69.
80. Ryan, S.D., and Harrison, D.A. Considering social subsystem costs and benefits in information technology investment decisions: A view from the field on anticipated payoffs. Journal of Management Information Systems, 16, 4 (Spring 2000), 11-40.
81. Ryker, R., and Nath, R. An empirical examination of the impact of computer information systems on users. Information & Management, 29, 4 (1995), 207-214.
82. Saarinen, T. An expanded instrument for evaluating information system success. Information & Management, 31, 2 (1996), 103-118.
83. Santhanam, R., and Hartono, E. Issues in linking information technology capability to firm performance. MIS Quarterly, 27, 1 (March 2003), 125-154.
84. Saunders, C.S., and Jones, J.W. Measuring performance of the information systems function. Journal of Management Information Systems, 8, 4 (Spring 1992), 63-82.
85. Schwab, D.P. Construct validation in organizational behavior. In B.M. Staw and L.L. Cummings (eds.), Research in Organizational Behavior, vol. 2. Greenwich, CT: JAI Press, 1980, pp. 3-43.
86. Scott, J.E. Facilitating interorganizational learning with information technology. Journal of Management Information Systems, 17, 2 (Fall 2000), 81-114.
87. Seddon, P.B. A respecification and extension of the DeLone and McLean model of IS success. Information Systems Research, 8, 3 (September 1997), 240-253.
88. Seddon, P.B.; Staples, S.; Patnayakuni, R.; and Bowtell, M. Dimensions of information systems success. Communications of the AIS, 2 (November 1999), 2-39.
89. Segars, A.H. Assessing the unidimensionality of measurement: A paradigm and illustration within the context of information systems research. Omega, 25, 1 (February 1997), 107-121.
90. Segars, A.H., and Grover, V. Re-examining perceived ease of use and usefulness: A confirmatory factor analysis. MIS Quarterly, 17, 4 (December 1993), 517-525.
91. Segars, A.H., and Grover, V. Strategic information systems planning success: An investigation of the construct and its measurement. MIS Quarterly, 22, 2 (June 1998), 139-163.
92. Segars, A.H., and Hendrickson, A.R. Value, knowledge, and the human equation: Evolution of the information technology function in modern organizations. Journal of Labor Research, 21, 3 (Summer 2000), 431-445.
93. Sethi, V., and King, W.R. Development of measures to assess the extent to which an information technology application provides competitive advantage. Management Science, 40, 12 (December 1994), 1601-1627.
94. Sethi, V., and King, W.R. Organizational Transformation Through Business Process Re-Engineering. Upper Saddle River, NJ: Prentice Hall, 1998.
95. Steers, R.M. Problems in the measurement of organizational effectiveness. Administrative Science Quarterly, 20, 4 (December 1975), 546-558.
96. Straub, D.W.; Hoffman, D.L.; Weber, B.W.; and Steinfield, C. Measuring e-commerce in Net-enabled organizations: An introduction to the special issue. Information Systems Research, 13, 2 (June 2002), 115-124.
97. Sussman, S.W., and Siegal, W.S. Informational influence in organizations: An integrated approach to knowledge adoption. Information Systems Research, 14, 1 (March 2003), 47-65.
98. Tallon, P.P.; Kraemer, K.L.; and Gurbaxani, V. Executives' perceptions of the business value of information technology: A process-oriented approach. Journal of Management Information Systems, 16, 4 (Spring 2000), 145-173.
99. Tam, K.Y. The impact of information technology investment on firm performance and evaluation: Evidence from newly industrialized economies. Information Systems Research, 9, 1 (March 1998), 85-98.
100. Templeton, G.F.; Lewis, B.R.; and Snyder, C.A. Development of a measure for the organizational learning construct. Journal of Management Information Systems, 19, 2 (Fall 2002), 175-218.
101. Teo, T.S.H. Integration between business planning and information systems planning: Evolutionary-contingency perspectives. Ph.D. dissertation, University of Pittsburgh, 1994.
102. Torkzadeh, G., and Dhillon, G. Measuring factors that influence the success of Internet commerce. Information Systems Research, 13, 2 (June 2002), 187-204.
103. Torkzadeh, G., and Doll, W.J. The development of a tool for measuring the perceived impact of information technology on work. Omega, International Journal of Management Science, 27, 3 (June 1999), 327-339.
104. Van Dyke, T.P.; Kappelman, L.A.; and Prybutok, V.R. Measuring information systems service quality: Concerns on the use of the SERVQUAL questionnaire. MIS Quarterly, 21, 2 (June 1997), 195-208.
105. Venkatesh, V.; Morris, M.G.; Davis, G.B.; and Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Quarterly, 27, 3 (September 2003), 425-478.
106. Venkatraman, N. Strategic orientation of business enterprises: The construct, dimensionality, and measurement. Management Science, 35, 8 (August 1989), 942-962.
107. Venkatraman, N., and Ramanujam, V. Measurement of business economic performance: An examination of method convergence. Journal of Management, 13, 1 (1987), 109-122.
108. Wand, Y., and Wang, R.Y. Anchoring data quality dimensions in ontological foundations. Communications of the ACM, 39, 11 (November 1996), 86-95.
109. Wang, R.Y., and Strong, D.M. Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12, 4 (Spring 1996), 5-34.
110. Watson, R.T.; Pitt, L.F.; and Kavan, C.B. Measuring information systems service quality: Lessons from two longitudinal case studies. MIS Quarterly, 22, 1 (March 1998), 61-79.
111. Weill, P. The relationship between investment in information technology and firm performance: A study of the valve manufacturing sector. Information Systems Research, 3, 4 (December 1992), 307-333.
112. Wells, C.E. Evaluation of the MIS function in an organization: An exploration of the measurement problem. Ph.D. dissertation, University of Minnesota, Minneapolis, 1987.
113. Xia, W. Dynamic capabilities and organizational impact of IT infrastructure: A research framework and empirical investigation. Ph.D. dissertation, University of Pittsburgh, 1998.


Appendix. ISFS Instrument

What Is the IS Function?

THIS QUESTIONNAIRE IS DESIGNED TO ASSESS the performance of the information systems (IS) function in your organization. The IS function includes all IS individuals, groups, and departments within the organization with whom you interact regularly. As a user of some information systems/technology, you have your own definition of what the IS function means to you, and it is the performance of "your" IS function that should be addressed here.

Throughout the questionnaire, each statement is rated on a five-point scale ranging from 1 ("Hardly at all") to 5 ("To a great extent"); circling 0 indicates that the statement is not applicable (N/A).

Effectiveness of Information

The following statements ask you to assess the general characteristics of the information that IS provides to you. Please try to focus on the data and information itself in giving the response that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.

The extent that the information is:
Interpretable
Understandable
Complete
Clear
Concise
Accurate
Secure
Important
Relevant
Usable
Well organized
Well defined
Available
Accessible
Up-to-date
Received in a timely manner
Reliable
Verifiable
Believable
Unbiased

The extent that the information:
Can be easily compared to past information.
Can be easily maintained.
Can be easily changed.
Can be easily integrated.
Can be easily updated.
Can be used for multiple purposes.
Meets all your requirements.

The following statements ask you to assess the outcomes of using the information that IS provides to you.

The extent that:
The amount of information is adequate.
It is easy to identify errors in information.
It helps you discover new opportunities to serve customers.
It is useful for defining problems.
It is useful for making decisions.
It improves your efficiency.
It improves your effectiveness.
It gives your company a competitive edge.
It is useful for identifying problems.
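Because respondents circle 0 for statements that do not apply, any scoring of these item blocks must treat 0 as missing rather than as a low rating. The following is a minimal sketch of such scoring; the function name and the example ratings are our own illustration, not part of the instrument:

```python
from typing import Optional

def score_block(ratings: list[int]) -> Optional[float]:
    """Average a block of 1-5 item ratings, treating 0 (N/A) as missing."""
    valid = [r for r in ratings if 1 <= r <= 5]  # drop 0 = N/A responses
    return sum(valid) / len(valid) if valid else None

# One respondent's ratings for a seven-item block, with two items marked N/A:
print(score_block([4, 5, 0, 3, 4, 0, 5]))  # -> 4.2
```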

IS Service Performance

The following statements ask you to assess the performance of services provided by the IS department or function. Please circle the number that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.

The extent that the:
Training programs offered by the IS function are useful.
Variety of training programs offered by the IS function is sufficient.
IS function's services are cost-effective.
Training programs offered by the IS function are cost-effective.
IS function's services are valuable.
IS function's services are helpful.

The extent that the IS function:
Responds to your service requests in a timely manner.
Completes its services in a timely manner.
Is dependable in providing services.
Has your best interest at heart.
Gives you individual attention.
Has sufficient capacity to serve all its users.
Can provide emergency services.
Provides a sufficient variety of services.
Has sufficient people to provide services.
Extends its systems/services to your customers/suppliers.

The extent that IS people:
Provide services for you promptly.
Are dependable.
Are efficient in performing their services.
Are effective in performing their services.
Have the knowledge and skill to do their job well.
Are reliable.
Are polite.
Are sincere.
Show respect to you.
Are pleasant to work with.
Instill confidence in you.
Are helpful to you.
Solve your problems as if they were their own.
Understand your specific needs.
Are willing to help you.
Help to make you a more knowledgeable computer user.

Systems Performance

The following statements ask you to assess the extent that systems produce various outcomes for you. The term "systems" does not refer to the information itself. Rather, it refers to the capability to access, produce, manipulate, and present information to you (e.g., to access databases or to develop a spreadsheet). Please circle the number that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.

The extent that systems:
Make it easier to do your job.
Improve your job performance.
Improve your decisions.
Give you confidence to accomplish your job.
Increase your productivity.
Increase your participation in decisions.
Increase your awareness of job-related information.
Improve the quality of your work product.
Enhance your problem-solving ability.
Help you manage relationships with external business partners.
Improve customer satisfaction.
Improve customer service.
Enhance information sharing with your customers/suppliers.
Help retain valued customers.
Help you select and qualify desired suppliers.
Speed product delivery.
Help you manage inbound logistics.
Improve management control.
Streamline work processes.
Reduce process costs.
Reduce cycle times.
Provide you information from other areas in the organization.
Facilitate collaborative problem solving.
Facilitate collective group decision making.
Facilitate your learning.
Facilitate collective group learning.
Facilitate knowledge transfer.
Contribute to innovation.
Facilitate knowledge utilization.

The following statements ask you to assess general characteristics of the information systems that you use regularly. Please circle the number that best represents your evaluation of each statement. If a statement is not applicable to you, circle 0.

The extent that:
Systems have fast response time.
System downtime is minimal.
Systems are well integrated.
Systems are reliable.
Systems are accessible.
Systems meet your expectations.
Systems are cost-effective.
Systems are responsive to meet your changing needs.
Systems are flexible.
Systems are easy to use.
System use is easy to learn.
Your company's intranet is easy to navigate.
It is easy to become skillful in using systems.
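A completed questionnaire could then be summarized by averaging block scores within each of the three dimensions named by the section headings above. The mapping and equal weighting below are illustrative assumptions, not the authors' validated scoring procedure:

```python
# Hypothetical aggregation of block-level scores (e.g., from score_block above)
# into the three ISFS dimensions; the sample values are invented.
block_scores = {
    "information_effectiveness": [4.2, 3.8, 4.0],
    "service_performance": [3.5, 4.1, 3.9],
    "systems_performance": [4.4, 3.7],
}

# Average the block scores within each dimension.
dimension_scores = {dim: sum(s) / len(s) for dim, s in block_scores.items()}
print(dimension_scores)
```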
