A CBA should be performed for each investment alternative to enable the evaluation and
comparison of alternatives. However, some mandatory systems will not provide net
benefits to the government. In such cases, the lowest cost alternative should be selected.
If functions are to be added to a mandatory system, though, the additional functions
should provide benefits to the government.
Three measures are used in the cost-benefit analysis to indicate the economic outcome
of each project; these measures are described in the evaluation steps later in this section.
The CBA process can be broken down into the following steps:
1. Determine/Define Objectives
The CBA should include a problem definition; pertinent background information
such as staffing, system history, and customer satisfaction data; and a list of investment
objectives that identify how the system will improve the work process and support the
mission.
• Customer Service—Each customer’s role and the services required should be clearly
documented and quantified where possible (e.g., in an average month, a customer inputs
two megabytes (MB) of data and spends 10 hours on database maintenance).
• System Capabilities—Resources required for peak demand should be listed (for
example, 100 MB of disk storage space and Help Desk personnel to support 50 users).
• System Architecture—The hardware, software, and physical facilities required
should be documented, including information necessary for determining system
costs, expected future utility of items, and the item owner/lessor (i.e., government
or contractor).
7. Estimate Costs
Many factors should be considered during the process of estimating costs for
alternatives. Full lifecycle costs for each competing alternative should be included, and
the following factors should be addressed:
• Activities and Resources—Identify and estimate the costs associated with the
initiation, design, development, operation, and maintenance of the IT system.
• Cost Categories—Identify costs in a way that relates to the budget and accounting
processes. The cost categories should follow current USDA object class codes.
• Personnel Costs—Personnel costs are based on the guidance in OMB Circular A-76,
“Supplemental Handbook, PART II—Preparing the Cost Comparison Estimates.”
Government personnel costs include current salary by location and grade, fringe
benefit factors, indirect or overhead costs, and General and Administrative costs.
• Depreciation—The cost of each tangible capital asset should be spread over the
asset’s useful life (i.e., the number of years it will function as designed). OMB prefers
that straight-line depreciation be used for capital assets (see the sketch following this list).
• Annual Costs—All cost elements should be identified and estimated for each
year of the system lifecycle.
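The depreciation and annual-cost arithmetic above is simple enough to show concretely. The sketch below is illustrative only; the asset names, prices, and useful lives are hypothetical and are not taken from this guide.

    # Minimal sketch: straight-line depreciation and an annual cost roll-up.
    # All asset names, costs, and lifetimes are hypothetical examples.

    def straight_line_depreciation(acquisition_cost, useful_life_years):
        """Annual charge when cost is spread evenly over the useful life."""
        return acquisition_cost / useful_life_years

    # (acquisition cost in dollars, useful life in years)
    assets = {
        "server hardware":   (50_000, 5),
        "network equipment": (20_000, 4),
    }

    for name, (cost, life) in assets.items():
        annual = straight_line_depreciation(cost, life)
        print(f"{name}: ${annual:,.2f} per year over {life} years")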
Table 2. Sample Cost Estimates for an Investment Initiation Activity (dollars)

Activities/Cost Categories | Problem Definition | Requirements Definition | Performance Measures | Cost-Benefit Analysis | Security Plan | … | Total
Hardware | - | - | - | - | - | - | -
Software | - | - | - | - | - | - | -
Services | - | - | - | - | - | - | -
Support Services | - | 10,000 | 4,000 | 1,000 | 6,000 | 3,000 | 24,000
Supplies | - | 100 | 100 | 0 | 100 | 100 | 400
Personnel | 5,000 | 10,000 | 6,000 | 500 | 5,000 | 8,000 | 34,500
Inter-Agency Services | - | - | - | - | - | - | -
Total | 5,000 | 20,100 | 10,100 | 1,500 | 11,100 | 11,100 | 58,900
Table 3. Sample System Lifecycle Cost Estimates
8. Estimate Benefits
The following six activities are completed to identify and estimate the value of
benefits:
8.1. Define Benefits—Benefits are the services, capabilities, and qualities of each
alternative, and can be viewed as the return from an investment. Asking what each
alternative provides, to whom, and how well helps define benefits for IT systems and
enables comparisons among alternatives.
8.2. Identify Benefits—Every proposed IT system should have identifiable benefits for
both the organization and its customers. Organizational benefits could include flexibility,
organizational strategy, risk management and control, organizational changes, and
staffing impacts. Customer benefits could include improvements to the current IT
services and the addition of new services. Customers should help identify and determine
how to measure and evaluate the benefits.
Table 4 shows annual costs and benefits for a system lifecycle, along with the discount
factor, the discounted costs and benefits (present values), and the discounted net present
value [NPV]. The discounted costs and benefits are computed by multiplying costs and
benefits by the discount factor. The net benefit without discounting is $380,000
($3,200,000 minus $2,820,000) while the discounted NPV is less than $60,000 because
the biggest costs are incurred in the first two years, while the benefits are not accrued
until the third year. When evaluating costs and benefits, be cautious of returns that accrue
late in the investment’s lifecycle. Due to discounting, benefits that accrue in later years do
not offset costs as much as earlier-year benefits. Also, these later-year benefits are less
certain. Both the business and IT environments may experience significant changes
before these later-year benefits are realized.
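The discounting arithmetic described here is easy to reproduce. In the sketch below, the yearly cost and benefit streams and the 6 percent discount rate are hypothetical stand-ins chosen only so the totals match the $2,820,000 and $3,200,000 in the example above; they are not the actual Table 4 figures. The output illustrates the point in the text: an undiscounted net of $380,000 shrinks to a small discounted NPV because the costs come early and the benefits late.

    # Hypothetical sketch of discounting yearly flows to a net present value.
    def discount_factor(rate, year):
        """End-of-year discount factor: 1 / (1 + rate)^year."""
        return 1.0 / (1.0 + rate) ** year

    rate = 0.06  # assumed discount rate, for illustration only
    costs    = [1_200_000, 1_000_000, 220_000, 200_000,   200_000]    # years 1-5
    benefits = [0,         0,         900_000, 1_100_000, 1_200_000]

    npv = sum((b - c) * discount_factor(rate, y)
              for y, (c, b) in enumerate(zip(costs, benefits), start=1))
    print(f"Undiscounted net: ${sum(benefits) - sum(costs):,.0f}")  # $380,000
    print(f"Discounted NPV:   ${npv:,.0f}")                        # about $9,000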
Evaluate All Dollar Values—Once all the costs and benefits for each competing
alternative have been assigned dollar values and discounted, the NPV of the alternatives
should be compared and ranked.
Discounted Net—There will probably be very few cases where the alternative with the
lowest discounted cost provides the highest discounted benefit. The next number to
consider is the Discounted Net (Discounted Benefit minus Discounted Cost). If one
alternative clearly has the highest Discounted Net, it is considered the best alternative;
however, it is usually advisable to look at other factors.
Benefit-Cost Ratio—When the alternative with the highest discounted net is not a clear
winner, the benefit-cost ratio or BCR (discounted benefit divided by discounted cost)
may be used to differentiate between alternatives with very similar or equal Discounted
Nets. In Table E-9, Alternative 4 would be the winner because it has a higher BCR than
Alternative 5. Alternatives 4 and 5 are clearly superior to other alternatives because they
have the highest discounted net.
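A small sketch makes the ranking rule concrete: rank first by discounted net, then break near-ties with the BCR. The two alternatives and their dollar figures below are hypothetical, not the Table E-9 values.

    # Hypothetical sketch: rank by discounted net, break ties with the BCR.
    alternatives = {
        "Alternative 4": {"benefit": 1_500_000, "cost": 1_000_000},
        "Alternative 5": {"benefit": 1_600_000, "cost": 1_100_000},
    }

    for a in alternatives.values():
        a["net"] = a["benefit"] - a["cost"]   # discounted net
        a["bcr"] = a["benefit"] / a["cost"]   # benefit-cost ratio

    # Both nets here are $500,000, so the higher BCR (1.50 vs. 1.45) decides.
    ranked = sorted(alternatives.items(),
                    key=lambda kv: (kv[1]["net"], kv[1]["bcr"]), reverse=True)
    for name, a in ranked:
        print(f"{name}: net ${a['net']:,}, BCR {a['bcr']:.2f}")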
Evaluate With Intangible Benefits—When all the benefits are intangible, evaluation
will be based on quantifying relative benefits.
Identify Input Parameters—The assumptions documented earlier in the CBA are used
to identify the model inputs to test for sensitivity. Good inputs to test are those that have
significant (large) cost factors and a wide range of maximum and minimum estimated
values. Common parameters include development costs, transition costs, and expected benefits.
Repeat the Cost Analysis—For each parameter identified, determine the minimum and
maximum values. Then, choose either the minimum or maximum value as the new
parameter value (the number selected should be the one that most differs from the value
used in the original analysis). Repeat the CBA with the new parameter value and
document the results. Prepare a table like Table E-10 to summarize the different
outcomes and enable the results to be quickly evaluated.
Evaluate Results—Compare the original set of inputs and the resulting outcomes to the
outcomes obtained by varying the input parameters. In the previous table, the original
values are the first value listed for each parameter. Sensitivity is measured by how much
change in a parameter is required to change the alternative selected in the original
analysis.
In the previous example, the analysis would appear to be somewhat sensitive to the
development costs, but not sensitive to the transition costs and benefits.
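The re-run procedure can be scripted. The sketch below assumes a simplified NPV model and hypothetical parameter ranges; for each parameter it substitutes whichever bound differs most from the original value and reports the new outcome, mirroring a summary table like Table E-10.

    # Hypothetical sensitivity sketch: re-run the analysis, varying one
    # parameter at a time. Model structure, values, and ranges are assumptions.

    def npv(dev_cost, transition_cost, annual_benefit, years=5, rate=0.06):
        """Discounted net: development in year 1, transition in year 2,
        benefits in years 3 and beyond."""
        flows = [-dev_cost, -transition_cost] + [annual_benefit] * (years - 2)
        return sum(f / (1 + rate) ** y for y, f in enumerate(flows, start=1))

    original = {"dev_cost": 1_200_000, "transition_cost": 400_000,
                "annual_benefit": 800_000}
    ranges = {
        "dev_cost":        (1_000_000, 1_800_000),
        "transition_cost": (300_000,   500_000),
        "annual_benefit":  (700_000,   900_000),
    }

    print(f"baseline NPV: ${npv(**original):,.0f}")
    for param, (low, high) in ranges.items():
        # use whichever bound differs most from the original value
        new_value = low if abs(low - original[param]) > abs(high - original[param]) else high
        varied = dict(original, **{param: new_value})
        print(f"{param} = {new_value:,}: NPV ${npv(**varied):,.0f}")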
Chapter 2
Cost-Effectiveness Analysis
2.1 Introduction
Large employers face a challenging future in managing health care benefits. Managers
have many program and coverage options, but are limited by budget constraints and data
availability. Traditionally, decision-makers have used return-on-investment calculations
to help guide their investment choices, but they can also consider another tool:
cost-effectiveness analysis.
Cost-effectiveness analysis is a technique for comparing the relative value of various
clinical strategies. In its most common form, a new strategy is compared with current
practice (the "low-cost alternative") in the calculation of the cost-effectiveness ratio.
Cost-effectiveness analysis, or CEA, is a comparison tool to help evaluate choices. It will
not always indicate a clear choice, but it will evaluate options quantitatively based on a
defined model. For managers, CEA provides peer-reviewed evidence for decision
support.
Cost-effectiveness analysis is closely related to cost-benefit analysis in that both represent
economic evaluations of alternative resource use and measure costs in the same way (see
Cost Benefit Analysis). However, cost-benefit analysis is used to address only those types
of alternatives where the outcomes can be measured in terms of their monetary values.
For example, educational alternatives that are designed to raise productivity and income,
such as vocational education, have outcomes that can be assessed in monetary terms and
can be evaluated according to cost-benefit analysis. However, most educational
alternatives are dedicated to improving achievement or some other educational outcome
that cannot be easily converted into monetary terms. In these cases, one must limit the
comparison of alternatives to those that have similar goals by comparing them through
cost-effectiveness analysis.
2.3 Measuring Cost Effectiveness
The basic technique has been to derive results for educational effectiveness of each
alternative by using standard evaluation procedures or studies (Rossi and Freeman 1985)
and to combine such information with cost data that are derived from the ingredients
approach. The ingredients approach was developed to provide a systematic way for
evaluators to estimate the costs of social interventions (Levin 1983). It has been applied
not only to cost-effectiveness problems, but also to determining the costs of different
educational programs for state and local planning (Hartman 1981).
Before starting the cost analysis, it is necessary to know what the decision problem is,
how to measure effectiveness, which alternatives are being considered, and what their
effects are. If a problem has arisen on the policy agenda that requires a response, a careful
understanding of the problem is crucial to addressing its solution.
Once the problem has been formulated, it will be necessary to consider how to assess the
effectiveness of alternatives. For this purpose, clear dimensions and measures of
effectiveness will be needed. Table I shows examples of effectiveness measures that
respond to particular program objectives.
Given the problem and criteria for assessing the effectiveness of proposed solutions, it is
necessary to formulate alternative programs or interventions. The search for such
interventions should be as wide-ranging and creative as possible. This procedure sets the
stage for the evaluation of effectiveness of the alternatives, a process which is akin to the
standard use of evaluation methods (e.g., Rossi and Freeman 1985). Estimates of
effectiveness can be derived from previous evaluations or from tailored evaluations for
the present purpose. It is important to emphasize that the evaluation of effectiveness is
separable from the evaluation of costs. Most standard evaluation designs for assessing the
effectiveness of an intervention are also suitable for incorporation into cost-effectiveness
studies.
The first step is to ascertain which ingredients are required for an intervention. Most
educational interventions are labor-intensive, so an initial concern is to account for the
number and characteristics of personnel. It is important to stipulate whether personnel are
part-time or full-time and the types of skills or qualifications that they need. Beyond this,
it is necessary to identify the facilities, equipment, materials, and other ingredients or
resources that are required for the intervention.
Identification of ingredients requires a level of detail that is adequate to ensure that all
resources are included and are described adequately to place cost values on them. For this
reason, the search for ingredients must be systematic rather than casual.
The primary sources for such data are written reports, observations, and interviews.
Written reports usually contain at least a brief history and description of the intervention.
Other sources of information must be used to corroborate and supplement data on
ingredients from evaluations and descriptive reports. If the intervention is present at a
nearby site, it may be possible to visit and gather additional data on ingredients through
observation. A third valuable source is interviews, in which present or former personnel
are asked to identify resources from among a number of different classifications. The
three principal types of information (reports, observations, and interviews) can be used
to assure the accuracy of the data by comparing the findings from each source and
reconciling differences, a process known as triangulation.
Once the ingredients have been identified and stipulated, it is necessary to ascertain their
costs. In doing this, all ingredients are assumed to have a cost, including donated or
volunteer resources; they have a cost to someone, even if the sponsoring agency did not
pay for them in a particular situation. At a later stage the costs will be distributed among
the constituencies who paid them, but at this stage the need is to ascertain the total costs
of the intervention. Ingredients can be divided into those that are purchased in reasonably
competitive markets and those that are obtained through other types of transactions.

In general, the value of an ingredient for costing purposes is its market value. In the case
of personnel, market value may be ascertained by determining what the costs would be
for hiring a particular type of person. Such costs must include not only salary, but also
fringe benefits and other employment costs that are paid by the employer. Many of the
other inputs can also be valued using their market prices, including equipment, materials,
utilities, and so on. Clearly the cost of leased facilities can also be ascertained in this way.

Although the market prices of some ingredients such as personnel can often be obtained
from accounting data for educational enterprises, such data are not reliable sources for
ascertaining overall program costs. The accounting systems used by schools were
designed to ensure consistent reporting to state agencies rather than to provide accurate
and consistent cost data on educational interventions. For example, they omit completely
or understate the cost of volunteers and other donated resources. Capital improvements
are charged to such budgets and accounts during the year of their purchase, even when
the improvements have a life of 20-30 years. Normal cost accounting practice would
ascertain the annual costs of such improvements by spreading them over their useful
lives through an appropriate method. Thus, data from accounting and budgetary reports
must be used selectively and appropriately and cannot be relied upon for all ingredients.

There exist a variety of techniques for ascertaining the value of ingredients that are not
purchased in competitive markets. For example, the method for ascertaining the value of
volunteers and other contributed ingredients is to determine the market value of such
resources if they had to be purchased. The value of facilities can be determined by
estimating their lease value. The annual value of facilities and equipment can be
estimated through a relatively simple approach that takes account of depreciation and the
interest foregone on the remaining capital investment.
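That "relatively simple approach" for annualizing facilities and equipment is usually implemented with a capital-recovery (annualization) factor that combines depreciation with the interest foregone on the undepreciated investment. The sketch below follows that standard formula; the replacement value, lifetime, and interest rate are hypothetical.

    # Minimal sketch: annualize a capital cost over its useful life while
    # charging interest on the remaining investment. Figures are hypothetical.

    def annualization_factor(rate, life_years):
        """Standard capital-recovery factor: r(1+r)^n / ((1+r)^n - 1)."""
        growth = (1 + rate) ** life_years
        return rate * growth / (growth - 1)

    replacement_value = 250_000   # hypothetical facility replacement value
    useful_life = 25              # years
    interest_rate = 0.05          # interest foregone on remaining capital

    annual_cost = replacement_value * annualization_factor(interest_rate, useful_life)
    print(f"Annualized facility cost: ${annual_cost:,.2f}")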
In the case of health screening, it is often difficult to determine the most cost-effective
frequency. Too frequent screening has high cost and possibly limited health benefits,
while too infrequent screening has low cost, but poor health outcomes. Determining
appropriate screening frequencies is a useful application of cost-effectiveness analysis.
The following table, taken from an analysis of cervical cancer screening, shows that life
years are saved at a relatively low cost in the first comparison (screening versus no
screening), but at a very high cost in the second comparison (the marginal cost and
benefit of decreasing the interval between screenings). Typically, an intervention that
costs less than $30,000/life year gained is considered cost-effective medicine. Based on
this analysis, cervical cancer screening every four years is a relatively cost-effective
benefit to cover. It is certainly more cost-effective than screening every three years.
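The comparison being made is an incremental (marginal) cost-effectiveness ratio: the extra cost of one strategy over the next-less-intensive one, divided by the extra life-years it produces. The sketch below uses hypothetical costs and life-years, not the figures from the cervical cancer analysis, but reproduces the qualitative pattern: screening every four years versus none falls well under the $30,000 per life-year benchmark, while shortening the interval to three years costs far more per additional life-year.

    # Hedged sketch of the marginal cost-effectiveness comparison. Costs and
    # life-years below are hypothetical placeholders.

    def icer(cost_new, cost_old, effect_new, effect_old):
        """Incremental cost-effectiveness ratio: extra dollars per extra life-year."""
        return (cost_new - cost_old) / (effect_new - effect_old)

    # (total cost per person, discounted life-years per person)
    c_none, e_none = 1_000, 20.000    # no screening
    c_4yr,  e_4yr  = 1_500, 20.040    # screening every four years
    c_3yr,  e_3yr  = 1_800, 20.041    # screening every three years

    print(f"every 4 years vs. none:    ${icer(c_4yr, c_none, e_4yr, e_none):,.0f} per life-year")
    print(f"every 3 years vs. every 4: ${icer(c_3yr, c_4yr, e_3yr, e_4yr):,.0f} per life-year")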
Carnoy M, Levin H M 1975 Evaluation of educational media: Some issues. Instr. Sci.
4(3/4): 385-406
Cook T D, Campbell D T 1979 Quasi-Experimentation. Houghton Mifflin, Boston,
Massachusetts
Jamison D, Klees S, Wells S 1978 The Costs of Educational Media: Guidelines for
Planning and Evaluation. Sage, Beverly Hills, California
Levin H M 1970 Cost-effectiveness analysis of teacher selection. J. Hum. Resources 5(1):
24-33
Levin H M 1983 Cost-Effectiveness: A Primer. Sage, Beverly Hills, California
Levin H M, Glass G V, Meister G 1987 A cost-effectiveness analysis of computer-
assisted instruction. Eval. Rev. 11(1): 50-72
Mayo J, McAnany E, Klees S 1975 The Mexican telesecundaria: A cost-effectiveness
analysis. Instr. Sci. 4(3/4): 193-236
Quinn B, VanMondfrans A, Worthen B R 1984 Cost-effectiveness of two math programs
as moderated by pupil SES. Educ. Eval. Policy Anal. 6(1): 39-52
Rossi P H, Freeman H E 1985 Evaluation: A Systematic Approach, 3rd edn. Sage,
Beverly Hills, California
Tatto M T, Nielsen D, Cummings W, Kularatna N G, Dharmadasa K H 1991 Comparing
the Effects and Costs of Different Approaches for Educating Primary School Teachers:
The Case of Sri Lanka. Bridges Project, Harvard Institute for International Development,
Cambridge, Massachusetts
Gold M R, Siegel J E, Russell L B, Weinstein M C 1996 Cost-Effectiveness in Health
and Medicine. Oxford University Press, New York
Neumann P J 2004 Why don't Americans use cost-effectiveness analysis? American
Journal of Managed Care 10: 308-312
Chapter 3
Utility Analysis
3.1. Introduction
Utility analysis is a quantitative method that estimates the dollar value of benefits
generated by an intervention based on the improvement it produces in worker
productivity. Utility analysis provides managers information they can use to evaluate the
financial impact of an intervention, including computing a return on their investment in
implementing it.
The concept of utility was originally introduced by Brogden (1949) and Brogden
and Taylor (1950) and further developed by Cronbach & Gleser (1965). The concept has
been researched and extended by Cascio (1982); Schmidt, Hunter, and Pearlman (1982);
and Reilly and Smither (1983), among others. It was introduced as a method for
evaluating the organizational benefits of using systematic procedures (e.g., proficiency
tests) to improve the selection of personnel but extends naturally to evaluating any
intervention that attempts to improve human performance.
3.1.1. Basic Assumptions
The first assumption of utility analysis is that human performers generate results
that have monetary value to the organizations that employ them. This assumption is also
the basis on which people claim compensation for the work they do.
The second assumption of utility analysis is that human performers differ in the
degree to which they produce results even when they hold the same position and operate
within like circumstances. Thus, salespersons selling the same product line at the same
store on the same shift will show a variation in success over time with a few doing
extraordinarily well, a few doing unusually poorly, and most selling around the average
amount for all salespersons. This assumption is broadly supported in common experience
and in research. It is, for example, the basis on which some performers demand and
receive premium compensation.
The direct implication of these assumptions is that the results produced by performers
in their jobs have different monetary consequences for the organizations that employ
them. Performers are differentially productive, and the productivity of performers
tends to be distributed normally (Exhibit 1).
3.1.2. How Utility Analysis Builds on These Assumptions
The approach of utility analysis asserts that the utility of any intervention can be valued
by determining how far up the productivity distribution the intervention moves the
performer. The distance the performer is moved is translated into a productivity gain and
the dollar value of that productivity gain is what is termed the utility (U$) of the
intervention.
With these elements of information, the analyst can compute the utility of the
intervention in dollars.
To accomplish the analysis, the analyst must be skilled in the methods of quantitative
analysis in general and utility analysis in particular. This person needs to be aware of the
variety of ways one can measure human productivity, determine its monetary value, and
gauge the effects of interventions on participant performance.
Given that there are a variety of methods for computing utility, the exact resources
needed for the task will depend on the method the analyst selects. The minimum set of
resources needed is:
• Access to the people who will be using the results of the study to make decisions;
• The identity of the intervention whose utility we will measure;
• A subject matter expert who is knowledgeable of the intervention;
• A description of each affected role including its duties, outputs, and success
criteria;
• The compensation scale for each affected role; and
• A subject matter expert who is knowledgeable of the role(s) affected by the
intervention.
3.2. Method
3.2.1.1. Understand the people whose decision-making the study will support.
Tip: We need to meet the people who will use our study's findings in order to
understand what information they are seeking, what decisions they will use the
information to make, and any issues or concerns they may have about the study. We
should also alert them to our ongoing need for their feedback on the methods we will
propose for accomplishing the study. Assure them that the methods we propose will
satisfy professional criteria, but that their feedback is needed to ensure that the methods
are also credible in their eyes and in the eyes of anyone with whom they will share the
results.
3.2.1.3. Learn about the role(s) whose productivity is affected by the intervention.
Tip: Obtain a description of each affected role. Contact the subject matter expert
who is knowledgeable of each role. Learn each role's purpose, duties, outputs, and
success criteria. We also need to understand how the role is valued from a compensation
perspective. For example, is compensation linked to output or is it paid as a salary? We
will want to understand, as well, how the company values the output of each job. If the
output is sold, is it valued by cost or price? And we need to uncover how much
responsibility each role has for the outputs its performers produce. Finally, for each job
that is salaried, obtain its compensation scale and the average salary paid to its
incumbents. If salaries are not normally distributed, we may need to obtain either the
median or modal salary instead of the mean.
3.2.1.4. Determine how to measure the productivity of the performers of each role.
Tip: We will need to develop a productivity measure and a method for
determining the status of each role incumbent on the measure. We will need to use our
understanding of each affected role and the assistance of its subject matter expert. The
subject matter expert will have to approve the method of measurement we devise;
otherwise our approach to measuring productivity will not have credibility in the
workplace.
In devising the productivity measure, it is preferable to base the measure on the
production of correct outputs: for example, the total amount of sales generated less
returns, or the number of welds made per unit of time less the number that fail
inspection. Outputs are the tangible contributions a role makes to an enterprise, and
measuring the quantity, quality, and complexity of outputs generated by performers is
usually a measure of productivity that is readily accepted.
Sometimes, however, a workplace will not accept a measure of productivity that
is tied to outputs. In these situations, we still need a way to measure how well the role is
performed. Sometimes supervisor ratings of successful performance are used, or
multirater approaches that use ratings from supervisors, peers, and subordinates (when
appropriate).
If the workplace will not agree that different performers achieve different levels
of success or that the level of a performer's success in performing the role can be
measured, then the utility analysis cannot be done.
Once we have devised a measure of productivity, we plan how to gather
information about the status of role incumbents on the measure. Our method must be
feasible, meaning that its cost is reasonable, its results credible, and its burden on
participants acceptable.
3.2.1.6. Decide how to measure the effect of the intervention on role productivity.
Tip: Basically, we need to find a mathematical bridge that relates participation in
the intervention and change in role productivity. There are many ways to accomplish
this. One way is to use a control group comparison. Here, we identify two sets of people
who are comparable in all important ways except that one set went through the
intervention and the other did not. We compare the differences in productivity of these
two sets of people. If the intervention was effective, the people who went through it will
have higher productivity scores and the difference between the groups will represent the
intervention's impact on productivity. Another way is to use correlational methods to
associate some indicator of participation or benefit from the intervention with scores on
role productivity. Be sure that the information with which we are working satisfies the
requirements of the statistical method we use and that our approach makes sense to the
people who will use the results of the analysis. Our solution needs to satisfy both
professional standards and credibility to provide benefit.
3.2.2.2. Determine the dollar value of a one standard deviation difference in role
productivity (SD$).
Tip: Examine the distribution of the productivity scores we gather. Confirm the
distribution is essentially normal and compute its mean and standard deviation. If the
distribution is not normal, use a transformation method (e.g., a z-transformation) to
normalize it. Apply our
method for valuing role productivity. Derive the dollar value of productivity achieved by
average performers and the dollar value of a one standard deviation difference in
productivity (SD$).
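A minimal sketch of this step, assuming productivity is scored as a percentage and that the dollar value of a fully productive performer is known; both the scores and the dollar figure below are hypothetical.

    # Hypothetical sketch: compute SD$ from gathered productivity scores.
    import statistics

    productivity_scores = [72, 81, 85, 78, 90, 66, 84, 79, 88, 75]  # percent productive
    mean = statistics.mean(productivity_scores)
    sd = statistics.stdev(productivity_scores)   # sample standard deviation

    value_of_100_percent = 400_000   # assumed value of a fully productive performer
    dollars_per_point = value_of_100_percent / 100

    sd_dollars = sd * dollars_per_point          # SD$: value of a one-SD difference
    print(f"mean {mean:.1f}%, SD {sd:.2f} points, SD$ = ${sd_dollars:,.2f}")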
With the role identified, we studied the job it accomplished by reviewing its tasks,
outputs, and performance expectations. No measure of productivity existed—yet the
means for deriving a measure appeared evident. First, the COTR role had a defined
output and criterion for judging success. COTRs were responsible for successfully
satisfying a product or service need within their agency through contracting. Successful
satisfaction of the need meant the timely delivery of products and services that met
technical specifications and the accomplishment of these ends at the cost specified.
Second, there was a monetary value associated with the output. The dollar value of every
contract a COTR managed was systematically determined. Third, there was a logical way
to relate the monetary value of the role's output and its success criterion. A COTR
realized the value of a contract to the degree that the contract was concluded on time, at
cost, and to specifications. Conversely, to the degree it was not concluded on time, at
cost, and to specifications, monetary value was lost.
While the basic logic was sound, conversations with incumbents and supervisors
quickly revealed that while the COTR was responsible for the contract, sometimes he or
she was not free to exercise complete control over its contents or the decision-making
associated with it. Therefore, some amount of the value of the contract was outside the
control of the COTR and its realization or loss should not be credited to the performer.
We also learned that contracts sometimes yielded benefits greater than their face value
and that this could be the result of the COTR's forward thinking, selection of the means
for accomplishing the contract, speed of execution, and other factors.
The COTR Productivity Rating Form was sent to the 266 supervisors of COTRs
randomly selected so that the ratings would reflect the status of COTRs in the general
population. One hundred and thirty (130) responses were received (48.9% response rate).
The response level provided estimates of productivity that were accurate to +/- 5% at a
95% level of confidence.
To calibrate the value of productivity in dollars, the study used the median face
value of contracts managed by COTRs during one year as modified by the control the
COTR has over the outcome of the contracts. The degree to which a COTR brings in his
or her assigned acquisitions at cost, on time, and to specifications determines how much
of the controllable dollar value of those contracts is realized. The median face value of
contracts fulfilled per year by COTRs was $500,000. Corrected for the degree of control
COTRs have over outcomes, as perceived by their supervisors, the median potential
single year benefit a 100% productive COTR produces is $397,525. By multiplying the
average actual productivity of COTRs (81.65%) against the controllable dollar value of
the contracts a COTR manages on a yearly basis ($397,525), the study estimated the
dollar benefits generated by the average performing COTR at $324,583.13. Poor
performing COTRs— that is, performers achieving at or below the 15th percentile of all
COTRs—generated only $274,344.10 of value each year. Exemplary performing
COTRs, defined as incumbents whose productivity was at or above the 85th percentile of
all COTRs, generated $373,895.44 of value.
The correlation between COTR's job proficiency and productivity ratings served
as the mathematical bridge for estimating the course's impact on performer productivity.
The elements required to use this bridge were the amount of proficiency change produced
by the course, the regression coefficient (beta) relating job proficiency scores to
productivity ratings, and the standard deviation of productivity scores. Applying these
elements, the course advances COTRs upward in productivity by .1547 standard
deviations (Exhibit 3).
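A sketch of that mathematical bridge, assuming hypothetical values for the proficiency gain, the regression coefficient, and the productivity standard deviation; only the resulting .1547 figure is taken from the study.

    # Translate a proficiency gain into standard-deviation units of productivity.
    delta_proficiency = 12.0   # hypothetical mean proficiency gain from the course
    beta = 0.1289              # hypothetical productivity points per proficiency point
    sd_productivity = 10.0     # hypothetical SD of productivity ratings

    d = beta * delta_proficiency / sd_productivity
    print(f"productivity movement: {d:.4f} standard deviations")   # 0.1547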
As stated above, utility is the dollar value of the increased productivity of a single
COTR that is generated by the course. To determine the utility of the course, the study
translated the distance the course advanced COTRs along the productivity continuum into
dollars. As reported, the course advanced COTRs .1547 standard deviations up the
productivity continuum. We previously determined that one standard deviation change in
productivity has a monetary value of $49,239.04. Multiplying this amount by the .1547
provides the course's utility (.1547 x $49,239.04, or $7,616.18 when the unrounded
movement is used). This figure ($7,616.18) is the dollar value of the improvement in
productivity evidenced by each COTR as a
result of training (Exhibit 4).
The return on investment (ROI) was computed using the conventional method of
dividing the dollar value of the productivity benefits generated by the course by the cost
of participating in the course. In this study a desirable ROI was defined as any value
greater than 1. The study determined the per student cost for completing the COTR
course. It added the fee charged to departments for each COTR taking the course to the
cost of lost opportunity associated with the COTRs not performing their regular job
during the 10-day period of the instruction. This fee ($700) included all expenses
associated with the course. The cost of lost opportunity was computed by dividing the
salary of the typical COTR who participated in the course (GS-14, Step 1) by the number
of hours that define full time employment in the Government (2,087). This per hour cost
is then multiplied by the 80 hours that the COTR is off the job. The opportunity cost per
student was $2,385.63. The total cost for participating in the course was computed as
$3,005.63 per COTR.
The ROI for the course was 2.53 ($7,616.18 / $3,005.63) for one year of COTR
performance following completion of the course. This means that for every dollar
invested in completing the course, the sponsoring department receives $2.53 in benefits
the first year. Any reasonable assessment of return should recognize that the benefits of
the course extend forward. Given the general stability of the content the course teaches,
a three-year period for return on investment was considered conservative. Within three
years, the total productivity improvement benefit is $22,848.54 and the ROI is 7.60,
meaning that for every dollar spent, $7.60 in agency benefits is generated (Exhibit 4).
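The utility and ROI arithmetic reported above can be restated in a few lines; all figures below are taken from the study as reported.

    # Sketch reproducing the study's reported utility and ROI arithmetic.
    sd_dollars = 49_239.04     # dollar value of a one-SD difference in productivity
    d = 0.1547                 # SDs the course moves a COTR up the distribution
    utility = d * sd_dollars   # about $7,616-$7,617 per COTR per year
                               # (the study carries $7,616.18 at full precision)

    total_cost = 3_005.63      # course fee plus opportunity cost, as reported
    roi_one_year = 7_616.18 / total_cost         # about 2.53
    roi_three_year = 3 * 7_616.18 / total_cost   # about 7.60
    print(f"utility ${utility:,.2f}; ROI {roi_one_year:.2f} (1 yr), "
          f"{roi_three_year:.2f} (3 yr)")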
3.3.7. How Productivity Was Improved
The completion of the two focus group discussions with COTRs who completed
the contracts management course provided insight into the course's mechanism of impact.
Participants uniformly confirmed their experience of benefit from the course. They listed
17 ways their performance was improved by what they learned. One major element they
emphasized was that the course provided a cognitive map of the contracting process that
allowed them to see ahead, to plan and prepare, and to feel more confident in the conduct
of their role. As well, the course equipped them to produce the products required by the role
and to know how to judge the adequacy of each product. Also stressed was the learning
about the various players in the contracting process, their responsibilities, the importance
of communicating with them, and the importance of creating a teamed effort. Equally
important, the course participants felt they grasped the principles that ensured the
integrity of the contracting process and that they were able to see how these principles
apply in different contracting situations. Finally, participants also reported that the
coursebook provided with the course served as a continuing learning resource that they
turned to as they encountered new contracting experiences.
3.4. References
Bernstein, Allen L. (1966) A handbook of statistical solutions for the behavioral sciences.
New York: Holt, Rinehart and Winston.
Brogden, H.E. & Taylor, E.K. (1950) The dollar criterion: applying cost accounting
concepts to criterion selection. Personnel Psychology, 3, 133-154.
Cronbach, L.J. & Gleser, G.C. (1965) Psychological tests and personnel decisions (2nd
ed.). Urbana: University of Illinois Press.
McDaniel, Michael; Schmidt, Frank L.; & Hunter, John E. (1987) Job experience as a
determinant of job performance. Paper presented at the 95th Annual Convention of the
American Psychological Association, August 1987.
Rossi, Peter H.; Freeman, Howard E.; & Wright, Sonia R. (1979) Evaluation: a
systematic approach. Beverly Hills: Sage Publications.
Schmidt, F.L.; Hunter, J.E.; & Pearlman, K. (1982) Assessing the economic impact of
personnel programs on workforce productivity. Personnel Psychology, 35, 333-347.
U.S. Department of Health, Education, and Welfare (1975) A practical guide to measuring
project impact on student achievement. Washington, DC: U.S. Government Printing
Office (Stock Number 017-080--1400-2).
Chapter 4
Balanced Scorecard
4.1. Introduction
Kaplan and Norton describe the innovation of the balanced scorecard as follows:
"The balanced scorecard retains traditional financial measures. But financial measures
tell the story of past events, an adequate story for industrial age companies for which
investments in long-term capabilities and customer relationships were not critical for
success. These financial measures are inadequate, however, for guiding and evaluating
the journey that information age companies must make to create future value through
investment in customers, suppliers, employees, processes, technology, and innovation."
Companies that use the Balanced Scorecard methodology get a more accurate,
comprehensive view of their business performance. The Balanced Scorecard approach relies
on the monitoring of critical business-strategy-oriented metrics, such as quality, customer
satisfaction, innovation, and market share—measurements that can often reflect a
company’s economic conditions and growth prospects better than its reported earnings.
The Balanced Scorecard, introduced in 1993 by Kaplan and Norton, has served as
the foundation for the Performance Management systems of many Fortune 1000
companies and government organizations. A well-designed scorecard bridges the gap
between long-term strategies and day-to-day action by aligning performance measures
with the critical perspectives of the organization.
Although by the end of 2001 about 36% of global companies were working with the
balanced scorecard (according to Bain), much of the information in the commercial
sector is proprietary, because it relates to the strategies of specific companies. Public-
sector (government) organizations are usually not concerned with proprietary
information, but they also do not usually have a mandate (or much funding) to post their
management information on web sites.
[Figure: the four perspectives of the Balanced Scorecard, each posed as a question, e.g.,
"How do we perform according to our shareholders?" and "Can we continue to improve
and create value?" Source: www.balancedscorecard.org]
By viewing the company from all four perspectives, the balanced scorecard provides a
more comprehensive understanding of current performance.
• Customer Perspective
The customer perspective asks: to achieve our vision, how should we appear to our
customers?
How effectively and efficiently do we satisfy the needs of our customers?
How would you define customer satisfaction?
What customer service factors really matter to your customers?
• Financial Perspective
The financial perspective asks: to succeed financially, how should we appear to our
shareholders?
How will our strategy, implementation, and execution contribute to our financial
improvement?
What are our financial themes?
In order to shield the customer from receiving poor-quality products, aggressive
efforts were traditionally focused on inspection and testing at the end of the production
line. To establish a feedback process, managers should examine feedback data to
determine the causes of variation, identify the processes with significant problems, and
then focus attention on fixing that subset of processes. This creates a double-loop
feedback process in the Balanced Scorecard.
Metrics must be developed based on the priorities of the strategic plan, which
provides the key business drivers and criteria for metrics that managers most desire to
watch. Processes are then designed to collect information relevant to these metrics and
reduce it to numerical form for storage, display, and analysis. Decision makers examine
the outcomes of various measured processes and strategies and track the results to guide
the company and provide feedback.
There are two sets of more or less continuous data flows required in the Balanced
Scorecard system: metrics and targets flowing down from the strategic plan, and
measurement data flowing up from the operational processes.
At each level of the organizational hierarchy, data are aggregated across the lower
levels; aggregation serves to reduce information overload. Periodically, measurements
are collected, aggregated, and analyzed at each management level. Performance
evaluations are conducted not only for top-level managers but at every level, and each
level has its own responsibility.
Mission: Dedication to the highest quality of customer service delivered with a sense of
warmth, friendliness, individual pride, and company spirit.
Vision: Continue building on our unique position - the only short-haul, low-fare, high-
frequency, point-to-point carrier in America.
4.4. Conclusion
4.5. References
http://www.balancedscorecard.org
http://www.balancedscorecardsurvival.com
http://www.qpr.com
Chow, Chee W., Kamal M. Haddad, and James E. Williamson, "Applying the Balanced
Scorecard to Small Companies", Management Accounting, August 1997, p. 21.
Curtis, Carey C. and Lynn W. Ellis, "Balanced Scorecard For New Product
Development", Journal of Cost Management, May/June 1997 Vol. 11, No. 3
Drucker, Peter F., "The Theory of Business", Harvard Business Review, Sep.-Oct. 1994,
p. 95.
Kaplan, Robert S. and David Norton, The Balanced Scorecard: Translating Strategy Into
Action (Boston, MA: HBS Press, 1996)
Porter, Michael E., "What is Strategy?", Harvard Business Review, Nov.-Dec. 1996,
p. 61
Silk, Scott, "Automating the Balanced Scorecard", Management Accounting, May 1998,
p. 38.
Thomson, Jeff and Steve Varley, "Developing A Balanced Scorecard at AT&T", Journal
of Strategic Performance Measurement, Aug/Sep 1997 Vol. 1, No. 4, p. 14.