
Statistical Methods for Monitoring Service Processes

Michael Wood
Portsmouth Business School, University of Portsmouth, Southsea, UK

Received July 1993
Revised January 1994

Introduction
Statistical process control (SPC) has been widely used as an aid for managing
manufacturing processes. Many books have been published which outline the
basics of SPC (e.g. Feigenbaum, 1983; Juran, 1975; Oakland and Followell, 1990),
and organizations such as the Ford Motor Company (1986) have also published
their own guides.
The underlying problem which statistical process control has been developed
to tackle is, according to a recent text, that:
…there is variation in the characteristics of manufactured articles.... If this variability is
considerable it is impossible to predict the value of the characteristic of any single item. Using
statistical methods, however, it is possible to take meagre knowledge of individual items and
turn it into meaningful statements which may then be used to make decisions about the
process or batch of products (Oakland, 1986, pp. 49-50).

If, of course, there were no variation, each item would be the same, decisions
could be made by inspecting a single item, and statistical methods would not be
necessary. Life in manufacturing, however, is not that simple, so statistical
techniques are necessary to control and, if possible, improve performance.
There is a certain amount of confusion about precisely which techniques are
covered by the umbrella term SPC. Oakland (1986) implies in one part of his
book that SPC encompasses Shewhart and cusum control charts only (p. 52); on
the other hand the much broader coverage of the book suggests that SPC is
much wider. In this article we will take SPC as encompassing Shewhart control
charts, process capability studies, and Pareto analysis, as these are the core
techniques which are used to manage variability and improve processes in the
manufacturing sector[1]. For example, the Ford guide to SPC (Ford Motor
Company, 1986) covers these three areas, and also the basic statistical ideas
which are prerequisites for understanding and using them. This leaves out
certain more recent developments, such as Taguchi methods (Disney and
Bendell, 1990) which may have a role to play in service processes but which
raise issues of a very different kind[2] (and would not normally be considered
part of SPC). We will base our discussion on the opportunities and problems
involved in applying SPC techniques to service processes.

The author is grateful to Nigel Preston and Martin Weddell for providing some of the data discussed in this article, to Dave Preece for his help with an earlier version of this article, and to two anonymous referees for their helpful suggestions.

International Journal of Service Industry Management, Vol. 5 No. 4, 1994, pp. 53-68. © MCB University Press, 0956-4233
A Shewhart control chart is a graph of a quality measurement against time
with control lines superimposed to show statistically significant deviation from
the norm. Any such significant deviations are assumed to correspond to
“assignable” or “special” causes which deserve investigation. Such charts are a way of
monitoring a process against time to highlight trends and other changes. Pareto
analysis is an analysis of the “vital few” problems afflicting a process, presented
as a bar chart with the problems shown in order of decreasing importance; this
is mathematically trivial, but surprisingly powerful as a practical tool for
helping to see which aspects of a process are major trouble spots and so
particularly worthy of attention. Capability studies are designed to show how
consistently the process is capable of doing what is required of it; in other words
whether the output from the process is comfortably within the required limits.
All three types of technique analyse variation: Pareto analysis looks at the
varied reasons for problems, difficulties or failures; Shewhart control charts
look at variations over time; and capability studies are for analysing whether
the magnitude of the variation is within acceptable bounds. This is a very brief
outline of the standard techniques; for more details the reader is invited to
consult any standard text (e.g. Oakland, 1986; Oakland and Followell, 1990).
The manner in which these typically fit together is illustrated by Figure 1.
This figure also indicates the main potential benefits of SPC in any area: an
improved process and safeguards against deterioration.

[Figure 1. Using SPC: define measurements; check capability against customer requirements; then iterate between improving by Pareto analysis, etc. and monitoring stability/improvements/deterioration by control charts.]
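Since Pareto analysis is described above as mathematically trivial, a minimal sketch may still be useful to show what the calculation amounts to. The complaint categories and counts below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical complaint categories and counts (invented for illustration).
problems = Counter({
    "billing error": 58,
    "late delivery": 31,
    "rude staff": 7,
    "wrong item": 4,
})

total = sum(problems.values())
cumulative = 0
# List the problems in decreasing order of frequency with cumulative
# percentages - the figures that would be plotted on a Pareto chart.
for category, count in problems.most_common():
    cumulative += count
    print(f"{category:15} {count:4} {100 * cumulative / total:6.1f}%")
```

The first line or two of such a list typically accounts for most of the total, which is exactly the “vital few” that deserve attention first.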

Assumptions Underlying SPC


SPC is usually seen as being more than just a set of statistical procedures; it also
encompasses a set of assumptions about the underlying philosophy of quality
management. The most important of these, on which the rationale of SPC
depends, are:
● The important quality characteristics can be and should be measured.
  The risks of failing to measure are obvious: problems may not be
  noticed, changes may be brought in without a clear analysis of the
  situation, and there will be no clear evidence of any improvements.
● The aim should be the prevention of problems before they occur, rather
  than just diagnosing them after they have occurred; to improve the future
  rather than simply measuring the past. There are two reasons for
  identifying and measuring problems after they have occurred: first to
  correct the problem (perhaps by scrapping or reworking the component),
and second to learn from past performance in order to improve future
performance. The aims of SPC fall entirely in the second category: the
goal is to improve the process so that – eventually – identifying and
correcting errors becomes unnecessary because there are no errors. This
improves quality and reduces waste because the process has improved,
and also produces large savings in the amounts which need to be spent
on inspection. This argument is as valid in the education industry (for
example) as it is in the motor industry: it is not sufficient to identify
student errors in order to fail students or correct misunderstandings (i.e.
scrap or rework); it is also important to learn from the errors in order to
improve the future teaching process and so prevent future errors.
● Wherever possible the analysis should concentrate on the process rather
than the output. This is likely to be the most effective way of achieving
the previous aim. Concentrating on the output is an inefficient strategy
because it fails to provide information about which aspects of the
production process could usefully be improved, and because of the delay
it may entail. A manufacturer of cars which only checked the finished car
may, for example, end up with a batch of finished cars all of which have
to be scrapped or reworked because of defects in the bodywork. It clearly
makes far more sense to monitor each part of the process of manufacture
– as it happens – so that problems can be found and remedial action
taken immediately to ensure that the problem does not recur. Exactly the
same arguments apply to service processes such as education and
catering.
● The resources devoted to testing, monitoring and inspection should be as
few as possible. This usually means that the amount of data used should
be the minimum to achieve the purpose. A common practice for mean
and range control charts, for example, is to use samples of five items
only.
Indeed, a strong case could be made for the claim that it is these principles, and
not the techniques themselves, that are the real essence of SPC. We will use the
examples in the next section to illustrate these principles as they apply to
service processes.
There are other assumptions which are not necessary prerequisites of SPC,
but which are so commonly associated with SPC that they are often taken to be
so. Many of these are associated with the work of Deming and his “14 points for
management” (Akande, 1992), and with the philosophy of total quality
management (Oakland, 1989). For example: the analysis and consequent
decisions should be the responsibility of the people operating the process rather
than a separate group of inspectors; specific numerical “targets” are to be
avoided wherever possible; the causes of low quality are usually to be found in
the system and not in the failings of individuals; the aim should always be
“continuous improvement”, and so on.
However, while undoubtedly valuable, these assumptions go far beyond SPC,
so we will not consider them further in this article.

Service Processes
In the context of the current emphasis on total quality management (TQM),
there has been an increasing interest in applying SPC techniques to non-
manufacturing processes. For example, it has been proposed that SPC can
usefully be applied to safety (Smith, 1989), delivery systems (Zurier, 1989),
health care management (Demos and Demos, 1989), transportation systems and
service industries in general (Mundy et al., 1986). Deming (1986, Chapter 7)
gives a long list of measurements in service industries to which SPC or similar
techniques could be applied, and Oakland and Followell (1990) provide another
series of examples. A textbook on TQM (Oakland, 1989, p. 226) claims that
“some of the most exciting applications of SPC” are outside the traditional areas
of production and operations. The writer has applied SPC techniques to
administrative tasks, and to various facets of the computing industry. The
current emphasis on “internal customers” is likely to extend the potential
applications of SPC in service processes because many internal customers are
likely to receive a service rather than a product. The underlying rationale in all
of these cases is that any large system will inevitably encompass variation and
so needs a statistical approach to prevent its performance deteriorating and to
try to improve it. What is more, it has been suggested (Asher, 1987) that the
costs of quality may be greater in the service sector. This means that SPC may
be even more beneficial in the service sector.
However, the traditional approach to SPC was developed almost exclusively
with manufacturing processes in mind. The results of a recent survey (Witt and
Clark, 1990) which showed that SPC techniques are not widely used in the
tourist industry are perhaps not surprising. How well do the traditional
techniques – developed in the manufacturing context – fit service processes?
Most published work on this topic is by enthusiasts for SPC and so,
understandably, emphasizes the opportunities, not the difficulties. For example,
Oakland (1989) comments that:
Data is data, and whether the numbers represent defects or invoice errors, the information
relates to machine settings, process variables, prices, quantities, discounts, customers, or
supply points is irrelevant, the techniques can always be used (p. 226).

He goes on to mention some of the SPC techniques which may be useful outside
the manufacturing area – Pareto analysis, p (proportion defective) charts,
moving average and cusum charts. The implication is that these techniques can
be transferred, unaltered and without problems, from manufacturing processes
to service processes. In a similar vein Demos and Demos (1989), Mundy et al.
(1986), Smith (1989) and Zurier (1989) explain how SPC techniques can be
applied to specific service processes, but without mentioning any specific
problems or issues to consider. By implication there are none.

Examples of SPC Used in Service Processes


The selection of examples below is intended to give some flavour of the range of
applications, and to provide a basis for the development of guidelines for using
SPC in service processes. (These are real examples; they do not necessarily
represent best practice. Some possible improvements will be discussed later in
the article.)

Number of Rings before Leisure Centre Phone Answered


The time taken to respond to external telephone calls is a key measurement for
a leisure centre because failure to respond quickly may lead to lost business.
The manager in charge decided to monitor this by asking colleagues to make
dummy calls at random times of the day. The data were used to produce Figures
2 and 3.
Figure 2 shows the capability of the process. According to the manager, the
“industry target” is that 98 per cent of calls should be answered within three
rings; Figure 2 shows that the current process is incapable of meeting this since
only 53 per cent of calls were answered within three rings.

[Figure 2. Capability study for phone rings at leisure centre: bar chart of the percentage and cumulative percentage of calls against the number of rings before the phone was answered (1 to 17+).]
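A capability study of this kind amounts to estimating a proportion from a sample. The following is a minimal sketch, assuming the ring counts are available as a simple list; the data are invented, not the leisure centre’s actual figures:

```python
import math

# Invented sample of ring counts before the phone was answered.
rings = [2, 5, 1, 9, 3, 3, 12, 2, 7, 4, 1, 6, 3, 2, 17, 8, 2, 3, 5, 10]

target = 3  # the "industry target" is 98 per cent answered within 3 rings
p_hat = sum(r <= target for r in rings) / len(rings)

# Rough 95 per cent confidence interval for the proportion
# (normal approximation to the binomial).
se = math.sqrt(p_hat * (1 - p_hat) / len(rings))
print(f"answered within {target} rings: {100 * p_hat:.0f}% "
      f"(roughly +/- {100 * 1.96 * se:.0f} percentage points)")
```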
[Figure 3. Mean chart for phone rings at leisure centre: mean number of rings in samples of five calls plotted against date (7th to 27th), with an upper control line.]

A random sample of five of these calls was made every day, and the means of
these times plotted on a standard mean control chart, and ranges on a range
chart. An example of a mean chart appears in Figure 3[3]; the range charts are
not discussed further here because we will argue below that monitoring
variation is very much a secondary consideration in this situation, and also
because the non-normality of the data (see Figure 2) means that the standard
range chart method is not likely to be accurate.
The chart in Figure 3 has two main purposes. First it gives an indication of
how the level of performance varies with time. The example above suggests
that the process is reasonably stable, except for the sixteenth, when the mean was
significantly higher than on other days (statistically speaking – as indicated by
the line crossing the upper control line). Presumably there was some special
cause of this fluctuation which deserves investigation. In contrast to this, the
chart indicates that the daily variations at other times are well within the
bounds that would be expected from sampling error, and so it is likely to be a
complete waste of time investigating the reasons for these fluctuations.
The second purpose of the chart is to monitor improvements. The
management at the leisure centre was determined to improve the response to
the phone (the necessity for this being indicated by Figure 2); a control chart
such as this is clearly an essential source of feedback on the success, or
otherwise, of any strategy for improvement.
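Note 3 records that the control lines in Figure 3 are calculated by the standard method, which depends on the variation within the samples of five whose means are plotted. A minimal sketch of that calculation, using the conventional A2 factor for samples of five; the daily samples below are invented, and a real chart would be based on many more days’ data:

```python
# Each row is one day's sample of five ring counts (invented data).
samples = [
    [3, 5, 2, 8, 4],
    [6, 2, 9, 3, 5],
    [4, 4, 7, 2, 6],
    [12, 9, 15, 8, 11],  # a day that may breach the upper control line
]

means = [sum(s) / len(s) for s in samples]
ranges = [max(s) - min(s) for s in samples]

grand_mean = sum(means) / len(means)
mean_range = sum(ranges) / len(ranges)

A2 = 0.577  # standard constant for samples of n = 5
ucl = grand_mean + A2 * mean_range  # upper control line
lcl = grand_mean - A2 * mean_range  # lower control line

for day, m in enumerate(means, start=1):
    flag = "  <-- investigate for a special cause" if m > ucl or m < lcl else ""
    print(f"day {day}: mean = {m:.1f}{flag}")
print(f"control lines at {lcl:.2f} and {ucl:.2f}")
```

Note that the lines come from the within-sample ranges, not from the standard deviation of the plotted means, which is exactly the point made in note 3.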
Proportion of Complaints to an Environmental Health Department Not Responded to within Three Days

This was presented as a bar chart giving the proportion on a monthly basis.
The data could have been presented as a p (proportion defective) control chart
(Figure 4); the main difference being the inclusion of control lines. These
indicate that the process is stable or “in control”. This is directly contrary to the
assumption of the manager (“there are obviously causes of variation beyond the
expected random variations”), and implies that time should not be spent
investigating month to month variations. The next stage would be to do a
capability study to see how the capability of the process related to customer
requirements, and perhaps a Pareto analysis to highlight the major causes of
delays. The use of Pareto analysis in service processes such as this is relatively
straightforward; there are several published case studies (e.g. Demos and
Demos, 1989; Mundy et al., 1986).
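A p chart such as Figure 4 is straightforward to compute: the centre line is the overall proportion defective, and the control lines lie three standard errors either side of it, so their width depends on the monthly sample size. A minimal sketch with invented monthly figures:

```python
import math

# (month, complaints received, number not responded to within three days)
# -- invented figures for illustration.
months = [("April", 120, 18), ("May", 95, 12), ("June", 140, 25),
          ("July", 110, 15), ("August", 130, 22)]

total_n = sum(n for _, n, _ in months)
total_d = sum(d for _, _, d in months)
p_bar = total_d / total_n  # overall proportion outside three days

for name, n, d in months:
    # Control lines at p-bar +/- 3 standard errors; width varies with n.
    se = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl, lcl = p_bar + 3 * se, max(0.0, p_bar - 3 * se)
    p = d / n
    flag = "  <-- special cause?" if p > ucl or p < lcl else ""
    print(f"{name:9} p = {p:.3f}  lines: {lcl:.3f} to {ucl:.3f}{flag}")
```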

Complaints Received by a Branch of a Bank


The numbers of complaints from customers received every week were recorded by
means of a bar chart displayed on the wall. As in the previous example, this could
be converted to a control chart, which would monitor more exactly when and
whether changes occur. A Pareto analysis of the causes of the complaints would
also be a valuable additional tool to help the drive for continuous improvement.

Invoicing Errors
A large organization monitors errors in the invoices sent to customers by
recording the number of credits they need to issue to compensate for errors.
Some typical data are given in Table I.

Table I. Invoicing Errors

Month        Number of invoices issued    Number of credit notes issued
January      4,315                        375
February     4,401                        408
March        3,780                        362

[Figure 4. P chart of response times: proportion of complaints not responded to within three days, plotted monthly from April to November, with upper and lower control lines whose width varies with the monthly sample size.]
Errors in invoices are a major cause of unpaid bills and customer dissatisfaction
for the organization. Data like this are used as the basis of a control chart (a p
chart) for monitoring the process. However, this method of analysis is of
questionable validity and so will not be described here; instead, we will discuss
some of the problems it raises below.

Guidelines for Using SPC in Service Processes


We will start with the four assumptions underlying SPC listed above. Any
attempt to implement SPC should embody these principles. We will then extend
this with some extra considerations prompted by the particular issues which
arise with service processes.

The Important Quality Characteristics Can Be and Should Be Measured


This is obviously as applicable to service processes – for example those
discussed above – as to manufacturing ones.

The Aim Should Be the Prevention of Problems


That is, problems should be prevented before they occur, rather than diagnosed
after they have occurred; the aim is to improve the future rather than simply to measure the past.
This sounds an obvious principle, but it deserves very careful consideration.
The danger is that, in practice, the measurement may be used to control people
or reward good performance, or to demonstrate how well the team has been
performing (Wood and Preece, 1992). For example the data on the complaints at
the bank may have been perceived, by the front-line staff or the manager, as a
means of control by the management, or as a means of demonstrating how well
the branch was performing relative to other branches; in either case the
emphasis would be on measuring what has happened in the past. In some
circumstances, pressures of this kind may lead to, for example, a tendency to
monitor only major complaints, or even to distort or falsify data. SPC applied in
the proper spirit, however, is concerned only with diagnosing sources of
problems so as to improve the future. This suggests a further principle:
● SPC analysis based on prominent data or data which are also the basis of
  reward schemes should be treated with caution. If necessary, a separate
  data collection system should be set up for SPC. For example, the bank’s
  measure of customer complaints may be subject to the distorting
  influences discussed above, but it may still be vital to monitor the quality
  of the process in question. The solution is surely to set up another
  monitoring scheme – perhaps based on interviews with a sample of
  customers – whose results are not publicized and not used as the basis of
  any kind of reward, but whose sole purpose is the prevention of
  problems and the improvement of the process in the future (and, as a by-
  product, a reduction in the number of complaints).
In general, SPC is likely to require more sensitive or detailed measures
than are used for measuring past performance. For example, customer
complaints are likely to provide a very crude measure of dissatisfaction;
more sensitive and detailed information of more value for diagnosing
problems and detecting subtle changes is likely to be provided by, as
suggested above, interviewing customers. This suggests a further
subsidiary principle:
● Ensure that the measurements used for SPC are sufficiently sensitive and
detailed. There is another way in which measurements which are
suitable for measuring the past may not be ideal for improving the
future. The (assumed) customer’s requirement for response times at the
environmental health department is three days; monitoring proportions
failing to meet this is useful to measure past performance, but future
performance is more likely to be enhanced by monitoring the proportion
failing to meet a stricter deadline – say two days. If this proportion can
be reduced to “zero defects”, there is the added advantage of a safety
margin so that any slight deterioration can be remedied before it reaches
the level customers consider unsatisfactory. This leads to another
subsidiary guideline:
● Consider measurements based on a stricter standard than customers
actually require to provide a safety margin.

Wherever Possible Analysis Should Concentrate on the Process Not Output


This principle is widely accepted in manufacturing, where, for example, the
process of manufacturing a car, and not simply the quality of the finished product,
is monitored. The reasons for this are entirely obvious and difficult to dispute.
Despite this many service processes seem to be stuck at the “inspection” stage of
quality history: there is a tendency for the emphasis to be exclusively on the
quality of the end product. For example, an academic institution may concentrate
on output measures (exam results, publications) whereas analysis of the process
may be more helpful for managing and improving quality; texts on service quality
tend to concentrate on the quality of the end-product rather than analysis of the
process (e.g. Zeithaml et al., 1990). This is not true of all the above cases, and the
problem is to some extent redressed by the notion of an internal customer – i.e. a
customer in the middle of a process. However, a more detailed analysis of some of
these processes may be beneficial: for example, the environmental health
department could look at the detailed process of responding to queries; the process
by which the invoices are produced in the final example could be monitored to
provide more useful information for preventing further difficulties in the future;
and the bank could concentrate on the process instead of simply measuring
“output” in the form of complaints after the process has failed.

The Resources Devoted to Testing and Inspection Should Be Minimal


Again, this is entirely obvious, but the prevailing habits in some service processes
may encourage the assumption that data need to be collected on all the items
passing through the system. All except the first of the above examples involve
data on what are often described as “100 per cent samples” (although we will argue
below that this label is seriously misleading). This is very different from the
normal practice in manufacturing where fairly small samples are usually taken.
The reason for this is obvious and at first sight valid: namely, data in the above
service processes are very cheap to collect and may indeed be free in the sense that
they are collected for other purposes. If data on all invoices, for example, are
available on the company’s computer system, why not use these for monitoring the
process? If all customer complaints are logged, this is surely the sensible database
to use for monitoring this process?
However, the difficulty is that, in some, at least, of these cases, the data which
are available are not the most useful data. To take one example, the data on the
errors in the billing system tell us nothing about what the errors are, which part of
the billing system is producing them, or indeed about when they occur except in a
very general sense (the credits issued in January may be due to errors in December
or November or even October). On the other hand, the data from the leisure centre
provide exactly the information required; these data, however, are not a “100 per
cent sample” and were not already available, but had to be gathered at some cost
in staff time.
One aspect of this issue relates to the frequency with which data are collected.
Existing data may be collected on a monthly basis, but for detailed monitoring
more frequent samples may be appropriate. (In manufacturing, samples are often
taken every hour.) This suggests two additional guidelines:
(1) Make sure that data are suitable for the purpose. Do not use information
simply because “it is there”: it may be necessary to collect data more
frequently or to collect different data. Data collected for the purpose are
likely not to be free, or even cheap. Taking the smallest possible sample is
clearly sensible (see Oakland, 1986, pp. 209-17).
(2) Consider the sample size. Is it too big or too small? If it is too small, it may
not provide useful information. For example, in a previous article (Wood
and Preece, 1992) we described a c chart – a chart of a count of defects –
where the upper control line is over three times the average: this means that
performance would have to deteriorate by a factor of three before the
evidence achieves significance.
On the other hand, if the sample is too big, it may be unnecessarily
expensive. It is probably unnecessary to investigate all invoicing errors; a
suitable sample may suffice. Statistical methods are designed to find the
optimum compromise between expense and the necessity for accurate
information. It is important to remember that rules of thumb used in
manufacturing (e.g. sample sizes of 5 for mean/range charts, and 50 for p
charts) may not apply in service processes (the c chart point is sketched below).
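The c chart point in guideline (2) above is easy to verify: the conventional upper control line for a count of defects lies three standard deviations above the average, and for a Poisson count the standard deviation is the square root of the mean, so a small average puts the line far above the average in relative terms. A minimal sketch with an invented average of 1.5 defects per period:

```python
import math

# Average count of defects per period (invented small value for illustration).
c_bar = 1.5

# Conventional c chart upper control line: c-bar + 3 * sqrt(c-bar).
ucl = c_bar + 3 * math.sqrt(c_bar)

print(f"average count: {c_bar}")
print(f"upper control line: {ucl:.2f} ({ucl / c_bar:.1f} times the average)")
# With c_bar = 1.5 the line sits at about 5.2, over three times the average,
# so performance must deteriorate by more than a factor of three before
# the chart gives statistically significant evidence of a change.
```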

Analyse How the Chosen Measurement Relates to Customer Satisfaction


In manufacturing applications the “customer” is not usually considered explicitly:
customer requirements are assumed to mirror engineering specifications and to be
relatively unproblematic – although the work of Taguchi, and in particular his loss
function (Disney and Bendell, 1990), and the TQM movement, are, to some extent,
changing this. In relation to service processes, it is arguably particularly necessary
to consider customer satisfaction explicitly because these requirements may be
less obvious, and because the relationship with measured variables may be of a
different form from the typical manufacturing pattern. Accordingly, our
recommendation is to sketch a graph showing the probable relationship of the
measured variable to customer satisfaction. As an example, Figure 5 shows such
a graph. In the absence of extensive research, these graphs may only be rough
sketches and may be partly based on guesswork. They should, however, prompt
reflection on, for example, whether the industry target (98 per cent answered
within three rings) is reasonable.

[Figure 5. Customer satisfaction for answering the phone: sketch of satisfaction (10 = very satisfied) against number of rings (1 to 21), highest at one ring and falling as the number of rings increases.]

Reducing Variability May Not Be an End in Itself, or Even a Means to an End (or a Sensible Slogan)

This may mean that the aims and terminology of SPC when applied to services
should be revised. (This also applies to attribute charts used in manufacturing.) To
see the difficulty here we must consider a typical manufacturing application. The
next figure (Figure 6) shows a similar customer satisfaction sketch graph for a
typical manufacturing process. The measurement on the horizontal axis refers to
the size of a hole in a component: the formal specification for this is that the flow
rate should lie between 230 and 350, and the graph indicates satisfaction falling off
as the measurement gets further from the centre of this range (see Wood and
Preece (1992) for more details of this process). This is in effect a Taguchi loss
function (Disney and Bendell, 1990) – reversed because the vertical axis measures
satisfaction rather than loss or dissatisfaction.
There are two important differences between this graph and Figure 5 which
represents a service process. The first is that the manufacturing graph shows an
optimum in the middle of the scale, whereas the service one shows an optimum at
one of the end points. In the former case the sensible goals for quality
improvement are to ensure that the average flow rate is near the optimum value
and that the variability is as low as possible. These are separate objectives in the
sense that even if, for example, the mean value were the optimum, the process
could still be improved by reducing the variability. For example, if a sample had
components with flow rates 220, 220, 290, 360, 360 this would have the optimum
average but would be unsatisfactory because of the variability. Reducing the
variability (say to 285, 287, 290, 293, 295) without changing the average would
clearly bring large improvements. Thus the objective of reducing variability
makes clear sense as an objective in its own right; the phrases “in control” and
“control charts” are intuitively reasonable ways of expressing this objective of
steering the process so that all values appear towards the top of the curve without

[Figure 6. Customer satisfaction for a manufacturing process: sketch of satisfaction (10 = very satisfied) against the size of a hole in a component (scale 220 to 360), peaking in the middle of the specification range and falling towards the limits.]
too much “wobble” to either side. In manufacturing applications the emphasis is
often, quite reasonably, on the reduction of variability as the main objective of SPC.
Service processes often show a different pattern with the optimum being at one
end of the scale, as in Figure 5[4]. This means that the objective can always be
expressed as simply to improve the average value. Unlike the “middle is optimum”
process, once the average is at one end of the scale no further improvements can be
made (because, using Figure 5 as an example, if the average is one all
measurements must be one). There may be tactical reasons for reducing
variability – stable processes may be easier to understand and manage; finding
reasons for excessive variation (e.g. a particular group of workers or type of
customer) may be useful for improving the process and so reducing the average;
reducing the variability may reduce the average as an arithmetical by-product (as
with a Poisson process) – but reducing the variability is not an end in itself[5]. This
means that the phrases “in control” and “control charts” are likely to be
misleading[6]. We would suggest using an alternative name for the charts such as
“quality level charts” and avoiding the use of the phrase “in control” – perhaps
using a phrase like “evidence of change lines” for the control lines. The charts can
then be used for monitoring the process and tracking the reasons for variation –
but with the clear aim of reducing the average.
There is a further reason why variability reduction is not a sensible aim for
many service processes. Customer requirements may vary from customer to
customer. When phoning the leisure centre some potential customers may give up
after three rings, whereas others may have more patience; and in a restaurant, for
example, different customers may have different requirements. The sketch graphs
of customer satisfaction may be different for each customer. This may mean that,
ideally, the measured variables should vary to reflect this variation. As a slogan
for any service which prides itself on meeting the needs of individual customers,
reducing variability, or increasing uniformity, is likely to be disastrous (Tomes,
1993)[7]. To recap, reducing variability is not likely to be a sensible end in its own
right, although it may sometimes be a means to an end. The terminology of
“control” and “control charts” is therefore inappropriate and should be replaced by
“quality level charts” or a similar phrase. Similarly, the phrase “statistical process
control” itself is inappropriate and should be replaced by something else – for
example, “statistical process monitoring”.

Design the SPC System Carefully


Remember that standard techniques, rules of thumb, terminology, conventions etc.
may not be appropriate. Taking Figure 1 as a basis, the procedure is simply to
start with a capability study[8], and then to iterate between Pareto analysis – to
make improvements – and control charts (or quality level charts) to monitor the
improvement (or otherwise) of the process. The aim of course is “continuous and
never ending improvement”. This highlights yet another reason to avoid the use of
phrases such as “in control”: the danger is that this phrase may encourage
complacency because if “it’s in control there’s no point in doing anything”[9].
As the examples above indicate, it is tempting to use “100 per cent samples”
with many service processes. These raise a number of issues which, while they are
just technicalities and not matters of principle, do raise difficulties which must be
acknowledged.
If these 100 per cent samples represent the entire population of interest, then
control lines should not be drawn on control charts because these lines indicate
samples which provide statistically significant evidence of change from the norm
in a wider population. Control lines refer to the viability of generalizing the results
to a wider population, and so are meaningless if there is no wider population.
However, in practice, the 100 per cent samples are still samples in the sense that
they are used to provide an indication of the current state of the process – i.e. of a
wider set of possibilities than those which have actually occurred. This, however,
is an extra conceptual complication in an already complex and confused area
(Wood and Preece, 1992), which may make the task of understanding the nature
and role of control charts more difficult. It certainly deserves care and attention.
As before, the answer may lie in changing the terminology: “evidence of change
lines”, while an awkward phrase, seems likely to lead to fewer misunderstandings.
A further problem stems from the fact that the 100 per cent samples are likely to
vary. This means that the sample size will vary from sample to sample, which in
turn means that the control lines will vary in width as in Figure 4. Oakland (1986,
p. 163) suggests ignoring this variation and basing the calculations on an average
sample size, provided the variation is less than 25 per cent. This is perfectly
straightforward from a mathematical point of view, but, from a practical point of
view, it can cause confusion and tends to displace more important issues – such as
the interpretation of the charts – as a focus of attention.
More generally, the standard SPC techniques are undoubtedly useful but need to
be viewed critically. We have discussed problems with the notion of “control”. It is
also arguable that the methods used to calculate the control lines are unnecessarily
complex and obscure and could usefully be simplified: this will be addressed in a
forthcoming article in the International Journal of Quality & Reliability
Management. Similarly the capability indices, Cp and Cpk, could be applied to
service processes as well as manufacturing ones. For example, Oakland and
Followell (1990) explain how Cpk could be calculated for the times cashiers take to
complete transactions in a bank. The calculated value is 0.74 from which they
conclude that the process was “not capable, and not centred” (p. 308). However,
this analysis suffers from three obvious flaws: first, the value of Cpk seems unlikely
to mean much to the cashiers; second, Cpk is designed to measure variability,
whereas in this case the aim is to reduce the mean to three minutes and to keep the
maximum transaction time below five minutes; and third, Cpk implicitly assumes
a normal distribution of the underlying variable – which is most unlikely to be the
case in this situation. Why not simply monitor the process by means of estimates
of the mean time – which can be compared with the three minute goal – and of the
proportion of transactions taking longer than five minutes to process?
Alternatively, the second subsidiary principle under the “Prevention of Problems”
guideline (above) suggests comparing the mean with a more stringent goal, say
two minutes, and estimating the proportion not completed within, say, four
minutes. A similar comment applies to Figure 2: we could calculate Cpk but it
seems much easier and more useful simply to estimate the proportion of calls
within the “tolerance” limits (i.e. the industry standard) of three rings. This
proportion is 53 per cent, indicating a relatively incapable process.
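The contrast between Cpk and the simpler alternatives suggested here can be made concrete. The sketch below computes both for the cashier example; the transaction times and the specification limits of zero and five minutes are assumptions for illustration, not Oakland and Followell’s actual data:

```python
import statistics

# Invented transaction times (minutes) for a bank cashier.
times = [2.1, 3.4, 2.8, 4.9, 3.0, 5.6, 2.5, 3.9, 3.2, 4.4]

usl, lsl = 5.0, 0.0  # assumed specification limits: 0 to 5 minutes
mean = statistics.mean(times)
sd = statistics.stdev(times)

# Cpk: distance from the mean to the nearer specification limit,
# in units of three standard deviations (implicitly assumes normality).
cpk = min(usl - mean, mean - lsl) / (3 * sd)
print(f"Cpk = {cpk:.2f}  (opaque to cashiers, and assumes normality)")

# The simpler, more direct measures suggested in the text:
over_5 = sum(t > usl for t in times) / len(times)
print(f"mean time = {mean:.1f} minutes (goal: 3 minutes)")
print(f"proportion over 5 minutes = {100 * over_5:.0f}%")
```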

Conclusions
The case for using statistical methods for monitoring service processes appears
unassailable. Managers of service processes need monitoring systems and quality
improvements just as much as managers of manufacturing processes do, and,
with any process with a large throughput, statistical methods are likely to be
indispensable for achieving these aims. The only problems relate to adapting
methods which have evolved in a manufacturing context to service processes. The
above guidelines are intended to help resolve these problems.
One important conclusion concerns terminology. We argued that, since
reducing the variation is not usually an end in its own right, terminology involving
the word “control” is inappropriate. Alternative phrases – e.g. statistical process
monitoring, quality level charts – may be more helpful.
Finally, it is worth noting that many of the arguments presented in this article
are equally applicable to many manufacturing processes: for example, the
guideline on reducing variability not being an end in itself is relevant to any
control chart based on attributes; and the guideline on relating measurements to
customer satisfaction is in line with the current emphasis on researching customer
requirements, which is as relevant to manufacturing processes as to service ones.
The fact that the focus of this article is on service processes should not be taken to
imply that the conclusions do not apply to some manufacturing processes as well.
These may not fit the pattern assumed by conventional SPC techniques any more
than many service processes do.
Notes
1. Cusum charts (Oakland, 1986, pp. 183-98) fulfil a similar role to Shewhart charts but are
more useful in particular types of situation. As they are considerably more complex, and
raise no essentially separate issues of principle, we will not discuss them further.
2. Very briefly, standard SPC techniques are concerned with analysing the quality of an
existing process, whereas Taguchi methods are concerned with quality in the design phase
of a new process: “by making the product or process robust to variation in external factors
such as incoming raw materials, operator effects and environmental conditions of
production and use, Taguchi is able to design a product exhibiting minimum variability
and maximum efficiency” (Disney and Bendell, 1990, p. 4). This seems at least as relevant
to service processes as to any other process.
3. The control lines in Figure 3 are calculated by the standard method which depends on the
variation within the samples of five whose means are plotted. They are not, of course,
based on the standard deviation of the points plotted.
4. The same may be true of some charts monitoring manufacturing processes. In particular
it is true of any attribute chart.
5. This is on the assumption that the customer satisfaction graph shown in Figure 5 is
correct. If customers value consistency as an end in itself – they may like to rely on always
waiting for the same length of time for the phone to be picked up – then Figure 5 is
inaccurate. In this case customer satisfaction could not be plotted as a function of a single
waiting time, but instead would need to be plotted as a function of the distribution of
waiting times. However, treating the reduction of variability of this distribution as an aim
independent of the reduction of the mean entails the risk that the process may be stabilized
with a high mean. In this case this may be just defensible, but if, for example, the data
referred to the time taken to get heart attack victims to hospital, the strategy of reducing
variation while keeping the mean high is most unlikely to appeal to customers.
6. There is also evidence that these terms can be misleading in conventional manufacturing
contexts (Wood and Preece, 1992).
7. Exactly the same may be said of any production process aiming at “mass customization”
(Westbrook and Williamson, 1993).
8. Some writers recommend a control chart analysis to check that the process is stable before
analysing the capability. This avoids the danger of pronouncing a process capable when it
may be varying wildly on a day-to-day or a week-to-week basis.
9. Strictly, the phrase “in control” means that there is no evidence suggesting that any
particular sample deserves special attention; however, the process in general may still be
running at an unsatisfactory level.

References
Akande, A. (1992), “Applying Deming to Service”, Management Decision, Vol. 30 No. 3, pp. 3-8.
Asher, J.M. (1987), “Cost of Quality in Service Industries”, International Journal of Quality &
Reliability Management, Vol. 5 No. 5, pp. 38-46.
Deming, W.E. (1986), Out of the Crisis, Cambridge University Press, Cambridge.
Demos, M.P. and Demos, N.P. (1989), “Statistical Quality Control’s Role in Health Care
Management”, Quality Progress, August, pp. 85-89.
Disney, J. and Bendell, A. (1990), “Introduction to Taguchi Methodology”, in Hendry, L.C. and
Eglese, R.W. (Eds), Operational Research Tutorial Papers, Operational Research Society,
Birmingham.
Feigenbaum, A.V. (1983), Total Quality Control, 3rd ed., McGraw-Hill, New York, NY.
Ford Motor Company (1986), Statistical Process Control, Instruction Guide.
Juran, J.M. (1975), Quality Control Handbook, 3rd ed., McGraw-Hill, New York, NY.
Mundy, R.M., Passarella, R. and Morse, J. (1986), “Applying SPC in Service Industries”, Survey of
Business, Vol. 21 No. 3, pp. 24-9.
Oakland, J.S. (1986), Statistical Process Control, Heinemann, London.
Oakland, J.S. (1989), Total Quality Management, Heinemann Professional, Oxford.
Oakland, J.S. and Followell, R.F. (1990), Statistical Process Control, 2nd ed., Butterworth-
Heinemann, Oxford.
Smith, T.A. (1989), “Why You Should Put Your Safety Program under Statistical Control”,
Professional Safety, April, pp. 31-7.
Tomes, A. (1993), “Have a Nice Day! The Struggle for Quality in Service Organisations”, OR
Insight, Vol. 6. No. 1, pp. 30-2.
Westbrook, R. and Williamson, P. (1993), “Mass Customisation: Japan’s New Frontier”, European
Management Journal, Vol. 11 No. 1, pp. 38-45.
Witt, C.A. and Clark, B.R. (1990), “Tourism: The Use of Production Management Techniques”, The
Service Industries Journal, Vol. 10 No. 2, pp. 306-19.
Wood, M. and Preece, D. (1992), “Using Quality Measures: Practice, Problems and Possibilities”,
International Journal of Quality & Reliability Management, Vol. 9 No. 7, pp. 42-53.
Zeithaml, V.A., Parasuraman, A. and Berry, L.L. (1990), Delivering Service Quality: Balancing
Customer Perceptions and Expectations, Free Press, New York, NY.
Zurier, S. (1989), “Delivering Quality Customer Service”, Industrial Distribution, March, pp. 30-5.
