
Unit 2 Quality Management System (QMS) & Process Quality

Improvement

Graphical & Statistical Techniques of Process Quality Improvement


A systematic solution approach to any quality improvement activity is critical and is always
emphasized by the quality gurus (Juran, Deming, and Shewhart). Various tools and techniques are
commonly used to identify the critical control variables. The most basic techniques used in
quality management are the 7 QC Tools, which consist of the Pareto Diagram, Process Flow Diagram,
Cause-and-Effect Diagram, Check Sheets, Histogram, Run Charts, and Scatter Diagram. Additional
statistical tools include hypothesis testing, regression analysis, ANOVA (Analysis of Variance),
and Design of Experiments (DOE). In the following sections, we will go through each
technique in greater detail.

7QC Tools
The Seven Basic Tools of Quality (also known as the 7 QC Tools) originated in Japan when the
country was undergoing a major quality revolution, and they became a mandatory topic in
Japanese industrial training programs. These tools, which comprise simple graphical and
statistical techniques, proved helpful in solving critical quality-related issues. They were
called the Seven Basic Tools of Quality because they could be applied by anyone with very basic
training in statistics, yet were powerful enough to solve complex quality-related problems.

The 7 QC tools can be applied in any industry, from the product development phase through delivery.
They remain just as popular today and are used extensively in various phases of Six Sigma
(DMAIC or DMADV), in continuous improvement (the PDCA cycle), and in Lean management
(removing waste from processes).

The seven QC tools are:

1. Stratification (Divide and Conquer)
2. Histogram
3. Check Sheet (Tally Sheet)
4. Cause-and-effect diagram (“fishbone” or Ishikawa diagram)
5. Pareto chart (80/20 Rule)
6. Scatter diagram
7. Control chart (Shewhart Chart)

1. Stratification (Divide and Conquer)

Stratification is a method of dividing data into sub-categories and classifying it by group,
division, class, or level, which helps in deriving meaningful information to understand an existing
problem.

The very purpose of stratification is to divide the data and conquer the meaningful information
needed to solve a problem.

 Un-stratified data (an employee reached the office late on the following dates):
 5-Jan, 12-Jan, 13-Jan, 19-Jan, 21-Jan, 26-Jan, 27-Jan
 Stratified data: the same data classified by day of the week
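The stratification above can be sketched in a few lines of code. The year (2021) is an assumption made purely so the dates resolve to weekdays; the source does not state it:

```python
from collections import Counter
from datetime import date

# Un-stratified data: dates an employee arrived late (year 2021 assumed for illustration)
late_days = [date(2021, 1, d) for d in (5, 12, 13, 19, 21, 26, 27)]

# Stratify the same data by day of the week
by_weekday = Counter(d.strftime("%A") for d in late_days)
print(by_weekday.most_common())
```

Under that assumed year, the stratified view immediately reveals a pattern (most late arrivals fall on one weekday) that the raw date list hides.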
2. Histogram

The histogram, introduced by Karl Pearson, is a bar graph in which each bar represents the
frequency of data falling in a given interval.

The very purpose of a histogram is to study the density of data in a distribution and understand
which factors or values occur most often.

A histogram helps in prioritizing factors and identifying the areas that need the most immediate
attention.
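The binning behind a histogram can be sketched without any plotting library; the data values below are hypothetical:

```python
# Minimal histogram sketch: count values falling into equal-width bins
def histogram(data, bins=5):
    lo, hi = min(data), max(data)
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in data:
        i = min(int((x - lo) / width), bins - 1)  # clamp the maximum into the last bin
        counts[i] += 1
    return counts

data = [2.1, 2.4, 2.5, 2.5, 2.6, 2.8, 3.0, 3.1, 3.1, 3.6]
print(histogram(data, bins=5))
```

The bin with the largest count marks where the data are densest, which is exactly the prioritization signal the histogram is meant to give.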

3. Check sheet (Tally Sheet)

A check sheet can be a matrix, structured table, or form for collecting and analysing data. When
the information collected is quantitative in nature, the check sheet is also called a tally sheet.

The very purpose of a check sheet is to list the important checkpoints or events in a
tabular/matrix format and to update or mark their status as they occur, which helps in
understanding progress, defect patterns, and even the causes of defects.
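A tally sheet amounts to incrementing a counter each time an event is observed; the defect types below are hypothetical:

```python
from collections import defaultdict

# Tally-sheet sketch: mark each defect as it is observed during inspection
sheet = defaultdict(int)
for defect in ["scratch", "dent", "scratch", "crack", "scratch", "dent"]:
    sheet[defect] += 1

print(dict(sheet))
```

The finished tally is the raw input for the Pareto chart and histogram discussed in this section.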
4. Cause-and-effect diagram (“Fishbone” or Ishikawa diagram)

The cause-and-effect diagram, introduced by Kaoru Ishikawa, helps in identifying the various
causes (or factors) leading to an effect (or problem) and in deriving meaningful relationships
between them.


The very purpose of this diagram is to identify all root causes behind a problem.

Once a quality-related problem is defined, the factors causing the problem are identified. We
then keep identifying the sub-factors behind each identified factor until we reach the root cause
of the problem. The result is a diagram with branches and sub-branches of causal factors
resembling a fishbone.

In the manufacturing industry, to identify the source of variation, the causes are usually grouped
into the following major categories:

 People
 Methods
 Machines
 Material
 Measurements
 Environment
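As a minimal sketch, the branches of a fishbone diagram can be captured as a nested structure grouped by the categories above; the effect and causes shown are hypothetical examples:

```python
# Hypothetical fishbone: one effect, with candidate causes grouped by the 6M categories
fishbone = {
    "effect": "Parts out of spec",
    "causes": {
        "People": ["insufficient training"],
        "Methods": ["outdated work instruction"],
        "Machines": ["worn tooling"],
        "Material": ["supplier lot variation"],
        "Measurements": ["gage lacks resolution"],
        "Environment": ["temperature swings"],
    },
}

# Flatten the branches to list every candidate root cause for investigation
all_causes = [c for branch in fishbone["causes"].values() for c in branch]
print(all_causes)
```

Each sub-factor could itself hold a nested list, mirroring the sub-branches of the drawn diagram.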
5. Pareto chart (80 – 20 Rule)

The Pareto chart is named after Vilfredo Pareto. It revolves around the 80-20 rule, which states
that in any process, 80% of problems or failures are caused by just 20% of the factors, often
referred to as the Vital Few, whereas the remaining 20% of problems or failures are caused by the
other 80% of factors, also referred to as the Trivial Many.

The very purpose of a Pareto chart is to highlight the most important factors behind the majority
of problems or failures.

A Pareto chart combines a bar graph and a line graph: individual factors are represented by bars
in descending order of their impact, and the cumulative total is shown by a line.

Pareto charts help experts in the following ways:

 Distinguish between the vital few and the trivial many.
 Display the relative importance of the causes of a problem.
 Help focus on the causes that will have the greatest impact when solved.
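The bar-and-cumulative-line computation behind a Pareto chart can be sketched as follows, using hypothetical defect counts:

```python
# Pareto analysis sketch: sort factors by impact and accumulate their share
defects = {"scratch": 48, "dent": 21, "crack": 9, "stain": 7, "warp": 5}

total = sum(defects.values())
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
for cause, count in ranked:
    cumulative += count
    print(f"{cause:8s} {count:3d}  {100 * cumulative / total:5.1f}%")
```

In this hypothetical data set the top two causes already account for well over 70% of all defects, which is the vital-few signal the chart is designed to expose.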

6. Scatter Diagram
A scatter diagram or scatter plot is a statistical tool that depicts the dependent variable on the
Y-axis and the independent variable on the X-axis, with each pair of observations plotted as a
dot. The pattern of these dots can reveal an existing relationship between the variables, or an
equation of the form Y = F(X) + C, where C is an arbitrary constant.

The very purpose of a scatter diagram is to establish a relationship between a problem (the
overall effect) and the causes affecting it.

The relationship can be linear, curvilinear, exponential, logarithmic, quadratic, polynomial, etc.
The stronger the correlation, the more reliably the relationship holds. The variables can be
positively or negatively related, as indicated by the slope of the equation derived from the
scatter diagram.
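The Y = F(X) + C relationship can be estimated from the plotted points by ordinary least squares. A minimal sketch with hypothetical data (deliberately a perfectly linear set, so the fitted line is exact):

```python
# Least-squares line fit for a scatter diagram; the slope's sign gives the direction
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # slope a and intercept b of Y = a*X + b

# Hypothetical paired observations (independent X, dependent Y)
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]
a, b = fit_line(xs, ys)
print(a, b)
```

A positive slope indicates the variables are positively related; a negative slope, negatively related.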

7. Control Chart (Shewhart Chart)

The control chart, also called the Shewhart chart after Walter A. Shewhart, is a statistical
chart that helps in determining whether an industrial process is in control and capable of
meeting customer-defined specification limits.

The very purpose of a control chart is to determine whether the process is stable and capable
under current conditions.

In a control chart, data are plotted against time on the X-axis. A control chart always has a
central line (the average or mean), an upper line for the upper control limit, and a lower line
for the lower control limit. These lines are determined from historical data.

By comparing current data to these lines, experts can draw conclusions about whether the process
variation is consistent (in control, affected by common causes of variation) or unpredictable
(out of control, affected by special causes of variation). The chart thus helps in differentiating
common causes from special causes of variation.

Control charts are very popular and widely used in quality control techniques and in Six Sigma
(the Control phase), and they also play an important role in characterizing process capability and
variation in production. The tool further helps in identifying how well a manufacturing process
aligns with customer expectations.

A control chart helps in predicting process performance, understanding various production
patterns, and studying how a process changes or shifts away from the specified control limits
over time.
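This logic can be sketched in a few lines: limits are computed from historical in-control data, as described above, and current readings are then compared against them. All measurement values are hypothetical:

```python
import statistics

# Control limits from historical (in-control) data: center line and mean +/- 3 sigma
historical = [10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 10.2, 10.1]
center = statistics.mean(historical)
sigma = statistics.stdev(historical)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Compare current production data against the limits
current = [10.0, 10.3, 9.9, 13.0]
out_of_control = [x for x in current if not lcl <= x <= ucl]
print(out_of_control)
```

Points inside the limits reflect common-cause variation; the flagged reading signals a special cause that warrants investigation.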

Why use the 7 QC tools?

The 7 QC tools is the name given to a fixed set of graphical techniques identified as being most
helpful in troubleshooting quality-related issues. They are fundamental instruments for improving
process and product quality. They are used to examine the production process, identify key
issues, control fluctuations in product quality, and provide solutions for avoiding future defects.

These tools enable an organization to resolve its basic problems. When an organization starts its
quality improvement journey, it normally has much low-hanging fruit that can be tackled with these
basic 7 QC tools. The tools are easy to understand and implement and do not require complex
analytical or statistical competence.

When to use the 7 QC Tools?

Collectively, these tools are commonly referred to as the 7 QC tools. In the Define phase of the
DMAIC process, flowcharts are very important. In the Measure phase, the Fishbone Diagram, Pareto
Chart, and Control Charts are relevant. In the Analyze phase, the Scatter Diagram, Histogram, and
Check Sheet are relevant. The Control Chart is also relevant in the Improve phase.

Regression Control Charts


In statistical quality control, the regression control chart allows for monitoring change in a
process where two or more variables are correlated. A change in the dependent variable can be
detected, and a compensatory change in the independent variable can be recommended.

A regression control chart differs from a traditional control chart in four main respects:

 It is designed to control a varying (rather than a constant) average.
 The control limit lines are parallel to the regression line rather than horizontal.
 The computations are considerably more complex.
 It is appropriate for use in more complex situations.

The general idea of the regression control chart is as follows. Suppose one is monitoring the
relationship between the number of parts produced and the number of work hours expended. Within
reasonable limits, one may expect that the greater the number of work hours, the more parts are
produced. One could randomly draw 5 sample work days from each month and monitor that
relationship. The control limits established in the regression control chart allow one to detect
when the relationship changes, for example when productivity drops and more work hours are
necessary to produce the same number of parts.
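That idea can be sketched as follows. The work-hours/parts data are hypothetical, and offsetting the fitted line by three standard deviations of the residuals is one simple way (assumed here) to place control limits parallel to the regression line:

```python
import statistics

# Hypothetical monitoring data: work hours expended (X) and parts produced (Y)
hours = [35, 40, 42, 45, 50, 38, 44, 48, 41, 46]
parts = [70, 81, 83, 91, 99, 77, 88, 95, 82, 93]

# Fit the regression line Y = slope*X + intercept by least squares
n = len(hours)
mx, my = sum(hours) / n, sum(parts) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(hours, parts))
         / sum((x - mx) ** 2 for x in hours))
intercept = my - slope * mx

# Limits run parallel to the line, offset by +/- 3 stdevs of the residuals
residuals = [y - (slope * x + intercept) for x, y in zip(hours, parts)]
s = statistics.stdev(residuals)

def limits(x):
    center = slope * x + intercept
    return center - 3 * s, center + 3 * s

lo, hi = limits(43)
print(slope, lo, hi)
```

A day whose parts count falls below the lower limit for its work hours would signal the productivity drop described above.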

Process Capability analysis


Process capability analysis is a set of tools used to find out how well a given process meets a set of
specification limits. In other words, it measures how well a process performs.

In practice, it compares the distribution of sample values—representing the process outcome—to the
specification limits, which are the limits of what we want to achieve. Sometimes it also compares
against a specification target.

Process capability indices are usually used to describe the capability of a process. There are a
number of different process capability indices, and whether you calculate one or all of them may
depend on your analysis needs. However, calculating any process capability index assumes the
stability of your process; for unstable processes, capability indices are meaningless. So a first
step in process capability analysis is a check for stability throughout the process.

An important technique used to determine how well a process meets a set of specification limits is
called a process capability analysis. A capability analysis is based on a sample of data taken from a
process and usually produces:

1. An estimate of the DPMO (defects per million opportunities).


2. One or more capability indices.
3. An estimate of the Sigma Quality Level at which the process operates.
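As an illustration of the capability indices mentioned above, here is a minimal Cp/Cpk calculation for two-sided specification limits; the process data and limits are hypothetical:

```python
import statistics

# Cp/Cpk sketch for a two-sided specification (LSL, USL)
def cp_cpk(data, lsl, usl):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)               # potential capability (spread only)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # actual capability (accounts for centering)
    return cp, cpk

data = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.1, 9.9, 10.2, 9.8]
cp, cpk = cp_cpk(data, lsl=9.0, usl=11.0)
print(round(cp, 2), round(cpk, 2))
```

For a perfectly centered process Cp and Cpk coincide; an off-center process drags Cpk below Cp. Remember the caveat above: these indices are only meaningful if the process is stable.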


Capability Analysis for Measurement Data from a Normal Distribution

This procedure performs a capability analysis for data that are assumed to be a random sample from a
normal distribution. It calculates capability indices such as Cpk, estimates the DPM (defects per
million), and determines the sigma quality level (SQL) at which the process is operating. It can handle
two-sided symmetric specification limits, two-sided asymmetric limits, and one-sided limits.
Confidence limits for the most common capability indices may also be requested.
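Under the normality assumption this procedure makes, the DPM estimate is just the out-of-spec tail probability scaled to a million opportunities. A sketch with a hypothetical fitted mean, sigma, and specification limits:

```python
from statistics import NormalDist

# Hypothetical fitted process: mean 10.0, sigma 0.2, two-sided limits [9.0, 11.0]
mu, sigma, lsl, usl = 10.0, 0.2, 9.0, 11.0
nd = NormalDist(mu, sigma)

# Fraction of the fitted distribution falling outside the specification limits
frac_out = nd.cdf(lsl) + (1 - nd.cdf(usl))
dpm = 1e6 * frac_out
print(dpm)
```

With limits five sigmas from the mean on each side, the predicted DPM is well under one defect per million, which is what a highly capable process looks like under this model.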


Capability Analysis for Measurement Data from Non-Normal Distributions

This procedure performs a capability analysis for data that are not assumed to come from a normal
distribution. The program will fit up to 25 alternative distributions and list them according to
their goodness of fit. For a selected distribution, it then calculates equivalent capability
indices, the DPM, and the SQL.
Capability Analysis for Correlated Measurements

When the variables that characterize a process are correlated, separately estimating the capability of
each may give a badly distorted picture of how well the process is performing. In such cases, it is
necessary to estimate the joint probability that one or more variables will be out of spec. This requires
fitting a multivariate probability distribution. This procedure calculates capability indices, DPM, and
the SQL based on a multivariate normal distribution.

Capability Analysis for Counts or Proportions

When examination of an item or event results in a PASS or FAIL rather than a measurement, the
process capability analysis must be based on a discrete distribution. For very large lots, the
relevant distribution is the binomial. For small lots, or cases with limited opportunities for
failure, the hypergeometric distribution must be used.
Measurement System Analysis
Measurement System Analysis (MSA) is defined as an experimental and mathematical method of
determining the amount of variation that exists within a measurement process. Variation in the
measurement process contributes directly to overall process variability. MSA is used to certify
the measurement system for use by evaluating the system’s accuracy, precision, and stability.

A measurement systems analysis (MSA) is a thorough assessment of a measurement process, and
typically includes a specially designed experiment that seeks to identify the components of
variation in that measurement process.

Just as processes that produce a product may vary, the process of obtaining measurements and data
may also have variation and produce incorrect results. A measurement systems analysis evaluates the
test method, measuring instruments, and the entire process of obtaining measurements to ensure the
integrity of data used for analysis (usually quality analysis) and to understand the implications of
measurement error for decisions made about a product or process. MSA is an important element of
Six Sigma methodology and of other quality management systems.

MSA analyzes the collection of equipment, operations, procedures, software and personnel that
affects the assignment of a number to a measurement characteristic.

A measurement systems analysis considers the following:

 Selecting the correct measurement and approach


 Assessing the measuring device
 Assessing procedures and operators
 Assessing any measurement interactions
 Calculating the measurement uncertainty of individual measurement devices and/or
measurement systems

Why Perform Measurement System Analysis (MSA)

An effective MSA process helps assure that the data being collected are accurate and that the
system for collecting them is appropriate to the process. Good, reliable data can prevent wasted
time, labor, and scrap in a manufacturing process. For example, a major manufacturing company
began receiving calls from several of its customers reporting non-compliant materials received at
their facilities: the parts were not snapping together properly to form an even surface, or would
not lock in place. An audit found that the parts were being produced out of spec, even though the
operator was following the inspection plan and using the assigned gages. The problem was that the
gage did not have adequate resolution to detect the non-conforming parts. An ineffective
measurement system can allow bad parts to be accepted and good parts to be rejected, resulting in
dissatisfied customers and excessive scrap. MSA could have prevented the problem and assured that
accurate, useful data were being collected.

How to Perform Measurement System Analysis (MSA)

MSA is a collection of experiments and analyses performed to evaluate a measurement system’s
capability, performance, and the amount of uncertainty in the values it measures. We should review
the measurement data being collected, along with the methods and tools used to collect and record
them. The goal is to quantify the effectiveness of the measurement system, analyze the variation
in the data, and determine its likely source. We need to evaluate the quality of the data being
collected with regard to location and width variation. Collected data should be evaluated for
bias, stability, and linearity.

During an MSA activity, the amount of measurement uncertainty must be evaluated for each type of
gage or measurement tool defined within the process Control Plans. Each tool should have the correct
level of discrimination and resolution to obtain useful data. The process, the tools being used (gages,
fixtures, instruments, etc.) and the operators are evaluated for proper definition, accuracy, precision,
repeatability and reproducibility.
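Two of the simplest MSA quantities, bias and repeatability, can be sketched from repeated readings of a certified reference standard; the reference value and readings below are hypothetical:

```python
import statistics

# Hypothetical study: one operator measures the same certified standard repeatedly
reference = 25.00  # certified value of the standard
readings = [25.02, 24.98, 25.03, 25.01, 24.99, 25.02, 25.00, 25.01]

bias = statistics.mean(readings) - reference  # location error (accuracy)
repeatability = statistics.stdev(readings)    # spread under identical conditions (precision)
print(round(bias, 4), round(repeatability, 4))
```

A full gage R&R study would extend this by crossing multiple operators and parts to separate repeatability from reproducibility; this sketch covers only the single-gage, single-operator case.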

Design and Analysis of Experiment (DOE)


The term experiment is defined as a systematic procedure carried out under controlled conditions in
order to discover an unknown effect, to test or establish a hypothesis, or to illustrate a known
effect. When analyzing a process, experiments are often used to evaluate which process inputs have
a significant impact on the process output, and what the target level of those inputs should be to
achieve a desired result (output). Experiments can be designed in many different ways to collect
this information. Design of Experiments (DOE) is also referred to as Designed Experiments or
Experimental Design; all of these terms have the same meaning.

Experimental design can be used at the point of greatest leverage to reduce design costs by speeding
up the design process, reducing late engineering design changes, and reducing product material and
labor complexity. Designed Experiments are also powerful tools to achieve manufacturing cost
savings by minimizing process variation and reducing rework, scrap, and the need for inspection.

This Toolbox module includes a general overview of Experimental Design and links and other
resources to assist you in conducting designed experiments. A glossary of terms is also available at
any time through the Help function, and we recommend that you read through it to familiarize
yourself with any unfamiliar terms.

Design of Experiments (DOE)

Design of Experiments (DOE) is a branch of applied statistics focused on using the scientific method
for planning, conducting, analyzing and interpreting data from controlled tests or experiments. DOE is
a mathematical methodology used to effectively plan and conduct scientific studies that change input
variables (X) together to reveal their effect on a given response or the output variable (Y). In plain,
non-statistical language, the DOE allows you to evaluate multiple variables or inputs to a process or
design, their interactions with each other and their impact on the output. In addition, if performed and
analyzed properly you should be able to determine which variables have the most and least impact on
the output. By knowing this you can design a product or process that meets or exceeds quality
requirements and satisfies customer needs.


Why Utilize Design of Experiments (DOE)?

DOE allows the experimenter to manipulate multiple inputs to determine their effect on the output of
the experiment or process. By performing a multi-factorial or “full-factorial” experiment, DOE can
reveal critical interactions that are often missed when performing a single or “fractional factorial”
experiment. By properly utilizing DOE methodology, the number of trial builds or test runs can be
greatly reduced. A robust Design of Experiments can save project time and uncover hidden issues in
the process. The hidden issues are generally associated with the interactions of the various factors. In
the end, teams will be able to identify which factors impact the process the most and which ones have
the least influence on the process output.

When to Utilize Design of Experiments (DOE)?


Experimental design or Design of Experiments can be used during a New Product / Process
Introduction (NPI) project or during a Kaizen or process improvement exercise. DOE is generally
used in two different stages of process improvement projects.

 During the “Analyze” phase of a project, DOE can be used to help identify the Root Cause
of a problem. With DOE the team can examine the effects of the various inputs (X) on the
output (Y). DOE enables the team to determine which of the Xs impact the Y and which
one(s) have the most impact.
 During the “Improve” phase of a project, DOE can be used in the development of a
predictive equation, enabling the performance of what-if analysis. The team can then test
different ideas to assist in determining the optimum settings for the Xs to achieve the best Y
output.

Some knowledge of statistical tools and experimental planning is required to fully understand DOE
methodology. While there are several software programs available for DOE analysis, to properly
apply DOE you need to possess an understanding of basic statistical concepts.

Components of Experimental Design

Consider a cake-baking process. There are three aspects of the process that
are analyzed by a designed experiment:

1. Factors, or inputs to the process

Factors can be classified as either controllable or uncontrollable variables. In this case, the
controllable factors are the ingredients for the cake and the oven that the cake is baked in. The
controllable variables will be referred to throughout the material as factors. Note that the ingredients
list was shortened for this example – there could be many other ingredients that have a significant
bearing on the end result (oil, water, flavoring, etc). Likewise, there could be other types of factors,
such as the mixing method or tools, the sequence of mixing, or even the people involved. People are
generally considered a Noise Factor (see the glossary) – an uncontrollable factor that causes
variability under normal operating conditions, but we can control it during the experiment using
blocking and randomization. Potential factors can be categorized using the Fishbone Chart (Cause &
Effect Diagram) available from the Toolbox.

2. Levels, or settings of each factor in the study

Examples include the oven temperature setting and the particular amounts of sugar, flour, and eggs
chosen for evaluation.

3. Response, or output of the experiment

In the case of cake baking, the taste, consistency, and appearance of the cake are measurable
outcomes potentially influenced by the factors and their respective levels. Experimenters often desire
to avoid optimizing the process for one response at the expense of another. For this reason, important
outcomes are measured and analyzed to determine the factors and their settings that will provide the
best overall outcome for the critical-to-quality characteristics – both measurable variables and
assessable attributes.
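The three components above (factors, levels, response) can be illustrated with a toy two-level factorial experiment on the cake example. All data below are invented for demonstration; a real DOE would use randomized, replicated runs:

```python
# Hypothetical 2^2 factorial on the cake example: two factors
# (oven temperature, sugar amount), each at a low (-1) and high (+1)
# level, with a made-up taste score as the response.
runs = {  # (temperature, sugar) -> taste score (illustrative data)
    (-1, -1): 6.2, (+1, -1): 7.8,
    (-1, +1): 6.9, (+1, +1): 9.1,
}

def main_effect(factor_index):
    """Average response at the high level minus average at the low level."""
    high = [y for x, y in runs.items() if x[factor_index] == +1]
    low = [y for x, y in runs.items() if x[factor_index] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

def interaction():
    """Contrast for the temperature-sugar interaction (sign product of levels)."""
    signs = [x[0] * x[1] for x in runs]
    return sum(s * y for s, y in zip(signs, runs.values())) / 2

print("Temperature effect:", main_effect(0))  # 1.9
print("Sugar effect:", main_effect(1))        # 1.0
print("Interaction:", interaction())          # 0.3
```

Comparing the magnitudes of the effects tells the team which factor settings matter most; here temperature would dominate the (fictional) taste response.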
Acceptance Sampling Plan
An inspection of a product or service that determines whether or not the product will be accepted. For
example, a furniture manufacturer would use an acceptance sampling plan to make acceptance
decisions related to the type and quality of wood and similar raw materials they purchase for inclusion
into their finished products.

Meaning of Acceptance Sampling or Sampling Inspection

One method of controlling the quality of a product is 100% inspection, which requires huge
expenditure in terms of time, money and labour. Moreover, due to the boredom and fatigue involved in
a repetitive inspection process, there is a possibility of oversight, and some defective products may
pass the inspection point.

Also, when the quality of a product is tested by destructive testing (e.g., the life of a candle or the
testing of electrical fuses), 100% inspection would destroy all the products.

The alternative is statistical sampling inspection methods. Here from the whole lot of products/items
to be inspected, some items are selected for inspection.

If that sample of items conforms to the desired quality requirements, the whole lot is accepted; if it
does not, the whole lot is rejected. Thus the sample items are considered to be representative of
the whole lot. This method of acceptance or rejection based on a sample is called Acceptance Sampling.

In general, the acceptance sampling method proves to be economical and is used under the assumption
that the quality characteristics of the items are under control and relatively homogeneous.

Classification of Acceptance Sampling Plan

Depending upon the type of inspection acceptance sampling may be classified in two ways:
(i) Acceptance sampling on the basis of attributes i.e. GO and NOT GO gauges, and

(ii) Acceptance sampling on the basis of variables.

In acceptance sampling by attributes, no actual measurement is done and the inspection is done by
way of GO and NOT-GO gauges. If the product conforms to the given specifications it is accepted;
otherwise it is rejected. The magnitude of the error is not important in this case.

For example, if cracks are the criterion of inspection, products with cracks will be rejected and those
without cracks accepted; the shape and size of the cracks are not measured or considered.

In acceptance sampling by variables, the actual measurements of dimensions are taken or physical and
chemical testing of the characteristics of sample of materials/products is done. If the results are as per
specifications the lot is accepted otherwise rejected.

Advantages of Acceptance Sampling Plan

(i) The method is applicable in those industries where there is mass production and the industries
follow a set production procedure.

(ii) The method is economical and easy to understand.

(iii) Causes less fatigue and boredom.

(iv) Computation work involved is comparatively very small.

(v) The people involved in inspection can be easily imparted training.

(vi) Products that are destroyed during inspection can only be inspected by sampling.

(vii) Due to quick inspection process, scheduling and delivery times are improved.

Limitations of Acceptance Sampling Plan

(i) It does not give 100% assurance of conformance to specifications, so there is always some
likelihood/risk of drawing a wrong inference about the quality of the batch/lot.

(ii) Success of the system depends on sampling randomness, the quality characteristics to be tested,
batch size and the criteria for acceptance of the lot.

Terms Used in Acceptance Sampling

Following terms are generally used in acceptance sampling:

1. Acceptable Quality Level (AQL)


It is the desired quality level at which the probability of acceptance is high. It represents the maximum
proportion of defectives that the consumer finds acceptable, or the maximum percent defective
that, for the purpose of sampling inspection, can be considered satisfactory.

2. Lot Tolerance Percent Defective (LTPD) or Rejectable Quality Level (RQL)

It is the quality level at which the probability of acceptance is low and below this level the lots are
rejected. This prescribes the dividing line between good and bad lots. Lots at this quality level are
considered to be poor.

3. Average outgoing Quality (A.O.Q)

Acceptance sampling plans provide the assurance that the average quality level, or percent defective,
actually going to consumers will not exceed a certain limit. The figure demonstrates the concept of average
outgoing quality in relation to the actual percent defective being produced.

The AOQ curve indicates that as the actual percent defective in a production process increases,
initially lots continue to be passed for acceptance even though the number of defectives has
gone up, and so the percent defective going to the consumer increases.

If this upward trend continues, the acceptance plan begins to reject lots. When lots are rejected,
100% inspection is carried out and defective units are replaced by good ones. The net effect is to
improve the average quality of the outgoing products, since the rejected lots that are ultimately
accepted after screening contain only non-defective items (because of the 100% inspection).
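Assuming a single sampling plan with rectifying inspection (rejected lots are 100% screened and defectives replaced), the AOQ at each incoming quality level can be approximated as the incoming fraction defective times the probability of acceptance. The sketch below uses a hypothetical plan (n = 50, c = 2) to locate the AOQ limit (AOQL), the peak of the curve described above:

```python
from math import comb

def p_accept(p, n, c):
    """Binomial probability of accepting a lot with fraction defective p
    under a single sampling plan (sample size n, acceptance number c)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

def aoq(p, n, c):
    """Average outgoing quality: accepted lots pass defectives through,
    rejected lots are 100% inspected and rectified (approximate formula)."""
    return p * p_accept(p, n, c)

# Sweep incoming quality to locate the AOQ limit (worst outgoing quality)
n, c = 50, 2  # hypothetical plan
points = [(p / 1000, aoq(p / 1000, n, c)) for p in range(1, 201)]
aoql = max(points, key=lambda t: t[1])
print(f"AOQL = {aoql[1]:.4f} at incoming fraction defective {aoql[0]:.3f}")
```

The printed AOQL is the worst average outgoing quality the consumer can receive under this (assumed) plan, no matter how bad the incoming quality gets.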

4. Operating Characteristic Curve or O.C. Curve

The operating characteristic curve for a sampling plan is a graph of the fraction defective in a lot against
the probability of acceptance. In practice, the performance of acceptance sampling in distinguishing
defective from acceptable items, or good lots from bad lots, mainly depends upon the sample size (n) and the
number of defectives permissible in the sample.

The O.C. curve of a 100 percent inspection plan, shown in Fig., is said to be an ideal curve,
because it is generated by an acceptance plan that creates no risk for either the producer or the
consumer. Fig. shows the O.C. curve that passes through two stipulated points, i.e. the two points
pre-agreed by the producer and the consumer: AQL and LTPD.
Usually the producer’s and consumer’s risks are agreed upon and explicitly recorded in
quantitative terms.

This leads to the following two types of risks:

(i) Producer’s risk (α): the probability that a lot of acceptable quality (at AQL) is rejected by the sampling plan.

(ii) Consumer’s risk (β): the probability that a lot of poor quality (at LTPD) is accepted by the sampling plan.

The merit of any sampling plan depends on the relationship of sampling cost to risk. As the cost of
inspection goes down, the cost of accepting defectives goes up.
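For a single sampling plan, the ordinate of the O.C. curve at any lot fraction defective p is the binomial probability of finding c or fewer defectives in the sample. The sketch below evaluates a hypothetical plan (n = 80, c = 2) at assumed AQL and LTPD values to obtain the producer’s and consumer’s risks:

```python
from math import comb

def p_accept(p, n, c):
    """O.C. curve ordinate: probability of acceptance when the true
    lot fraction defective is p (single sampling plan n, c)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Hypothetical plan and agreed quality levels (assumptions for illustration)
n, c = 80, 2
aql, ltpd = 0.01, 0.06

producer_risk = 1 - p_accept(aql, n, c)   # alpha: good lot rejected
consumer_risk = p_accept(ltpd, n, c)      # beta: poor lot accepted
print(f"alpha = {producer_risk:.3f}, beta = {consumer_risk:.3f}")
```

Plotting `p_accept` over a range of p values would reproduce the O.C. curve itself; steepening it (lower alpha and beta at the same AQL/LTPD) requires a larger sample size.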

Characteristics of O.C. Curve

(i) The larger the sample size and the acceptance number, the steeper the slope of the O.C. curve.

(ii) The O.C. curves of sampling plans with an acceptance number greater than zero are superior to
those with an acceptance number of zero.

(iii) Fixed sample size tends towards constant quality production.

Quality Cost
Cost of Quality (COQ) is a measure that quantifies the cost of control/conformance and the cost of
failure of control/non-conformance. In other words, it sums up the costs related to prevention and
detection of defects and the costs due to occurrences of defects.

Definition by ISTQB

Cost of quality: The total costs incurred on quality activities and issues and often split into prevention
costs, appraisal costs, internal failure costs and external failure costs.

Definition by QAI
Money spent beyond expected production costs (labor, materials, equipment) to ensure that the
product the customer receives is a quality (defect free) product. The Cost of Quality includes
prevention, appraisal, and correction or repair costs.

Quality costs are categorized into four main types. These are:

 Prevention costs
 Appraisal costs
 Internal failure costs and
 External failure costs.

These four types of quality costs are briefly explained below:

(i) Prevention costs

It is much better to prevent defects than to find and remove them from products. The costs
incurred to avoid or minimize the number of defects in the first place are known as prevention costs.
Some examples of prevention costs are improvement of manufacturing processes, worker training,
quality engineering, statistical process control, etc.

(ii) Appraisal costs

Appraisal costs (also known as inspection costs) are those costs that are incurred to identify defective
products before they are shipped to customers. All costs associated with the activities that are
performed during manufacturing processes to ensure required quality standards are also included in
this category. Identification of defective products involves maintaining a team of inspectors, which may
be very costly for some organizations.

(iii) Internal failure costs

Internal failure costs are those costs that are incurred to remove defects from the products before
shipping them to customers. Examples of internal failure costs include cost of rework, rejected
products, scrap etc.

(iv) External failure costs

If defective products have been shipped to customers, external failure costs arise. External failure
costs include warranties, replacements, lost sales because of bad reputation, payment for damages
arising from the use of defective products etc. The shipment of defective products can dissatisfy
customers, damage goodwill and reduce sales and profits.

FORMULA / CALCULATION

Cost of Quality (COQ) = Cost of Control + Cost of Failure of Control


where

Cost of Control = Prevention Cost + Appraisal Cost

and

Cost of Failure of Control = Internal Failure Cost + External Failure Cost

NOTES

 In its simplest form, COQ can be calculated in terms of effort (hours/days).


 A better approach will be to calculate COQ in terms of money (converting the effort into
money and adding any other tangible costs like test environment setup).
 The best approach will be to calculate COQ as a percentage of total cost. This allows for
comparison of COQ across projects or companies.
 To ensure impartiality, it is advised that the Cost of Quality of a project/product be calculated
and reported by a person external to the core project/product team (Say, someone from the
Accounts Department).
 It is desirable to keep the Cost of Quality as low as possible. However, this requires a fine
balancing of costs between Cost of Control and Cost of Failure of Control. In general, a
higher Cost of Control results in a lower Cost of Failure of Control. But, the law of
diminishing returns holds true here as well.
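The formula above can be worked through with made-up cost figures; every number below is an assumption for demonstration only:

```python
# Hypothetical quarterly cost figures (currency units) used to
# illustrate the COQ formula from the text.
costs = {
    "prevention": 12_000,        # training, SPC, process improvement
    "appraisal": 18_000,         # inspection, test environment setup
    "internal_failure": 25_000,  # rework, scrap
    "external_failure": 30_000,  # warranty, returns
}
total_production_cost = 500_000  # assumed total cost base

cost_of_control = costs["prevention"] + costs["appraisal"]
cost_of_failure = costs["internal_failure"] + costs["external_failure"]
coq = cost_of_control + cost_of_failure

print(f"COQ = {coq}")                                            # 85000
print(f"COQ as % of total = {coq / total_production_cost:.1%}")  # 17.0%
```

Expressing COQ as a percentage of the total cost base, as the notes recommend, is what makes the figure comparable across projects of different sizes.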

Process failure Mode and effect analysis (PFMEA)


Introduction to Process Failure Mode and Effects Analysis (PFMEA)

Manufacturing and process engineers envision a process free of errors. Unfortunately, errors, and
especially errors propagated when people are present, can be quite catastrophic. Process Failure Mode
and Effects Analysis (PFMEA) looks at each process step to identify risks and possible errors from
many different sources. The sources most often considered are:

 Man
 Methods
 Material
 Machinery
 Measurement
 Mother Earth (Environment)

What is Process Failure Mode and Effects Analysis (PFMEA)?

PFMEA is a methodical approach used for identifying risks on process changes. The Process FMEA
initially identifies process functions, failure modes and their effects on the process. If there are design
inputs, or special characteristics, the effect on the end user is also included. The severity ranking, or
danger, of the effect is determined for each effect of failure. Then, the causes and their mechanisms of the
failure mode are identified. The assumption that the design is adequate keeps the focus on the process.
A high probability of a cause drives actions to prevent or reduce the impact of the cause on the failure
mode. The detection ranking determines the ability of specific tests to confirm the failure mode /
causes are eliminated. The PFMEA also tracks improvements through Risk Priority Number (RPN)
reductions. By comparing the before and after RPN, a history of improvement and risk mitigation can
be chronicled.

Why Perform Process Failure Mode and Effects Analysis (PFMEA)?

Risk is the substitute for failure on new processes. It is a good practice to identify risks for each
process step as early as possible. The main goal is to identify risk prior to tooling acquisition.
Mitigation of the identified risk prior to first article or Production Part Approval Process (PPAP) will
validate the expectation of superior process performance.

Risks are identified on new technology and processes, which if left unattended, could result in failure.
The PFMEA is applied when:

 There is a new technology or new process introduced


 There is a current process with modifications, which may include changes due to updated
processes, continuous Improvement, Kaizen or Cost of Quality (COQ).
 There is a current process exposed to a new environment or change in location (no physical
change made to process)

How to Perform Process Failure Mode and Effects Analysis (PFMEA)?

There are five primary sections of the Process FMEA. Each section has a distinct purpose and a
different focus. The PFMEA is completed in sections at different times within the project timeline, not
all at once. The Process FMEA form is completed in the following sequence:

PFMEA Section 1 (Quality-One Path 1)

Process Name / Function

The Process Name / Function column permits the Process Engineer (PE) or Manufacturing Engineer (ME) to
describe the process technology that is being analyzed. The process can be a manufacturing operation
or an assembly. The function is the “Verb-Noun” pair that describes what the process operation does.
There may be many functions for any one process operation.

Requirement

The requirements, or measurements, of the process function are described in the second column. The
requirements are either provided by a drawing or a list of special characteristics. A Characteristics
Matrix, which is a form of Quality Function Deployment (QFD), may be used and will link
characteristics to their process operations. The requirement must be measurable and should have test
and inspection methods defined. These methods will later be placed on the Control Plan. The first
opportunity for recommended action may be to investigate and clarify the requirements and
characteristics of the product with the design team and Design FMEA.

Failure Mode

Failure Modes are the anti-functions or requirements not being met. There are 5 types of Failure
Modes:

(i) Full Failure

(ii) Partial Failure

(iii) Intermittent Failure

(iv) Degraded Failure

(v) Unintentional Failure

Effects of Failure

The effects of a failure are focused on impacts to the processes, subsequent operations and possibly
customer impact. Many effects could be possible for any one failure mode. All effects should appear
in the same cell next to the corresponding failure mode. It is also important to note that there may be
more than one customer; both internal and external customers may be affected.

Severity

The Severity of each effect is selected based on both Process Effects and Design Effects. The
severity ranking typically ranges from 1 to 10.

Typical Severity for Process Effects (when no Special Characteristics / design inputs are given) is as
follows:

 2-4: Minor Disruption with rework / adjustment in stations; slows down production (does not
describe a lean operation)
 5-6: Minor disruption with rework out of station; additional operations required (does not
describe a lean operation)
 7-8: Major disruption, rework and/or scrap is produced; may shutdown lines at customer or
internally within the organization
 9-10: Regulatory and safety of the station is a concern; machine / tool damage or unsafe work
conditions

Typical Severity for Design Effects (when Special Characteristics / design inputs are given) is as
follows:

 2-4: Annoyance or squeak and rattle; visual defects which do not affect function
 5-6: Degradation or loss of a secondary function of the item studied
 7-8: Degradation or loss of the primary function of the item studied
 9-10: Regulatory and / or Safety implications

The highest severity is chosen from the many potential effects and placed in the Severity column.
Actions may be identified to change the process design direction for any failure mode with an effect of
failure ranked 9 or 10. If a recommended action is identified, it is placed in the Recommended
Actions column of the PFMEA.

Classification

Classification refers to the type of characteristics indicated by the risk. Many types of special
characteristics exist in different industries. These special characteristics typically require additional
work, either design error proofing, process error proofing, process variation reduction (Cpk) or
mistake proofing. The Classification column designates where the characteristics may be identified
and later transferred to a Control Plan.

PFMEA Section 2 (Quality-One Path 2)

Potential Causes / Mechanisms of Failure

Causes are defined for the Failure Mode and should be determined for their impact on the Failure
Mode being analyzed. Causes typically follow the Fishbone / Ishikawa Diagram approach, with the
focus of cause brainstorming on the 6M’s: Man, Method, Material, Machine, Measurement and
Mother Earth (Environment). Use of words like bad, poor, defective and failed should be avoided as
they do not define the cause with enough detail to make risk calculations for mitigation.
Current Process Controls Prevention

The prevention strategy used by a manufacturing or process team may benefit the process by lowering
occurrence or probability. The stronger the prevention, the more evidence that the potential cause can be
eliminated by process design. The use of verified process standards, proven technology (with similar
stresses applied), Programmable Logic Controllers (PLC), simulation technology and Standard Work
are typical Prevention Controls.

Occurrence

The Occurrence ranking is an estimate based on known data or the lack of it. The Occurrence in Process
FMEAs can be related to known / similar technology or new process technology. A modification to
the ranking table is suggested based on volumes and specific use.

Typical Occurrence rankings for new process technology (similar to DFMEA Occurrence Ranking)
are as follows:

 1: Prevented causes due to using a known design standard


 2: Identical or similar design with no history of failure
 This ranking is often used improperly. The stresses in the new application and a sufficient
sample of products to gain history are required to select this ranking value.
 3-4: Isolated failures
 Some confusion may occur when trying to quantify “isolated”
 5-6: Occasional failures have been experienced in the field or in development / verification
testing
 7-9: New design with no history (based on a current technology)
 10: New design with no experience with technology

Typical Occurrence rankings for known / similar technology are as follows:

 1: Prevented through product / process design; error proofed


 2: 1 in 1,000,000
 3: 1 in 100,000
 4: 1 in 10,000
 5: 1 in 2,000
 6: 1 in 500
 7: 1 in 100
 8: 1 in 50
 9: 1 in 20
 10: 1 in 10
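One way to apply the “1 in N” table programmatically is a simple lookup from an observed failure fraction to a ranking. The thresholds below mirror the table above; the function itself is an illustrative sketch, not part of any FMEA standard:

```python
import bisect

# Failure fractions from the "known / similar technology" table above,
# ordered from best (ranking 2) to worst (ranking 10).
RATES = [1/1_000_000, 1/100_000, 1/10_000, 1/2_000,
         1/500, 1/100, 1/50, 1/20, 1/10]

def occurrence_ranking(failure_rate, error_proofed=False):
    """Return the Occurrence ranking (1-10) for an observed failure fraction."""
    if error_proofed:
        return 1  # prevented through product / process design
    # Index of the first table rate the observed rate does not fall below;
    # offset by 2 because the table starts at ranking 2.
    i = bisect.bisect_left(RATES, failure_rate)
    return min(i + 2, 10)

print(occurrence_ranking(1/600))  # -> 6 (between 1 in 2,000 and 1 in 500)
```

Rates worse than 1 in 10 simply cap at ranking 10, and the `error_proofed` flag handles the ranking-1 case where the cause is designed out entirely.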

Actions may be directed against causes of failure which have a high occurrence. Special attention
must be placed on items with Severity 9 or 10. These severity rankings must be examined to assure
that due diligence has been satisfied.

PFMEA Section 3 (Quality-One Path 3)

Current Process Controls Detection

The activities conducted to verify the product meets the specifications detailed by the product or
process design are placed in the Current Process Controls Detection column. Examples are:
 Error proofing devices (cannot make nonconforming product)
 Mistake proofing devices (cannot pass nonconforming product)
 Inspection devices which collect variable data
 Alarms for unstable process parameters
 Visual inspection

Detection Rankings

Detection Rankings are assigned to each method or inspection based on the type of technique used.
Each detection control is given a detection ranking using a predetermined scale. There is often more
than one test / evaluation technique per Cause-Failure Mode combination. Listing all in one cell and
applying a detection ranking for each is the best practice. The lowest of the detection rankings is then
placed in the detection column. Typical Process Controls Detection Rankings are as follows:

 1: Error (Cause) has been fully prevented and cannot occur


 2: Error Detection in-station, will not allow a nonconforming product to be made
 3: Failure Detection in-station, will not allow nonconforming product to pass
 4: Failure Detection out of station, will not leave plant / pass through to customer
 5-6: Variables gage, attribute gages, control charts, etc., requires operator to complete the
activity
 7-8: Visual, tactile or audible inspection
 9: Lot sample by inspection personnel
 10: No Controls

Actions may be necessary to improve inspection or evaluation capability. The improvement will
address the weakness in the inspection and evaluation strategy. The actions are placed in the
Recommended Actions Column.

PFMEA Section 4

Risk Priority Number (RPN)

The Risk Priority Number (RPN) is the product of the three previously selected rankings, Severity *
Occurrence * Detection. RPN thresholds must not be used to determine the need for action. RPN
thresholds are not permitted mainly due to two factors:

 Poor behavior by design engineers trying to get below the specified threshold
 This behavior does not improve or address risk. There is no RPN value above which an action
should be taken or below which a team is excused from one.
 “Relative Risk” is not always represented by RPN

Recommended Actions

The Recommended Actions column is the location within the Process FMEA where all potential
improvements are placed. Completed actions are the purpose of the PFMEA. Each action must be detailed
enough that it makes sense standing alone in a risk register or actions list. Actions are directed
against one of the rankings previously assigned. The objectives are as follows:

 Eliminate Failure Modes with a Severity 9 or 10


 Lower Occurrence on Causes by error proofing, reducing variation or mistake proofing
 Lower Detection on specific test improvements
Responsibility and Target Completion Date

Enter the name of the responsible person and the date by which the action should be completed. A
milestone name can substitute for a date if a timeline shows the linkage between the date and the
selected milestone.

PFMEA Section 5

Actions Taken and Completion Date

List the Actions Taken or reference the test report which indicates the results. The Process FMEA
should result in actions which bring higher-risk items to an acceptable level of risk. It is important to
note that acceptable risk is desirable and mitigation of high risk to lower risk is the primary goal.

Re-Rank RPN

The new (re-ranked) RPN should be compared with the original RPN. A reduction in this value is
desirable. Residual risk may still be too high after actions have been taken. If this is the case, a new
action line would be developed. This is repeated until an acceptable residual risk has been obtained.
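The RPN arithmetic and a before/after comparison can be sketched as follows; the rankings used are hypothetical:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the three rankings (each 1-10)."""
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 10:
            raise ValueError("rankings must be between 1 and 10")
    return severity * occurrence * detection

# Hypothetical failure mode, before and after mitigation actions.
# Severity rarely changes unless the design itself changes; the actions
# here lower occurrence (error proofing) and detection (in-station check).
before = rpn(severity=8, occurrence=6, detection=7)  # 336
after = rpn(severity=8, occurrence=3, detection=3)   # 72
print(f"RPN before: {before}, after: {after}")
```

Consistent with the text, the comparison is used only to chronicle improvement, never as a threshold that decides whether action is required.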


SERVQUAL Model of Measuring Service Quality


The SERVQUAL Model is an empirical model by Zeithaml, Parasuraman and Berry to
compare service quality performance with customer service quality needs. It is used to do a gap
analysis of an organization’s service quality performance against the service quality needs of its
customers. That’s why it’s also called the GAP model.

It takes into account the perceptions of customers of the relative importance of service attributes. This
allows an organization to prioritize.

There are five core components of service quality:


1. Tangibles: physical facilities, equipment, staff appearance, etc.
2. Reliability: ability to perform service dependably and accurately.
3. Responsiveness: willingness to help and respond to customer need.
4. Assurance: ability of staff to inspire confidence and trust.
5. Empathy: the extent to which caring individualized service is given.
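A common way to score these five dimensions is to collect paired expectation and perception ratings, compute the gap (P − E) for each, and weight the gaps by customer-assigned importance. All numbers below are illustrative assumptions:

```python
# Hypothetical SERVQUAL-style scoring: expectation (E) and perception (P)
# ratings on a 1-7 scale for the five dimensions, with importance weights
# that sum to 1. Gap = P - E; negative gaps signal a service shortfall.
dimensions = {
    # name: (expectation, perception, weight) -- illustrative numbers
    "tangibles":      (5.0, 5.5, 0.10),
    "reliability":    (6.5, 5.0, 0.30),
    "responsiveness": (6.0, 5.5, 0.25),
    "assurance":      (6.0, 6.0, 0.20),
    "empathy":        (5.5, 5.0, 0.15),
}

weighted_gap = sum(w * (p - e) for e, p, w in dimensions.values())
for name, (e, p, w) in dimensions.items():
    print(f"{name:15s} gap = {p - e:+.1f}")
print(f"Overall weighted gap score: {weighted_gap:+.3f}")
```

Weighting is what lets an organization prioritize: in this fictional data the reliability shortfall dominates the overall score because customers rated it most important.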

The four themes that were identified by the SERVQUAL developers were numbered and labelled
as:

1. Consumer expectation – Management Perception Gap (Gap 1):

Management may have inaccurate perceptions of what consumers (actually) expect. The reason for
this gap is lack of proper market/customer focus. The presence of a marketing department does
not automatically guarantee market focus. It requires the appropriate management processes, market
analysis tools and attitude.

2. Service Quality Specification Gap (Gap 2):

There may be an inability on the part of the management to translate customer expectations into
service quality specifications. This gap relates to aspects of service design.

3. Service Delivery Gap (Gap 3):

Guidelines for service delivery do not guarantee high-quality service delivery or performance. There
are several reasons for this. These include: lack of sufficient support for the frontline staff, process
problems, or frontline/contact staff performance variability.

4. External Communication Gap (Gap 4):

Consumer expectations are fashioned by the external communications of an organization. A realistic
expectation will normally promote a more positive perception of service quality. A service
organization must ensure that its marketing and promotion material accurately describes the service
offering and the way it is delivered.

5. These four gaps cause a fifth gap (Gap 5)

This fifth gap is the difference between customer expectations and perceptions of the service actually
received. Perceived quality of service depends on the size and direction of Gap 5, which in
turn depends on the nature of the gaps associated with the marketing, design and delivery of services.
So, Gap 5 is the product of Gaps 1, 2, 3 and 4. If these four gaps, all of which are located below the
line that separates the customer from the company, are closed, then Gap 5 will close.

How to measure Service Quality?

1. Mystery Shopping

This is a popular technique used for retail stores, hotels, and restaurants, but it works for any other
service as well. It consists of hiring an ‘undercover customer’ to test your service quality – or
putting on a fake moustache and going yourself, of course.

The undercover agent then assesses the service based on a number of criteria, for example those
provided by SERVQUAL. This offers more insights than simply observing how your employees
work. Which will probably be outstanding — as long as their boss is around.

2. Post Service Rating

This is the practice of asking customers to rate the service right after it’s been delivered.

With Userlike’s live chat, for example, you can set the chat window to change into a service rating
view once it closes. The customers make their rating, perhaps share some explanatory feedback, and
close the chat.

Something similar is done with ticket systems like Help Scout, where you can rate the service
response from your email inbox.

It’s also done in phone support. The service rep asks whether you’re satisfied with her service
delivery, or you’re asked to stay on the line to complete an automatic survey. The latter version is so
annoying, though, that it kind of destroys the entire service experience.

Different scales can be used for the post service rating. Many make use of a number-rating from 1 –
10. There’s possible ambiguity here, though, because cultures differ in how they rate their
experiences.

3. Follow-Up Survey

With this method you ask your customers to rate your service quality through an email survey – for
example via Google Forms. It has a couple of advantages over the post-service rating.

For one, it gives your customer the time and space for more detailed responses. You can send a
SERVQUAL type of survey, with multiple questions instead of one. That’d be terribly annoying in a
post-service rating.

It also provides a more holistic overview of your service. Instead of a case-by-case assessment, the
follow-up survey measures your customers’ overall opinion of your service.

It’s also a useful technique if you didn’t have the post service rating in place yet and want a quick
overview of the state of your service quality.

4. In-App Survey

With an in-app survey, the questions are asked while the visitor is on the website or in the app, instead
of after the service or via email. It can be one simple question – e.g. ‘how would you rate our
service’ – or it could be a couple of questions.

Convenience and relevance are the main advantages. SurveyMonkey offers some great tools for
implementing something like this on your website.

5. Customer Effort Score (CES)

This metric was proposed in an influential Harvard Business Review article. In it, the authors argue
that while many companies aim to 'delight' the customer – to exceed service expectations –
customers are more likely to punish companies for bad service than they are to reward
companies for good service.

While the costs of exceeding service expectations are high, they show that the payoffs are marginal.
Instead of delighting our customers, so the authors argue, we should make it as easy as possible for
them to have their problems solved.

That’s what they found had the biggest positive impact on the customer experience, and what they
propose measuring.

6. Social Media Monitoring

This method has been gaining momentum with the rise of social media. For many people, social
media serve as an outlet. A place where they can unleash their frustrations and be heard.

And because of that, they are the perfect place to hear the unfiltered opinions of your customers – if
you have the right tools. Facebook and Twitter are obvious choices, but review platforms like
TripAdvisor or Yelp can also be very relevant. Buffer suggests asking your social media followers for
feedback on your service quality.

Two great tools to track who’s talking about you are Mention and Google Alerts.

7. Documentation Analysis

With this qualitative approach you read through your written service records or listen to your recorded
ones. You'll definitely want to go through the documentation of low-rated service deliveries, but
it can also be interesting to read through the documentation of service agents that always rank high.
What are they doing better than the rest?

The hurdle with the method isn’t in the analysis, but in the documentation. For live chat and email
support it's rather easy, but for phone support it requires an annoying voice at the start of the call:
"This call may be recorded for quality purposes".

8. Objective Service Metrics

These stats deliver the objective, quantitative analysis of your service. These metrics aren’t enough
to judge the quality of your service by themselves, but they play a crucial role in showing you the
areas you should improve in.

 Volume per channel. This tracks the number of inquiries per channel. When combined with
other metrics, like those covering efficiency or customer satisfaction, it allows you to decide
which channels to promote or cut down.
 First response time. This metric tracks how quickly a customer receives a response to their
inquiry. This doesn't mean their issue is solved, but it's the first sign of life – notifying
them that they've been heard.
 Response time. This is the total average of time between responses. So let’s say your email
ticket was resolved with 4 responses, with respective response times of 10, 20, 5, and 7
minutes. Your response time is 10.5 minutes. Concerning reply times, most people reaching
out via email expect a response within 24 hours; for social channels it’s 60 minutes. Phone
and live chat require an immediate response, under 2 minutes.
 First contact resolution ratio. Divide the number of issues that are resolved through a single
response by the number that required more responses. Forrester research showed that first
contact resolutions are an important customer satisfaction factor for 73% of customers.
 Replies per ticket. This shows how many replies your service team needs on average to
close a ticket. It’s a measure of efficiency and customer effort.
 Backlog Inflow/Outflow. This is the number of cases submitted compared to the number of
cases closed. A growing number indicates that you’ll have to expand your service team.
 Customer Success Ratio. A good service doesn't mean your customers always find what
they want. But keeping track of the number who found what they were looking for versus those
who didn't can show whether your customers have the right idea about your offerings.
 'Handovers' per issue. This tracks how many different service reps are involved per
issue. Customers hate handovers, especially in phone support, where repeating the issue each
time is necessary; HBR identified it as one of the four most common service complaints.
 Things Gone Wrong. The number of complaints/failures per customer inquiry. It helps you
identify products, departments, or service agents that need some ‘fixing’.
 Instant Service / Queuing Ratio. Nobody likes to wait. Instant service is the best service.
This metric keeps track of the ratio of customers that were served instantly versus those that
had to wait. The higher the ratio, the better your service.
 Average Queueing Waiting Time. The average time that queued customers have to wait to
be served.
 Queueing Hang-ups. How many customers quit the queueing process. Each counts as a lost
service opportunity.
 Problem Resolution Time. The average time before an issue is resolved.
 Minutes Spent Per Call. This can give you insight into who your most efficient operators are.
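Several of these metrics boil down to simple arithmetic over your ticket data. The sketch below is a minimal Python illustration with made-up tickets (the first one reuses the 10, 20, 5, and 7 minute response times from the example above); field names like `response_times` are assumptions, not any particular tool's API.

```python
from statistics import mean

# Hypothetical ticket records (illustrative numbers only): each ticket lists
# the minutes before each response, plus the number of reps involved.
tickets = [
    {"response_times": [10, 20, 5, 7], "handovers": 1},  # the 4-response example above
    {"response_times": [3],            "handovers": 1},  # first contact resolution
    {"response_times": [15, 30],       "handovers": 2},
]

# First response time: how quickly the customer hears anything at all.
first_response = mean(t["response_times"][0] for t in tickets)

# Response time: the total average of time between responses.
all_gaps = [gap for t in tickets for gap in t["response_times"]]
avg_response = mean(all_gaps)

# Replies per ticket: average number of responses needed to close a ticket.
replies_per_ticket = mean(len(t["response_times"]) for t in tickets)

# First contact resolution ratio, per the definition above: tickets resolved
# in a single response divided by tickets that needed more than one.
single = sum(1 for t in tickets if len(t["response_times"]) == 1)
fcr_ratio = single / (len(tickets) - single)

print(f"first response: {first_response:.2f} min")
print(f"avg response:   {avg_response:.2f} min")
print(f"replies/ticket: {replies_per_ticket:.2f}")
print(f"FCR ratio:      {fcr_ratio:.2f}")
```

The first ticket alone averages to the 10.5 minutes quoted above ((10 + 20 + 5 + 7) / 4); pooling all three tickets gives the team-wide figures.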

Some of these measures are also financial metrics, such as the minutes spent per call and number of
handovers. You can use them to calculate your service costs per service contact. Winning the award
for the world’s best service won’t get you anywhere if the costs eat up your profits.

Some service tools keep track of these sorts of metrics automatically, like Talkdesk for phone and
Userlike for live chat support. If you make use of communication tools that aren't dedicated to service,
tracking them will be a bit more work.

One word of caution for all of the above methods and metrics: beware of averages; they will
deceive you. If your dentist delivers great service 90% of the time but has a habit of binge drinking
and pulling out the wrong teeth the rest of the time, you won't stick around for long.

A more realistic picture emerges if you keep track of the outliers and standard deviation as well.
Measure your service, aim for a high average, and improve by diminishing the outliers.
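The dentist example can be made concrete in a few lines of Python. The ratings below are made up purely for illustration: both sets average exactly the same, and only the standard deviation and the worst case reveal the difference.

```python
from statistics import mean, stdev

# Two hypothetical dentists with identical average ratings on a 0-10 scale,
# but very different consistency (illustrative numbers only).
steady  = [8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
erratic = [9, 9, 9, 9, 9, 9, 9, 9, 8, 0]  # great most of the time, disastrous once

print(mean(steady), stdev(steady), min(steady))     # 8.0, zero spread, worst case 8
print(mean(erratic), stdev(erratic), min(erratic))  # also 8.0 on average, worst case 0
# The averages are identical; only the spread (stdev) and the worst case
# expose the outlier that the average hides.
```

If you only tracked the average, both services would look equally good; tracking the spread and the minimum is what flags the second one as a problem.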
