
INDE3314 QUALITY PLANNING AND CONTROL, SPRING 2023

INSTRUCTOR: Sezgin Çağlar AKSEZER

Study of the Quality Control Process

Hakan Emir Durmaz 19INDE1038
Eda Öztürk 20INDE1008
Recep Kaan İşbilir 218IE219
1. INTRODUCTION
Variable Data

Ensuring product quality in a bakery is vital for maintaining customer satisfaction and
achieving business success. Our project is motivated by the need to consistently deliver cake
slices of uniform weight, as this directly affects customer perception, cost control, and
compliance with industry standards. We have chosen the weight of cake slices as our critical
quality characteristic because variations in slice weight can lead to customer dissatisfaction
and increased operational costs. By monitoring and controlling this characteristic, we aim to
enhance product consistency and optimize our production processes.

The cake baking and slicing process involves several stages, from mixing ingredients to
baking and finally slicing the cakes. Controllable inputs in this process include the quantities
of ingredients, mixing times, baking temperatures, and slicing techniques. Uncontrollable
inputs may include variations in ingredient quality, ambient temperature, and humidity levels.
The output of this process is the final cake slices, which need to consistently meet specific
weight criteria to ensure uniformity and quality.

In this project, we will utilize theoretical techniques from statistics and quality control,
integrated with classical project management tools such as Six Sigma, to define, formulate,
analyze, solve, and propose improvements for maintaining the desired weight consistency in
cake slices. The use of a statistical software package like Minitab is essential for conducting
an in-depth analysis and deriving actionable insights for quality improvement.

Attribute Data

The primary motivation for this project is to enhance the quality of Lays chips by reducing the
occurrence of burnt chips in each packet. Consistent product quality is crucial for maintaining
customer satisfaction, brand reputation, and market competitiveness. Burnt chips are a
common complaint among consumers and can significantly affect their overall perception of
the product. By addressing this issue, the company can improve customer loyalty, reduce
wastage, and increase profitability. We have chosen to focus on the presence of burnt chips in
Lays packets as the critical quality characteristic for several reasons: burnt chips negatively
impact the taste and overall eating experience, leading to customer dissatisfaction and
potential loss of repeat business; ensuring a consistent product quality across all packets is
essential for maintaining brand trust and loyalty; and reducing the incidence of burnt chips
can decrease rework and scrap rates, improving overall operational efficiency and cost-
effectiveness.

The production process for Lays chips involves several stages: sourcing, cleaning, peeling,
and slicing potatoes to the desired thickness; frying the slices in oil at high temperatures until
they reach the desired crispiness; and seasoning and packaging the fried chips into individual
packets. Controllable inputs include frying temperature, which must be precisely controlled to
avoid over-frying and burnt chips; frying time, which needs to be optimized to ensure the
chips are thoroughly cooked but not burnt; and the quality of oil, which requires regular
monitoring and replacement to ensure it is not degraded. Uncontrollable inputs include
variations in potato quality and moisture content, as well as environmental factors like
humidity and temperature in the production environment, which can impact the frying
process. The primary output of this process is packets of Lays chips ready for consumer
distribution, while by-products include oil residues, potato peels, and scrap chips.
Management activities encompass quality control checks at various production stages to
detect and remove burnt chips, and training employees.

2. DATA COLLECTION

Variable Data

The purpose of data collection is to ensure the consistent weight of cake slices, thereby
increasing customer satisfaction and optimizing business efficiency. The data collection process
involved measuring the weight of each cake slice. These data are crucial for determining and
improving the quality of products offered to customers. The steps in data collection were as follows: first, cake slices were randomly selected in 20 subgroups of 5 samples each; then, each slice was weighed using a calibrated scale.
Weight of cake slices (g)

Subgroup   Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
    1         46         42         41         45         42
    2         45         40         42         44         45
    3         42         45         39         45         45
    4         45         43         44         42         44
    5         45         39         40         45         42
    6         40         44         43         45         45
    7         45         42         44         45         46
    8         45         48         44         42         43
    9         42         43         45         46         47
   10         45         44         43         39         45
   11         42         45         45         43         44
   12         43         40         40         43         44
   13         46         44         43         42         45
   14         42         45         45         43         45
   15         45         40         39         38         41
   16         43         44         43         43         39
   17         44         41         42         44         45
   18         43         45         43         44         44
   19         45         45         41         38         38
   20         38         49         48         41         48

Attribute Data

We collected data on the occurrence of burnt chips in Lays packets to understand the extent of
the problem, identify root causes, and implement corrective actions to improve product
quality. By quantifying the frequency and distribution of burnt chips, we can make data-
driven decisions to enhance the frying process and ensure consistent product quality.

We employed a systematic approach to data collection involving random sampling of packets


from different production batches over a specified period, conducting visual inspections of
each sampled packet to count the number of burnt chips, and recording the number of burnt
chips per packet along with batch details, production date, and time. Data was collected over a
one-month period, covering various shifts and production runs to account for any variations
due to different operators, time of day, or other environmental factors.
Steps in Collecting Data:

1. Preparation: Define the sampling plan, including the number of packets to be sampled per batch and the frequency of sampling.
2. Random Sampling: Randomly select packets from the production line at predefined intervals to ensure a representative sample.
3. Inspection Process: Open each sampled packet and visually inspect the contents, counting and recording the number of burnt chips.
4. Data Entry: Enter the collected data into a spreadsheet or database, ensuring accurate recording of each sample's details.
5. Verification: Perform periodic checks to verify the accuracy of data entry and ensure consistency in the inspection process.

Data Manipulation Necessities:

To ensure the reliability and validity of the data, several data manipulation steps were
necessary: standardization, by establishing clear guidelines for identifying burnt chips to
minimize inspector variability; data cleaning, by reviewing and correcting any obvious errors
or inconsistencies in the recorded data; normalization, by adjusting the data to account for any
known systematic biases, such as differences between shifts or production lines; and
aggregation, by summarizing the data to identify overall trends and patterns, such as the
average number of burnt chips per packet and any variations between batches. By carefully
planning and executing the data collection process, we aimed to gather accurate and
representative data to inform our analysis and subsequent quality improvement initiatives.
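As a concrete illustration of the aggregation step, here is a minimal Python sketch. The records are hypothetical placeholders (the project's actual counts are in the Excel sheet referenced later); it simply averages burnt chips per packet by batch:

```python
from collections import defaultdict

# Hypothetical inspection records (batch_id, shift, burnt_chip_count); the
# project's actual counts are recorded in the Excel sheet referenced later.
records = [("B01", "day", 3), ("B01", "night", 5), ("B02", "day", 2),
           ("B02", "night", 4), ("B03", "day", 6)]

totals = defaultdict(int)   # total burnt chips per batch
packets = defaultdict(int)  # packets inspected per batch

for batch, shift, burnt in records:
    totals[batch] += burnt
    packets[batch] += 1

for batch in sorted(totals):
    print(f"{batch}: {totals[batch] / packets[batch]:.2f} burnt chips per packet")
```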
3. STATISTICAL ANALYSES

3.1 BASIC STATISTICS

Descriptive Statistics

Descriptive Statistics Outputs for Variable Data:

Statistics

Variable   N    N*   Mean    SE Mean    StDev     Variance   Minimum   Q1      Median   Q3    Maximum   Range
n1         20   0    43.55   0.467215   2.08945   4.36579    38        42      44.5     45    46        8
n2         20   0    43.40   0.586695   2.62378   6.88421    39        41.25   44       45    49        10
n3         20   0    42.70   0.508351   2.27342   5.16842    39        41      43       44    48        9
n4         20   0    42.85   0.524530   2.34577   5.50263    38        42      43       45    46        8
n5         20   0    43.85   0.549042   2.45539   6.02895    38        42.25   44.5     45    48        10

Looking at the means, the highest average weight belongs to the fifth
sample (43.85), while the lowest belongs to the third sample (42.70).
The standard deviation measures the spread of measurements around the
mean. According to these data, the sample standard deviations range
from 2.09 to 2.62: the highest belongs to the second sample (2.62),
while the lowest belongs to the first sample (2.09).
The range represents the difference between the highest and lowest
values in a sample. Here the ranges run from 8 to 10: the lowest
ranges belong to the first and fourth samples (8), while the highest
belong to the second and fifth samples (10).
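The table above was produced in Minitab. As a cross-check, here is a minimal Python sketch (standard library only) that reproduces the same summary measures for each sample position; quartile conventions differ between packages, so Q1 and Q3 may deviate slightly from Minitab's output:

```python
import statistics

# Cake slice weights (g): 20 subgroups (rows) x 5 sample positions (columns).
data = [
    [46, 42, 41, 45, 42], [45, 40, 42, 44, 45], [42, 45, 39, 45, 45],
    [45, 43, 44, 42, 44], [45, 39, 40, 45, 42], [40, 44, 43, 45, 45],
    [45, 42, 44, 45, 46], [45, 48, 44, 42, 43], [42, 43, 45, 46, 47],
    [45, 44, 43, 39, 45], [42, 45, 45, 43, 44], [43, 40, 40, 43, 44],
    [46, 44, 43, 42, 45], [42, 45, 45, 43, 45], [45, 40, 39, 38, 41],
    [43, 44, 43, 43, 39], [44, 41, 42, 44, 45], [43, 45, 43, 44, 44],
    [45, 45, 41, 38, 38], [38, 49, 48, 41, 48],
]

for j in range(5):                        # one column per sample position n1..n5
    col = [row[j] for row in data]
    mean = statistics.mean(col)
    stdev = statistics.stdev(col)         # sample standard deviation (n - 1)
    se_mean = stdev / len(col) ** 0.5     # standard error of the mean
    q1, med, q3 = statistics.quantiles(col, n=4)  # quartiles (method-dependent)
    print(f"n{j+1}: mean={mean:.2f} SE={se_mean:.4f} StDev={stdev:.4f} "
          f"var={stdev**2:.4f} min={min(col)} Q1={q1} median={med} "
          f"Q3={q3} max={max(col)} range={max(col) - min(col)}")
```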
3.2 STATISTICAL PROCESS CONTROL

Variable Data

At this stage, using the Minitab software, a control chart and histogram were
generated. These analyses were conducted to examine the weight of cake slices
data. For variable data, an X̄-R control chart was utilized to monitor the
process's central tendency (X̄) and variation (R). The X̄ control chart displays
the average weight of cake slices for each sample group, while the R control
chart illustrates the range of weights within each sample group. These charts
aid in assessing the stability and consistency of the process, as well as
identifying potential issues.

3.2.1 Histogram

The histogram below belongs to the variable data; the cake slice weights appear to be approximately normally distributed.
3.2.2 Cause and Effect Diagram

Below is the Cause and Effect Diagram for the attribute data.
A Cause and Effect Diagram, also known as a Fishbone
Diagram or Ishikawa Diagram, is a tool used to systematically
identify and analyze the root causes of a specific problem or
effect. It is widely used in quality management and process
improvement initiatives. Here are the key aspects of a Cause
and Effect Diagram:
Purpose:
- Identify Root Causes: Helps in identifying the potential root causes of a problem, rather than just the symptoms.
- Organize Information: Provides a structured way to organize potential causes into categories for easier analysis.
- Facilitate Discussion: Encourages team collaboration and brainstorming to uncover all possible causes.

Structure:
- Head: Represents the problem or effect being analyzed. This is typically written at the head of the "fish."
- Bones: Main categories of potential causes, which branch off the main line like the ribs of a fish. Common categories include:
  - Machine (Equipment): Issues related to machinery or equipment.

3.2.3 Control Charts


A control chart is a graphical tool used to monitor the variation
of a process over time. It typically includes a central line
representing the mean, an upper control limit (UCL), and a
lower control limit (LCL). These lines are derived from
historical data and reflect the natural variability of the process.
Control charts enable the assessment of process stability and
predictability, providing insights into the reliability and
consistency of quality inspections.
For our project, we constructed control charts for both data sets. For the attribute data, the u chart was selected for its suitability in monitoring defects per unit, particularly with variable sample sizes, together with a p chart for the proportion of nonconforming items. For the variable data set, the X̄ and R charts shown below were used.

Exponentially Weighted Moving Average Chart

Below is the Exponentially Weighted Moving Average (EWMA) chart for the
proportion of nonconformities over the days. The chart includes:
- EWMA line: the smoothed proportion of nonconformities.
- Center line: the overall mean proportion of nonconformities.
- Upper control limit (UCL) and lower control limit (LCL): these lines help identify any points that are out of control, indicating potential issues in the process.
To create the EWMA chart, we used a typical smoothing constant (λ) of 0.2.
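The chart itself was produced in Minitab. As a sketch of the underlying recursion z_i = λ·x_i + (1 − λ)·z_{i−1} with time-varying 3-sigma limits, assuming hypothetical daily proportions and estimating the in-control sigma directly from the series for simplicity:

```python
import math

lam, L = 0.2, 3.0  # smoothing constant and control-limit width

def ewma_chart(x, mu0, sigma):
    """Return (z, ucl, lcl) lists for an EWMA chart with time-varying limits."""
    z, ucl, lcl = [], [], []
    prev = mu0  # the EWMA is started at the in-control mean
    for i, xi in enumerate(x, start=1):
        prev = lam * xi + (1 - lam) * prev  # z_i = lam*x_i + (1-lam)*z_{i-1}
        half_width = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        z.append(prev)
        ucl.append(mu0 + half_width)
        lcl.append(mu0 - half_width)
    return z, ucl, lcl

# Hypothetical daily proportions of burnt (nonconforming) chips; the project's
# real values come from the attribute data set.
props = [0.04, 0.05, 0.03, 0.06, 0.05, 0.07, 0.04]
mu0 = sum(props) / len(props)
sigma = (sum((p - mu0) ** 2 for p in props) / (len(props) - 1)) ** 0.5
for zi, u, l in zip(*ewma_chart(props, mu0, sigma)):
    print(f"EWMA={zi:.4f}  LCL={l:.4f}  UCL={u:.4f}")
```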
Out of Control Action Plan Flow Chart for the Attribute Data:

1. Data finding.
2. Record of statistics.
3. Control chart interpretation.
4. Is there an assignable cause? If NO, edit the data to correct the entry.
5. Was our result affected? If NO, retain the data.
6. Does the package need more control? If NO, check the other variables.
7. Check to improve the condition.
8. Document and report every action.


X̄ Chart: UCL = 46.182, Center Line = 43.27, LCL = 40.357
R Chart: UCL = 10.68, Center Line = 5.05, LCL = 0

One point lies more than 3.00 standard deviations from the center line: the test failed at point 20 on the R chart.
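These limits follow from the standard formulas: for the X̄ chart, grand mean ± A2·R̄; for the R chart, UCL = D4·R̄ and LCL = D3·R̄. A minimal Python sketch that recomputes them from the raw data and flags the out-of-control subgroup:

```python
# X-bar/R control limits for subgroups of size n = 5, using the standard
# control chart constants A2 = 0.577, D3 = 0, D4 = 2.114 (from SPC tables).
data = [
    [46, 42, 41, 45, 42], [45, 40, 42, 44, 45], [42, 45, 39, 45, 45],
    [45, 43, 44, 42, 44], [45, 39, 40, 45, 42], [40, 44, 43, 45, 45],
    [45, 42, 44, 45, 46], [45, 48, 44, 42, 43], [42, 43, 45, 46, 47],
    [45, 44, 43, 39, 45], [42, 45, 45, 43, 44], [43, 40, 40, 43, 44],
    [46, 44, 43, 42, 45], [42, 45, 45, 43, 45], [45, 40, 39, 38, 41],
    [43, 44, 43, 43, 39], [44, 41, 42, 44, 45], [43, 45, 43, 44, 44],
    [45, 45, 41, 38, 38], [38, 49, 48, 41, 48],
]
A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [sum(row) / len(row) for row in data]
ranges = [max(row) - min(row) for row in data]
xbarbar = sum(xbars) / len(xbars)   # grand mean, ~43.27
rbar = sum(ranges) / len(ranges)    # average range, ~5.05

print(f"X-bar chart: LCL={xbarbar - A2*rbar:.3f} CL={xbarbar:.2f} UCL={xbarbar + A2*rbar:.3f}")
print(f"R chart:     LCL={D3*rbar:.2f} CL={rbar:.2f} UCL={D4*rbar:.2f}")

# Flag subgroups whose range falls outside the R-chart limits
# (subgroup 20, with R = 11, signals here).
for i, r in enumerate(ranges, start=1):
    if r > D4 * rbar or r < D3 * rbar:
        print(f"R chart out of control at subgroup {i} (R = {r})")
```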

3.3 STATISTICAL ANALYSES

Phase I Control Charts for the Variable Data:

Quality control charts differ based on the data type. Since this is variable
data, it is usual to monitor both the mean value of the quality characteristic
and its variability. The process mean can be monitored with the X̄ chart, while
process variability is monitored by either the R chart or the S chart. Once
the process mean and standard deviation are known, control charts can be
constructed; if they are not known, we have to estimate them. An out-of-control
point with no assignable cause may be a false alarm. To determine whether the
trial control limits are adequate, the points should be examined for
assignable causes:
• If assignable causes are found, discard those points from the calculations and compute new trial control limits.
• Repeat until all points plot within the limits.
• Use the trial control limits that result.
• If there is no assignable cause, there are two options: remove the point as if an assignable cause had been discovered and update the limits, or retain the point and consider the limits appropriate for control.
Even if there is no out-of-control point, nonrandom patterns may occur, and these can also indicate problems.

PROCESS CAPABILITY ANALYSIS

Process capability analysis is a set of statistical techniques used to
determine how well a process can produce output within specified limits. For
the dataset of cake slice weights, we computed the process capability ratio.
We defined the upper specification limit (USL) and lower specification limit
(LSL), assuming:

USL = 48 grams
LSL = 38 grams

From the R chart, the estimated sigma is σ̂ = R̄/d2 = 5.05/2.326 = 2.1711.
Cp = (USL − LSL)/(6σ̂) = (48 − 38)/(6 × 2.1711) = 0.7677
P = (1/Cp) × 100 = 130.27%, meaning the process spread uses about 130% of the specification band.
A Cp of less than 1 indicates that the process is not capable and needs
improvement, either by reducing the variation or by centering the
process mean closer to the midpoint of the specification limits.
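The same capability calculation, as a minimal Python sketch using the estimates above:

```python
# Process capability from the R-chart estimate of sigma, using d2 = 2.326
# for subgroups of size n = 5 and the assumed specification limits above.
rbar, d2 = 5.05, 2.326
usl, lsl = 48.0, 38.0

sigma_hat = rbar / d2                  # ~2.171 g
cp = (usl - lsl) / (6 * sigma_hat)     # ~0.768
print(f"sigma_hat={sigma_hat:.4f}  Cp={cp:.4f}  spec band used={100 / cp:.1f}%")
```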
Out of Control Action Plan
Flow Chart:
Cause-and-Effect Diagram (Fishbone Diagram) for Assignable
Causes

Before starting to think about a solution to a major problem, investigating
all the possible causes is a must. That way, the problem can be completely
tackled the first time, rather than addressing only a portion of it and having
the problem persist. Cause and Effect Analysis is a good tool for
accomplishing this.
Chance Causes
Chance causes (common causes) are the natural variations inherent in any process. They are
typically random and not due to any specific, identifiable factors. For the cake slicing process,
chance causes might include minor fluctuations in ingredient quality within acceptable limits,
slight variations in oven temperature that occur normally, and random human errors that are
occasional and minor. Chance causes are inevitable and represent the background noise of a
process. They do not indicate a process being out of control; rather, they reflect the inherent
variability that is always present. In a well-controlled process, the variation due to chance
causes should be minimal and within the control limits.

ERROR ANALYSES
In quality control, Type I and Type II errors are two well-known concepts related to
hypothesis testing. The probability of a Type I error equals the test's significance
level, while the probability of a Type II error is the complement of the test's power.
Type 1 Error for a u-Chart

A u-chart is used to monitor the number of defects per unit in a sample. As
with p-charts, the control limits for u-charts are typically set at ±3 sigma
from the average number of defects per unit, which covers about 99.73% of the
data under a normal distribution.

For a u-chart, the Type 1 error (α) is therefore approximately 0.0027 (or
0.27%) for a single point outside the control limits. This is because:
- The control limits are designed such that there is a 99.73% probability that a point will fall within them if the process is in control.
- Thus, there is a 0.27% chance that a point falls outside the control limits purely by random variation, leading to a false alarm (Type 1 error).
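Because defect counts are discrete and Poisson-distributed, the actual false alarm rate of a u chart differs somewhat from the nominal 0.0027. A minimal sketch, assuming a hypothetical baseline of 4 defects per unit and samples of 10 units (placeholders, not our measured values), computes the exact α:

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu); empty sum (0) for k < 0."""
    return sum(math.exp(-mu) * mu ** i / math.factorial(i) for i in range(k + 1))

# Hypothetical baseline: ubar defects per unit, n units per sample.
ubar, n = 4.0, 10
ucl = ubar + 3 * math.sqrt(ubar / n)
lcl = max(0.0, ubar - 3 * math.sqrt(ubar / n))

# Counts per sample follow Poisson(n * ubar); alpha is the probability of a
# point falling outside the limits while the process is in control.
mu = n * ubar
alpha = (1 - poisson_cdf(math.floor(n * ucl), mu)) + poisson_cdf(math.ceil(n * lcl) - 1, mu)
print(f"UCL={ucl:.3f}  LCL={lcl:.3f}  exact alpha={alpha:.4f}  (nominal 0.0027)")
```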

Here are the Type 1 error calculations for the p-chart and the EWMA chart:

p-Chart

For a p-chart, the Type 1 error is based on the control limits set at ±3
sigma. The probability of a Type 1 error (α) for a single point is
approximately 0.0027 (or 0.27%) because:
- The control limits are set such that 99.73% of points fall within them under a normal distribution (±3 sigma).
- Thus, the chance of a point falling outside the control limits is 0.27%.

EWMA Chart

For an EWMA chart, the Type 1 error is influenced by the choice of the
smoothing constant (λ) and the control limits. The control limits are
typically set to ensure a desired false alarm rate.
- With λ = 0.2 and control limits set at ±3 sigma, the Type 1 error for an EWMA chart is also approximately 0.0027 (or 0.27%) for individual points.
- The exact Type 1 error can vary slightly based on the specific parameters and calculations used for the EWMA chart.

In both cases, the Type 1 error is a measure of how likely the chart is to
incorrectly signal an out-of-control process when the process is actually in
control.

Summary of Type 1 Errors for Our Charts

- p-Chart: approximately 0.27%
- EWMA Chart: approximately 0.27% (with λ = 0.2 and control limits at ±3 sigma)
- u-Chart: approximately 0.27%

These Type 1 error rates indicate the likelihood of incorrectly identifying a
process as out of control when it is actually in control, assuming the process
data follow a normal distribution.

Type 2 Error for the u Chart

The OC curve for the u chart can be generated from the Poisson distribution of
the defect counts, which have mean λ = n·u:

β = P(x < n·UCL | u′) − P(x ≤ n·LCL | u′), where x ~ Poisson(n·u′)

To calculate the Type II error (β) for a u-chart, which is the probability of
not detecting a shift in the process when one actually exists, we need to
define a specific shift size in the mean number of defects per unit that we
are interested in detecting. This is commonly expressed as a multiple of the
baseline defect rate.

The calculation of the Type II error involves understanding the distribution
of the process after the shift and the sensitivity of the control chart in
detecting such shifts. The general approach is:

1. Define the shift size: decide on the magnitude of the shift in terms of the baseline defect rate (e.g., 1.5 times the baseline if looking for a 50% increase).
2. Calculate the new average (u′): the baseline defect rate multiplied by the shift size.
3. Calculate the control limits: these remain based on the original baseline defect rate.
4. Calculate the error: the probability that a sample from the shifted process still falls within the original control limits.

The main statistical tool used here is the normal approximation of the Poisson
distribution (since u-charts assume defects follow a Poisson distribution). We
performed this calculation for a specific example: detecting a 50% increase in
the defect rate.

The calculated Type II error (β) for a u-chart, when looking to detect a 50%
increase in the defect rate, is approximately 0.464. This means there is about
a 46.4% chance that the chart will fail to detect a shift in which the defect
rate has increased by 50%.
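The β of 0.464 depends on the specific baseline defect rate and sample size used in the original calculation, which are not reproduced here. As a minimal sketch of the normal-approximation method with hypothetical placeholder values (4 defects per unit, samples of 10 units), which lands in the same neighborhood:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Hypothetical baseline (placeholders, not our measured values): ubar defects
# per unit, n units per sample; detect a 50% increase in the defect rate.
ubar, n, shift = 4.0, 10, 1.5
ucl = ubar + 3 * math.sqrt(ubar / n)
lcl = max(0.0, ubar - 3 * math.sqrt(ubar / n))

u_new = shift * ubar                 # shifted mean defects per unit
sd_new = math.sqrt(u_new / n)        # Poisson sd of u under the shifted rate
beta = norm_cdf((ucl - u_new) / sd_new) - norm_cdf((lcl - u_new) / sd_new)
print(f"beta = {beta:.3f}")          # ~0.45 for these placeholder values
```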

Type 2 Error for the X̄ Chart

β = Φ(L − k√n) − Φ(−L − k√n) = Φ(3 − 0.734√5) − Φ(−3 − 0.734√5) ≈ 0.912 − 0.000 = 0.912

Based on the quality control analyses conducted for cake slice weight in our project, the mean
weight is determined as 43.27 grams, against a target weight of 45 grams and with a standard
deviation of 2.356. Given these data, the shift size is k = (45 − 43.27)/2.356 ≈ 0.734
standard deviations. Consequently, the probability of a Type II error (beta) is approximately
0.912, or 91.2%. A Type II error occurs when a shifted process is mistakenly accepted as in
control: here there is a deviation from the target weight of the cake slices, but on any
single sample there is about a 91.2% chance that the control chart fails to detect it
(equivalently, only an 8.8% chance of detecting it on a given sample). Such a high Type II
error rate implies that the chart is not sufficiently sensitive to a shift of this size and
needs improvement. Therefore, it is important to reduce variation in the process and reassess
the control limits to decrease this error rate. By doing so, we can enhance customer
satisfaction and improve product quality.

ARL₁ = 1/(1 − β) = 1/(1 − 0.912) ≈ 11.4

In our project, the average run length ARL₁ ≈ 11.4 indicates the average number of samples
required for the control chart to detect this process deviation once it has occurred. A lower
ARL₁ value means the chart is more sensitive, signaling a shift after fewer observations on
average. A value of about 11 samples indicates that the chart is slow to detect a deviation
of this magnitude and may require improvement.
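A short sketch of the β and ARL₁ calculation above, assuming 3-sigma limits and subgroups of five:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

L, n = 3.0, 5                         # 3-sigma limits, subgroups of 5
k = (45.0 - 43.27) / 2.356            # shift size in sigma units, ~0.734

beta = norm_cdf(L - k * math.sqrt(n)) - norm_cdf(-L - k * math.sqrt(n))
arl1 = 1 / (1 - beta)                 # average samples until the shift signals
print(f"beta={beta:.3f}  detection probability={1 - beta:.3f}  ARL1={arl1:.1f}")
```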

OC CURVE

[Figure: simulated OC curve for the u chart. X-axis: multiple of the baseline defect rate, with points at 0.1, 0.25, 0.5, 0.75, 1.25, 1.5, 2.5, and 3; y-axis: probability of not detecting the shift (Type II error), scaled from 0 to 1.2.]
Here is the simulated Operating Characteristic (OC) curve for the u-chart.
The curve shows the probability of not detecting various levels of shift in
the process mean defect rate, expressed as a multiple of the baseline defect
rate u. The x-axis represents the multiple of the baseline defect rate, and
the y-axis shows the Type II error, the probability of not detecting the
shift. This visualization helps assess how effectively the u-chart detects
shifts in the defect rate: the chart becomes more effective at detecting
larger increases in defects, as indicated by the Type II error decreasing as
the defect rate increases.
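The curve's values can be reproduced with the same normal approximation used for the Type II error calculation, again with the hypothetical baseline used above:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# OC curve for the u chart: beta as a function of the shift multiple m, where
# the true rate is u' = m * ubar (same placeholder baseline as before).
ubar, n = 4.0, 10
ucl = ubar + 3 * math.sqrt(ubar / n)
lcl = max(0.0, ubar - 3 * math.sqrt(ubar / n))

for m in [0.1, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 2.0, 2.5, 3.0]:
    u_new = m * ubar
    sd = math.sqrt(u_new / n)
    beta = norm_cdf((ucl - u_new) / sd) - norm_cdf((lcl - u_new) / sd)
    print(f"shift x{m:<4}: beta = {beta:.3f}")
```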

Here is our Excel data for the attribute measurements:


CONCLUSION

Upon analysis, we identified an out-of-control point in the R chart, signaling
a potential deviation or unexpected change in our production process. This
necessitates a detailed root cause analysis of this point and the identification of
appropriate corrective actions. Among the proposed improvement actions are
the utilization of a specific patterned cutting apparatus to ensure uniformly
sliced cake portions, training personnel on consistent ingredient measurement
and equipment usage, as well as standardizing cooking time and temperature.
Additionally, updating or enhancing control systems could enhance process
stability. A timeline for implementing improvement suggestions along with
regular reviews will facilitate monitoring and enhancing the process. Through
these measures, we aimed to prevent the
recurrence of out-of-control points and continuously strive for quality
improvement.

Despite setting the target cake slice weight at 45 grams, the average weight of
43.27 grams indicates a deviation in our production process, signifying non-
compliance with the set standards and a deficiency in quality control. Firstly,
it's crucial to acknowledge the potential
negative impact on customer satisfaction. Offering products that don't meet
expected standards can adversely affect brand reputation and customer loyalty.
To address this, a thorough review of the production process and reinforcement
of quality control measures is imperative. This entails scrutinizing the quality of
ingredients, refining labor processes, and optimizing equipment performance as
necessary. Moreover, bolstering quality assurance processes through continuous
training and personnel oversight will ensure product quality and preserve brand
reputation. Implementation of these measures will not only enhance adherence
to established standards but also uphold customer satisfaction and brand
integrity.
