Introduction - Sesh 1

This document provides an overview of a lesson on engineering data analysis. The lesson discusses the role of statistics in engineering problem solving, how variability affects the data collected for decision making, and the planning and conducting of surveys and experiments. The learning outcomes are for students to understand statistical principles, differentiate data-collection methods, and apply the planning and conducting of surveys and experiments. The document then introduces foundational statistical concepts such as variables, data types, and levels of measurement.


Republic of the Philippines

Laguna State Polytechnic University


Province of Laguna
ISO 9001:2015 Certified
Level I Institutionally Accredited

LECTURE NOTES
Course: MATH 4 – Engineering Data Analysis
Sem/AY: First Semester/2023-2024
Module No.: 1
Lesson Title: Introduction and Role of Statistics to Engineering, Obtaining Data
Week Duration: 3
Date: Week 1-3

Description of the Lesson
This lesson will discuss the role of statistics in the engineering problem-solving process, how variability affects the data collected and used for making engineering decisions, and the difference between enumerative and analytical studies. Planning and conducting surveys as well as experiments will be emphasized in this lesson.

Learning Outcomes
Intended Learning Outcomes
Students should be able to meet the following intended learning outcomes:
• Understand the principles of statistics and the role of statistics in engineering
• Differentiate the three methods of collecting data: the retrospective study, the observational study, and the designed experiment
• Understand the planning and conducting of surveys and experiments

Targets/Objectives
At the end of the lesson, students should be able to:
• differentiate a retrospective study, an observational study, and a designed experiment
• relate current engineering problems to the results of data analysis and make final recommendations for decisions
• apply the planning and conducting of surveys and experiments in actual situations

Student Learning Strategies

Lecture Guide
1. Learning Guide Questions:
1. What are the principles of statistics? What is the role of statistics in engineering? What are the different steps in the engineering method?
2. What are the three methods of collecting data? What are examples and applications of each method?
3. What are the different steps in planning and conducting surveys and experiments in actual situations?

LSPU SELF-PACED LEARNING MODULE: Engineering Data Analysis (Math 4)


Prepared by: Josephine A. Villamin Enhanced By: Engr. Raphael B. Fabroada

Introduction and Review of the Principles of Statistics


• Statistics
It is the science of conducting studies that collect, organize, summarize, analyse
and draw conclusions from data.
Scope of Statistics
Statistics is concerned with using scientific methods in:
-Collecting:
Data relating to certain events or physical phenomena. Most datasets involve
numbers

-Organizing:
All collected data are arranged in a logical and chronological order for viewing and analysis. Datasets are normally organized in either ascending or descending order.

-Summarizing:
Summarizing the data to offer an overview of the situation

-Presenting:
Develop a comprehensive way to present the dataset

-Analyzing:
To analyse the dataset for the intended application
Statistics is the science of decision making in a world full of uncertainties
Example of uncertainty in the world:
-A person’s daily routine
-The fate of human being
-Person’s career life
-The rise and fall of stock market
-The weather
-Global economy
-World politics

Uncertainties in engineering and technology:


-Academic performance of this class
-In design engineering uncertainties in design methodologies, material
properties, fabrication techniques
-Quality of the products
-Market and sales of new and existing products


Two Main Branches of Statistics

1. Descriptive Statistics
-collection and organization of data
-uses the data to provide descriptions of the population, either through
numerical calculations or graphs or tables

2. Inferential Statistics
-makes inferences and predictions about a population based on a sample
of data taken from the population in question
-consists of generalizing from samples to populations, performing
hypothesis testing, determining relationships among variables, and
making predictions
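The two branches can be sketched with Python's standard statistics module. The resistance sample below is hypothetical, and the rough 95% interval uses the common "±2 standard errors" approximation:

```python
import statistics

# Hypothetical sample of 8 measured resistances (ohms) from a production lot
sample = [99.8, 100.2, 100.1, 99.9, 100.0, 100.3, 99.7, 100.0]

# Descriptive statistics: summarize the sample itself
mean = statistics.mean(sample)    # location (center) of the data
stdev = statistics.stdev(sample)  # spread (variability) of the data

# Inferential statistics: use the sample to estimate the population mean.
# A rough 95% interval for the population mean is mean +/- 2*stdev/sqrt(n).
n = len(sample)
margin = 2 * stdev / n ** 0.5
interval = (mean - margin, mean + margin)
print(mean, interval)
```

The first two lines of computation describe only the eight units measured; the interval generalizes from the sample to the whole production lot, which is exactly the inferential step.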

Population – refers to the group or aggregate of people, objects, materials, events, or things of any form. To save money, a statistician may study only a part of the population, called a sample.
Sample – a subgroup of the population, taken to represent the population's characteristics or traits.
Parameters – measures of the population.
Statistic – a measure of the sample.
Variables

Variable – a characteristic that takes two or more values across individuals.

Independent variables – cause, predictor.
Dependent variables – effect, output, the value being predicted.
Qualitative variables – represent differences in quality, character, or kind, but not in amount.
Quantitative variables – numerical in nature and can be ordered or ranked.
Discrete variables – variables whose values can be counted using integer values.
Continuous variables – assume any numerical value over an interval or intervals.
Levels of Measurement

Nominal – uses numbers for the purpose of identifying a name or membership in a group or category.
Ordinal – connotes ranking or inequalities; numbers represent "greater than" or "less than" measurements, such as preferences or rankings.
Interval – indicates an actual amount, with an equal unit of measurement separating each score (specifically, equal intervals).
Ratio – similar to interval data, but has an absolute zero, and multiples are meaningful. This is the highest level of measurement.

Data – a collection of values for a particular variable.

- Factual information (such as measurements or statistics) used as a basis for reasoning, discussion, or calculation (e.g., the data are plentiful and easily available). Information output by a sensing device or organ that includes both useful and irrelevant or redundant information and must be processed to be meaningful. Information in numerical form that can be digitally transmitted or processed.

Types of Data

Primary data – data collected directly by the researcher. These are first-hand or original sources.

These can be collected:

-by direct observation or measurement;

-by interview, using sets of questions called questionnaires or rating scales as guides in collecting objective and measurable data;

-by mailing recording forms via ordinary or special mail, courier services, e-mail, and fax to reach distant data providers;

-by experimentation, to find out the cause and effect of a certain phenomenon; and


-by registration, such as registries of births, deaths, and marriages.

Secondary data – information taken from published or unpublished materials previously gathered by other researchers or agencies, such as books, newspapers, magazines, journals, and published and unpublished theses and dissertations.

Sampling Design/Methods

1. Probability Sampling
a. Each of the units in the target population has a chance of being
included in the sample.
b. Greater possibility of a representative sample of the population.
c. Conclusions derived from the data gathered can be generalized to the
whole population.

Types of Probability Sampling

1. Simple Random Sampling – the sample is obtained from the population entirely at random.
2. Systematic Sampling – samples are taken in a systematic order of appearance in a given sequence or arrangement (e.g., every k-th unit).
3. Stratified Sampling – the population is divided into different strata or groups, and a representative size is taken proportionally from each stratum.
4. Cluster Sampling – the population is formed into different clusters (area sampling).
5. Multi-Stage Sampling – usually used for national, regional, provincial, or country-level studies.
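The first three probability sampling schemes can be sketched with Python's standard random module. The population of 100 unit IDs, the 60/40 stratum split, and the 10% sampling fraction are all hypothetical:

```python
import random

population = list(range(1, 101))  # hypothetical population of 100 unit IDs
random.seed(42)  # fixed seed so the draws are reproducible

# Simple random sampling: every unit has an equal chance of selection
simple_sample = random.sample(population, 10)

# Systematic sampling: take every k-th unit after a random start
k = len(population) // 10
start = random.randrange(k)
systematic_sample = population[start::k]

# Stratified sampling: sample proportionally from each stratum
strata = {"A": population[:60], "B": population[60:]}
stratified_sample = [u for units in strata.values()
                     for u in random.sample(units, len(units) // 10)]
```

Each scheme returns 10 of the 100 units, but they differ in how the chance of inclusion is distributed across the population.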

2. Non-Probability Sampling

a. There is no way to ensure that each unit in the target population has the same chance of being included in the sample.
b. No assurance that every unit has some chance of being included.
c. Conclusions derived from the data gathered are limited to the sample itself.

Types of Non-Probability Sampling

1. Accidental or Convenience Sampling – obtained when the researcher selects whatever sampling units are conveniently available.
2. Purposive Sampling – under this scheme, the sampling units are selected subjectively by the researcher, who attempts to obtain a sample that appears to be representative of the population.
3. Quota Sampling – in this method, the researcher determines a sample size (quota) that should be filled (e.g., a quota of barangay officials).
4. Snowball Sampling – this type of sampling starts with the known sources

of information, who or which will in turn give other sources of information.
5. Network Sampling – used to find socially devalued urban populations, such as addicts, alcoholics, child abusers, and criminals, because they are usually "hidden from outsiders".

What Do Engineers Do?

An engineer is someone who solves problems of interest to society through the efficient application of scientific principles by:
-refining existing products
-designing new products or processes

The Engineering Method – A Creative Process

1. Develop a clear and concise description of the problem.


2. Identify, at least tentatively, the important factors that affect this problem or
that may play a role in its solution.
3. Propose a model for the problem, using scientific or engineering knowledge
of the phenomenon being studied. State any limitations or assumptions
of the model.
4. Conduct appropriate experiments and collect data to test or validate the
tentative model or conclusions made in steps 2 and 3.
5. Refine the model on the basis of the observed data.
6. Manipulate the model to assist in developing a solution to the problem.
7. Conduct an appropriate experiment to confirm that the proposed solution to
the problem is both effective and efficient.
8. Draw conclusions or make recommendations based on the problem
solution.

The steps in the engineering method are shown in Fig. 1-1. Notice that the
engineering method features a strong interplay between the problem, the
factors that may influence its solution, a model of the phenomenon, and
experimentation to verify the adequacy of the model and the proposed
solution to the problem. Steps 2–4 in Fig. 1-1 are enclosed in a box, indicating
that several cycles or iterations of these steps may be required to obtain the

final solution. Consequently, engineers must know how to efficiently plan
experiments, collect data, analyse and interpret the data, and understand how
the observed data are related to the model they have proposed for the
problem under study.
The field of statistics deals with the collection, presentation, analysis,
and use of data to make decisions, solve problems, and design products
and processes. Because many aspects of engineering practice involve
working with data, obviously some knowledge of statistics is important to any
engineer. Specifically, statistical techniques can be a powerful aid in designing
new products and systems, improving existing designs, and designing,
developing, and improving production processes. It is the science of learning
information from data.

Experiments and Processes Are Not Deterministic

-Statistical techniques are useful for describing and understanding variability.
-By variability, we mean that successive observations of a system or phenomenon do not produce exactly the same result.
-Statistics gives us a framework for describing this variability and for learning about potential sources of variability.

An Engineering Example of Variability – 1

An engineer is designing a nylon connector to be used in an automotive engine application. The engineer is considering establishing the design specification on wall thickness at 3/32 inch but is somewhat uncertain about the effect of this decision on the connector pull-off force. If the pull-off force is too low, the connector may fail when it is installed in an engine. Eight prototype units are produced and their pull-off forces measured (in pounds):

12.6, 12.9, 13.4, 12.3, 13.6, 13.5, 12.6, 13.1

An Engineering Example of Variability – 2

-The dot diagram is a very useful plot for displaying a small body of data, say up to about 20 observations.
-This plot allows us to see easily two features of the data: the location, or middle, and the scatter or variability.


An Engineering Example of Variability – 3

-The engineer considers an alternative design; eight prototypes are built and their pull-off forces measured.

-The dot diagram can be used to compare two sets of data.

Figure 1-2 presents a dot diagram of these data. The dot diagram is a very
useful plot for displaying a small body of data—say, up to about 20
observations. This plot allows us to see easily two features of the data: the
location, or the middle, and the scatter or variability. When the number of
observations is small, it is usually difficult to identify any specific patterns in
the variability, although the dot diagram is a convenient way to see any
unusual data features. The need for statistical thinking arises often in the
solution of engineering problems. Consider the engineer designing the
connector. From testing the prototypes, he knows that the average pull-off
force is 13.0 pounds. However, he thinks that this may be too low for the
intended application, so he decides to consider an alternative design with a
greater wall thickness, 1/8 inch. Eight prototypes of this design are built, and the
observed pull-off force measurements are 12.9, 13.7, 12.8, 13.9, 14.2, 13.2,
13.5, and 13.1. The average is 13.4. Results for both samples are plotted as dot
diagrams in Fig. 1-3, page 3. This display gives the impression that increasing
the wall thickness has led to an increase in pull-off force. However, there are
some obvious questions to ask. For instance, how do we know that another
sample of prototypes will not give different results? Is a sample of eight
prototypes adequate to give reliable results? If we use the test results obtained
so far to conclude that increasing the wall thickness increases the strength,
what risks are associated with this decision? For example, is it possible that the
apparent increase in pull-off force observed in the thicker prototypes is only
due to the inherent variability in the system and that increasing the thickness
of the part (and its cost) really has no effect on the pull-off force?
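The location and scatter of the two prototype samples can be checked directly with a short Python sketch using the pull-off force data quoted above:

```python
import statistics

# Pull-off force (pounds) for the two prototype designs from the text
thin = [12.6, 12.9, 13.4, 12.3, 13.6, 13.5, 12.6, 13.1]   # 3/32-inch wall
thick = [12.9, 13.7, 12.8, 13.9, 14.2, 13.2, 13.5, 13.1]  # 1/8-inch wall

print(round(statistics.mean(thin), 1))   # location of the first sample
print(round(statistics.mean(thick), 1))  # location of the second sample
print(round(max(thin) - min(thin), 1))   # crude measure of scatter (range)
```

The sample means (13.0 and 13.4 pounds) match the values in the text; whether that 0.4-pound difference reflects a real effect of wall thickness or only inherent variability is precisely the question statistical inference addresses.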

Two Directions of Reasoning


Often, physical laws (such as Ohm’s law and the ideal gas law) are applied to
help design products and processes. We are familiar with this reasoning from
general laws to specific cases. But it is also important to reason from a specific
set of measurements to more general cases to answer the previous questions.
This reasoning is from a sample (such as the eight connectors) to a population
(such as the connectors that will be sold to customers). The reasoning is
referred to as statistical inference. See Fig. 1-4. Historically, measurements
were obtained from a sample of people and generalized to a population, and
the terminology has remained. Clearly, reasoning based on measurements from
some objects to measurements on all objects can result in errors (called
sampling errors). However, if the sample is selected properly, these risks can
be quantified and an appropriate sample size can be determined.

Collecting Engineering Data

There are three ways of collecting data on the impacts of factors on a response in a system:

1. Retrospective Study
This would use either all or a sample of the historical process data
archived over some period of time.

2. Observational Study
Collect relevant data from current operations without disturbing the
system. The engineer observes the process or population,
disturbing it as little as possible, and records the quantities of interest.
Because these studies are usually conducted for a relatively short time
period, sometimes variables that are not routinely measured can be
included.

3. Designed Experiments
Disturb the system and observe the impacts. The engineer makes

deliberate or purposeful changes in the controllable variables of the
system or process, observes the resulting system output data, and then
makes an inference or decision about which variables are responsible
for the observed changes in output performance.

Example: Distillation Column

Engineers were interested in the concentration of acetone in the output stream from a distillation column.

In particular, how this response was affected by three factors:


-Reboil temperature;
-Condenser temperature;
-Reflux rate

Example: Distillation Column – Retrospective Study

Collect data from operational records

Possible issues
-Missing data: records are often incomplete;
-Incompatible data: response may be hourly average, temperatures may be
instantaneous.
-Some factors may not have changed much, so we cannot detect their impact;
-Some factors may vary together, so we cannot separate their impacts

Most importantly, we do not know what else might have been changing, and
influencing the response.

Example: Distillation Column – Observational Study

Collect data from current operations

Some improvement
-Data collection is more intensive than historical records, so no missing data
and variables can be measured on compatible time scales.
-But some factors may still not have changed much, and other factors may still
vary together, so we cannot detect or separate their impacts.
With more intensive effort, we can sometimes monitor other factors that might
influence the response.

Example: Distillation Column – Designed Experiment

Engineers choose two levels of each factor, a low level labelled “-“ and a high
level labelled “+”

All possible combinations of these levels lead to 2³ = 8 treatments

If all other aspects of the distillation process are controlled, any differences in
the response for different treatments can be attributed to the differences in the
factor levels.

Advantages of the designed experiment:


-All factors are varied, so all effects can be identified;
-Factors vary independently, so all effects can be identified with specific
factors.
-If all treatments are used, we can identify interactions: when the level of one
factor changes the effect of another factor.
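The eight treatments of such a two-level, three-factor design can be enumerated in a few lines of Python; the factor names follow the distillation example above:

```python
from itertools import product

factors = ["reboil temperature", "condenser temperature", "reflux rate"]

# Every combination of low (-) and high (+) levels: 2**3 = 8 treatments
treatments = list(product("-+", repeat=len(factors)))

for levels in treatments:
    print(dict(zip(factors, levels)))
```

Because every factor is varied and the combinations cover the full grid, each factor's effect, and any interaction between factors, can be attributed unambiguously.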

Planning and Conducting Surveys

A survey is a way to ask a lot of people a few well-constructed questions. It is a series of unbiased questions that the subject must answer.

Advantages of surveys
Surveys are efficient ways of collecting information from a large number of people, they are relatively easy to administer, and a wide variety of information can be collected.

Disadvantages of surveys
Disadvantages arise from the fact that surveys depend on the subjects' motivation, honesty, memory, and ability to respond. Moreover, the answer choices to survey questions can lead to vague data.

Conducting a survey

-There are various methods for administering a survey.


-It can be done as a face to face interview or a phone interview where the
researcher is questioning the subject.
-A different option is to have a self-administered survey where the subject can
complete a survey on a paper and mail it back, or complete the survey online.
-Face-to-face interviews generally give good response rates, fewer misunderstandings, and more complete responses, at the cost of time; responses can also be somewhat biased by the interviewer's appearance or attitude.

-Self-administered surveys are more convenient and can accommodate a large number of samples because they are done on paper or online. They give more anonymity to the respondents, encouraging honest responses, but at the cost of completeness and response rates.

Five Simple Steps for Conducting Surveys

1. Identify the audience


The research done before conducting a survey is crucial to the survey’s
success. If you’re trying to sell a product to the wrong audience or not asking
the right questions, you won’t get the results you need. If you have a specific
audience in mind, the survey can be tailored to get the answers you need to
know. It’s also important to look for surveys that have previously been done
and are similar to the one you’re pursuing so you can be unique and provide an
interesting angle for a reporter that may cover your data.

2. Find a survey provider


Surveys are a useful tool that can be relatively inexpensive, depending on
what you want to accomplish and the provider you choose. At Walker Sands,
we use a combination of survey providers, depending on what’s best for our
client. Survey Monkey is a very cost-efficient, general provider. Forrester
Research is more credible, but is also much more expensive. Likewise, a
general audience is less expensive than gearing a survey toward IT managers at
Fortune 500 companies. We can help tailor your survey to get the results you
want with a provider you can afford.

3. Conduct the survey


When it comes to the length of a survey, short and sweet is best. Be sure to
exhaust every possible choice to the question you’re asking to get the most
rewarding results. The moment you send your survey also impacts the results.
If you send a survey to IT managers at 9 p.m. on Sunday, they’re less likely to
respond after the flood of emails Monday morning. Send it another day of the
week during business hours to generate more responses.

4. Create context for the survey


You have the data, now what should you do with it? It’s important to have
target publications in mind before you conduct the survey. Where do you want
your results to be published? Who do you want to share them with? It also
helps to compile the data into a chart or infographic. That makes the reporter’s
job easier and increases the likelihood that they will use your data in their
publication.

5. Evaluate your research


Case studies allow you to revisit the efficiency of a survey. Did it solve the
problem proposed at the beginning of the survey? Is it detailed enough to
make an impression on the industry? When you answer these questions, you
can track how often your research is shared and where. This allows you to
improve how you conduct future surveys, change your message or even start

over with a completely new survey.

Planning and Conducting Experiments

Introduction to Design Experiment

"Why do we experiment?"
Experiments are the basis of all theoretical predictions. Without experiments, there would be no results, and without tangible data, there is no basis for any scientist or engineer to formulate a theory. The advancement of culture and civilization depends on experiments, which bring about new technology (P. Cuadra).

Experiment – a series of tests conducted in a systematic manner to increase the understanding of an existing process or to explore a new product or process.

Design of Experiments (DOE)


It is a tool to develop an experimentation strategy that maximizes learning
using a minimum of resources. It is widely used in many fields with broad
application across all the natural and social sciences. It is extensively used by
engineers and scientists involved in the improvement of manufacturing
processes to maximize yield and decrease variability. Oftentimes, engineers
also work on products or processes where no scientific theory or principles are
directly applicable. Experimental design techniques become extremely
important in such situations to develop new products and processes in a cost-
effective and confident manner.
DOE helps in:

-identifying relationships between cause and effect.


-providing an understanding of interactions among causative factors.
-determining the levels at which to set the controllable factors (product
dimension, alternative materials, alternative designs, etc.) in order to optimize
reliability.
-Minimizing experimental error (noise)
-Improving the robustness of the design or process to variation.

Stages of Design of Experiments (DOE)

1. Planning
It is important to carefully plan for the course of experimentation before
embarking upon the process of testing and data collection. A few of the
considerations to keep in mind at this stage are a thorough and precise
objective identifying the need to conduct the investigation, assessment of time
and resources available to achieve the objective and integration of prior
knowledge to the experimentation procedure. A team composed of individuals
from different disciplines related to the product or process should be used to
identify possible factors to investigate and the most appropriate response(s) to

measure. A team approach promotes synergy that gives a richer set of factors
to study and thus a more complete experiment. Carefully planned experiments
always lead to increased understanding of the product or process. Well
planned experiments are easy to execute and analyze. Botched experiments, on
the other hand, may result in data sets that are inconclusive and may be
impossible to analyze even when the best statistical tools are available.

2. Screening
Screening experiments are used to identify the important factors that affect
the process under investigation out of the large pool of potential factors. These
experiments are carried out in conjunction with prior knowledge of the
process to eliminate unimportant factors and focus attention on the key factors
that require further detailed analyses. Screening experiments are usually
efficient designs requiring few executions, where the focus is not on
interactions but on identifying the vital few factors.

3. Optimization
Once attention has been narrowed down to the important factors affecting the
process, the next step is to determine the best setting of these factors to
achieve the desired objective. Depending on the product or process under
investigation, this objective may be to either increase yield or decrease
variability or to find settings that achieve both at the same time.
4. Robustness Testing
Once the optimal settings of the factors have been determined, it is
important to make the product or process insensitive to variations that are
likely to be experienced in the application environment. These variations
result from changes in factors that affect the process but are beyond the
control of the analyst. Such factors (e.g. humidity, ambient temperature,
variation in material, etc.) are referred to as noise or uncontrollable factors. It
is important to identify such sources of variation and take measures to ensure
that the product or process is made insensitive (or robust) to these factors.

5. Verification
This final stage involves validation of the best settings by conducting a few
follow-up experimental runs to confirm that the process functions as desired
and all objectives are met.

Three Basic Principles of Designed Experiments

1. Randomization
Both the allocation of the experimental material and the order in which the
runs are performed are randomly determined. Statistical methods require that
the observations (or errors) be independently distributed random variables;
randomization makes this assumption valid. Randomization also averages out
the effects of extraneous factors. It can be carried out with computer
programs or random number tables.
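Randomizing the run order by computer can be as simple as shuffling the list of planned runs. A minimal sketch (the run labels are arbitrary):

```python
# Minimal sketch of randomizing run order with Python's random module.
import random

# Four treatment combinations, each replicated twice (labels are arbitrary).
runs = ["A1", "A2", "B1", "B2"] * 2

random.seed(42)        # fixed seed only so the example is reproducible
random.shuffle(runs)   # the experiment is then executed in this order
print(runs)
```

Every planned run still appears exactly once per replicate; only the execution order is randomized.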

2. Replication

LSPU SELF-PACED LEARNING MODULE: Engineering Data Analysis (Math 4)


Prepared by: Josephine A. Villamin Enhanced By: Engr. Raphael B. Fabroada
Replication means an independent repeat run of each factor combination. It
allows the experimenter to obtain an estimate of the experimental error, and
this estimate is the basic unit of measurement for determining whether
observed differences in the data are statistically significant. If the sample
mean is used to estimate the true mean, then the variance of the sample mean
equals (variance of the observations)/(number of replications), so increasing
the number of replications gives a better estimate of the mean.
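The replication formula Var(sample mean) = Var(observations)/n can be checked numerically. The sketch below simulates observations with standard deviation 2 (so variance 4) and shows the variance of the mean shrinking roughly as 4/n:

```python
# Checking Var(sample mean) = Var(y)/n by simulation.
import random
import statistics

random.seed(0)
sigma = 2.0   # true standard deviation of a single observation

def mean_of_replicates(n):
    """Average of n simulated replicate observations (true mean 50)."""
    return statistics.fmean(random.gauss(50, sigma) for _ in range(n))

results = {}
for n in (1, 4, 16):
    means = [mean_of_replicates(n) for _ in range(5000)]
    results[n] = statistics.variance(means)
    print(n, round(results[n], 2))   # roughly sigma**2 / n
```

As n grows from 1 to 4 to 16, the printed variances fall roughly from 4 to 1 to 0.25, matching the formula.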

3. Blocking
Blocking improves the precision of the experiment. It is used to reduce or
eliminate the variability transmitted from nuisance factors: factors that may
influence the response variable but in which we are not interested. Blocking
means grouping similar experimental material into one block and applying the
treatments within each block. For example, two batches of raw material in a
hardness-testing experiment would form two blocks.

Strategy of Experimentation

The general approach to planning and conducting the experiment is called the
strategy of experimentation. As an example, consider the preparation of curd
from milk. Some of the factors that influence the preparation of curd are as
follows:

1. Temperature of the milk
2. Quantity of curd culture added to the milk
3. pH value of the curd
4. Fat content of the milk
5. Pot used for the curd
6. Room temperature
7. Season (winter, summer, monsoon)
8. Time of day (morning, evening)
9. The list can be extended further

Guidelines for Designing an Experiment

1. Recognition of and statement of the problem


It is necessary to develop all ideas about the objectives of the experiment;
a team approach is useful. Some of the reasons for running an experiment are:
a.) Factor screening – to find the most influential factors affecting the
response variable.
b.) Optimization – to find the settings or levels of the important factors
that result in desirable values of the response variable.
c.) Confirmation – to verify a theory or past experience, e.g. testing the
effectiveness of a new substitute material.
d.) Discovery – to find new materials or conditions.
e.) Robustness – to find the conditions under which the response variable
seriously degrades.

2. Selection of the response variable
The response variable should provide the required information about the
process; the capability of the measurement system is also important.

3. Choice of factors, levels and ranges


The factors deliberately varied in the experiment are called design factors;
the remaining factors are classified as controllable, uncontrollable, and
noise factors. The levels of the controllable factors are set by the
experimenter, and it is important to minimize the variability transmitted by
the noise factors. Cause-and-effect (fishbone) diagrams and process knowledge
are helpful in deciding the factors and their levels.

4. Choice of experimental design


The choice depends on the previous steps. There are standard designs
available, and one can choose among them the design that best suits the
experiment; software is also available for selecting a design. A model is
also determined: the empirical relation between the factors and the response
variable.
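One of the standard choices is a full factorial design, which runs every combination of the chosen factor levels. A minimal sketch, using illustrative factor names and levels loosely inspired by the curd example (they are assumptions, not values from the lecture):

```python
# Sketch: enumerating a full factorial design for two factors.
import itertools

levels = {
    "temperature": [30, 40],     # illustrative settings, in degrees Celsius
    "culture_qty": [1, 2, 3],    # illustrative amounts of curd culture
}

# Every combination of one level per factor: 2 x 3 = 6 runs (before replication).
design = list(itertools.product(*levels.values()))
print(len(design))
for run in design:
    print(dict(zip(levels, run)))
```

Each of the six runs would then be replicated and executed in random order, per the three basic principles above.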

5. Performing the experiment


Take utmost care to execute the experiment as per the plan; any mistake will
increase the experimental error.

6. Statistical Analysis of the data


Statistical analysis ensures that the conclusions are objective. Use
graphical methods and an empirical model.
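As one minimal sketch of an objective analysis, the difference between two treatment means can be summarized with a pooled two-sample t statistic (the response values below are invented for illustration):

```python
# Pooled two-sample t statistic for the difference between two
# treatment means (invented data, equal sample sizes).
import math
import statistics

low  = [45.2, 46.1, 44.8, 45.9]   # responses at the low factor level
high = [48.3, 49.0, 47.7, 48.6]   # responses at the high factor level

diff = statistics.fmean(high) - statistics.fmean(low)
# Pooled variance (valid here because both samples have equal size).
sp2 = (statistics.variance(low) + statistics.variance(high)) / 2
t = diff / math.sqrt(sp2 * (1 / len(low) + 1 / len(high)))
print(round(diff, 2), round(t, 2))
```

A large |t| relative to the t distribution's critical value would indicate that the observed difference is statistically significant rather than experimental error.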

7. Conclusions and recommendations


Draw practical conclusions and recommend actions. Experimentation is an
iterative procedure: conduct a series of small experiments rather than one
comprehensive experiment.

Example: Manufacturing a car


Productivity= Annual Revenue/ Annual Cost

Some factors that affect the demand for a car are as follows:


• Mileage of a car
• Convenience of driving
• Aesthetic of the car
• Selling price of the car
• Size of the population
• Income level of the people
• Number of competing brands
• Location of consumers
The objective of the company is to identify the optimum level of car
production so as to increase productivity.

