
Error Analysis

• Introduction
• Average error, r.m.s error, probable error and error
propagation
• Significant digits and figures
• Uncertainties in Measurements: Measuring Errors,
Uncertainties, Parent and Sample Distributions, Mean and
Standard Deviation of Distributions,

• Numericals

What is an error?
• Error is a measure of the lack of certainty in a value.
• Error means the inevitable uncertainty that attends all
measurements.
HOW DO ERRORS ARISE IN MEASUREMENT (SYSTEMS)?
Only a measurement made under ideal conditions would have no errors; real measurements always carry some.

A systematic (clearly defined process) and systemic (all-encompassing) approach is needed to identify every source of error that can arise in a given measuring system. It is then necessary to assess their magnitude and impact under the prevailing operational conditions.

Measurement (system) errors can only be defined in relation to the solution of a real, specific measurement task.

If the errors of measurement (systems) are specified in technical documentation, then one has to decide how that information relates to the
• measurand
• input
• elements of the measurement system
• auxiliary means
• measurement method
• output
• kind of reading
• environmental conditions.
Common Sources of Error in Physics Lab Experiments
Incomplete definition (may be systematic or random) - One reason that it is
impossible to make exact measurements is that the measurement is not always clearly
defined. For example, if two different people measure the length of the same rope, they
would probably get different results because each person may stretch the rope with a
different tension. The best way to minimize definition errors is to carefully consider
and specify the conditions that could affect the measurement.

Failure to account for a factor (usually systematic) - The most challenging part of
designing an experiment is trying to control or account for all possible factors except
the one independent variable that is being analyzed. For instance, you may
inadvertently ignore air resistance when measuring free-fall acceleration, or you may
fail to account for the effect of the Earth's magnetic field when measuring the field of a
small magnet. The best way to account for these sources of error is to brainstorm with
your peers about all the factors that could possibly affect your result. This brainstorm
should be done before beginning the experiment so that arrangements can be made to
account for the confounding factors before taking data. Sometimes a correction can be
applied to a result after taking data, but this is inefficient and not always possible.

Environmental factors (systematic or random) - Be aware of errors introduced by your immediate working environment. You may need to account for, or protect your experiment from, vibrations, drafts, changes in temperature, electronic noise or other effects from nearby apparatus.
Instrument resolution (random) - All instruments have finite precision that limits the
ability to resolve small measurement differences. For instance, a meter stick cannot
distinguish distances to a precision much better than about half of its smallest scale
division (0.5 mm in this case).

Failure to calibrate or check zero of instrument (systematic) - Whenever possible,
the calibration of an instrument should be checked before taking data. If a calibration
standard is not available, the accuracy of the instrument should be checked by
comparing with another instrument that is at least as precise, or by consulting the
technical data provided by the manufacturer. When making a measurement with a
micrometer, electronic balance, or an electrical meter, always check the zero reading
first. Re-zero the instrument if possible, or measure the displacement of the zero
reading from the true zero and correct any measurements accordingly. It is a good idea
to check the zero reading throughout the experiment.
Physical variations (random) - It is always wise to obtain multiple measurements over
the entire range being investigated. Doing so often reveals variations that might otherwise
go undetected. If desired, these variations may be cause for closer examination, or they
may be combined to find an average value.

Parallax (systematic or random) - This error can occur whenever there is some distance
between the measuring scale and the indicator used to obtain a measurement. If the
observer's eye is not squarely aligned with the pointer and scale, the reading may be too
high or low (some analog meters have mirrors to help with this alignment).

Instrument drift (systematic) - Most electronic instruments have readings that drift over
time. The amount of drift is generally not a concern, but occasionally this source of error
can be significant and should be considered.

Lag time and hysteresis (systematic) - Some measuring devices require time to reach
equilibrium, and taking a measurement before the instrument is stable will result in a
measurement that is generally too low. The most common example is taking temperature
readings with a thermometer that has not reached thermal equilibrium with its
environment. A similar effect is hysteresis where the instrument readings lag behind and
appear to have a "memory" effect as data are taken sequentially moving up or down
through a range of values. Hysteresis is most commonly associated with materials that
become magnetized when a changing magnetic field is applied.

Estimation of Error
• When attempting to estimate the error of a measurement, it is often
important to determine whether the sources of error are systematic or
random. A single measurement may have multiple error sources, and these
may be mixed systematic and random errors.

• To identify a random error, the measurement must be repeated a small number of times. If the observed value changes apparently randomly with
each repeated measurement, then there is probably a random error. The
random error is often quantified by the standard deviation of the
measurements. Note that more measurements produce a more precise
measure of the random error.

• To detect a systematic error is more difficult. The method and apparatus should be carefully analysed. Assumptions should be checked. If possible, a
measurement of the same quantity, but by a different method, may disclose
the existence of a systematic error. A systematic error may be specific to the
experimenter. Having the measurement repeated by a variety of
experimenters would test this.
Discrepancy
How to use uncertainties in experimental reports?

First, if two measurements of the same quantity disagree, we say there is a discrepancy. Numerically, we define the discrepancy between two measurements as their difference.
More specifically, each of the two measurements consists of a best estimate and an
uncertainty, and we define the discrepancy as the difference between the two best
estimates.
For example, if two students measure the same resistance as follows
Student A: 15 ± 1 ohms
and
Student B: 25 ± 2 ohms,
their discrepancy is 25 - 15 = 10 ohms.

Recognize that a discrepancy may or may not be significant. The two measurements
just discussed are illustrated in Figure (a), which shows clearly that the discrepancy
of 10 ohms is significant because no single value of the resistance is compatible with
both measurements. Obviously, at least one measurement is incorrect, and some
careful checking is needed to find out what went wrong.
Figure (a): Two measurements of the
same resistance. Each measurement
includes a best estimate, shown by a black
dot, and a range of probable values, shown
by a vertical error bar. The discrepancy
(difference between the two best
estimates) is 10 ohms and is significant
because it is much larger than the
combined uncertainty in the two
measurements. Almost certainly, at least
one of the experimenters made a mistake.

Suppose, on the other hand, two other students had reported these results:

Student C: 16 ± 8 ohms and Student D: 26 ± 9 ohms.

Here again, the discrepancy is 10 ohms, but in this case the discrepancy is insignificant because, as shown in Figure (b), the two students' margins of error overlap comfortably and both measurements could well be correct.
Figure (b): Two different measurements of
the same resistance. The discrepancy is
again 10 ohms, but in this case it is
insignificant because the stated margins of
error overlap. There is no reason to doubt
either measurement (although they could
be criticized for being rather imprecise).
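The two comparisons above can be sketched in code. A minimal Python sketch; the function names and the non-overlap criterion for significance are my own illustrative choices, not from the text:

```python
def discrepancy(best_a, best_b):
    # Discrepancy: the difference between the two best estimates.
    return abs(best_b - best_a)

def is_significant(best_a, err_a, best_b, err_b):
    # Illustrative criterion: the discrepancy is significant if the two
    # ranges (best - err, best + err) do not overlap, so that no single
    # value is compatible with both measurements.
    lo = max(best_a - err_a, best_b - err_b)
    hi = min(best_a + err_a, best_b + err_b)
    return lo > hi

# Students A and B: 15 +/- 1 ohms and 25 +/- 2 ohms
print(discrepancy(15, 25))            # 10 ohms
print(is_significant(15, 1, 25, 2))   # True: [14, 16] and [23, 27] do not overlap

# Students C and D: 16 +/- 8 ohms and 26 +/- 9 ohms
print(is_significant(16, 8, 26, 9))   # False: [8, 24] and [17, 35] overlap
```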
Accuracy and Precision:

 Accuracy refers to the closeness of a measured value to a standard or known value.
For example, if in lab you obtain a weight measurement of 3.2 kg for a
given substance, but the actual or known weight is 10 kg, then your
measurement is not accurate. In this case, your measurement is not
close to the known value.

 Precision refers to the closeness of two or more measurements to each other.
Using the example above, if you weigh a given substance five times,
and get 3.2 kg each time, then your measurement is very precise.
Precision is independent of accuracy. You can be very precise but
inaccurate, as described above. You can also be accurate but imprecise.

 For example, if, on average, your measurements for a given substance are close to the known value, but the measurements are far from each other, then you have accuracy without precision.

 A good analogy for understanding accuracy and precision is to imagine a basketball player shooting baskets. If the player shoots with accuracy, his
aim will always take the ball close to or into the basket. If the player
shoots with precision, his aim will always take the ball to the same
location which may or may not be close to the basket. A good player will
be both accurate and precise by shooting the ball the same way each time
and each time making it in the basket.
Accuracy vs. Precision

• Definition: Accuracy is the degree of conformity and correctness of something when compared to a true or absolute value. Precision is a state of strict exactness: how often something is strictly exact.
• Measurements: Accuracy can be judged from a single factor or measurement; precision requires multiple measurements or factors.
• Relationship: Something can be accurate on occasion as a fluke; for something to be consistently and reliably accurate, it must also be precise. Results can be precise without being accurate; alternatively, results can be precise and accurate.
• Uses: Accuracy matters in day-to-day life and elsewhere; precision is emphasized in physics, chemistry, engineering, statistics, and so on.

Rules for Rounding Off Numbers

Rule One. Determine what your rounding digit is and look to the right of it. If that digit is 0, 1, 2, 3, or 4, do not change the rounding digit. All digits to the right of the rounding digit become 0 (or are simply dropped if they fall after the decimal point).

Rule Two. Determine what your rounding digit is and look to the right of it. If that digit is 5, 6, 7, 8, or 9, round the rounding digit up by one. All digits to the right of the rounding digit become 0 (or are dropped if they fall after the decimal point).
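These two rules describe round-half-up. Python's built-in round() uses round-half-to-even instead, so a sketch with the standard decimal module is needed to reproduce the rules exactly (the helper name is my own):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value, places):
    # Rule One: a following digit of 0-4 leaves the rounding digit unchanged.
    # Rule Two: a following digit of 5-9 rounds the rounding digit up by one.
    quantum = Decimal(10) ** -places
    return float(Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP))

print(round_half_up(2.344, 2))  # 2.34 (Rule One: next digit is 4)
print(round_half_up(2.345, 2))  # 2.35 (Rule Two: next digit is 5)
```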
Error bar

When representing data as a graph, we represent uncertainty in the data points by adding error bars. We can see the uncertainty range by checking the length of the error bars in each direction.

Error bars only need to be used when the uncertainty in one or both of the plotted quantities is significant. Error bars are not required for trigonometric and logarithmic functions.

To add error bars to a point on a graph, we simply take the uncertainty range
(expressed as "± value" in the data) and draw lines of a corresponding size above
and below or on each side of the point depending on the axis the value
corresponds to.

What is an error bar and how can you estimate one?

An error bar tells you how closely your measured result should be matched by the true result.

An error bar estimates how confident you are in your own measurement or result.

When plotting points (x, y) with known uncertainties on a graph, we plot the
average, or mean, value of each point and indicate its uncertainty by means of
“error bars.” Figure 1 shows how the upper error bar at ymax and the lower error
bar at ymin are plotted. If the quantity x also has significant uncertainty, one adds
horizontal error bars (a vertical error bar rotated 90°) with the rightmost error bar
at position xmax and the leftmost error bar at position xmin.

Figure 1: Diagram of error bars showing uncertainties in the values of the x- and y-coordinates at the point (xavg, yavg).
Significant Figures (digits):
Significant figures are the digits in a value that are known with some degree of confidence. The more significant figures a value has, the more certain the measurement; as the precision of a measurement increases, so does the number of significant figures.

RULES FOR SIGNIFICANT FIGURES

1. All non-zero digits are significant.
   549 has three significant figures; 1.892 has four significant figures.

2. Zeros between non-zero digits are significant.
   4023 has four significant figures; 50014 has five significant figures.

3. Zeros to the left of the first non-zero digit are not significant.
   0.000034 has only two significant figures (this is more easily seen if it is written as 3.4×10⁻⁵); 0.001111 has four significant figures.

4. Trailing zeros (the rightmost zeros) are significant when there is a decimal point in the number. For this reason it is important to consider when a decimal point is used and to keep the trailing zeros to indicate the actual number of significant figures.
   400. has three significant figures; 2.00 has three significant figures; 0.050 has two significant figures.

5. Trailing zeros are not significant in numbers without decimal points.
   470,000 has two significant figures; 400 or 4×10² indicates only one significant figure. (To indicate that the trailing zeros are significant, a decimal point must be added: 400. has three significant digits and is written as 4.00×10² in scientific notation.)

6. Exact numbers have an infinite number of significant digits, but they are generally not reported. Defined numbers also have an infinite number of significant digits.
   If you count 2 pencils, then the number of pencils is 2.000... The number of centimeters per inch (2.54) has an infinite number of significant digits, as does the speed of light (299792458 m/s).
Fractional Uncertainties
The uncertainty δx in a measurement,

(measured value of x) = x_best ± δx,

indicates the reliability or precision of the measurement.

However, the uncertainty δx by itself does not tell the whole story. An uncertainty of one inch in a distance of one mile would indicate an unusually precise measurement, whereas an uncertainty of one inch in a distance of three inches would indicate a rather crude estimate.

Obviously, the quality of a measurement is indicated not just by the uncertainty δx but also by the ratio of δx to |x_best|, which leads us to consider the fractional uncertainty,

fractional uncertainty = δx / |x_best|.

(The fractional uncertainty is also called the relative uncertainty.) In this definition, the symbol |x_best| denotes the absolute value of x_best.
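The mile-versus-three-inches comparison works out as follows (a small Python sketch; the function name is my own):

```python
def fractional_uncertainty(best, delta):
    # fractional (relative) uncertainty = (uncertainty) / |best estimate|
    return delta / abs(best)

INCHES_PER_MILE = 63360  # 5280 ft x 12 in

# one inch in one mile: an unusually precise measurement
print(fractional_uncertainty(INCHES_PER_MILE, 1))  # about 1.6e-05

# one inch in three inches: a rather crude estimate
print(fractional_uncertainty(3, 1))                # about 0.33
```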
Random Error and Systematic Error
All experimental uncertainty is due to either random errors or systematic
errors.

Random errors are statistical fluctuations (in either direction) in the measured data due to the precision limitations of the measurement device.

Random errors usually result from the experimenter's inability to take the same measurement in exactly the same way to get exactly the same number.
Or
Random errors in experimental measurements are caused by unknown and unpredictable changes in the experiment.
Or
Random errors cause a measurement to be larger than the true value as often as smaller.

These changes may occur in the measuring instruments or in the environmental conditions.
Examples of causes of random errors are:
• electronic noise in the circuit of an electrical instrument,
• irregular changes in the heat loss rate from a solar collector due to changes in the wind.

Systematic errors, by contrast, are reproducible inaccuracies that are consistently in the same direction.
Or
Systematic errors in experimental observations usually come from the measuring instruments.
Or
Systematic errors cause the measurement always to differ from the truth in the same way.

They may occur because:
• There is something wrong with the instrument or its data handling system, or
• The instrument is wrongly used by the experimenter.

Systematic errors are often due to a problem which persists throughout the
entire experiment.
The main differences between these two error types are:

Random errors are (like the name suggests) completely random. They are
unpredictable and can’t be replicated by repeating the experiment again.

Systematic Errors produce consistent errors, either a fixed amount (like 1 lb) or a
proportion (like 105% of the true value). If you repeat the experiment, you’ll get
the same error.

Preventing Errors

Random error can be reduced by:
• Using an average measurement from a set of measurements, or
• Increasing the sample size.

It is difficult to detect, and therefore prevent, systematic error. In order to avoid these types of error, know the limitations of your equipment and understand how the experiment works. This can help you identify areas that may be prone to systematic errors.
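The first bullet above can be demonstrated by simulation: averaging many readings that scatter randomly about the true value pulls the estimate close to that value. A hedged Python sketch; the true value, spread, and sample size are made up for illustration:

```python
import random

random.seed(42)        # fixed seed so the run is repeatable
true_value = 9.81      # hypothetical quantity being measured
spread = 0.05          # hypothetical random scatter of a single reading

# simulate 400 readings with purely random (Gaussian) error
readings = [true_value + random.gauss(0, spread) for _ in range(400)]
average = sum(readings) / len(readings)

# the average lands much closer to the true value than a single
# reading typically does (its scatter shrinks like spread / sqrt(N))
print(abs(average - true_value))
```

Note this only helps against random error; a systematic offset would survive the averaging unchanged.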
Mean / Average
The mean is the average of all numbers and is sometimes called the arithmetic
mean. To calculate mean, add together all of the numbers in a set and then divide
the sum by the total count of numbers.

Suppose we make N measurements of the quantity x (all using the same equipment and procedures) and find the N values

x₁, x₂, … , x_N.

Once again, the best estimate for x is usually the average of x₁, x₂, … , x_N. That is,

x_best = x̄,

where

x̄ = (x₁ + x₂ + ⋯ + x_N) / N = (Σ xᵢ) / N.

In the last line, I have introduced the useful sigma notation, according to which

Σᵢ₌₁ᴺ xᵢ = Σ xᵢ = x₁ + x₂ + ⋯ + x_N.

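As a concrete Python sketch (the readings are made-up numbers):

```python
def mean(values):
    # x_bar = (x1 + x2 + ... + xN) / N
    return sum(values) / len(values)

# five hypothetical repeated measurements of the same quantity
readings = [72.1, 71.9, 72.3, 72.0, 71.8]
print(mean(readings))  # best estimate x_best, about 72.02
```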
Standard deviation (SD)
The standard deviation of the measurements is an estimate of the average
uncertainty of the individual measurements.

The standard deviation is given by

σ_x = √[ (1/(N − 1)) Σ dᵢ² ] = √[ (1/(N − 1)) Σ (xᵢ − x̄)² ].

The mean x̄ is our best estimate of the quantity x, so it is natural to consider the difference xᵢ − x̄ = dᵢ.

This difference, often called the deviation (or residual) of xᵢ from x̄, tells us how much the i-th measurement xᵢ differs from the average x̄. If the deviations dᵢ = xᵢ − x̄ are all very small, our measurements are all close together and presumably very precise. If some of the deviations are large, our measurements are obviously not so precise.

Notice that the deviations are not (of course) all the same size; dᵢ is small if the i-th measurement xᵢ happens to be close to x̄, but dᵢ is large if xᵢ is far from x̄. Notice also that some of the dᵢ are positive and some negative, because some of the xᵢ are bound to be higher than the average x̄, and some are bound to be lower.

To estimate the average reliability of the measurements, we might naturally try averaging the deviations dᵢ. However, the definition of the average x̄ ensures that dᵢ = xᵢ − x̄ is sometimes positive and sometimes negative in just such a way that the average deviation d̄ is zero.

The average of the deviations is therefore not a useful way to characterize the reliability of the measurements.

The best way to avoid this annoyance is to square all the deviations, which will create a set of positive numbers, and then average these numbers.

Taking the average of the squared deviations dᵢ² (with the same N − 1 denominator used above), we obtain

σ_x² = (1/(N − 1)) Σ dᵢ².

σ_x² is known as the variance of the measurements.

If we then take the square root of the result, we obtain a quantity with the same units as x itself. This number is called the standard deviation σ_x.
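Putting the definition together in Python (sample denominator N − 1; the readings are made-up numbers):

```python
import math

def sample_std(values):
    n = len(values)
    xbar = sum(values) / n                    # the mean x_bar
    deviations = [x - xbar for x in values]   # d_i = x_i - x_bar
    variance = sum(d * d for d in deviations) / (n - 1)
    return math.sqrt(variance)                # sigma_x

readings = [72.1, 71.9, 72.3, 72.0, 71.8]
print(sample_std(readings))  # about 0.192
```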
The Standard Deviation of the Mean (SDOM) / Standard Error

If x₁, x₂, … , x_N are the results of N measurements of the same quantity x, then, as we have seen, our best estimate for the quantity x is their mean x̄. We have also seen that the standard deviation σ_x characterizes the average uncertainty of the separate measurements x₁, x₂, … , x_N.

The uncertainty in the final answer x_best = x̄ is given by the standard deviation σ_x divided by √N. This quantity is called the standard deviation of the mean (SDOM), or standard error, and is denoted σ_x̄:

σ_x̄ = σ_x / √N.

Thus, based on the N measured values x₁, x₂, … , x_N, we can state our final answer for the value of x as

(value of x) = x_best ± δx,

where x_best = x̄, the mean of x₁, x₂, … , x_N, and δx is the standard deviation of the mean,

δx = σ_x̄ = σ_x / √N.

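Continuing the same style of sketch, the standard error divides the sample standard deviation by √N (the readings are made-up numbers):

```python
import math

def standard_error(values):
    # SDOM = sigma_x / sqrt(N)
    n = len(values)
    xbar = sum(values) / n
    sigma = math.sqrt(sum((x - xbar) ** 2 for x in values) / (n - 1))
    return sigma / math.sqrt(n)

readings = [72.1, 71.9, 72.3, 72.0, 71.8]
xbar = sum(readings) / len(readings)
print(f"value of x = {xbar:.2f} +/- {standard_error(readings):.2f}")
```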
Rounding off to the correct number of significant figures
during mathematical calculations
How Many Digits to Use….
The question of greatest practical importance is how many digits to include in your final answer. This matters because the number of digits you include in your answer shows the reader the precision of the data leading to the answer, and the accuracy of the answer. It might be useful to read this section again after reading through the following sections, which explain how to determine the number of significant digits.
Addition and Subtraction
When adding and subtracting numbers, the rules of significant figures require that the number of places after the decimal point in the answer equals the smallest number of decimal places among the terms in the sum. (Treat subtraction as adding the same number with a negative sign in front of it.) If some of the numbers have no digits after the decimal point, use the same basic rule, but don't record any digits to the right of the last significant digit in the least precise number. Hopefully, some examples will clarify these rules.

  2355.2342     15600.00      15600       13.7
 +   23.24     +  172.49    +  172.49    +  1.3
 ---------      --------     ---------    -----
  2378.47      15772.49       15800       15.0
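The first and last columns above can be reproduced with Python's built-in round(), which keeps a fixed number of decimal places (it rounds halves to even, which does not matter in these examples):

```python
# keep 2 decimal places, the fewer of 4 (2355.2342) and 2 (23.24)
print(round(2355.2342 + 23.24, 2))   # 2378.47

# keep 1 decimal place, as in both 13.7 and 1.3
print(round(13.7 + 1.3, 1))          # 15.0
```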

Multiplication and Division

When multiplying and dividing numbers, the answer keeps the same number of significant figures as the factor with the fewest significant figures. Some examples:

  13.1       13.10       13.100       15310       1.00
 x 2.25     x 2.25      x 2.2500     x 2.3       x 10.04
 -------     ------      --------     -------     -------
  29.5        29.5        29.475       35000       10.0
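In Python, the "g" format specifier rounds to a given number of significant figures (the "#" flag keeps significant trailing zeros), which reproduces the first and last columns above:

```python
# 3 significant figures (both 13.1 and 2.25 have three)
print(f"{13.1 * 2.25:#.3g}")    # 29.5

# 3 significant figures (1.00 has three, 10.04 has four)
print(f"{1.00 * 10.04:#.3g}")   # 10.0
```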

Why Multiplication and Addition Have Different Rules?

When you add two numbers, you add their uncertainties, more or less. If one of the numbers is smaller than the uncertainty of the other, it doesn't make much of a difference to the value (and hence, uncertainty) of the final result. Thus it is the location of the digits, not the number of digits, that is important.

When you multiply two numbers, you more or less multiply the uncertainties. Thus it
is the percentage by which you are uncertain that is important -- the uncertainty in
the number divided by the number itself. This is given roughly by the number of
digits, regardless of their placement in terms of powers of ten. Hence the number of
digits is what is important.
