Introduction To Measurement Uncertainty and Error Analysis
Precision refers to the number of significant figures in a measurement. In general, the better the resolution of our tools or the more samples we can measure, the better our precision. Accuracy, meanwhile, refers to how “realistic” our measurements are, that is, how closely they reflect the true or accepted value. Systematic errors, like poorly calibrated instruments, mean that we might achieve results with good precision but poor accuracy.
The percentage error,

%Error = [(Measured − Accepted)/Accepted] × 100%,

can be useful to convey the accuracy of our result, showing how closely it matches accepted values or theoretical predictions. Additionally, our uncertainty (±0.05 m/s²) is deemed “insignificant” since, within the range of our uncertainty, our result overlaps the accepted value.
Given our apparently successful result, we might therefore extend our study to objects of other masses, or go further and study objects subject to additional forces, like air resistance.

If, on the other hand, we had measured the acceleration due to gravity to be 7.5±0.7 m/s², then we must report that our result is “inconsistent” with the commonly accepted value of the acceleration due to gravity. In this second case, the discrepancy between our result and the accepted value is “significant” because our result is quite different from the expected value, even at the extremes of our range in uncertainty. The percentage error when compared against the accepted value is approximately 23%. This situation would probably lead us to re-evaluate our calculations (were mistakes made?), our methodology (does our experiment have flaws we overlooked?), and our tools (are systematic problems affecting our results, e.g. slow/fast timers, or poorly calibrated length scales?). We may need to rethink our experiment. Or perhaps we find ourselves in the interesting situation that our experiment is correct and we have discovered a flaw in the theory we’re investigating (many unexplained observations and results have eventually led to historic scientific revolutions!).
Finally, we should consider a case where our experiment tells us that the acceleration due to gravity is, say, 11±2 m/s². In this case our result is “consistent” with the accepted value of 9.80665 m/s²; however, our percentage error (~12%) and uncertainty (±2 m/s²) are both relatively large. In some scenarios, depending on the experimental conditions, this might be a completely appropriate and acceptable outcome, but it might also indicate that improvements in methodology or instrumentation are required to better our precision and confirm the accuracy of our result. We would then need to consider whether efforts at improving our experiment would be warranted and/or successful.
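To make these checks concrete, here is a minimal Python sketch (the helper functions are our own illustration, not part of the lab procedure; the numbers are the example results discussed above) that computes the percentage error and tests whether a result overlaps the accepted value within its uncertainty:

```python
# Percentage error and a simple consistency check, using the example
# results discussed above. Helper names are ours, for illustration only.

G_ACCEPTED = 9.80665  # accepted acceleration due to gravity, m/s^2

def percent_error(measured, accepted):
    """Percentage error of a measured value relative to an accepted value."""
    return (measured - accepted) / accepted * 100.0

def is_consistent(measured, uncertainty, accepted):
    """True if the accepted value lies within measured +/- uncertainty."""
    return abs(measured - accepted) <= uncertainty

for value, unc in [(7.5, 0.7), (11.0, 2.0)]:
    print(f"{value} +/- {unc} m/s^2: "
          f"%error = {percent_error(value, G_ACCEPTED):.1f}%, "
          f"consistent = {is_consistent(value, unc, G_ACCEPTED)}")

# 7.5 +/- 0.7 m/s^2 is inconsistent (about -23.5%);
# 11 +/- 2 m/s^2 is consistent (about +12.2%).
```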
2. Commonalities Between Experiments

While most of our experiments will touch on a variety of unique and apparently unrelated topics, they will generally have in common that we will compare a result to an accepted value or validate a result predicted by theory, as in the example scenarios above. This serves as an effective way to learn to experiment, and eventually students may find themselves involved in research investigating entirely unknown and new science.
3. Measurements and Uncertainty

Let us now begin to formalize our approach to dealing with experimental uncertainties², mathematically speaking. We have three important tasks ahead:

² Note: in an experimental context, the term “uncertainty” is often used interchangeably with the term “error.” Neither term refers to an error on the part of the researcher, but rather to the quantifiable limitations of the experimental tools or methods. We will usually use the term “uncertainty” in this document, but you might encounter both in your studies.
1) We must establish how uncertainty arises in experiments. We’ll see that in most experiments there are various factors contributing to uncertainty, and depending on the scenario, some may be significant while others might be negligible.

2) We must learn to quantify uncertainty in our experiments. To some extent this will require common sense, both to minimize uncertainties where possible and to assign them “reasonable” values. We’ll address the three main types of uncertainties as we proceed.

3) We must finally determine how to correctly carry, or “propagate,” these uncertainties through our calculations. Our experiments will often require directly measuring two (or more) properties of a system in order to calculate a third, dependent property (e.g. by hanging a known mass from a spring, we can “indirectly” determine the spring constant, k = F/x, if we can “directly” measure the displacement, x, caused by the known force, F = mg). We will need to know how the uncertainties in the measured values carry through to affect the final result.
Notation

In your classes you should notice that you encounter examples almost always dealing in numbers that are exactly defined. As such, the solution to a problem usually has a single “correct” answer: a+b=c, or xy=z. This is a convenient way to teach theory and to practice calculations. However, this tendency to deal in exact numbers risks that students develop an unrealistically “clean” perception of the science they’re studying. In the practical reality of experimental physics we must deal with the fact that measurements are never known with absolute certainty. No matter the effort applied, even the best experimentalist must contend with unavoidable limitations in how confidently they can express a measurement.

Therefore when we measure a mass, we require a notation to capture both our measurement and also this “uncertainty” in our measurement. In real terms we must express our result not as an exact value “m,” but rather as a best effort at measurement of m, with an uncertainty recorded to indicate our confidence in our measurement. That is, the preferred form for recording m would be m = m(best measurement) ± Δm, or, even more succinctly, we simply record the mass as m±Δm. Similarly, lengths would be expressed as L±ΔL, times as t±Δt, momentum as p±Δp, etc. This serves as a useful notation when dealing with individual measurements and also with larger sets of data, and in all labs this will be our preferred way of recording any measurement.
4. Sources of Uncertainty

Uncertainty in experiments arises from three basic sources: reading uncertainties, random uncertainties, and systematic uncertainties. As scientists, our job is to recognize where each type enters into an experiment, and to minimize their influence to the best of our ability. Ideally we can make two out of the three negligibly small compared against their more influential counterpart (that is, we identify which of the three sources of uncertainty most affects our measurement). We then simply quantify that source of uncertainty and propagate it through any calculations to arrive at our final result.
4.1 Reading Uncertainty

Frequently we encounter situations where our instrumentation is a limiting factor in our ability to make a measurement. Because most instruments have an incremented scale, when a measurement falls between the smallest marked increments (“ticks”) it’s often appropriate and acceptable to estimate, or “interpolate,” this final digit. For example, if a ruler has millimeter markings, the length of an object might be observed to be slightly greater than, say, 45 mm but definitely less than 46 mm. In this case we can look closely and, by our best effort, we decide that the object’s length is very close to 45.4 mm. However, we’re also cognizant that because we’re essentially estimating our final digit we may be slightly in error, and so we include in our recorded measurement an estimate of our uncertainty. By common convention, we often choose to use half the value of the smallest increment on our measuring instrument as our estimate of the uncertainty. So in the case of our ruler measurement with millimetre markings, we would record our result as 45.4±0.5 mm.

Figure 1. Measurement interpolation.
Note though that while the “half the smallest division” convention is generally a good rule of thumb, we may still encounter cases where it would overestimate our uncertainty significantly. This tends to occur if our instrument’s divisions are widely spaced. If a ruler provided only centimeter divisions, for instance, then we might quite reasonably estimate our measurement to an uncertainty of ±0.1 cm rather than the more conventional ±0.5 cm. To use ±0.5 cm simply by convention would in this case be overly imprecise, which would affect any subsequent calculations. We always need to apply good judgment in order to achieve best results.
Parallax

When reading measuring instruments with scale markings we must also be careful to read from directly in front of the instrument’s scale in order to avoid “parallax.” Parallax can occur if the experimenter moves their head to either side of the plane perpendicular to the scale and the object. This can cause a visual angular misalignment between the object and the measuring scale, such that the experimenter misreads or mis-interpolates their measurement.
Figure 2. Parallax adds reading error. Measuring tools should be viewed “straight on.”

Digital Displays

Electronic measuring instruments frequently come equipped with digital readouts or will display results to a computer. As such, we need to be able to record our results correctly when using these tools.
One needs to be somewhat careful in deciding on the appropriate uncertainty to record when using a digital display. For example, if an ammeter display reads, say, 10.51 mA then it could be reasonable to believe the worst case uncertainty would be plus or minus the smallest digit, i.e. 10.51±0.01 mA. However, this may not always be the case. Depending on the equipment, a more representative recording of the measurement might actually be 10.510±0.005 mA (you would need to check the instrument labeling or manual to confirm the applicable convention). And with some instrumentation, you might find a label indicating that the reading is “accurate to ±1% of full scale” (such that in the case of our ammeter, if the full scale covers 20 mA, then our uncertainty would be 1% of 20 mA, or 0.2 mA, leading to a reading of 10.5±0.2 mA).
Despite these variations in reading, with a little effort you’ll be able to identify the most appropriate way to record your measurement and its uncertainty.
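For concreteness, the three reading conventions just described can be compared in a small Python sketch (an illustration only; the numbers are the ammeter example from the text, and which convention applies to a given instrument must be confirmed from its labeling or manual):

```python
# Three common conventions for the uncertainty of a digital reading,
# using the 10.51 mA ammeter example from the text.

reading_mA = 10.51
full_scale_mA = 20.0   # assumed full-scale range of the meter

unc_last_digit = 0.01                   # +/- one count in the last displayed digit
unc_half_digit = 0.005                  # +/- half of the last displayed digit
unc_full_scale = 0.01 * full_scale_mA   # "accurate to +/- 1% of full scale"

print(f"{reading_mA:.2f} +/- {unc_last_digit} mA")
print(f"{reading_mA:.3f} +/- {unc_half_digit} mA")
print(f"{reading_mA:.1f} +/- {unc_full_scale} mA")
```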
Finally, we mention that in some cases the reading from a digital display may be “noisy” and fluctuate to some degree. (This can and does occur for analog displays as well, as when a needle on the display jumps about erratically.) This can be a result of random variations influencing the experiment or the measuring instrument itself, such as electrical noise, thermal fluctuations, etc. We’ll see next that random fluctuations, if non-negligible, may require us to increase our estimates of uncertainty to values larger than those justified by “reading uncertainty” alone (that is, our “half the smallest division” convention or “±1 digit” on a digital display may significantly underestimate the realistic value of our uncertainty). We’ll also see that for such cases, if we can repeat the measurement several times, statistical analysis will provide a more methodical mathematical way of determining our measurement and uncertainty.
4.2 Random Uncertainty

While “reading uncertainty” is applicable and useful when the object or property we are measuring is reasonably well defined, in many cases the “thing” we are measuring is naturally “fuzzy.” In another sense, if we have difficulty defining when or where our measurement should start, repeating the measurement may give slightly varying results; there is an imprecision in our technique that is difficult to eliminate, so despite best efforts, some of our measurements are slightly too large or too small. In such cases the instrument reading uncertainty may become negligible compared to larger “random” fluctuations in the object or property being measured. As such, where random uncertainties dominate the measurement we must determine how to assign a more appropriate best estimate of uncertainty. There are a few cases where such random uncertainties become dominant in a measurement.
Problem of definition

Consider the difficulty in using a lens to focus a sharp image of an object onto a screen. According to the “thin lens equation” we expect that

1/d + 1/d' = 1/f,

where d is the distance between object and lens, d' is the distance between image and lens, and f is the lens focal length.
If d is large, 1/d ≈ 0 and therefore f ≈ d', so the image location should equal the lens’s focal length. However, in practice, the image formed by the lens may look focused and sharp as the position of the screen is adjusted over many millimetres. This can be a case where the image distance is inherently difficult to define, so we encounter an unavoidable randomness in our selection of where we believe we observe the sharpest image location. As such, even though we might measure the distance between lens and image using a ruler with millimeter markings, a larger uncertainty of ±0.5 cm or even more might be the appropriate and more representative value. (You can see the difficulty for yourself by using a magnifying lens to focus the image from your window onto a paper screen in a darkened room.)
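As a quick numerical check of the thin-lens reasoning above, the following Python sketch solves the equation for d' (the focal length of 10 cm and the object distances are assumed values for illustration only):

```python
# Numerical check of the thin-lens reasoning above. The focal length and
# object distances are illustrative values, not measurements.

def image_distance(d, f):
    """Solve 1/d + 1/d' = 1/f for the image distance d'."""
    return 1.0 / (1.0 / f - 1.0 / d)

f = 10.0  # cm, assumed focal length
for d in [50.0, 200.0, 5000.0]:   # object distances in cm
    print(f"d = {d:6.0f} cm  ->  d' = {image_distance(d, f):.3f} cm")

# As d grows, d' approaches f, as argued above.
```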
Similarly, consider the case where we are trying to measure the period of a pendulum. Using a stopwatch we count out the time required for ten oscillations, and then divide accordingly. We might discover that the time required for ten oscillations is 12.62 s and conclude that the period of a single oscillation is 1.262±0.005 s, and we might be quite happy with our result. However, if we then repeat the measurement, perhaps we would find that in our second attempt ten oscillations take 12.28 s, so that one oscillation takes 1.228±0.005 s. Of course we notice the “significant” discrepancy between our results (they’re quite close, but their uncertainties don’t overlap) and become concerned. On closer consideration we probably realize that our ability to start the stopwatch at exactly the beginning of an oscillation and to stop it at exactly the end of an oscillation is somewhat error-prone. There is in fact a degree of randomness that affects how well we start and stop our timer. We repeat the measurement several more times and observe periods of 1.262, 1.228, 1.211, and 1.254 s. We realize we can average these readings such that our most likely oscillation period is

T = [1.262 + 1.228 + 1.211 + 1.254]/4 = 1.239 s,

and we could reasonably estimate the uncertainty to be ±0.03 s based on the range we see in our values, giving a best measurement of T = 1.24±0.03 s per oscillation. (Keep in mind that the stopwatch may not represent the only significant source of error in such an experiment, and we should remain mindful and observant lest we overlook another. Perhaps the way in which we set the pendulum into oscillation, its amplitude, or even air friction, influence the result. The possibilities are broad and very experiment-dependent.)
Statistical Determination of Uncertainty

In such experiments or studies where there are many samples available to measure, or where the same measurement can be made many times, statistical analysis allows us a more mathematical approach for calculating our best measurement and uncertainty value than the interpolation or best estimates we’ve opted to use thus far. Luckily, in many cases we’ll encounter in physics, we can indeed repeat our measurements. If we can take the same measurement several times, random uncertainties will cause some of our measurements to be slightly too large or slightly too small. The set of measurements will exhibit a “distribution” about the most probable “best value.” As a result, when averaged, we expect the random uncertainties to cancel out, providing a very good and representative final result. We can calculate the average, or “mean” value, x̄, as follows:

x̄ = (x₁ + x₂ + … + xₙ)/n = (Σᵢ xᵢ)/n.
We can gain a measure of the average spread of our data about the mean by calculating the “standard deviation,” σ, according to:

σ = √[ Σᵢ (xᵢ − x̄)² / (n − 1) ].

Figure 3: Probability distribution for a set of measurements subject to random uncertainties, showing mean value and standard deviation.
This standard deviation indicates the average separation of a data point from the mean. Note that as the number of data points recorded increases, this average value should remain relatively constant. Finally, as our measure of uncertainty in the mean value of our data set, we calculate the “standard deviation of the mean,” σₘ, given by:

σₘ = σ/√n.
Here we note that as we increase the number of measurements recorded, n, the standard deviation of the mean decreases (thus the precision with which we know x̄ increases). So by taking more measurements, we are able to increase our overall precision. Note though that while useful to a point, the technique can become time consuming and yields diminishing returns; for instance, to reduce σₘ by a factor of 10 we must increase the number of measurements by a factor of 100.

So, in summary, given a set of data from a repeatable measurement subject to random variations, our best result is given by:

x ± Δx = x̄ ± σₘ = x̄ ± σ/√n.
Given the option, we should always repeat a measurement several times and use the mean and standard deviation of the mean as our values of best measurement and uncertainty, respectively. This provides the most rigorous method available to confidently and precisely determine our result, whereas when forced to interpolate or estimate, the risk of significantly over- or under-estimating our measurement is much greater.
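As an illustration, the following Python sketch applies these formulas to the four pendulum periods quoted earlier (the code and variable names are ours; only the data values come from the text):

```python
import math

# Pendulum periods from the example above, in seconds.
periods = [1.262, 1.228, 1.211, 1.254]

n = len(periods)
mean = sum(periods) / n                                            # best measurement, x-bar
sigma = math.sqrt(sum((x - mean)**2 for x in periods) / (n - 1))   # standard deviation
sigma_m = sigma / math.sqrt(n)                                     # standard deviation of the mean

print(f"T = {mean:.3f} s, sigma = {sigma:.3f} s, sigma_m = {sigma_m:.3f} s")
```

With these four values this gives roughly T ≈ 1.239 s, σ ≈ 0.024 s, and σₘ ≈ 0.012 s, a somewhat tighter uncertainty than the rough range-based estimate used earlier, which reflects the point that the statistical approach is the more methodical one.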
4.3 Systematic Uncertainty

Systematic uncertainties are, unfortunately, often the most difficult to identify and account for in an experiment. Typical examples would be calibration errors in our instruments: for instance, a clock that runs too fast, or a ruler whose markings are stretched or scaled incorrectly. If we acquired a set of measurements using a stopwatch that consistently runs too slowly, for instance, we would of course still encounter the random fluctuations that govern the precision of our estimate (or statistical calculation) of uncertainty. But rather than acquiring a set of data distributed about the true value, all measurements would skew slightly to smaller readings than should properly be the case. The result might be a precise-looking result, but one that is ultimately inaccurate.
Similarly, subtle effects like air pressure or temperature changes in the laboratory might have significant influence on the outcome of measurements, unbeknownst to the experimenter. Even experimenter bias might introduce systematic uncertainty. Think of parallax, described earlier as the tendency to misread an instrument’s scale if viewed from an angle. Normally the experimenter would take readings from directly in front of the scale, but could be forgiven for introducing small random variations if the head or eye position shifts slightly left or right (with relatively equal tendency). To consistently read the scale from an angle always left of straight on would tend to skew the readings. This bias, or ones like it, should be identified and corrected.
Overall, it can be quite difficult to compensate for systematic uncertainties. Better-calibrated tools (or recalibrating the tools available against certified calibration standards) can improve results, as can attention to details, like parallax, that might be surprisingly influential.
5. Propagation of Uncertainty

As you can see, recording a good and reliable measurement, though not necessarily difficult, does require awareness of where one risks over- or under-estimating uncertainty. Interpolating or estimating random variations is sometimes necessary, but if done badly can significantly misrepresent the actual uncertainty, giving a result with unrealistic precision. Statistical analysis provides the most methodical and mathematically rigorous approach available for identifying a reliable and precise mean value (best measurement) and standard deviation of the mean (uncertainty). However, systematic uncertainties, if not identified and minimized, could still render a highly precise effort at a measurement completely inaccurate. Now that we’re aware of these possible pitfalls, though, we can endeavor to be both precise and accurate, which is worthwhile if we want our experiments to be successful.

So what we have achieved thus far is the knowledge needed to acquire a set of good measurements. We should be able to minimize or eliminate avoidable sources of error, and we can therefore record our results with realistic uncertainties.
5.1 Uncertainty Propagation for Addition and Subtraction

Often in experiments we’ll be taking direct measurements and performing calculations to determine (indirectly) a property of interest. Whereas in our “clean” textbook examples a+b=c or xy=z, we must now develop a formalism for handling the more realistic calculations where, for instance,

(a ± Δa) + (b ± Δb) = (c ± Δc)   and   (x ± Δx)(y ± Δy) = (z ± Δz).

We begin by deriving Δc for the case of addition of two numbers that include uncertainties. If we consider the extremes a+Δa and b+Δb, then we can regroup as follows:

(a + Δa) + (b + Δb) = (a + b) + (Δa + Δb) = c + Δc.
Similar consideration applies to the opposite extreme, where we use a−Δa and b−Δb:

(a − Δa) + (b − Δb) = (a + b) − (Δa + Δb) = c − Δc.

Thus, in the case of addition,

(a ± Δa) + (b ± Δb) = (a + b) ± (Δa + Δb) = (c ± Δc),

and we see clearly that our uncertainties can be expressed as Δc ~ Δa + Δb. An identical result can be derived for subtraction, where (a ± Δa) − (b ± Δb) = (c ± Δc). We should note that this expression “slightly” overestimates Δc because statistically our average uncertainties Δa and Δb will be somewhat less than the limiting value we used in our illustration. We instead use the expression

Δc = √(Δa² + Δb²).
We can see that the expression is mathematically equivalent to the Pythagorean theorem governing the lengths of the sides of right angle triangles. We can therefore appreciate how the calculated uncertainty, Δc, is always less than the sum of the added uncertainties (i.e. the hypotenuse is always less than the sum of the sides of a right triangle).
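A small numerical illustration of adding in quadrature (a Python sketch; the lengths below are made-up values, not data from any experiment):

```python
import math

# (a ± Δa) + (b ± Δb) = (c ± Δc), with Δc combined in quadrature.
# The lengths below are made-up values for illustration.
a, da = 45.4, 0.5   # mm
b, db = 12.2, 0.5   # mm

c = a + b
dc = math.sqrt(da**2 + db**2)   # quadrature: note dc < da + db

print(f"c = {c:.1f} +/- {dc:.1f} mm")   # c = 57.6 +/- 0.7 mm
```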
Figure 4. When uncertainties are added, the resulting uncertainty is calculated by “quadrature.”

5.2 Uncertainty Propagation for Multiplication and Division

Whereas in the case of addition and subtraction we dealt with “absolute” uncertainties, in the case of multiplication and division we need to deal in “fractional” uncertainties. Fractional uncertainties are expressed in the form Δa/|a|, where |a| is the “absolute value” of a. We follow a similar approach to our derivation of the propagation of uncertainty in the case of addition. Consider the product

(x ± Δx)(y ± Δy) = (z ± Δz).
As before, we consider the uncertainty extremes, where x is given by x+Δx and y is given by y+Δy:

(x + Δx)(y + Δy) = xy(1 + Δx/x)(1 + Δy/y)
                 = xy(1 + Δx/x + Δy/y + ΔxΔy/(xy))
                 ≈ xy[1 + (Δx/x + Δy/y)]

(since Δx and Δy should be small, ΔxΔy ≈ 0).
Similarly, considering the limiting case of x−Δx and y−Δy,

(x − Δx)(y − Δy) = xy(1 − Δx/x)(1 − Δy/y)
                 = xy(1 − Δx/x − Δy/y + ΔxΔy/(xy))
                 ≈ xy[1 − (Δx/x + Δy/y)].
Combining the two results and substituting z for xy, we conclude that

(x ± Δx)(y ± Δy) = z[1 ± (Δx/x + Δy/y)],

such that

Δz/z = Δx/x + Δy/y.
Finally, we mention that by using the extremes of our uncertainty range (i.e. x+Δx, x−Δx) in our derivation, the formula given above again “slightly” overestimates Δz. A slightly more precise version is given by

Δz/z = √[(Δx/x)² + (Δy/y)²].

Using a similar derivation, the same result is also found to apply in the case of division.
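To show the rule in use, here is a hedged Python sketch for the spring-constant example mentioned in Section 3, k = F/x with F = mg (the mass and displacement values are invented for illustration, and g is treated as exact):

```python
import math

# Indirect determination of a spring constant, k = F/x = m*g/x, with the
# fractional uncertainties combined in quadrature for the division.
# The mass and displacement values are invented for illustration.
g = 9.80665           # m/s^2, treated as exact here
m, dm = 0.500, 0.001  # kg
x, dx = 0.098, 0.002  # m

k = m * g / x
dk = k * math.sqrt((dm / m)**2 + (dx / x)**2)

print(f"k = {k:.1f} +/- {dk:.1f} N/m")   # roughly 50.0 +/- 1.0 N/m
```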
5.3 Summary of Propagation of Uncertainty

Addition and Subtraction: For (a ± Δa) + (b ± Δb) = (c ± Δc) or (a ± Δa) − (b ± Δb) = (c ± Δc), Δc is given by

Δc = √(Δa² + Δb²).

Multiplication and Division: For (x ± Δx)(y ± Δy) = (z ± Δz) or (x ± Δx)/(y ± Δy) = (z ± Δz), Δz is given by

Δz/z = √[(Δx/x)² + (Δy/y)²].
We now have the important tools required to navigate uncertainty propagation in most scenarios we’ll encounter in our experiments. Before we proceed, though, we should take a moment to review how we carry significant figures through our calculations.

6. Significant Figures and Calculation Methods Involving Uncertainties
6.2 Calculations, Taking into Consideration Significant Digits and Uncertainty Propagation

It is particularly important to properly account for significant figures when performing calculations, so that results are not stated with too great or too little precision.

Rule 1: Significant Figures when Adding or Subtracting
When adding or subtracting numbers, the position of the least significant digit in the result is the same as the position of the last significant digit in the least precise number being added or subtracted.

Simple examples:

4.983 + 3.587 = 8.570
1.2873 − 0.484 = 0.803
3.14159 − 2.718 + 42.0 = 42.4
Experimental example: Say we want to add the masses 4.51±0.04 kg, 575.62±0.09 g, and 2.1±0.5 kg. First, convert to similar units, and then proceed with the calculation as follows:

m = 4.51 + 0.57562 + 2.1 = 7.18562 kg

Δm = √[(Δm₁)² + (Δm₂)² + (Δm₃)²]
   = √[(0.04)² + (0.00009)² + (0.5)²]
   = √[(1.6×10⁻³) + (8.1×10⁻⁹) + (0.25)]
   = √(0.2516000081)
   = 0.501
   ≈ 0.5 kg

Since 2.1 kg is the least precise value in our calculation (its last significant digit sits in the tenths place), it determines the position of the last significant figure in our final result. As such, we round up and report that m = 7.2±0.5 kg.
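The same bookkeeping can be checked with a short Python sketch (our own illustration; it simply repeats the hand calculation above):

```python
import math

# Masses converted to a common unit (kg), with their uncertainties.
masses = [(4.51, 0.04), (0.57562, 0.00009), (2.1, 0.5)]

m_total = sum(m for m, _ in masses)
dm_total = math.sqrt(sum(dm**2 for _, dm in masses))

# The least precise input (2.1 kg) is known only to the tenths place,
# so the result is quoted to one decimal place as well.
print(f"m = {m_total:.1f} +/- {dm_total:.1f} kg")   # m = 7.2 +/- 0.5 kg
```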
Rule 2: Significant Figures when Multiplying and Dividing

When multiplying or dividing a set of numbers, the final result should include only as many significant digits as the number in the set with the fewest significant digits.

Simple examples:

1.25 × 2.0 = 2.5 (not 2.50!)
4.882 / 2.0 = 2.441 = 2.4 (round off to two significant figures)
5.286 × 1.72 / 1.00002 = 9.091738… = 9.09 (round off to three significant figures)
Experimental example: Say we intend to divide force F = 8.24±0.05 N by mass m = 2.251±0.006 kg in order to determine acceleration in accordance with Newton’s Second Law, where a = F/m. Then

a = F/m = 8.24/2.251 = 3.6605 m/s²

Δa/a = √[(ΔF/F)² + (Δm/m)²]
     = √[(0.05/8.24)² + (0.006/2.251)²]
     = √[(3.68×10⁻⁵) + (7.10×10⁻⁶)]
     = 0.0066

Δa = a × 0.0066 = 3.6605 × 0.0066 = 0.024

Since we have three significant figures in a (i.e. a = 3.66), we round Δa to 0.02. Therefore a = 3.66 ± 0.02 m/s². Note that in our calculation of a, the number of significant digits retained in the result is the same as the number of significant digits of the least precise value entered in the calculation (in this case, 8.24 N, which has only three significant digits versus the four of m).
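And the same arithmetic as a short Python sketch (again our own illustration, using the values above):

```python
import math

# a = F/m, with fractional uncertainties combined in quadrature.
F, dF = 8.24, 0.05    # N
m, dm = 2.251, 0.006  # kg

a = F / m
da = a * math.sqrt((dF / F)**2 + (dm / m)**2)

print(f"a = {a:.2f} +/- {da:.2f} m/s^2")   # a = 3.66 +/- 0.02 m/s^2
```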
6.3 Scientific Notation (aka “Standard Form”)

Scientific notation makes use of powers of ten to adjust the position of the decimal point. It has two advantages:

i) Convenience and compactness. The mass of an electron is 9.11×10⁻³¹ kg. Compare this value with 0.000000000000000000000000000000911 kg. Try entering the latter into a calculator!

ii) Elimination of ambiguity regarding precision. Suppose we see the statement: “The average distance between Earth and Sun is 150,000,000 km.” Are the zeros significant digits or are some of them used only as “spacers”? By writing the value as 1.50×10⁸ km, we are clearly implying a precision of three significant figures, if such is actually warranted.
Experiment: Identification of Metal by Determination of Mass, Volume, and Density

In this experiment you will determine the mass, volume, and density of several metal blocks using a variety of measuring instruments and techniques. You will pay particular attention to recording results, documenting your uncertainties, and propagating any uncertainties through calculations.
Theory

Metals and metal alloys (mixtures of pure metals) often have well defined densities. By measuring the volume and mass of a piece of metal, the density, ρ, can be calculated according to

ρ = m/V,

where m is mass and V is volume.
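In the write-up, the density and its propagated uncertainty follow directly from the division rule of Section 5.2. A minimal Python sketch (the mass and volume values below are placeholders, not measurements from this lab):

```python
import math

# Density from mass and volume, rho = m/V, with the uncertainty propagated
# using the division rule. The values below are placeholders, not lab data.
m, dm = 0.2754, 0.0001   # kg
V, dV = 3.5e-5, 0.1e-5   # m^3

rho = m / V
drho = rho * math.sqrt((dm / m)**2 + (dV / V)**2)

print(f"rho = {rho:.0f} +/- {drho:.0f} kg/m^3")   # round sensibly in the report
```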
When the shape of a piece of metal has a well-defined geometry, the volume may be determined using rulers or similar tools to measure the critical dimensions necessary for the volume calculation. When the shape of a piece of metal does not have a well-defined geometry, it may still be possible to determine its volume using alternative methods. Submersion in water can be used to determine the metal’s volume indirectly by measuring the volume of the water that is displaced during the submersion process.
Apparatus

2 metal blocks
ruler
triple beam balance
Vernier calipers
graduated cylinder
beaker
Procedure

Two blocks of different metals will be provided. Ensure that you can distinguish which is which.
Determination of Mass

The mass of each block will be determined using a triple beam balance. Ensure that you read and correctly record the appropriate uncertainty value.
Determination of Volume using Ruler and Vernier Caliper

Determine the volume of block 1 using a standard ruler or meter stick (1 mm divisions), taking care to ensure that the uncertainty for each measurement is recorded correctly. Determine the volume of block 2 using a Vernier caliper. If unsure as to how to read a Vernier caliper, see the Appendix. Take care to ensure that the uncertainty for each measurement is recorded correctly.
Determination of Volume by Displacement of Water

Submerge block 1 in a beaker of water, such that the water covers the top of the block. Mark the height of the water level using a piece of masking tape and a pen. Remove the block from the beaker of water. Fill a graduated cylinder to a level of 60 or 100 mL. Record the volume of water in the graduated cylinder, being careful to correctly record the uncertainty. Carefully pour water from the graduated cylinder back into the beaker until the water rises to the original water level marked when the block was submerged. Record the amount of water remaining in the graduated cylinder, and use your results to calculate the volume of the block. Repeat for block 2.
Results

Based on your measurements, determine the density of each block as determined from each measurement method. Provide a table showing your results. In your report, provide examples for each type of calculation required to determine the block densities. Identify the type of metal each block is made of by referencing the table of accepted densities listed in the Appendix. Provide a calculation of percentage error. In your discussion, comment on the strengths and weaknesses of each technique.
Appendix I – Reading a Vernier Caliper

Vernier calipers are a clever modification on a ruler that allows for higher precision readings than the ruler’s scale by itself can provide. They are particularly useful for measuring the dimensions of precision machined components. By design, as the jaws of the calipers slide over a distance of 1 mm, the “Vernier scale’s” markings align sequentially with the ruler markings above, such that if the jaws are at 0.2 mm then the 2nd Vernier scale marking will be aligned with the ruler marking above, and if the jaws are at 0.5 mm the 5th Vernier marking will be aligned with the ruler marking above, etc. We demonstrate with a series of examples:
(Note that the zeros on the ruler and the Vernier scale align, and also that the 10th marking on the Vernier scale corresponds to the 39th mm on the ruler. This “scaled” grating results in a sequential shift in marking alignment as the jaws are slid open or closed.)
As we slide the Vernier scale (say we’re measuring the diameter of a fibre optic cable) we might have the following: Notice that it would be quite difficult to read the exact distance the Vernier scale’s zero has shifted right of the ruler’s zero position by eye alone. We could estimate the reading to be somewhere around 0.1 mm to perhaps 0.4 mm. Reasonable, but we can do better…
As we clamp the fibre in our caliper jaws, we see that the Vernier scale actually slid along the ruler such that its “3” aligns nicely with a marking on the ruler. None of the other Vernier scale markings align very well with the ruler markings above. As such, this tells us that the fibre diameter is 0.30 mm (that is, the zero on the Vernier scale has moved 3/10ths of 1 mm from the zero on the ruler scale). And since we assign an uncertainty of half of the smallest division (0.05 mm on our Vernier scale), the uncertainty is ±0.03 mm (rounded up from 0.025). So the measurement of the fibre diameter is therefore 0.30±0.03 mm. Try another…
Try
another…
Again,
you
can
see
that
it
would
be
difficult
to
estimate
directly
by
eye
that
the
Vernier
scale’s
0
is
0.50
mm
from
the
ruler’s
0
mm
marking,
but
we
see
quite
clearly
that
the
5
on
the
Vernier
scale
aligns
with
a
marking
on
the
ruler
above
(whereas
the
other
Vernier
markings
aren’t
well
aligned
with
the
ruler
markings),
so
the
reading
is
0.50±0.03
mm.
Another
example…
The
reading
would
be
5.40±0.03
mm
(perhaps
even
5.37±0.03
mm,
since
it’s
a
little
hard
to
say
whether
3.5
or
4
align
better
with
the
ruler
markings
above).
Try
some
on
your
own…
Solution:
_____
±
_____
mm
Solution:
_____
±
_____
mm
Appendix II

Useful conversion factor: 1 mL = 10⁻⁶ m³

Various densities¹:

Aluminum   2.702×10³ kg/m³
Copper     8.92×10³ kg/m³
Iron       7.86×10³ kg/m³
Lead       11.3437×10³ kg/m³
Tin        7.28×10³ kg/m³
Brass      8.47×10³ kg/m³ ²
Acrylic    1.19×10³ kg/m³ ²
References

¹ The Chemical Rubber Company (1970). Handbook of Chemistry and Physics. 51st Ed. Cleveland: The Chemical Rubber Company.