Instrumentation Chapter 2
Calibration, Resolution, Repeatability, Drift
Note that: In order to avoid such sensitivity drift, sophisticated instruments are either kept at a controlled temperature, or suitable built-in temperature compensation schemes are provided inside the instrument.
Range (or span)
It defines the maximum and minimum values of the inputs or the outputs for which the instrument is recommended to be used.
Put another way, the range or span of an instrument defines the minimum and maximum values of a quantity that the instrument is designed to measure.
For example, for a temperature measuring instrument the input range may be 100–500 °C and the output range may be 4–20 mA.
Accuracy:
Accuracy indicates the closeness of the measured value to the actual or true value, and is expressed in the form of the maximum error (= measured value – true value) as a percentage of the full-scale reading.
For example, if the accuracy of a temperature indicator with a full-scale range of 0–500 °C is specified as ±0.5%, the measured value will always be within ±2.5 °C of the true value, if measured against a standard instrument during the process of calibration.
But if it indicates a reading of 250 °C, the error will still be ±2.5 °C, i.e. ±1% of the reading. Thus it is always better to choose a scale of measurement where the input is near the full-scale value.
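As a quick check of the arithmetic in this example, here is a minimal Python sketch (variable names are illustrative; the numbers are taken from the example above):

full_scale = 500.0          # degC, upper range value from the example
accuracy_fs_percent = 0.5   # +/- percent of full scale

max_error = full_scale * accuracy_fs_percent / 100.0       # +/- 2.5 degC anywhere on the scale

reading = 250.0             # degC, the indicated value in the example
error_as_percent_of_reading = max_error / reading * 100.0  # +/- 1.0 % of the reading

print(f"Maximum error: +/-{max_error} degC")
print(f"At {reading} degC this is +/-{error_as_percent_of_reading:.1f}% of the reading")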
But the true value is always difficult to obtain; we use standard calibrated instruments in the laboratory to measure the true value of the variable.
The accuracy of an instrument is quantified by the difference between its readings and those given by the ultimate or primary standard.
Accuracy depends on the inherent limitations of the instrument and on shortcomings in the measurement process.
As the degree of closeness with which the reading approaches the true value of the quantity to be measured, accuracy can be expressed in the following ways:
Point accuracy
Accuracy as percentage of scale span
Accuracy as percentage of true value
Point accuracy: Such accuracy is specified at only one particular point of the scale. It does not give any information about the accuracy at any other point on the scale.
Accuracy as percentage of scale span: When an instrument has a uniform scale, its accuracy may be expressed in terms of the scale range (span).
Accuracy as percentage of true value: The best way to conceive the idea of
accuracy is to specify it in terms of the true value of the quantity being
measured.
Unit of accuracy:
Percentage of true value (% of T.V.) = [(Measured value – True value) / True value] × 100
Percentage of full-scale deflection (% fsd) = [(Measured value – True value) / Maximum scale value] × 100
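These two expressions can be written as small Python helpers; this is only a sketch built from the definitions above, with illustrative example values:

def percent_of_true_value(measured, true_value):
    # Error as a percentage of the true value (% of T.V.)
    return (measured - true_value) / true_value * 100.0

def percent_of_full_scale(measured, true_value, max_scale_value):
    # Error as a percentage of full-scale deflection (% fsd)
    return (measured - true_value) / max_scale_value * 100.0

# Illustrative values: indicator reads 252 degC, true value 250 degC, 0-500 degC scale.
print(percent_of_true_value(252.0, 250.0))         # about 0.8 (% of T.V.)
print(percent_of_full_scale(252.0, 250.0, 500.0))  # about 0.4 (% fsd)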
Precision:
It is defined as the ability of an instrument to reproduce a certain set of readings within a given accuracy.
It also describes the degree of random variation in an instrument's output when measuring a constant quantity.
It is a measure of reproducibility, i.e., given a fixed value of a quantity, precision is a measure of the degree of agreement within a group of measurements.
A precision instrument indicates that successive readings will be very close; in other words, the standard deviation σe of the set of measurements will be very small.
The precision is composed of two characteristics:
1. Conformity:
Consider a resistor having a true value of 2.385692 Ω, which is being measured by an ohmmeter. The reader can consistently read a value of 2.4 Ω due to the non-availability of a finer scale.
The error created due to the limitation of the scale reading is a precision error.
2. Number of significant figures:
The precision of the measurement is obtained from the number of significant figures in which the reading is expressed.
The significant figures convey the actual information about the magnitude and the measurement precision of the quantity. The precision can be mathematically expressed as:
P = 1 – |Xn – X̄| / X̄
where P = precision, Xn = value of the nth measurement, and X̄ = average value of the set of measurements.
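A small Python sketch of this expression, assuming the reconstructed formula P = 1 – |Xn – X̄| / X̄ (the readings are illustrative):

def precision_of_reading(readings, n):
    # Precision of the n-th reading (1-indexed), P = 1 - |Xn - Xavg| / Xavg
    avg = sum(readings) / len(readings)
    return 1.0 - abs(readings[n - 1] - avg) / avg

readings = [101, 103, 106, 98, 97, 99, 102, 104, 102, 98]  # illustrative repeated readings
print(precision_of_reading(readings, 6))  # about 0.98 for the 6th reading (99)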
Note that: Precision is often confused with accuracy. High precision does not imply anything about measurement accuracy.
Accuracy vs. Precision:
Accuracy represents the degree of correctness of the measured value with respect to the true value.
Precision represents the degree of repeatability of several independent measurements of the desired input at the same reference conditions.
Hysteresis
Careful observation of the output/input relationship of a block will sometimes reveal different results depending on the direction in which the input signal is changing.
Mechanical systems will often show a small difference in length as the
direction of the applied force is reversed.
The same effect arises as a magnetic field is reversed in a magnetic material.
This characteristic is called hysteresis.
Hysteresis is defined as the magnitude of error caused in the output for a given value of input, when this value is approached from opposite directions, i.e. first in ascending order and then in descending order.
Its main causes are backlash, elastic deformation, magnetic characteristics, and frictional effects.
Hysteresis can be eliminated by taking readings in both directions and then taking their arithmetic mean.
[Figure: Instrument characteristic with hysteresis]
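A minimal Python sketch of the averaging approach described above (the readings are illustrative):

ascending  = [0.0, 10.2, 20.5, 30.6, 40.5]   # output read while the input is increased
descending = [0.4, 10.8, 21.1, 31.0, 40.5]   # output read while the input is decreased

# Averaging the two passes largely cancels the hysteresis error.
corrected = [(up + down) / 2 for up, down in zip(ascending, descending)]
print(corrected)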
Resolution:
If the input is slowly increased from some arbitrary value, it will be found that the output does not change at all until a certain increment is exceeded. This increment is called the resolution.
Threshold:
If the instrument input is increased very gradually from zero there will be
some minimum value below which no output change can be detected. This
minimum value defines the threshold of the instrument.
Stability:
It is the ability of an instrument to retain its performance throughout its specified operating life.
Reliability
Reliability is the probability that a device will adequately perform (as specified)
for a period of time under specified operating conditions.
Some sensors are required for safety or product quality, and therefore, they
should be very reliable.
Backlash
It is defined as the maximum distance or angle through which any part of a mechanical system may be moved in one direction without causing motion of the next part.
It can be minimized if components are made to very close tolerances.
Dynamic characteristics
The relationship between the system input and output when the measured
quantity (measurand) is varying rapidly.
They include: speed of response and measuring lag, dead time and dead zone, and overshoot.
Speed of response:
It is defined as the rapidity with which a measurement system responds to
changes in the measured quantity.
Measuring lag:
It is the time an instrument takes to respond when the measured quantity changes; this delay is called the lag.
It is the retardation or delay in the response of a measurement system to changes in the measured quantity.
The measuring lags are of two types:
1. Retardation type: In this case the response of the measurement system begins immediately after the change in the measured quantity has occurred.
2. Time delay lag: In this case the response of the measurement system begins only after a dead time following the application of the input.
Fidelity:
It is defined as the degree to which a measurement system indicates changes in the measured quantity (measurand) without dynamic error.
Dynamic error:
It is the difference between the true value of the quantity changing with time and the value indicated by the measurement system, if no static error is assumed. It is also called measurement error.
Uncertainty of Instruments
The word “uncertainty” means doubt, and
“measurement uncertainty” means doubt about the validity of the result of
a measurement.
We refer to the uncertainty as the error in the measurement. Errors fall into
two categories:
1. Systematic Error –
errors resulting from measuring devices being out of calibration.
Such measurements will be consistently too small or too large.
These errors can be eliminated by pre-calibrating against a known, trusted
standard.
2. Random Errors –
errors resulting in the fluctuation of measurements of the same quantity
about the average.
The measurements are equally probable of being too large or too small.
These errors generally result from the fineness of scale division of a measuring
device.
The following general rules of thumb are often used to determine the
uncertainty in a single measurement when using a scale or digital measuring
device.
1. Uncertainty in a Scale Measuring Device is equal to the smallest increment
divided by 2.
For example:
Device | Smallest increment | Uncertainty | Example readings
Measuring cylinder (100 cm³) | 1 cm³ | 0.5 cm³ | 18.0 cm³, 18.5 cm³
Micrometer | 0.01 mm | 0.005 mm | 2.10 mm, 2.11 mm
Milliammeter (0–100 mA) | 2 mA | 1 mA | 20 mA, 21 mA
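A short Python sketch of this rule of thumb applied to the devices in the table (names and values are illustrative):

def scale_uncertainty(smallest_increment):
    # Rule of thumb for a scale measuring device: half the smallest increment.
    return smallest_increment / 2.0

devices = {
    "measuring cylinder (cm3)": 1.0,
    "micrometer (mm)": 0.01,
    "milliammeter (mA)": 2.0,
}
for name, increment in devices.items():
    print(name, "+/-", scale_uncertainty(increment))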
The ratio of the uncertainty, ΔX, to our best estimate, X, is referred to as the fractional uncertainty: Fractional uncertainty = ΔX / X.
Typically, the uncertainty is small compared to the measured value, so it is
convenient to multiply the fractional uncertainty by 100 and report the percent
uncertainty.
Percent Uncertainty = Fractional Uncertainty × 100
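A small Python sketch of fractional and percent uncertainty, using ΔX and the best estimate X as above (the mass example is illustrative):

def fractional_uncertainty(delta_x, best_estimate):
    # Ratio of the uncertainty to the best estimate.
    return delta_x / best_estimate

def percent_uncertainty(delta_x, best_estimate):
    # Fractional uncertainty expressed as a percentage.
    return fractional_uncertainty(delta_x, best_estimate) * 100.0

# Illustrative example: a mass of 24.0 g measured with an uncertainty of 0.05 g.
print(percent_uncertainty(0.05, 24.0))  # about 0.21 %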
• If multiple trials are performed to measure X, the best estimate is the average value, X̄, of all the trials (the bar over the variable denotes an average value).
The average value is computed by summing up all the measured values and then dividing by the number of trials, N.
Mathematically, we write this as
X̄ = (X1 + X2 + … + XN) / N
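A minimal Python sketch of the best estimate as the average of repeated trials (the trial values are illustrative):

def best_estimate(trials):
    # Average of the measured values over N trials.
    return sum(trials) / len(trials)

trials = [9.81, 9.79, 9.84, 9.80, 9.78]  # illustrative repeated measurements
print(best_estimate(trials))             # about 9.804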
Note that: When experimental or time limitations permit only one trial for a measurement, the best estimate is the most careful measurement you can perform.
Even the best measurements have some degree of uncertainty associated with
them.
Uncertainty in measurements arises from many sources including the limited
precision of the measurement tool, the manner in which the measurement is
performed, and the person performing the measurement.
The inclusion of an estimate of the uncertainty when reporting an experimental
measurement permits others to determine how much confidence should be placed
in the measured value.
Additionally, measurements should also be accompanied by a statement of how the
uncertainty was determined.
Without uncertainties and an explanation of how they were obtained, measured
parameters have little meaning.
A simple approach for estimating the uncertainty in a measurement is to report the
limiting precision of the measurement tool.
For example, if a balance is calibrated to report masses to 0.1 g, then the actual
mass of a sample could be up to 0.05 g greater or less than the measured mass, and
the balance would still read out the same value.
Thus, the uncertainty associated with mass measurements using this balance
would be ± 0.05 g.
Note: This method for estimating the uncertainty of a measurement is a good choice
when only a single trial is performed.
• NOTE: A useful rule of thumb for reporting the uncertainty associated with a
measurement tool is to determine the smallest increment the device can
measure and divide that value by 2.
If, on the other hand, the best estimate of a parameter is determined by making repeated measurements and computing the average value from the multiple trials, the uncertainty associated with each measurement can be determined from the standard deviation, σ.
Mathematically, the standard deviation can be expressed as
σ = sqrt( Σ (Xi – X̄)² / (N – 1) )
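A Python sketch of the standard deviation as reconstructed above, assuming the sample (N – 1) form; Python's statistics.stdev uses the same convention:

import statistics

def sample_std_dev(trials):
    # Sample standard deviation with the N - 1 denominator.
    avg = sum(trials) / len(trials)
    return (sum((x - avg) ** 2 for x in trials) / (len(trials) - 1)) ** 0.5

trials = [9.81, 9.79, 9.84, 9.80, 9.78]  # illustrative repeated measurements
print(sample_std_dev(trials))     # about 0.023
print(statistics.stdev(trials))   # same result from the standard library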
Thank You