
Characteristics of Instruments (Lec 2)

3. Static characteristics of instruments


If we have a thermometer in a room and its reading shows a temperature of 20°C, then it does not really matter
whether the true temperature of the room is 19.5°C or 20.5°C. Such small variations around 20°C are too small to
affect whether we feel warm enough or not. Our bodies cannot discriminate between such close levels of
temperature and therefore a thermometer with an inaccuracy of ±0.5°C is perfectly adequate. If we had to measure
the temperature of certain chemical processes, however, a variation of 0.5°C might have a significant effect on the
rate of reaction or even the products of a process.

A measurement inaccuracy much less than ±0.5°C is therefore clearly required. Accuracy of measurement is thus
one consideration in the choice of instrument for a particular application. Other parameters such as sensitivity,
linearity and the reaction to ambient temperature changes are further considerations. These attributes are collectively
known as the static characteristics of instruments, and are given in the data sheet for a particular instrument. It is
important to note that the values quoted for instrument characteristics in such a data sheet only apply when the
instrument is used under specified standard calibration conditions. Due allowance must be made for variations in the
characteristics when the instrument is used in other conditions. The various static characteristics are defined in the
following paragraphs.
3.1 Accuracy and inaccuracy (measurement uncertainty)
The accuracy of an instrument is a measure of how close the output reading of the instrument is to the correct value.
In practice, it is more usual to quote the inaccuracy figure rather than the accuracy figure for an instrument.
Inaccuracy is the extent to which a reading might be wrong, and is often quoted as a percentage of the full-scale (f.s.)
reading of an instrument. If, for example, a pressure gauge of range 0–10 bar has a quoted inaccuracy of ±1.0% f.s.
(±1% of full-scale reading), then the maximum error to be expected in any reading is 0.1 bar. This means that when
the instrument is reading 1.0 bar, the possible error is 10% of this value. For this reason, it is an important system
design rule that instruments are chosen such that their range is appropriate to the spread of values being measured, in
order that the best possible accuracy is maintained in instrument readings. Thus, if we were measuring pressures
with expected values between 0 and 1 bar, we would not use an instrument with a range of 0–10 bar. The term
measurement uncertainty is frequently used in place of inaccuracy.
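
To make the pressure-gauge arithmetic above concrete, the short Python sketch below (an illustrative calculation only, using the figures quoted in the text) computes the worst-case error implied by an inaccuracy of ±1.0% f.s. and the resulting percentage error when the reading is 1.0 bar.

```python
# Worst-case error implied by an inaccuracy quoted as a percentage of full scale.
full_scale = 10.0      # bar (gauge range 0-10 bar, as in the example above)
inaccuracy_fs = 0.01   # +/-1.0% of full-scale reading

max_error = inaccuracy_fs * full_scale   # 0.1 bar, regardless of the actual reading
reading = 1.0                            # bar
error_as_percent_of_reading = max_error / reading * 100.0

print(f"worst-case error: +/-{max_error:.2f} bar")
print(f"possible error at a {reading:.1f} bar reading: +/-{error_as_percent_of_reading:.0f}% of the value")
```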

3.2 Precision/repeatability/reproducibility
Precision is a term that describes an instrument’s degree of freedom from random errors. If a large number of
readings are taken of the same quantity by a high precision instrument, then the spread of readings will be very
small. Precision is often, though incorrectly, confused with accuracy. High precision does not imply anything about
measurement accuracy. A high precision instrument may have a low accuracy. Low accuracy measurements from a
high precision instrument are normally caused by a bias in the measurements, which is removable by recalibration.
The terms repeatability and reproducibility mean approximately the same but are applied in different contexts as
given below. Repeatability describes the closeness of output readings when the same input is applied repetitively
over a short period of time, with the same measurement conditions, same instrument and observer, same location
and same conditions of use maintained throughout. Reproducibility describes the closeness of output readings for
the same input when there are changes in the method of measurement, observer, measuring instrument, location,
conditions of use and time of measurement. Both terms thus describe the spread of output readings for the same
input. This spread is referred to as repeatability if the measurement conditions are constant and as reproducibility if
the measurement conditions vary. The degree of repeatability or reproducibility in measurements from an instrument
is an alternative way of expressing its precision. Figure 2.5 illustrates this more clearly. The figure shows the results
of tests on three industrial robots that were programmed to place components at a particular point on a table. The
target point was at the centre of the concentric circles shown, and the black dots represent the points where each
robot actually deposited components at each attempt. Both the accuracy and precision of Robot 1 are shown to be
low in this trial. Robot 2 consistently puts the component down at approximately the same place but this is the
wrong point. Therefore, it has high precision but low accuracy. Finally, Robot 3 has both high precision and high
accuracy, because it consistently places the component at the correct target position.
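
The distinction between spread (precision) and bias (accuracy) can also be seen numerically. The sketch below uses hypothetical readings, assumed purely for illustration, from a high-precision but biased instrument: the spread of the readings is small, while the systematic offset from the true value is what recalibration would remove.

```python
import statistics

true_value = 100.0
# Hypothetical repeated readings from a high-precision, low-accuracy instrument.
readings = [102.1, 102.0, 102.2, 101.9, 102.1, 102.0]

bias = statistics.mean(readings) - true_value   # systematic error, removable by recalibration
spread = statistics.stdev(readings)             # random scatter, what "precision" describes

print(f"bias (accuracy problem):   {bias:+.2f}")
print(f"spread (precision figure): {spread:.2f}")
```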

3.3 Tolerance
Tolerance is a term that is closely related to accuracy and defines the maximum error that is to be expected in some
value. Whilst it is not, strictly speaking, a static characteristic of measuring instruments, it is mentioned here because
the accuracy of some instruments is sometimes quoted as a tolerance figure. When used correctly, tolerance
describes the maximum deviation of a manufactured component from some specified value. For instance,
crankshafts are machined with a diameter tolerance quoted as so many microns (10⁻⁶ m), and electric circuit
components such as resistors have tolerances of perhaps 5%. One resistor chosen at random from a batch having a
nominal value 1000 Ω and tolerance 5% might have an actual value anywhere between 950 Ω and 1050 Ω.
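
As a quick check of the resistor example, the following one-off calculation (illustrative only) converts a nominal value and tolerance into the acceptable range of actual values.

```python
# Range of actual values implied by a nominal value and a tolerance.
nominal = 1000.0   # ohms
tolerance = 0.05   # 5%

low = nominal * (1 - tolerance)
high = nominal * (1 + tolerance)
print(f"acceptable range: {low:.0f} ohm to {high:.0f} ohm")   # 950 ohm to 1050 ohm
```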
3.4 Range or span
The range or span of an instrument defines the minimum and maximum values of a quantity that the instrument is
designed to measure.

3.5 Linearity
It is normally desirable that the output reading of an instrument is linearly proportional to the quantity being
measured. The Xs marked on Figure 2.6 show a plot of the typical output readings of an instrument when a sequence
of input quantities are applied to it. Normal procedure is to draw a good fit straight line through the Xs, as shown in
Figure 2.6. The non-linearity is then defined as the maximum deviation of any of the output readings marked X from
this straight line. Non-linearity is usually expressed as a percentage of full-scale reading.
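
The procedure described above is easy to reproduce numerically. The sketch below fits a least-squares straight line through a set of hypothetical calibration readings (the data values are assumed for illustration, not taken from Figure 2.6) and reports the non-linearity as a percentage of the full-scale reading.

```python
import numpy as np

# Hypothetical calibration data: applied inputs and corresponding output readings.
inputs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
outputs = np.array([0.10, 1.05, 2.20, 2.90, 4.10, 5.00])

# Fit a good-fit straight line through the points.
slope, intercept = np.polyfit(inputs, outputs, 1)

# Non-linearity: maximum deviation of any reading from the fitted line,
# expressed as a percentage of full-scale output.
deviations = np.abs(outputs - (slope * inputs + intercept))
non_linearity_percent_fs = deviations.max() / outputs.max() * 100.0

print(f"best-fit slope: {slope:.3f}, intercept: {intercept:.3f}")
print(f"non-linearity: {non_linearity_percent_fs:.1f}% of full scale")
```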

3.6 Sensitivity of measurement


The sensitivity of measurement is a measure of the change in instrument output that occurs when the quantity being
measured changes by a given amount. Thus, sensitivity is the ratio:

    sensitivity = (change in instrument output) / (change in measured quantity)

The sensitivity of measurement is therefore the slope of the straight line drawn on Figure 2.6. If, for example, a
pressure of 2 bar produces a deflection of 10 degrees in a pressure transducer, the sensitivity of the instrument is 5
degrees/bar (assuming that the deflection is zero with zero pressure applied).
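
The worked example above reduces to a single division, shown here only to make the ratio explicit.

```python
# Sensitivity from the worked example: a 2 bar pressure change produces a 10-degree deflection.
change_in_output = 10.0   # degrees of deflection
change_in_input = 2.0     # bar

sensitivity = change_in_output / change_in_input
print(f"sensitivity: {sensitivity:.1f} degrees/bar")   # 5 degrees/bar
```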
3.7 Threshold
If the input to an instrument is gradually increased from zero, the input will have to reach a certain minimum level
before the change in the instrument output reading is of a large enough magnitude to be detectable. This minimum
level of input is known as the threshold of the instrument. Manufacturers vary in the way that they specify threshold
for instruments. Some quote absolute values, whereas others quote threshold as a percentage of full-scale readings.
As an illustration, a car speedometer typically has a threshold of about 15 km/h. This means that, if the vehicle starts
from rest and accelerates, no output reading is observed on the speedometer until the speed reaches 15 km/h.

3.8 Resolution
When an instrument is showing a particular output reading, there is a lower limit on the magnitude of the change in
the input measured quantity that produces an observable change in the instrument output. Like threshold, resolution
is sometimes specified as an absolute value and sometimes as a percentage of f.s. deflection. One of the major
factors influencing the resolution of an instrument is how finely its output scale is divided into subdivisions. Using a
car speedometer as an example again, this has subdivisions of typically 20 km/h. This means that when the needle is
between the scale markings, we cannot estimate speed more accurately than to the nearest 5 km/h. This figure of 5
km/h thus represents the resolution of the instrument.

3.10 Dead space


Dead space is defined as the range of different input values over which there is no change in output value. Any
instrument that exhibits hysteresis also displays dead space, as marked on Figure 2.8. Some instruments that do not
suffer from any significant hysteresis can still exhibit a dead space in their output characteristics, however. Backlash
in gears is a typical cause of dead space, and results in the sort of instrument output characteristic shown in Figure
2.9. Backlash is commonly experienced in gearsets used to convert between translational and rotational motion
(which is a common technique used to measure translational velocity).
4. Dynamic characteristics of instruments

The static characteristics of measuring instruments are concerned only with the steady state reading that the
instrument settles down to, such as the accuracy of the reading etc. The dynamic characteristics of a measuring
instrument describe its behavior between the time a measured quantity changes value and the time when the
instrument output attains a steady value in response. As with static characteristics, any values for dynamic
characteristics quoted in instrument data sheets only apply when the instrument is used under specified
environmental conditions. Outside these calibration conditions, some variation in the dynamic parameters can be
expected. In any linear, time-invariant measuring system, the following general relation can be written between
input and output for time t > 0:

    an (d^n q0/dt^n) + . . . + a1 (dq0/dt) + a0 q0 = bm (d^m qi/dt^m) + . . . + b1 (dqi/dt) + b0 qi     (2.1)

where qi is the measured quantity, q0 is the output reading and a0 . . . an, b0 . . . bm are
constants. The reader whose mathematical background is such that the above equation appears daunting should not
worry unduly, as only certain special, simplified cases of it are applicable in normal measurement situations. The
major point of importance is to have a practical appreciation of the manner in which various different types of
instrument respond when the measured quantity applied to them varies. If we limit consideration to step changes in
the measured quantity only, then equation (2.1) reduces to:

    an (d^n q0/dt^n) + . . . + a1 (dq0/dt) + a0 q0 = b0 qi     (2.2)

Further simplification can be made by taking certain special cases of equation (2.2), which collectively apply to
nearly all measurement systems.
4.1 Zero order instrument
If all the coefficients a1 . . . an other than a0 in equation (2.2) are assumed zero, then:

    a0 q0 = b0 qi,   i.e.   q0 = (b0/a0) qi = K qi     (2.3)

where K is a constant known as the instrument sensitivity as defined earlier. Any instrument that behaves according
to equation (2.3) is said to be of zero order type. Following a step change in the measured quantity at time t, the
instrument output moves immediately to a new value at the same time instant t, as shown in Figure 2.10. A
potentiometer, which measures motion, is a good example of such an instrument, where the output voltage changes
instantaneously as the slider is displaced along the potentiometer track.

4.2 First order instrument

If all the coefficients a2 . . . an except for a0 and a1 are assumed zero in equation (2.2), then:

    a1 (dq0/dt) + a0 q0 = b0 qi     (2.4)

Writing D for the operator d/dt and rearranging gives:

    q0 = (b0/a0) qi / (1 + (a1/a0)D)     (2.5)

Defining K = b0/a0 as the static sensitivity and τ = a1/a0 as the time constant of the system, equation (2.5) becomes:

    q0 = K qi / (1 + τD)     (2.6)

If equation (2.6) is solved analytically, the output quantity q0 in response to a step change in qi at time t varies with
time in the manner shown in Figure 2.11. The time constant τ of the step response is the time taken for the output
quantity q0 to reach 63% of its final value. The liquid-in-glass thermometer (see Chapter 14) is a good example of a
first order instrument. It is well known that, if a thermometer at room temperature is plunged into boiling water, the
indicated temperature does not rise instantaneously to 100°C, but instead approaches 100°C in a manner similar to
that shown in Figure 2.11. A large number of other instruments also belong to this first
order class: this is of particular importance in control systems where it is necessary to take account of the time lag
that occurs between a measured quantity changing in value and the measuring instrument indicating the change.
Fortunately, the time constant of many first order instruments is small relative to the dynamics of the process being
measured, and so no serious problems are created.
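
The behaviour described above can be checked against the analytical step response q0(t) = K qi (1 − e^(−t/τ)). The sketch below evaluates this expression for assumed values of K, τ and the step size (none of which come from the text) and confirms that the output reaches about 63% of its final value after one time constant.

```python
import math

K = 1.0      # static sensitivity (assumed for illustration)
tau = 5.0    # time constant in seconds (assumed)
qi = 100.0   # size of the step in the measured quantity (assumed)

def q0(t):
    """Analytical step response of a first order instrument."""
    return K * qi * (1.0 - math.exp(-t / tau))

fraction_at_tau = q0(tau) / (K * qi)
print(f"output after one time constant: {fraction_at_tau * 100:.0f}% of final value")   # ~63%
print(f"output after five time constants: {q0(5 * tau) / (K * qi) * 100:.1f}% of final value")
```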
4.3 Second order instrument

If all coefficients a3 . . . an other than a0, a1 and a2 in equation (2.2) are assumed zero, then we get:

    a2 (d²q0/dt²) + a1 (dq0/dt) + a0 q0 = b0 qi     (2.7)

Applying the D operator again, and rearranging:

    q0 = b0 qi / (a2 D² + a1 D + a0)     (2.8)

It is convenient to re-express the variables a0, a1, a2 and b0 in equation (2.8) in terms of three parameters K (static
sensitivity), ω (undamped natural frequency) and ξ (damping ratio), where:

    K = b0/a0,   ω = √(a0/a2),   ξ = a1 / (2√(a0 a2))

Re-expressing equation (2.8) in terms of K, ω and ξ we get:

    q0/qi = K / (D²/ω² + 2ξD/ω + 1)     (2.9)

This is the standard equation for a second order system and any instrument whose response can be described by it is
known as a second order instrument. If equation (2.9) is solved analytically, the shape of the step response obtained
depends on the value of the damping ratio parameter ξ. The output responses of a second order instrument for
various values of ξ following a step change in the value of the measured quantity at time t are shown in Figure 2.12.
For case (A) where ξ = 0, there is no damping and the instrument output exhibits constant amplitude oscillations
when disturbed by any change in the physical quantity measured. For light damping of ξ = 0.2, represented by case
(B), the response to a step change in input is still oscillatory but the oscillations gradually die down. Further increase
in the value of ξ reduces oscillations and overshoot still more, as shown by curves (C) and (D), and finally the
response becomes very overdamped as shown by curve (E) where the output reading creeps up slowly towards the
correct reading. Clearly, the extreme response curves (A) and (E) are grossly unsuitable for any measuring
instrument. If an instrument were to be only ever subjected to step inputs, then the design strategy would be to aim
towards a damping ratio of 0.707, which gives the optimally damped response (C). Unfortunately, most of the
physical quantities that instruments are required to measure do not change in the mathematically convenient form of
steps, but rather in the form of ramps of varying slopes. As the form of the input variable changes, so the best value
for ξ varies, and choice of ξ becomes one of compromise between those values that are best for each type of input
variable behaviour anticipated. Commercial second order instruments, of which the accelerometer is a common
example, are generally designed to have a damping ratio (ξ) somewhere in the range of 0.6–0.8.
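
The effect of the damping ratio on the step response can be explored numerically. The sketch below integrates the standard second order equation (2.9), rewritten in the time domain, with a simple Euler scheme; the values of K, ω, the step size and the time step are assumed purely for illustration.

```python
import numpy as np

K, w, qi = 1.0, 2.0, 1.0   # static sensitivity, natural frequency (rad/s), step size (assumed)
dt, t_end = 0.001, 10.0    # integration step and duration (assumed)

def step_response(xi):
    """Euler integration of (1/w**2)*q0'' + (2*xi/w)*q0' + q0 = K*qi for a step input."""
    q0, dq0 = 0.0, 0.0
    trace = []
    for _ in range(int(t_end / dt)):
        d2q0 = (K * qi - q0 - (2.0 * xi / w) * dq0) * w**2
        dq0 += d2q0 * dt
        q0 += dq0 * dt
        trace.append(q0)
    return np.array(trace)

for xi in (0.2, 0.707, 1.0, 2.0):
    trace = step_response(xi)
    print(f"xi = {xi:>5}: overshoot = {max(trace.max() - K * qi, 0.0):.3f}, "
          f"final value = {trace[-1]:.3f}")
```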

5. Necessity for calibration


The foregoing discussion has described the static and dynamic characteristics of measuring instruments in some
detail. However, an important qualification that has been omitted from this discussion is that an instrument only
conforms to stated static and dynamic patterns of behavior after it has been calibrated. It can normally be assumed
that a new instrument will have been calibrated when it is obtained from an instrument manufacturer, and will
therefore initially behave according to the characteristics stated in the specifications. During use, however, its
behavior will gradually diverge from the stated specification for a variety of reasons. Such reasons include
mechanical wear, and the effects of dirt, dust, fumes and chemicals in the operating environment. The rate of
divergence from standard specifications varies according to the type of instrument, the frequency of usage and the
severity of the operating conditions. However, there will come a time, determined by practical knowledge, when the
characteristics of the instrument will have drifted from the standard specification by an unacceptable amount. When
this situation is reached, it is necessary to recalibrate the instrument to the standard specifications. Such recalibration
is performed by adjusting the instrument at each point in its output range until its output readings are the same as
those of a second standard instrument to which the same inputs are applied. This second instrument is one kept
solely for calibration purposes whose specifications are accurately known.
