Instrument-Characteristics
The treatment of instrument characteristics can be divided into two distinct categories.
1. Static characteristic
2. Dynamic characteristic
STATIC CHARACTERISTICS
Some applications involve the measurement of quantities that are either constant or vary very slowly with
time. Under these circumstances it is possible to define a set of criteria that gives a meaningful description
of the quality of measurement without involving dynamic descriptions that require the use of
differential equations. These criteria are called static characteristics. In general, static characteristics must
be considered when the system is used under conditions that do not vary with time.
1. Accuracy
The accuracy of an instrument is a measure of how close the output reading of the instrument is
to the correct or true value. In practice, it is more usual to quote the inaccuracy or measurement
uncertainty value rather than the accuracy value for an instrument. Inaccuracy or measurement
uncertainty is the extent to which a reading might be wrong and is often quoted as a percentage of the
full-scale (f.s.) reading of an instrument.
Because the maximum measurement error in an instrument is usually related to the full-scale
reading of the instrument, measuring quantities that are substantially less than the full-scale reading
means that the possible measurement error is amplified. For this reason, it is an important system design
rule that instruments are chosen such that their range is appropriate to the spread of values being
measured in order that the best possible accuracy is maintained in instrument readings. Clearly, if we are
measuring pressures with expected values between 0 and 1 bar, we would not use an instrument with a
measurement range of 0–10 bar.
Accuracy is usually expressed as the inaccuracy (error) and can appear in several forms.
Chapter 2
Example 1]
A pressure gauge with a measurement range of 0–10 bar has a quoted inaccuracy of ±1.0% FS.
(a) What is the maximum measurement error expected for this instrument?
(b) What is the likely measurement error expressed as a percentage of the output reading if this
pressure gauge is measuring a pressure of 1 bar, 5 bar, 10 bar?
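The arithmetic of this example can be checked with a short script (a sketch; variable names are illustrative):

```python
# Example 1: a 0-10 bar gauge quoted at +/-1.0% of full scale.
full_scale = 10.0        # bar
inaccuracy_fs = 0.01     # +/-1.0% FS

# (a) The maximum error is fixed by the full-scale reading, not by the reading taken.
max_error = inaccuracy_fs * full_scale   # 0.1 bar over the whole range

# (b) Expressed against each actual reading, the same 0.1 bar error grows
# in relative terms as the reading shrinks.
for reading in (1.0, 5.0, 10.0):
    print(f"{reading:4.1f} bar: +/-{100 * max_error / reading:.1f}% of reading")
```

This prints ±10%, ±2%, and ±1% of reading respectively, which is exactly why the instrument range should match the expected spread of measured values.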
2. Precision/Repeatability/Reproducibility
Precision is a term that describes an instrument’s degree of freedom from random errors. It is a
measure of the reproducibility of the measurements. If a large number of readings are taken of the same
quantity by a high-precision instrument, then the spread of readings will be very small. Precision is often,
although incorrectly, confused with accuracy. High precision does not imply anything about measurement
accuracy. A high-precision instrument may have a low accuracy. Low accuracy measurements from a high-
precision instrument are normally caused by a bias in the measurements, which is removable by
recalibration.
The terms repeatability and reproducibility mean approximately the same as precision but are
applied in different contexts. Repeatability describes the closeness of output readings when the same
input is applied repetitively over a short period of time, with the same measurement conditions, same
instrument and observer, same location, and same conditions of use maintained throughout.
Reproducibility describes the closeness of output readings for the same input when there are changes in
the method of measurement, observer, measuring instrument, location, conditions of use, and time of
measurement. Both terms thus describe the spread of output readings for the same input. This spread is
referred to as repeatability if the measurement conditions are constant and as reproducibility if the
measurement conditions vary.
a. Conformity or consistency
b. Number of significant figures
Precision is used in measurements to describe the conformity or reproducibility of results. A quantity
called the precision index describes the spread or dispersion of repeated results about some central value.
High precision means a tight cluster of repeated results, while low precision indicates broad scattering
of results. This should not lead us to the misconception that high precision indicates a high degree of
accuracy, since all the repeated results may be biased in the same way by some systematic effect that
produces the same deviation of results from the true value.
An indication of the precision of the measurement is obtained from the number of significant figures in
which it is expressed. Significant figures convey actual information regarding the magnitude and the
measurement precision of a quantity. The more the significant figures, the greater the precision of
measurement.
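One common way to quantify the spread of repeated readings is the sample standard deviation about the mean (a sketch; the readings below are illustrative):

```python
# Spread of repeated readings about their mean: a smaller spread
# means higher precision (but says nothing about accuracy, i.e., bias).
readings = [20.20, 19.90, 20.05, 20.10, 19.85, 20.00]  # V, illustrative values
mean = sum(readings) / len(readings)
spread = (sum((r - mean) ** 2 for r in readings) / (len(readings) - 1)) ** 0.5
print(round(mean, 3), round(spread, 3))
```

A systematic bias would shift `mean` away from the true value without changing `spread` at all, which is the accuracy/precision distinction made above.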
3. Tolerance
Tolerance is a term that is closely related to accuracy and defines the maximum error that is to be
expected in some value. While it is not, strictly speaking, a static characteristic of measuring instruments,
it is mentioned here because the accuracy of some instruments is sometimes quoted as a tolerance value.
When used correctly, tolerance describes the maximum deviation of a manufactured component from
some specified value. For instance, crankshafts are machined with a diameter tolerance quoted as so
many micrometers (10⁻⁶ m), and electric circuit components such as resistors have tolerances of perhaps
5%.
4. Range or Span
The range or span of an instrument defines the minimum and maximum
values of a quantity that the instrument is designed to measure.
5. Linearity
It is normally desirable that the output reading of an instrument is linearly proportional to the quantity
being measured. Nonlinearity is then defined as the maximum deviation of any output reading from the
best-fit straight line. Nonlinearity is usually expressed as a percentage of the full-scale reading.
Example 4] Suppose a liquid level from 5.5 to 8.6 m is linearly converted to a pneumatic pressure from
3 to 15 psi. What pressure will result from a level of 7.2 m? What level does a pressure of 4.7 psi
represent? [p = 9.6 psi, l = 5.9 m]
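The linear conversion in Example 4 can be sketched as a pair of interpolation functions:

```python
# Example 4: linear mapping between level (5.5-8.6 m) and pressure (3-15 psi).
def level_to_pressure(level_m):
    # straight line through (5.5 m, 3 psi) and (8.6 m, 15 psi)
    return 3.0 + (level_m - 5.5) * (15.0 - 3.0) / (8.6 - 5.5)

def pressure_to_level(p_psi):
    # inverse of the same straight line
    return 5.5 + (p_psi - 3.0) * (8.6 - 5.5) / (15.0 - 3.0)

print(round(level_to_pressure(7.2), 1))  # 9.6 psi
print(round(pressure_to_level(4.7), 1))  # 5.9 m
```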
If there is a deviation from linearity, then we express this value as “per cent linearity”.
Example 5] A 10,000-ohm variable resistance has a linearity of 0.1% and the movement of the contact
arm is 320°. (a) Determine the maximum position deviation in degrees and the resistance deviation
in ohms. (b) If the instrument is to be used as a potentiometer with a linear scale of 0 to 1.6 V,
determine the maximum voltage error.
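Taking per-cent linearity as a fraction of the full-scale quantity in each case, Example 5 reduces to the following (a sketch of the usual interpretation):

```python
# Example 5: +/-0.1% linearity applied to a 10 kohm pot with 320 deg travel.
linearity = 0.001  # 0.1% expressed as a fraction

print(linearity * 320)     # (a) max position deviation: 0.32 degrees
print(linearity * 10_000)  # (a) max resistance deviation: 10 ohms
print(linearity * 1.6)     # (b) max voltage error: 0.0016 V = 1.6 mV
```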
6. Static Sensitivity
The sensitivity of an instrument should be high and therefore the instrument should not have a range
greatly exceeding the value to be measured. However, some margin should be kept for any accidental
overloads. Deflection factor is the inverse of sensitivity.
Example 6]
The following resistance values of a platinum resistance thermometer were measured at a range of
temperatures. Determine the measurement sensitivity of the instrument in ohms/°C.

Resistance (Ω)    307  314  321  328
Temperature (°C)  200  230  260  290

S = 7/30 ohms/°C ≈ 0.233 ohms/°C
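Since the tabulated data are exactly linear, the sensitivity is simply the slope of the resistance–temperature line:

```python
# Example 6: sensitivity of the platinum resistance thermometer.
temps = [200, 230, 260, 290]    # deg C
ohms = [307, 314, 321, 328]     # measured resistance values

# The data are perfectly linear: 7 ohms for every 30 deg C,
# so endpoints (or any pair of points) give the same slope.
sensitivity = (ohms[-1] - ohms[0]) / (temps[-1] - temps[0])
print(sensitivity)  # 7/30, about 0.233 ohms per deg C
```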
7. Threshold
If the input to an instrument is increased gradually from zero, the input will have to reach a certain
minimum level before the change in the instrument output reading is of a large enough magnitude to be
detectable. This minimum level of input (detected by the instrument) is known as the threshold of the
instrument. Manufacturers vary in the way that they specify threshold for instruments. Some quote
absolute values, whereas others quote threshold as a percentage of full-scale readings. As an illustration,
a car speedometer typically has a threshold of about 15 km/h. This means that, if the vehicle starts from
rest and accelerates, no output reading is observed on the speedometer until the speed reaches 15 km/h.
8. Resolution/Discrimination
When an instrument is showing a particular output reading, there is a lower limit on the magnitude of the
change in the input measured quantity that produces an observable change in the instrument output. Like
threshold, resolution is sometimes specified as an absolute value and sometimes as a percentage of FS
deflection. One of the major factors influencing the resolution of an instrument is how finely its output
scale is divided into subdivisions. Using a car speedometer as an example again, this has subdivisions of
typically 20 km/h. This means that when the needle is between the scale markings, we cannot estimate
speed more accurately than to the nearest 5 km/h. This value of 5 km/h thus represents the resolution of
the instrument.
The smallest increment in input (the quantity being measured) which can be detected with certainty by an
instrument is its resolution or discrimination. Resolution defines the smallest measurable input change
while threshold defines the smallest measurable input.
9. Sensitivity to Disturbance
All calibrations and specifications of an instrument are only valid under controlled conditions of
temperature, pressure, and so on. These standard ambient conditions are usually defined in the
instrument specification. As variations occur in the ambient temperature, certain static instrument
characteristics change, and the sensitivity to disturbance is a measure of the magnitude of this change.
Such environmental changes affect instruments in two main ways, known as zero drift and sensitivity drift.
Zero drift is sometimes known by the alternative term, bias.
Zero drift or bias describes the effect where the zero reading of an instrument is modified by a change in
ambient conditions. This causes a constant error that exists over the full range of measurement of the
instrument. The mechanical form of a bathroom scale is a common example of an instrument prone to
zero drift. It is quite usual to find that there is a reading of perhaps 1 kg with no one on the scale. If
someone of known weight 70 kg were to get on the scale, the reading would be 71 kg, and if someone of
known weight 100 kg were to get on the scale, the reading would be 101 kg. Zero drift is normally
removable by calibration. In the case of the bathroom scale just described, a thumbwheel is usually
provided that can be turned until the reading is zero with the scales unloaded, thus removing zero drift.
The typical unit by which such zero drift is measured is volts/°C. This is often called the zero-drift
coefficient related to temperature changes. If the characteristic of an instrument is sensitive to several
environmental parameters, then it will have several zero drift coefficients, one for each environmental
parameter.
Sensitivity drift (also known as scale factor drift) defines the amount by which an instrument’s
sensitivity of measurement varies as ambient conditions change. It is quantified by sensitivity drift
coefficients that define how much drift there is for a unit change in each environmental parameter that
the instrument characteristics are sensitive to. Many components within an instrument are affected by
environmental fluctuations, such as temperature changes: for instance, the modulus of elasticity of a
spring is temperature dependent. Figure above shows what effect sensitivity drift can have on the output
characteristic of an instrument. Sensitivity drift is measured in units of the form (angular degree/bar)/°C.
If an instrument suffers both zero drift and sensitivity drift at the same time, then the typical modification
of the output characteristic is shown in Figure above.
Example 7]
A spring balance is calibrated in an environment at a temperature of 20°C and has the following
deflection/load characteristic:

Load (kg)        0   1   2   3
Deflection (mm)  0  20  40  60

It is then used in an environment at a temperature of 30°C, and the following deflection/load
characteristic is measured:

Load (kg)        0   1   2   3
Deflection (mm)  5  27  49  71

Determine the zero drift and sensitivity drift per °C change in ambient temperature.
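Example 7 can be worked numerically by comparing the intercepts and slopes of the two characteristics (a sketch):

```python
# Example 7: drift of a spring balance between 20 and 30 deg C.
loads = [0, 1, 2, 3]          # kg
defl_20 = [0, 20, 40, 60]     # mm, calibrated at 20 deg C
defl_30 = [5, 27, 49, 71]     # mm, measured at 30 deg C

# Zero drift: shift of the zero-load reading per degree of temperature change.
zero_drift = (defl_30[0] - defl_20[0]) / (30 - 20)           # mm per deg C

# Sensitivity drift: change of slope (mm/kg) per degree of temperature change.
sens_20 = (defl_20[1] - defl_20[0]) / (loads[1] - loads[0])  # 20 mm/kg
sens_30 = (defl_30[1] - defl_30[0]) / (loads[1] - loads[0])  # 22 mm/kg
sens_drift = (sens_30 - sens_20) / (30 - 20)

print(zero_drift)   # 0.5 mm/deg C
print(sens_drift)   # 0.2 (mm/kg)/deg C
```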
10. Zero offset – the output reading of an instrument when the true value of the measurand is zero.
11. Relaxation – time lag between the cause and effect of a physical phenomenon, given in
the form of a time constant.
12. Hysteresis
Figure (right) illustrates the output characteristic of an instrument that exhibits hysteresis. If the input
measured quantity to the instrument is increased steadily from a negative value, the output reading varies
in the manner shown in curve A. If the input variable is then decreased steadily, the output varies in the
manner shown in curve B. The noncoincidence between these loading and unloading curves is known as
hysteresis. Two quantities are defined, maximum input hysteresis and maximum output hysteresis, as
shown. These are normally expressed as a percentage of the full-scale input or output reading, respectively.
Hysteresis can be expressed as the difference between the indication of a measuring instrument when
the value of the measured quantity is reached by increasing or decreasing of that quantity.
13. Dead Space
Dead space is defined as the range of different input values over which there is no change in output
value. Any instrument that exhibits hysteresis also displays dead space. However, some instruments that
do not suffer from any significant hysteresis can still exhibit a dead space in their output characteristics.
Backlash in gears is a typical cause of dead space and results in this sort of instrument output
characteristic. Backlash is commonly experienced in gear sets used to convert between translational and
rotational motion (a common technique used to measure translational velocity).
14. Dead Time
Dead time is defined as the time required by a measurement system to begin to respond to a change
in the measurand.
15. Magnification
Magnification means increasing the magnitude of the output signal of the measuring device many times
in order to make the output reading more visible or readable.
16. Readability
Readability refers to how easily the measurement can be read from the instrument; this is also an
important characteristic of the instrument.
17. Reliability
If you toss a coin ten times you might find, for example, that it lands heads uppermost six times out
of the ten. If, however, you toss the coin a very large number of times, then it is likely that it will land
heads uppermost half of the time. The probability of it landing heads uppermost is said to be one-half;
the probability of a particular event occurring is defined as the fraction of trials in which it occurs when
the total number of trials is very large. The probability of the coin landing with either heads or tails
uppermost is 1, since every time the coin is tossed this event will occur. A probability of 1 means a
certainty that the event will take place every time. The probability of the coin landing standing on its
edge can be considered to be zero, since the number of occurrences of such an event is zero. The closer
the probability is to 1, the more frequently the event will occur; the closer it is to zero, the less frequently
it will occur.
A high reliability system will have a low failure rate. Failure rate is the number of times during
some period of time that the system fails to meet the required level of performance, i.e.:

failure rate = number of failures / (number of systems observed × time observed)
A failure rate of 0.4 per year means that in one year, if ten systems are observed, 4 will fail to
meet the required level of performance. If 100 systems are observed, 40 will fail to meet the required
level of performance. Failure rate is affected by environmental conditions. For example, the failure rate
for a temperature measurement system used in hot, dusty, humid, corrosive conditions might be 1.2 per
year, while for the same system used in dry, cool, non-corrosive environment it might be 0.3 per year.
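With failure rate expressed per instrument-year, the expected number of failures scales with both the number of systems observed and the observation period (a sketch; the helper function name is illustrative):

```python
# Expected failures = failure rate x number of systems x observation time.
def expected_failures(rate_per_year, n_systems, years=1.0):
    return rate_per_year * n_systems * years

print(expected_failures(0.4, 10))    # 4.0 failures among 10 systems in a year
print(expected_failures(0.4, 100))   # 40.0 failures among 100 systems
print(expected_failures(1.2, 5, 2))  # harsh environment, 5 systems, 2 years
```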
With a measurement system consisting of a number of elements, failure occurs when just one of
the elements fails to reach the required performance. Thus in a system for the measurement of the
temperature of a fluid in some plant we might have a thermocouple, an amplifier and a meter. The failure
rate is likely to be highest for the thermocouple since that is the element in contact with the fluid while
the other elements are likely to be in the controlled atmosphere of a control room. The reliability of the
system might thus be markedly improved by choosing materials for the thermocouple which resist attack
by the fluid. Thus it might be in a stainless steel sheath to prevent fluid coming into direct contact with
the thermocouple wires.
Example 8]
The failure rate for a pressure measurement system used in factory A is found to be 1.0 per year while
that for the system used in factory B is 3.0 per year. Which factory has the more reliable pressure
measurement system?
The higher the reliability, the lower the failure rate. Thus factory A has the more reliable system. The
failure rate of 1.0 per year means that if 100 instruments are checked over a period of a year, 100
failures will be found, i.e., on average each instrument fails once. The failure rate of 3.0 means that
if 100 instruments are checked over a period of a year, 300 failures will be found, i.e., instruments are
failing more than once in the year.
Example 9]
A moving coil voltmeter has a uniform scale with 100 divisions, the FS reading is 200 V and 1/10 of a
scale divisions can be estimated with a fair degree of certainty. Determine the resolution of the
instrument in volt.
scale division = 200/100 = 2 V
resolution = (1/10) × 2 = 0.2 V
Example 10]
A digital voltmeter has a read-out range from 0 to 9999 counts. Determine the resolution of the
instrument in volt when the FS reading is 9.999 V.
resolution = (1/9999) × 9.999 V = 1.0 mV
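Both resolution calculations follow the same pattern: full-scale reading divided by the number of distinguishable steps.

```python
# Example 9: analogue meter, 200 V FS, 100 divisions, readable to 1/10 division.
scale_division = 200 / 100          # 2 V per division
print(scale_division / 10)          # resolution: 0.2 V

# Example 10: digital meter, 9.999 V FS spread over 9999 counts.
print(9.999 / 9999)                 # resolution: 0.001 V = 1 mV
```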
DYNAMIC CHARACTERISTICS
The static characteristics of measuring instruments are concerned only with the steady-state reading that
the instrument settles down to, such as accuracy of the reading. However, many measurements are
concerned with rapidly varying quantities, therefore we must examine the dynamic relations which exist
between the output and the input. This is normally done with the help of differential equations.
Performance criteria based upon dynamic relations constitute the Dynamic Characteristics.
The dynamic characteristics of a measuring instrument describe its behavior between the time a
measured quantity changes value and the time when the instrument output attains a steady value in
response. As with static characteristics, any values for dynamic characteristics quoted in instrument data
sheets only apply when the instrument is used under specified environmental conditions. Outside these
calibration conditions, some variation in the dynamic parameters can be expected.
When an input is applied to an instrument or a measurement system, the instrument or the system cannot
take up its final steady-state position immediately. Instead, the system goes through a transient state
before it finally settles to its final steady-state position.
Some measurements are made under conditions such that sufficient time is available for the instrument or
the measurement system to settle to its final steady-state conditions. Under such conditions, the study of
the behavior of the system in the transient state is of little importance; only the steady-state response of
the system need be considered. However, in many areas of measurement system applications it becomes
necessary to study the response of the system under both transient and steady-state conditions. In many
applications, the transient response is more important than the steady-state response.
It has been pointed out that instruments and measuring systems do not respond to the input
immediately. This is on account of the presence of energy-storage elements in the system, such as
electrical inductance and capacitance, mass, and fluid and thermal capacitances. The systems exhibit a
characteristic sluggishness on account of the presence of these elements. Furthermore, pure delay in
time is encountered when a system “waits” for some specific changes and reactions to take place.
Invariably, measurement systems, especially in industrial, aerospace, and biological applications, are
subjected to inputs that are not static but dynamic in nature, i.e., the inputs vary with time. Since the
input varies from instant to instant, so does the output. The behavior of the system under such conditions
is described by its dynamic response, characterized by:
1. Speed of response – the rapidity with which an instrument responds to changes in the measured
quantity.
2. Response time – defined as the time required by instrument or system to settle to its final steady
position after the application of the input. For portable instruments, it is the time taken by the
pointer to come to rest within ±0.3% of final scale length. For switchboard (panel) type of
instruments, it is the time taken by the pointer to come to rest within ±1% of its final scale length.
3. Lag – defined as the delay in the response of an instrument to a change in the measured quantity.
This is important when high speed measurements are required.
a. Retardation type – the response of the instrument begins immediately after a change in
the measurand has occurred.
b. Time delay type – the response of the system begins after a ‘dead time’ after the
application of the input.
4. Fidelity – the ability of the system to reproduce the output in the same form as the input.
Example, a linearly varying quantity applied to a system and produces an output that also varies
linearly, then the system has 100% fidelity.
5. Dynamic error – the difference between the true value of the quantity changing with time
and the value indicated by the instrument, assuming zero static error.

The general dynamic behavior of an instrument can be described by a differential equation of the form

a_n (d^n q_o/dt^n) + … + a_1 (dq_o/dt) + a_0 q_o = b_m (d^m q_i/dt^m) + … + b_1 (dq_i/dt) + b_0 q_i

where q_i is the measured quantity, q_o is the output reading, and a_0 … a_n, b_0 … b_m are constants.
3. 2nd-order instruments – described by a_2 (d²q_o/dt²) + a_1 (dq_o/dt) + a_0 q_o = b_0 q_i,
where K = b_0/a_0 (static sensitivity), ω = √(a_0/a_2) (undamped natural frequency), and
ξ = a_1/(2√(a_0 a_2)) (damping ratio).
The output responses of a second-order instrument for various values of ξ following a step change in the
value of the measured quantity at time t are shown in Figure. For case A, where ξ = 0, there is no damping,
and the instrument output exhibits constant-amplitude oscillations when disturbed by any change in the
physical quantity measured. For light damping of ξ = 0.2, represented by case B, the response to a step
change in input is still oscillatory but the oscillations die down gradually. A further increase in the value of
ξ reduces oscillations and overshoot still more, as shown by curves C and D, and finally the response
becomes very overdamped, as shown by curve E, where the output reading creeps up slowly toward the
correct reading. Clearly, the extreme response curves A and E are grossly unsuitable for any measuring
instrument. If an instrument were only ever subjected to step inputs, then the design strategy would
be to aim toward a damping ratio of 0.707, which gives the critically damped response (C). Unfortunately,
most of the physical quantities that instruments are required to measure do not change in the
mathematically convenient form of steps, but rather in the form of ramps of varying slopes. As the form
of the input variable changes, so the best value for ξ varies, and the choice of ξ becomes a compromise
between those values that are best for each type of input variable behavior anticipated. Commercial
second-order instruments, of which the accelerometer is a common example, are generally designed to
have a damping ratio (ξ) somewhere in the range of 0.6–0.8.
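The effect of the damping ratio can be illustrated by numerically integrating the second-order step response (a minimal Euler-integration sketch; K = 1 and ωn = 1 are assumed purely for illustration):

```python
# Unit-step response of q'' + 2*zeta*wn*q' + wn^2*q = wn^2*1  (K = 1, wn = 1).
def step_response(zeta, wn=1.0, t_end=30.0, dt=0.001):
    q, dq = 0.0, 0.0                 # output and its derivative, starting at rest
    for _ in range(int(t_end / dt)):
        ddq = wn * wn * (1.0 - q) - 2.0 * zeta * wn * dq
        dq += ddq * dt               # semi-implicit Euler step
        q += dq * dt
    return q

# Lightly damped, ~0.707, and heavily damped cases all settle near 1.0;
# they differ in overshoot and settling time along the way.
for zeta in (0.2, 0.707, 1.0):
    print(zeta, round(step_response(zeta), 3))
```

Shortening `t_end` and recording `q` at each step would reproduce the family of curves A–E described above (undamped ξ = 0 oscillates indefinitely and never settles).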
The foregoing discussion has described the static and dynamic characteristics of measuring instruments
in some detail. However, an important qualification that has been omitted from this discussion is that an
instrument only conforms to stated static and dynamic patterns of behavior after it has been calibrated.
It can normally be assumed that a new instrument will have been calibrated when it is obtained from an
instrument manufacturer and will therefore initially behave according to the characteristics stated in the
specifications. During use, however, its behavior will gradually diverge from the stated specification for a
variety of reasons. Such reasons include mechanical wear and the effects of dirt, dust, fumes, and
chemicals in the operating environment. The rate of divergence from standard specifications varies
according to the type of instrument, the frequency of usage, and the severity of the operating conditions.
However, there will come a time, determined by practical knowledge, when the characteristics of the
instrument will have drifted from the standard specification by an unacceptable amount. When this
situation is reached, it is necessary to recalibrate the instrument back to standard specifications. Such
recalibration is performed by adjusting the instrument at each point in its output range until its output
readings are the same as those of a second standard instrument to which the same inputs are applied.
This second instrument is one kept solely for calibration purposes whose specifications are accurately
known.
Calibration should be carried out using equipment which can be traceable back to national standards with
a separate calibration record kept for each measurement instrument. This record is likely to contain a
description of the instrument and its reference number, the calibration date, the calibration results, how
frequently the instrument is to be calibrated and probably details of the calibration procedure to be used,
details of any repairs or modifications made to the instrument, and any limitations on its use.
Problem-Solving
1. A tungsten resistance thermometer with a range of -270 to +1100°C has a quoted inaccuracy of
±1.5% FS. What is the likely measurement error when it is reading a temperature of 950°C?
2. A manganin-wire pressure sensor has a measurement range of 0-20,000 bar and a quoted
inaccuracy of ±1% of full scale deflection. What is the maximum measurement error when the
instrument is reading a pressure of 15,000 bar?
3. A voltmeter is accurate to 98% of its full-scale reading. (a) if a voltmeter reads 175 V on the 300-
V range, what is the absolute error of the reading? (b) what is percentage of error of the reading
in (a)?
4. The measured value of a voltage is 111 V while its true value is 110 V. Calculate the relative
accuracy. [0.91%]
5. A pressure gauge with a measurement range of 0-30 bar has a quoted inaccuracy of ±0.5% of the
full-scale reading.
a. What is the maximum measurement error expected for this instrument?
b. What is the likely measurement error expressed as a percentage of the output reading if
this pressure gauge is measuring a pressure of 5 bar?
c. If the measurement error when reading a pressure of 5 bar is deemed to be too high,
what are the two main options for reducing the measurement error? Which of these two
options would you recommend and why would you recommend it?
6. The width of a room is measured 10 times by an ultrasonic rule and the following measurements
are obtained (units of meters): 4.292, 4.295, 4.296, 4.293, 4.292, 4.294, 4.293, 4.290, 4.294, and
4.291. The width of the same room is then measured by a calibrated steel tape that gives a reading
of 4.276 m, which can be taken as the correct value for the width of the room.
a. What is the measurement precision of the ultrasonic rule?
b. What is the maximum measurement inaccuracy of the ultrasonic rule?
7. If the average of a set of voltage readings is 30.15 V, compute the precision of one of the readings
that was equal to 29.9 V.
8. The diameter of a copper conductor varies over its length as shown in the table.
Reading # 1 2 3 4 5 6
Dia. (mm) 2.21 2.18 2.20 2.21 2.17 2.19
a. Calculate the precision of each measurement.
b. Calculate the average precision.
9. The output voltage of an amplifier was measured by six different students using the same
oscilloscope with the following results: 20.20, 19.90, 20.05, 20.10, 19.85, and 20.0 V. Which is the
most precise measurement?
10. A batch of steel rods is manufactured to a nominal length of 5 meters with a quoted tolerance of
±2%. What is the longest and shortest length of rod to be expected in the batch?
11. A packet of resistors bought in an electronics component shop gives the nominal resistance value
as 5000 Ω and the manufacturing tolerance as ±3%. If one resistor is chosen at random from the
packet, what is the minimum and maximum resistance value that this particular resistor is likely
to have?
12. A 0–100 V voltmeter has 200 scale divisions which can be read to ½ division. Determine the resolution
of the meter in volts. [0.25 V]
13. A moving coil ammeter has a uniform scale with 50 divisions and gives a full-scale reading of 5
A. The instrument can read up to ¼ of a scale division with a fair degree of certainty. Determine
the resolution of the instrument in mA. [25 mA]
14. What is the measurement range for a micrometer designed to measure diameters between 5.0
and 7.5 cm?
15. The dead zone of a certain pyrometer is 0.125% of the span. The calibration is 800°C to 1800°C.
What temperature change must occur before it is detected? [1.25°C]
16. A tungsten/5% rhenium–tungsten/26% rhenium thermocouple has an output emf as shown in the
table when its hot (measuring) junction is at the temperatures shown. Determine the sensitivity
of measurement for the thermocouple in mV/°C.

Output (mV)       4.37  8.74  13.11  17.48
Temperature (°C)   250   500    750   1000
17. The calibration of a pressure sensor with a range of 0–16 bar is checked by applying known inputs
to it in steps of 2 bar. The following measurements were obtained:

Input (bar)   0     2     4     6     8    10    12    14    16
Output (V)    0  0.55  1.25  1.70  2.35  3.15  3.65  4.05  4.85
Draw a graph by plotting the data points on the graph paper provided and draw a best-fit straight
line through the data points. Use this graph to calculate: