Static and Dynamic Characteristics of Measuring Instruments
Introduction
One of the most frequent tasks for an engineer involved in the design, commissioning, testing,
purchasing, operation or maintenance of industrial processes is to interpret
manufacturers' specifications for his or her own purpose. It is therefore of paramount importance
that one understands the basic form of an instrument specification and at least the generic
elements that appear in almost all instrument specifications.
Specifications of an instrument are provided by different manufacturers in different formats
and using different terms, which can sometimes cause confusion. Moreover, there are several
application-specific issues. Still, broadly speaking, these specifications can be classified into
three categories:
(i) static characteristics,
(ii) dynamic characteristics and
(iii) random characteristics.
Sensitivity
It can be defined as the ratio of the incremental output to the incremental input. While
defining the sensitivity, we assume that the input-output characteristic of the instrument is
approximately linear in that range. Thus, if the sensitivity of a thermocouple is denoted as
10 μV/°C, it indicates the sensitivity in the linear range of its voltage vs. temperature
characteristic. Similarly, the sensitivity of a spring balance may be expressed as 25 mm/kg (say),
indicating that an additional load of 1 kg will cause an additional displacement of the spring by 25 mm.
The sensitivity of an instrument may also vary with temperature or other external factors.
This is known as sensitivity drift. Suppose the sensitivity of the spring balance mentioned above
is 25 mm/kg at 20 °C and 27 mm/kg at 30 °C. Then the sensitivity drift is 0.2 (mm/kg)/°C. In
order to avoid such sensitivity drift, sophisticated instruments are either kept at a controlled
temperature, or suitable built-in temperature compensation schemes are provided inside the
instrument.
Sensitivity, K = Δθo / Δθi
where Δθo is the change in output and Δθi is the change in input.
For n elements in cascade, the sensitivity of the whole system is K = k1 × k2 × k3 × … × kn.
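As a quick numerical sketch of the cascade rule, the snippet below chains the text's thermocouple sensitivity with a hypothetical amplifier gain and display scale factor; those two values are assumptions for illustration only.

```python
# Sketch: static sensitivity of a cascaded measuring chain.

def sensitivity(delta_out, delta_in):
    """Static sensitivity K = change in output / change in input."""
    return delta_out / delta_in

# Thermocouple: 10 uV/degC (from the text's example)
k1 = sensitivity(10e-6, 1.0)   # V/degC
# Hypothetical amplifier: gain of 1000 V/V
k2 = 1000.0                    # V/V
# Hypothetical display: 100 counts per volt
k3 = 100.0                     # counts/V

# Overall sensitivity of the chain: K = k1 * k2 * k3
k_total = k1 * k2 * k3
print(f"Overall sensitivity: {k_total:.2f} counts/degC")  # 1.00 counts/degC
```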
Linearity
Linearity is actually a measure of the nonlinearity of the instrument. When we talk about
sensitivity, we assume the input/output characteristic of the instrument to be
approximately linear. But in practice it is normally nonlinear, as shown in Fig. 1.1. Linearity
is defined as the maximum deviation from the linear characteristic, expressed as a percentage of
the full-scale output. Thus,
Linearity = ΔO / (Omax − Omin) × 100
where ΔO = max(ΔO1, ΔO2).
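The calculation can be illustrated with a short sketch. The readings below are invented, and a least-squares line is assumed as the reference straight line (an endpoint-to-endpoint line is another common choice):

```python
import numpy as np

# Sketch: linearity as the maximum deviation of measured outputs from a
# reference straight line, as a percentage of the full-scale span.

inp = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])   # applied input
out = np.array([0.0, 2.1, 4.3, 6.2, 8.1, 10.0])   # instrument output

# Reference line: least-squares fit through the calibration points
slope, intercept = np.polyfit(inp, out, 1)
fit = slope * inp + intercept

delta_o = np.max(np.abs(out - fit))   # maximum deviation, ΔO
span = out.max() - out.min()          # Omax - Omin
print(f"Non-linearity: {delta_o / span * 100:.2f}% of full scale")
```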
Hysteresis
Hysteresis exists not only in magnetic circuits, but in instruments as well. For example, the
deflection of a diaphragm-type pressure gauge may be different for the same pressure, one value
obtained while the pressure is increasing and the other while it is decreasing, as shown in
Fig. 1.2. Hysteresis is expressed as the maximum hysteresis as a percentage of the full-scale
reading, i.e., referring to Fig. 1.2,
Hysteresis = H / (Omax − Omin) × 100
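A minimal sketch of this calculation, assuming invented up-scale and down-scale readings taken at the same input points:

```python
import numpy as np

# Sketch: hysteresis from paired up-scale and down-scale readings.

up_scale = np.array([0.00, 2.40, 4.90, 7.45, 10.0])    # increasing input
down_scale = np.array([0.00, 2.60, 5.15, 7.60, 10.0])  # decreasing input

h_max = np.max(np.abs(up_scale - down_scale))  # maximum hysteresis H
span = up_scale.max() - up_scale.min()         # Omax - Omin
print(f"Hysteresis: {h_max / span * 100:.1f}% of full scale")  # 2.5%
```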
Example:
The accuracy specified for a pressure gauge of range 0-10 kPa is 2%. Find the maximum error in
the measurement, in Pa, if it gives a reading of 4.0 kPa.
Answer: The accuracy is quoted as a percentage of the full-scale range, so the maximum error is
2% of 10 kPa = ±0.2 kPa = ±200 Pa, irrespective of the 4.0 kPa reading.
Precision
Precision indicates the repeatability or reproducibility of an instrument (but does not indicate
accuracy). If an instrument is used to measure the same input, but at different instants, spread
over the whole day, successive measurements may vary randomly. The random fluctuations of
readings, (mostly with a Gaussian distribution) are often due to random variations of several
other factors which have not been taken into account, while measuring the variable. A precision
instrument indicates that the successive reading would be very close, or in other words, the
standard deviation σe of the set of measurements would be very small. Quantitatively, the
precision can be expressed in terms of this standard deviation: the smaller σe, the higher the
precision.
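As an illustration, the following sketch estimates precision from the standard deviation of repeated readings; the readings themselves are invented:

```python
import statistics

# Sketch: quantifying precision via the standard deviation of repeated
# readings of the same input.

readings = [9.98, 10.02, 10.01, 9.99, 10.00, 10.03, 9.97, 10.00]

mean = statistics.mean(readings)
sigma_e = statistics.stdev(readings)  # sample standard deviation

print(f"mean = {mean:.3f}, sigma_e = {sigma_e:.3f}")
# A small sigma_e indicates high precision; it says nothing about
# accuracy unless the true value is known to be near the mean.
```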
The difference between precision and accuracy needs to be understood carefully. Precision
means that successive readings agree closely with one another, but it does not guarantee
accuracy; successive readings may be close to each other yet far from the true value. On the
other hand, an accurate instrument must also be precise, since successive readings must all be
close to the (unique) true value.
Repeatability
Ability of an instrument to give identical indications or responses for repeated applications of
the same value of the measured quantity under the same conditions of use. Good
repeatability does not guarantee accuracy. It is the ability of the measuring instrument to
repeat the same results for measurements of the same quantity, when the measurements
are carried out
- by the same observer,
- with the same instrument,
- under the same conditions,
- without any change in location,
- without change in the method of measurement,
- within short intervals of time.
Reproducibility
The similarity of one measurement to another over time, where the operating conditions have
varied within the time span, but the input is restored.
In other words, reproducibility is the closeness of the agreement between the results
of measurements of the same quantity, when individual measurements are carried out:
- by different observers,
- by different methods,
- using different instruments,
- under different conditions, locations, times etc.
Resolution
When the input is slowly increased from some non-zero value, it is observed that the
output does not change at all until a certain increment is exceeded; this increment is
called resolution.
It is the minimum change in the measured variable which produces an effective response of the
instrument.
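A minimal sketch of this behaviour, modelling a hypothetical instrument that quantises its input with an assumed step size; the output does not change until the input increment exceeds one step:

```python
# Sketch: resolution of an idealised digital indicator.
STEP = 0.05  # assumed resolution, e.g. 0.05 V per count

def indicated(value):
    """Output of the instrument: input rounded down to the nearest step."""
    return (value // STEP) * STEP

# Increasing the input by less than one step leaves the output unchanged.
print(f"{indicated(2.320):.2f}")  # 2.30
print(f"{indicated(2.340):.2f}")  # 2.30 -> no response yet
print(f"{indicated(2.355):.2f}")  # 2.35 -> responds once a step is exceeded
```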
Resonance
Resonance occurs when the frequency of oscillation is maintained by the natural dynamics of the
system, so that the output oscillates at or near the system's natural frequency.
Response
When the output of a device is expressed as a function of time (due to an applied input) the
time taken to respond can provide critical information about the suitability of the device. A
slow-responding device may not be suitable for an application. This typically applies to
continuous control applications, where the response of the device becomes a dynamic response
characteristic of the overall control loop. However, in critical alarm applications where
devices are used for point measurement, the response may be just as important. The diagram
below shows the response of the system to a step input.
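Since the figure itself is not reproduced here, the following sketch computes the step response of a first-order instrument, a common model for a slow-responding device; the time constant is an assumed value:

```python
import math

# Sketch: step response of a first-order instrument.
TAU = 2.0         # assumed time constant, seconds
STEP_INPUT = 1.0  # unit step applied at t = 0

def response(t):
    """First-order response y(t) = 1 - exp(-t/tau) to a unit step."""
    return STEP_INPUT * (1.0 - math.exp(-t / TAU))

for t in [0.0, TAU, 2 * TAU, 3 * TAU, 5 * TAU]:
    print(f"t = {t:4.1f} s  ->  y = {response(t):.3f}")
# At t = tau the output reaches ~63.2% of its final value, and at
# t = 5*tau it is within ~0.7% -- a usual basis for quoting response time.
```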
1.2 Error
Through measurement, we try to obtain the value of an unknown parameter. However, this
measured value is rarely the actual or true value. If the measured value is very close to the
true value, we call it a very accurate measuring system. But before the measured data are put to
further use, one must have some idea of how accurate they are. So error
analysis is an integral part of measurement. We should also have a clear idea of the
sources of error, and of how they can be reduced by properly designing the measurement
methodology and by repetitive measurements. Besides, for maintaining accuracy, the
readings of the measuring instrument must frequently be compared with, and adjusted against,
the readings of a standard instrument. This process is known as calibration.
Error Analysis
The term error in a measurement is defined as:
Error = Instrument Reading − True Reading
Error is often expressed in percentage as:
% Error = (Instrument Reading − True Reading) / True Reading × 100
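A one-line sketch of this formula, with invented readings:

```python
# Sketch: percentage error of a reading against a known true value.
def percent_error(instrument_reading, true_reading):
    """% Error = (instrument reading - true reading) / true reading * 100."""
    return (instrument_reading - true_reading) / true_reading * 100

print(f"{percent_error(4.1, 4.0):+.1f}%")  # +2.5%
```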
The errors in instrument readings may be classified into three categories:
1. Gross errors
2. Systematic errors
3. Random Errors.
Gross errors arise due to human mistakes, such as reading the instrument value before it
reaches steady state, or mistakes in recording the measured data or in calculating a derived
quantity. Parallax error in reading an analog scale is also a source of gross error.
Careful reading and recording of the data can reduce gross errors to a great extent.
Systematic errors are those that affect all the readings in a particular fashion. Zero error, and
bias of an instrument are examples of systematic errors.
On the other hand, there are a few errors whose cause is not clearly known and which
affect the readings in a random way. This type of error is known as random error.
There is an important difference between systematic errors and random errors. In most
cases, systematic errors can be corrected by calibration, whereas random errors can
never be corrected; they can only be reduced by averaging, or their limits can be estimated.
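The following sketch illustrates this difference, using assumed values for the bias (systematic error) and the spread (random error) of the readings: averaging shrinks the random part but leaves the bias untouched.

```python
import random
import statistics

# Sketch: random errors shrink with averaging; a systematic bias does not.
random.seed(1)
TRUE_VALUE = 10.0
BIAS = 0.30   # assumed systematic error (correctable by calibration)
SIGMA = 0.20  # assumed std dev of the random error

def one_reading():
    return TRUE_VALUE + BIAS + random.gauss(0.0, SIGMA)

for n in (1, 10, 1000):
    mean_of_n = statistics.mean(one_reading() for _ in range(n))
    print(f"n = {n:5d}: mean reading = {mean_of_n:.3f}")
# The spread of the mean falls roughly as sigma/sqrt(n), but every mean
# stays offset by the bias of 0.30 until the instrument is calibrated.
```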
Figure 1.3: (a) single point calibration, (b) two point calibration, (c) three point calibration