
Static and dynamic characteristics of measuring instruments

Introduction
One of the most frequent tasks of an engineer involved in the design, commissioning, testing,
purchasing, operation or maintenance of industrial processes is to interpret
manufacturers' specifications for his or her own purpose. It is therefore of paramount importance
to understand the basic form of an instrument specification and at least the generic
elements that appear in almost all instrument specifications.
Specifications of an instrument are provided by different manufacturers in different formats and
with different terms, which may sometimes cause confusion. Moreover, there are several
application-specific issues. Still, broadly speaking, these specifications can be classified into
three categories:
(i) static characteristics,
(ii) dynamic characteristics and
(iii) random characteristics

1.1 Static Characteristics


Static characteristics refer to the characteristics of the system when the input is either held
constant or varying very slowly. The items that can be classified under the heading static
characteristics are mainly:

Range (or span)


It defines the maximum and minimum values of the inputs or the outputs for which the
instrument is recommended to be used. For example, for a temperature measuring instrument the
input range may be 100-500 °C and the output range may be 4-20 mA.

Sensitivity
It can be defined as the ratio of the incremental output to the incremental input. While
defining the sensitivity, we assume that the input-output characteristic of the instrument is
approximately linear in that range. Thus, if the sensitivity of a thermocouple is quoted as 10 μV/°C,
it indicates the sensitivity in the linear range of the thermocouple voltage vs. temperature
characteristic. Similarly, the sensitivity of a spring balance can be expressed as 25 mm/kg (say),
indicating that an additional load of 1 kg will cause an additional displacement of the spring by 25 mm.
The sensitivity of an instrument may also vary with temperature or other external factors.
This is known as sensitivity drift. Suppose the sensitivity of the spring balance mentioned above
is 25 mm/kg at 20 °C and 27 mm/kg at 30 °C. Then the sensitivity drift per °C is 0.2 (mm/kg)/°C. In
order to avoid such sensitivity drift, sophisticated instruments are either kept at a controlled
temperature, or suitable built-in temperature compensation schemes are provided inside the
instrument.
Sensitivity, K = Δθo / Δθi
where Δθo is the change in output and Δθi is the change in input.
The sensitivity of the whole measuring system (a cascade of elements) is K = k1 × k2 × k3 × … × kn
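
As an illustrative sketch (not part of the original text), the following Python snippet computes the sensitivity and the sensitivity drift for the spring-balance numbers quoted above, and the overall sensitivity of a chain of elements; the function name and the amplifier gain are hypothetical.

def sensitivity(delta_output, delta_input):
    # K = change in output / change in input
    return delta_output / delta_input

# Spring balance from the text: 25 mm of extra deflection per extra kg of load
k_20 = sensitivity(delta_output=25.0, delta_input=1.0)   # mm/kg at 20 °C
k_30 = sensitivity(delta_output=27.0, delta_input=1.0)   # mm/kg at 30 °C

# Sensitivity drift = change in sensitivity per unit change in temperature
drift = (k_30 - k_20) / (30.0 - 20.0)                     # = 0.2 (mm/kg)/°C
print(f"K at 20 °C = {k_20} mm/kg, drift = {drift} (mm/kg)/°C")

# Overall sensitivity of a measuring chain is the product k1 * k2 * ... * kn,
# e.g. a 10 μV/°C thermocouple followed by a (hypothetical) gain-1000 amplifier:
k_overall = 10e-6 * 1000.0                                # V/°C of the whole chain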
Linearity
Linearity is actually a measure of the nonlinearity of the instrument. When we talk about
sensitivity, we assume the input/output characteristic of the instrument to be
approximately linear. But in practice it is normally nonlinear, as shown in Fig. 1.1. The linearity
is defined as the maximum deviation from the linear characteristic, expressed as a percentage of the full-scale
output. Thus,
Linearity = ΔO / (Omax − Omin) × 100
where ΔO = max(ΔO1, ΔO2)

Figure 1.1: Linearity
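
As Fig. 1.1 is not reproduced here, the following minimal Python sketch shows how a linearity figure could be computed from a set of calibration points, assuming a least-squares straight line as the reference characteristic (one common convention); the data and the function name are hypothetical.

import numpy as np

def linearity_percent_fs(inputs, outputs):
    # Maximum deviation from a fitted straight line, as a % of (Omax - Omin)
    inputs = np.asarray(inputs, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    slope, intercept = np.polyfit(inputs, outputs, 1)   # reference straight line
    deviation = np.abs(outputs - (slope * inputs + intercept)).max()
    return 100.0 * deviation / (outputs.max() - outputs.min())

# Hypothetical calibration of a slightly nonlinear 4-20 mA temperature transmitter
x = [100, 200, 300, 400, 500]        # input, °C
y = [4.0, 8.1, 12.3, 16.2, 20.0]     # output, mA
print(f"Linearity: {linearity_percent_fs(x, y):.2f} % of full scale")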

Hysteresis
Hysteresis exists not only in magnetic circuits, but in instruments as well. For example, the
deflection of a diaphragm-type pressure gauge may be different for the same pressure, depending
on whether the pressure is increasing or decreasing, as shown in Fig. 1.2. The hysteresis is expressed as the
maximum hysteresis as a percentage of the full-scale reading, i.e., referring to Fig. 1.2,

Hysteresis = H / (Omax − Omin) × 100

Figure 1.2: Hysteresis
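
Similarly (Fig. 1.2 not being reproduced here), a short sketch of the hysteresis calculation from an increasing and a decreasing sweep taken at the same input points; the readings are hypothetical.

import numpy as np

def hysteresis_percent_fs(output_up, output_down):
    # Maximum difference H between the rising and falling curves, as a % of the span
    up, down = np.asarray(output_up, float), np.asarray(output_down, float)
    h = np.abs(up - down).max()
    span = max(up.max(), down.max()) - min(up.min(), down.min())
    return 100.0 * h / span

# Hypothetical pressure-gauge readings at the same pressures, rising then falling
up   = [0.0, 2.1, 4.3, 6.2, 8.1, 10.0]
down = [0.3, 2.5, 4.8, 6.7, 8.4, 10.0]
print(f"Hysteresis: {hysteresis_percent_fs(up, down):.1f} % of full scale")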


Accuracy
Accuracy indicates the closeness of the measured value to the actual or true value, and is
expressed in the form of the maximum error (= measured value − true value) as a percentage of the
full-scale reading. Thus, if the accuracy of a temperature indicator with a full-scale range of
0-500 °C is specified as ±0.5%, it indicates that the measured value will always be within ±2.5 °C
of the true value, if measured with a standard instrument during the process of calibration.
But if it indicates a reading of 250 °C, the error will still be ±2.5 °C, i.e. ±1% of the reading. Thus
it is always better to choose a scale of measurement where the input is near the full-scale value.
The true value, however, is always difficult to obtain; we use standard calibrated instruments in the
laboratory to measure the true value of the variable.

Example:
The accuracy specified for a pressure gauge of range 0-10 kPa is 2%. Find the maximum error in
the measurement, in Pa, if it gives a reading of 4.0 kPa.
Answer: ±0.2 kPa, i.e. ±200 Pa.
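
The arithmetic behind this answer can be sketched as follows (the function name is illustrative); note that the error expressed relative to the reading of 4.0 kPa is much larger than 2%.

def max_error_from_fs_accuracy(accuracy_percent_fs, range_min, range_max):
    # Absolute maximum error implied by an accuracy quoted as a % of full scale
    return accuracy_percent_fs / 100.0 * (range_max - range_min)

err_pa = max_error_from_fs_accuracy(2.0, 0.0, 10_000.0)   # 2 % of 0-10 kPa, in Pa
print(f"Maximum error: ±{err_pa:.0f} Pa")                  # ±200 Pa = ±0.2 kPa
print(f"At a reading of 4.0 kPa: ±{100 * err_pa / 4000.0:.0f} % of the reading")  # ±5 %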

Precision
Precision indicates the repeatability or reproducibility of an instrument (but does not indicate
accuracy). If an instrument is used to measure the same input at different instants spread
over the whole day, successive measurements may vary randomly. The random fluctuations of
readings (mostly with a Gaussian distribution) are often due to random variations of several
other factors which have not been taken into account while measuring the variable. For a precise
instrument the successive readings would be very close, or in other words, the
standard deviation σe of the set of measurements would be very small. Quantitatively, the
precision can be expressed as:

Precision = measured range / σe
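
A minimal sketch of this expression for a set of repeated readings, assuming that "measured range" means the instrument's measurement range and that σe is the sample standard deviation (the readings are hypothetical):

import statistics

readings = [250.2, 249.8, 250.1, 250.3, 249.9, 250.0]  # hypothetical repeated readings, °C
sigma_e = statistics.stdev(readings)                   # sample standard deviation
measured_range = 500.0 - 0.0                           # e.g. a 0-500 °C indicator
precision = measured_range / sigma_e
print(f"sigma_e = {sigma_e:.3f} °C, precision = {precision:.0f}")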

The difference between precision and accuracy needs to be understood carefully. Precision
means that successive readings agree closely with each other, but it does not guarantee accuracy;
successive readings may be close to each other, yet far from the true value. On the other hand, an
accurate instrument has to be precise as well, since successive readings must all be close to the
(unique) true value.

Repeatability
Repeatability is the ability of an instrument to give identical indications or responses for repeated
applications of the same value of the measured quantity under the same conditions of use. Good
repeatability does not guarantee accuracy. It is the ability of the measuring instrument to
repeat the same results for measurements of the same quantity, when the measurements
are carried out
- by the same observer,
- with the same instrument,
- under the same conditions,
- without any change in location,
- without change in the method of measurement,
- within short intervals of time.

Reproducibility
Reproducibility is the similarity of one measurement to another over time, where the operating
conditions have varied within the time span, but the input is restored.
In other words, reproducibility is the closeness of the agreement between the results
of measurements of the same quantity, when the individual measurements are carried out:
- by different observers,
- by different methods,
- using different instruments,
- under different conditions, locations, times etc.

Resolution
- When the input is slowly increased from some non-zero value, it is observed that the
output does not change at all until a certain increment is exceeded; this increment is
called the resolution.
- It is the minimum change in the measured variable which produces an effective response of the
instrument.

Resonance
The frequency of oscillation is maintained due to the natural dynamics of the system; a measuring
device driven near this natural frequency responds with an exaggerated output amplitude.

Response
When the output of a device is expressed as a function of time (due to an applied input), the
time taken to respond can provide critical information about the suitability of the device. A
slow-responding device may not be suitable for an application. This typically applies to
continuous control applications, where the response of the device becomes a dynamic response
characteristic of the overall control loop. However, in critical alarm applications where
devices are used for point measurement, the response may be just as important. The diagram
below shows the response of the system to a step input.
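
Since the referenced diagram is not reproduced here, the sketch below illustrates what such a step response looks like for a simple first-order device; the time constant tau and the step size are assumed values for illustration only.

import math

tau = 2.0     # assumed time constant of the device, seconds
step = 1.0    # amplitude of the applied step input

# First-order step response: y(t) = step * (1 - exp(-t / tau))
for t in (0.0, 1.0, 2.0, 4.0, 6.0, 10.0):
    y = step * (1.0 - math.exp(-t / tau))
    print(f"t = {t:4.1f} s   y = {y:.3f}   ({100 * y / step:5.1f} % of final value)")

# After t = tau the output has reached about 63 % of its final value; after
# roughly 5 * tau it has effectively settled, which is one common way of
# quoting the response time of such a device.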

1.2 Error
Through measurement, we try to obtain the value of an unknown parameter. However, this
measured value can never be exactly the actual or true value. If the measured value is very close to the
true value, we call it a very accurate measuring system. But before the measured data are put
to further use, one must have some idea of how accurate they are. So error
analysis is an integral part of measurement. We should also have a clear idea of the
sources of error, and of how they can be reduced by properly designing the measurement
methodology and by repeated measurements. Besides, to maintain accuracy, the
readings of the measuring instrument have to be compared frequently with, and adjusted against,
the readings of a standard instrument. This process is known as calibration.

Error Analysis
The term error in a measurement is defined as:
Error = Instrument reading – true reading
Error is often expressed as a percentage:
% Error = ((Instrument Reading − True Reading) / True Reading) × 100
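
A trivial sketch of this definition (the function name is illustrative):

def percent_error(instrument_reading, true_reading):
    # Error referred to the true reading, expressed as a percentage
    return 100.0 * (instrument_reading - true_reading) / true_reading

print(percent_error(instrument_reading=252.5, true_reading=250.0))   # 1.0 %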
The errors in instrument readings may be classified into three categories:
1. Gross errors
2. Systematic errors
3. Random Errors.

Gross errors arise due to human mistakes, such as reading the instrument value before it
reaches steady state, or mistakes in recording the measured data or in calculating a derived
quantity, etc. Parallax error in reading an analog scale is also a source of gross error.
Careful reading and recording of the data can reduce gross errors to a great extent.

Systematic errors are those that affect all the readings in a particular fashion. Zero error and
the bias of an instrument are examples of systematic errors.

On the other hand, there are a few errors whose causes are not clearly known and which
affect the readings in a random way. This type of error is known as random error.

There is an important difference between systematic errors and random errors. In most
cases, systematic errors can be corrected by calibration, whereas random errors can
never be corrected; they can only be reduced by averaging, or their limits can be estimated.

1.3 Calibration and error reduction


It has already been mentioned that random errors cannot be eliminated. But by taking a
number of readings under the same conditions and taking their mean, we can considerably
reduce the random errors. In fact, if the number of readings is very large, we can say that the
mean value will approach the true value, and thus the error can be made almost zero. For a finite
number of readings, using statistical methods of analysis, we can also estimate the range
of the measurement error.
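
A minimal sketch of this idea, assuming roughly Gaussian random fluctuations: the mean of n readings is taken as the best estimate, and its standard error (which shrinks as 1/√n) gives a rough error range. The readings are hypothetical.

import statistics

readings = [10.03, 9.98, 10.05, 9.97, 10.02, 10.00, 9.99, 10.04]  # hypothetical repeats

mean = statistics.mean(readings)
s = statistics.stdev(readings)            # sample standard deviation
sem = s / len(readings) ** 0.5            # standard error of the mean

print(f"Best estimate      : {mean:.3f}")
print(f"Approx. error range: ±{2 * sem:.3f}  (about 95 % confidence)")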
On the other hand, systematic errors are well defined: the source of error can be identified
easily, and once identified, it is possible to eliminate the systematic error. But even for a simple
instrument, the systematic errors arise from a number of causes, and it is a tedious process to
identify and eliminate all the sources of error. An attractive alternative is to calibrate the
instrument for different known inputs.
Calibration is a process where a known input signal or a series of input signals is applied to the
measuring system. By comparing the actual input value with the output indication of the
system, the overall effect of the systematic errors can be observed. The errors at those
calibrating points are then made zero by trimming a few adjustable components, by using
calibration charts, or by using software corrections.
Strictly speaking, calibration involves comparing the measured value with that of standard
instruments, whose accuracy is in turn derived from comparison with the primary standards kept at
standards laboratories. In an actual calibrating system for, say, a pressure sensor, we require not only a
standard pressure measuring device, but also a test bench where the desired pressure can be
generated at different values. The calibration process of an acceleration measuring device is
more difficult, since the desired acceleration has to be generated on a body, the measuring
device has to be mounted on it, and the actual value of the generated acceleration has to be measured
in some indirect way.
The calibration can be done for all the points, and then, for actual measurement, the true value
can be obtained from a look-up table prepared and stored beforehand. This type of calibration
is often referred to as software calibration. Alternatively, a more popular way is to calibrate the
instrument at one, two or three points of measurement and trim the instrument through
independent adjustments, so that the error at those points becomes zero. It is then expected
that the error over the whole range of measurement will remain within a small band. These types
of calibration are known as single-point, two-point and three-point calibration. Typical input-
output characteristics of a measuring device under these three calibrations are shown in Fig. 1.3.
The single-point calibration is often referred to as offset adjustment, where the output of the
system is forced to be zero under zero input conditions. For electronic instruments this is often
done automatically, and the process is known as auto-zero calibration. For most field
instruments, calibration is done at two points: one at zero input and the other at full-scale input.
The two independent adjustments normally provided are known as the zero and span adjustments.
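
A minimal sketch of such a two-point (zero and span) correction done in software, where the raw output is adjusted so that the error is zero at the two calibration points; all numbers and names are hypothetical.

def two_point_calibration(raw_zero, raw_span, true_zero, true_span):
    # Return a correction function with zero error at the two calibration points
    gain = (true_span - true_zero) / (raw_span - raw_zero)   # span adjustment
    offset = true_zero - gain * raw_zero                     # zero adjustment
    return lambda raw: gain * raw + offset

# Instrument reads 0.12 at a true input of 0.0 and 9.70 at a true input of 10.0 (kPa)
correct = two_point_calibration(raw_zero=0.12, raw_span=9.70,
                                true_zero=0.0, true_span=10.0)
print(correct(0.12), correct(9.70), correct(5.00))   # 0.0, 10.0, corrected mid reading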
One important point needs to be mentioned at this juncture. The characteristics of an
instrument change with time. So even if it has been calibrated once, the output may deviate from the
calibrated points with time, temperature and other environmental conditions. Therefore the
calibration process has to be repeated at regular intervals if the instrument is to give
accurate values of the measurand throughout.

Figure 1.3: (a) Single-point calibration, (b) two-point calibration, (c) three-point calibration
