Metrology Assignment
METROLOGY AND
INSTRUMENTATION
KAILAS SREE CHANDRAN
S7 INDUSTRIAL 432
[email protected]
S7 N 2010
PRINCIPLES OF MEASUREMENT
Today, the techniques of measurement are of immense importance in
most facets of human civilization. Present-day applications of measuring
instruments can be classified into three major areas. The first of these is
their use in regulating trade, and includes instruments which measure
physical quantities such as length, volume and mass in terms of standard
units.
The second area of application for measuring instruments is in
monitoring functions. These provide information that enables human beings
to take some prescribed action accordingly. Whilst there are thus many
uses of instrumentation in our normal domestic lives, the majority of
monitoring functions exist to provide the information necessary to allow a
human being to control some industrial operation or process. In a chemical
process for instance, the progress of chemical reactions is indicated by the
measurement of temperatures and pressures at various points, and such
measurements allow the operator to take correct decisions regarding the
electrical supply to heaters, cooling water flows, valve positions, etc. One
other important use of monitoring instruments is in calibrating the
instruments used in the automatic process control systems.
Use as part of automatic control systems forms the third area for the
application of measurement systems. The characteristics of measuring
instruments in such feedback control systems are of fundamental importance
to the quality of control achieved. The accuracy and resolution with which an
output variable of a process is controlled can never be better than the
accuracy and resolution of the measuring instruments used. This is a very
important principle, but one which is often inadequately discussed in many
texts on automatic control systems. Such texts explore the theoretical
aspects of control system design in considerable depth, but fail to give
sufficient emphasis to the fact that all gain and phase margin performance
calculations are entirely dependent on the quality of the process measurements.
Measuring Equipment
A measuring instrument exists to provide information about the
physical value of some variable being measured. In simple cases, an
instrument consists of a single unit which gives an output reading or signal
according to the magnitude of the unknown variable applied to it. However,
in more complex measurement situations, a measuring instrument may
consist of several separate elements. These components might be contained
within one or more boxes, and the boxes holding individual measurement
elements might be either close together or physically separate. Because of
the modular nature of the elements within it, a measuring instrument is
commonly referred to as a measurement system, and this term is used
extensively to emphasize this modular nature. The final element of a
measurement system is the point where the measured signal is used. In some
cases the signal is fed directly into an automatic control scheme, while in
other cases, this element takes the form of either a signal presentation unit
or a signal recording unit. These take many forms according to the
requirements of the particular measurement application.
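To illustrate this modular view, the following Python sketch models a measurement system as a chain of processing elements applied one after another to the sensed quantity. The element names, sensitivity and gain values are invented for the example and do not describe any particular instrument.

```python
# A minimal sketch of the modular view of a measurement system:
# each element transforms the signal produced by the previous one.
# The element names and numeric values below are illustrative only.

def sensor(temperature_c):
    """Hypothetical thermocouple: converts temperature to millivolts."""
    return 0.041 * temperature_c          # assumed sensitivity, mV per degree C

def amplifier(millivolts):
    """Signal-conditioning element: amplifies the low-level signal."""
    return millivolts * 100.0             # assumed gain of 100

def display(amplified_signal):
    """Signal presentation element: converts back to engineering units."""
    return amplified_signal / 4.1         # inverse of the chain's overall scale factor

measurement_system = [sensor, amplifier, display]

signal = 85.0                             # true temperature, degrees C
for element in measurement_system:
    signal = element(signal)

print(f"Indicated temperature: {signal:.1f} degrees C")   # ~85.0
```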
PRECISION
Precision is how close repeated measured values are to each other. The
precision of a measurement also depends on the size of the unit used to make
it: the smaller the unit, the more precise the measurement. Consider measures
of time, such as 12 seconds and 12 days. A measurement of 12 seconds
implies a time between 11.5 and 12.5 seconds. This measurement is precise
to the nearest second, with a maximum potential error of 0.5 seconds. A
time of 12 days is far less precise. Twelve days suggests a time between
11.5 and 12.5 days, yielding a potential error of 0.5 days, or 43,200
seconds! Because the potential error is greater, the measure is less precise.
Thus, as the length of the unit increases, the measure becomes less precise.
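The half-unit rule described above can be written as a short Python calculation; the 12-second and 12-day readings are the ones from the example, and the small helper function is written only for this illustration.

```python
# Potential error is half the size of the reporting unit (the half-unit rule
# described above). The units compared are those used in the text.

def potential_error(unit_in_seconds):
    """Half the reporting unit, expressed in seconds."""
    return unit_in_seconds / 2.0

second = 1.0
day = 24 * 60 * 60            # 86,400 seconds

print(potential_error(second))   # 0.5 s for a reading of "12 seconds"
print(potential_error(day))      # 43,200.0 s for a reading of "12 days"
```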
Figure: Measurements in industrial settings, such as a rubber manufacturing plant, must be both accurate and precise; here a technician is measuring tire pressure.
A derived result can be no more precise than the measures used to compute it.
Suppose the sides of a rectangle are measured as 3.7 cm and 5.6 cm;
multiplying gives an area of 20.72 square centimeters. However, because the
first measure is between 3.65 and 3.75
cm, and the second measure is between 5.55 and 5.65 cm, the area is
somewhere between 20.2575 and 21.1875 square centimeters. Reporting
the result to the nearest hundredth of a square centimeter is misleading.
The accepted practice is to report the result using the fewest number of
significant digits in the original measures. Since both 3.7 and 5.6 have two
significant digits, the result is rounded to two significant digits and an area
of 21 square centimeters is reported. Again, while the result may not even
be this precise, this practice normally produces acceptable results.
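The interval arithmetic behind these figures can be verified with a few lines of Python; the side lengths and the 0.05 cm half-unit are taken from the example above.

```python
# Bounds on the area of the rectangle in the example above.
# Each side is known only to the nearest 0.1 cm, i.e. +/- 0.05 cm.

side_a, side_b = 3.7, 5.6          # reported measures, cm
half_unit = 0.05                   # half of the 0.1 cm reporting unit

area_low = (side_a - half_unit) * (side_b - half_unit)
area_high = (side_a + half_unit) * (side_b + half_unit)

print(round(area_low, 4), round(area_high, 4))   # 20.2575 21.1875
print(round(side_a * side_b, 2))                 # 20.72, reported as 21 (two significant digits)
```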
ACCURACY
Accuracy is how close a measured value is to the actual (true) value.
Suppose two lengths are measured with the same tool as 15.3 cm and 201.3 cm.
Although both measures are equally precise, their accuracy may be quite
different. Suppose the measurements are both
about 0.3 cm too small. The relative errors for these measures are 0.3 cm
out of 15.3 cm (about 1.96 percent) and 0.3 cm out of 201.3 cm (about
0.149 percent). The second measurement is more accurate because the
error is smaller when compared with the actual measurement. Consequently,
for any specific measuring tool, the measures can be equally precise, but
accuracy is often greater with larger objects than with smaller ones.
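The relative errors quoted above can be checked with a brief Python calculation; the 0.3 cm error and the two lengths are those from the example.

```python
# Relative error compares the size of the error with the measured value.
# Numbers are those from the example above (both readings 0.3 cm too small).

def relative_error(error, measured):
    """Relative error as a percentage of the measured value."""
    return 100.0 * error / measured

print(round(relative_error(0.3, 15.3), 2))    # ~1.96 percent
print(round(relative_error(0.3, 201.3), 3))   # ~0.149 percent
```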
Confusion can arise when using these terms. The tools one uses affect
both the precision and accuracy of one's measurements. Measuring with a
millimeter tape allows greater precision than measuring with an inch tape.
Because the error using the millimeter tape should be less than the inch
tape, accuracy also improves; the error compared with the actual length is
likely to be smaller. Despite this possible confusion and the similarity of the
ideas, it is important that the distinction between precision and accuracy be
understood.
Degree of Accuracy
The degree of accuracy of a reading is half a unit on each side of the unit of
measure.
Examples: a length reported to the nearest centimeter as 7 cm may actually lie
anywhere between 6.5 cm and 7.5 cm, and a length reported to the nearest
2 cm as 8 cm may lie anywhere between 7 cm and 9 cm.
SENSITIVITY OF MEASUREMENT
All calibrations and specifications of an instrument are only valid under
controlled conditions of temperature, pressure, etc. These standard ambient
conditions are usually defined in the instrument specification. As variations
occur in the ambient temperature, etc., certain static instrument
characteristics change, and the sensitivity to disturbance is a measure of the
magnitude of this change. Such environmental changes affect instruments in
two main ways, known as zero drift and sensitivity drift.
Zero drift describes the effect where the zero reading of an instrument
is modified by a change in ambient conditions. Typical units by which zero
drift is measured are volts/°C, in the case of a voltmeter affected by ambient
temperature changes. This is often called the zero drift coefficient related to
temperature changes. If the characteristic of an instrument is sensitive to
several environmental parameters, then it will have several zero drift
coefficients, one for each environmental parameter. The effect of zero drift is
to impose a bias in the instrument output readings; this is normally
removable by recalibration in the usual way.
Sensitivity drift (also known as scale factor drift) defines the amount
by which an instrument's sensitivity of measurement varies as ambient
conditions change. It is quantified by sensitivity drift coefficients which
define how much drift there is for a unit change in each environmental
parameter that the instrument characteristics are sensitive to.
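Both effects can be illustrated with a short Python sketch for a hypothetical voltmeter; the drift coefficients, the applied voltage and the 20 degree ambient change are invented values used only to show how the two kinds of drift combine.

```python
# Illustration of zero drift and sensitivity drift for a hypothetical voltmeter.
# All coefficient values are invented purely for the example.

nominal_sensitivity = 1.00    # V indicated per V applied, at calibration conditions
zero_drift_coeff = 0.002      # V of offset per degree C (zero drift coefficient)
sens_drift_coeff = 0.0005     # change in sensitivity per degree C (scale factor drift)

def indicated_reading(true_volts, temp_change_c):
    """Reading after the ambient temperature has changed by temp_change_c."""
    sensitivity = nominal_sensitivity + sens_drift_coeff * temp_change_c
    zero_offset = zero_drift_coeff * temp_change_c
    return sensitivity * true_volts + zero_offset

print(round(indicated_reading(2.0, 0), 3))    # 2.0 V, no ambient change
print(round(indicated_reading(2.0, 20), 3))   # 2.06 V: 0.04 V zero drift + 0.02 V sensitivity drift
```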
CALIBRATION
Understanding instrument calibration and its proper use is an essential
element in an overall laboratory program. Proper calibration will ensure that
equipment remains within validated performance limits to accurately report
patient results. Calibration is the set of operations that establish, under
specified conditions, the relationship between the values of quantities
indicated by a measuring instrument and the corresponding values realized
by standards.
Instrument Calibration
Although the exact procedure may vary from product to product, the
calibration process generally involves using the instrument to test samples
of one or more known values called "calibrators." The results are used to
establish a relationship between the measurement technique used by the
instrument and the known values. The process in essence "teaches" the
instrument to produce results that are more accurate than those that would
occur otherwise. The instrument can then provide more accurate results
when samples of unknown values are tested in the normal usage of the
product.
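As an illustration of what establishing such a relationship can mean in practice, the sketch below performs a simple two-point linear calibration in Python. The calibrator values and raw readings are invented for the example; real products follow the manufacturer's specified calibration procedure.

```python
# A simple two-point linear calibration: map raw instrument readings onto
# known calibrator values, then apply that mapping to unknown samples.
# All numbers here are invented for illustration.

cal_true = (2.0, 10.0)        # known values of Calibrator 1 and Calibrator 2
cal_raw = (2.3, 10.9)         # what the uncalibrated instrument reported

# Fit corrected = slope * raw + offset through the two calibration points.
slope = (cal_true[1] - cal_true[0]) / (cal_raw[1] - cal_raw[0])
offset = cal_true[0] - slope * cal_raw[0]

def calibrated(raw_reading):
    """Convert a raw reading into a corrected result."""
    return slope * raw_reading + offset

print(round(calibrated(2.3), 3))   # 2.0: recovers Calibrator 1 exactly
print(round(calibrated(6.6), 3))   # 6.0 for an unknown sample between the calibrators
```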
In practice, a balance must be made between the desired level of product performance and the effort
associated with accomplishing the calibration. The instrument will provide
the best performance when the intermediate points provided in the
manufacturer’s performance specifications are used for calibration; the
specified process essentially eliminates, or "zeroes out", the inherent
instrument error at these points.
Importance of Calibration
A calibration is only as good as the procedure used to perform it: failing to
follow the manufacturer's instructions and selecting the wrong calibrator values will "teach" the
instrument incorrectly, and produce significant errors over the entire
operating range. While many instruments have software diagnostics that
alert the operator if the calibrators are tested in the incorrect order (i.e.
Calibrator 2 before Calibrator 1), the instrument may accept one or more
calibrators of the wrong value without detecting the operator error.
Frequency of Calibration
The simple answer to the question of how often an instrument should be
calibrated, although not a very helpful one, is "when it needs it." From a
more practical standpoint, daily or periodically
testing the control solutions of known values can provide a quantitative
indication of instrument performance, which can be used to establish a
history. If the controls data indicate that instrument performance is stable,
or is varying randomly well within the acceptable range of values, then there
is no need to recalibrate the instrument. However, if the historical data
indicates a trend toward, or beyond, the acceptable range limits, or if the
instrument displays a pronounced short-term shift, then recalibration is
warranted. Realize also that specific laboratory standard operating
procedures or regulatory requirements may require instrument recalibration
even when no action is warranted from a results standpoint. These
requirements should always take precedence, and the above guidance should be
used when there is uncertainty as to whether instrument recalibration should
be performed to improve accuracy.
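One simple way of acting on such a control history is sketched below in Python. The control limits, readings and the crude three-point trend test are illustrative only; laboratories normally apply formal control-chart rules.

```python
# Decide whether recalibration is warranted from a history of control readings.
# The limits, readings and the simple three-point trend test are illustrative only.

def needs_recalibration(readings, low_limit, high_limit):
    """Flag recalibration on an out-of-range reading or a steady drift
    of the last three readings toward one of the limits."""
    latest = readings[-1]
    if latest < low_limit or latest > high_limit:
        return True                                   # pronounced shift beyond the limits
    if len(readings) < 3:
        return False                                  # not enough history to judge a trend
    a, b, c = readings[-3:]
    margin = 0.1 * (high_limit - low_limit)           # "close to a limit" band
    rising_to_high = a < b < c and c > high_limit - margin
    falling_to_low = a > b > c and c < low_limit + margin
    return rising_to_high or falling_to_low

history = [5.02, 5.05, 5.11, 5.18]                    # control solution nominally 5.00
print(needs_recalibration(history, 4.80, 5.20))       # True: trending toward the upper limit
```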
STANDARDS OF MEASUREMENTS
In A.D. 1120 the king of England decreed that the standard of length
in his country would be named the yard and would be precisely equal to the
distance from the tip of his nose to the end of his outstretched arm.
Similarly, the original standard for the foot adopted by the French was the
length of the royal foot of King Louis XIV. This standard prevailed until 1799,
when the legal standard of length in France became the meter, defined as
one ten-millionth the distance from the equator to the North Pole along one
particular longitudinal line that passes through Paris.
Many other systems for measuring length have been developed over
the years, but the advantages of the French system have caused it to prevail
in almost all countries and in scientific circles everywhere. As recently as
1960, the length of the meter was defined as the distance between two lines
on a specific platinum–iridium bar stored under controlled conditions in
France. This standard was abandoned for several reasons, a principal one
being that the limited accuracy with which the separation between the lines
on the bar can be determined does not meet the current requirements of
science and technology. In the 1960s and 1970s, the meter was defined as 1
650 763.73 wavelengths of orange-red light emitted from a krypton-86
lamp. However, in October 1983, the meter (m) was redefined as the
distance traveled by light in vacuum during a time of 1/299 792 458 second.
In effect, this latest definition establishes that the speed of light in vacuum
is precisely 299 792 458 m per second.
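A quick numerical check of this definition, using exact rational arithmetic in Python, confirms that the fixed speed of light reproduces exactly one metre.

```python
# The 1983 definition fixes the speed of light, so the metre follows from it.
from fractions import Fraction

c = 299_792_458                  # speed of light in vacuum, m/s (exact by definition)
travel_time = Fraction(1, c)     # the time interval in the definition, seconds

metre = c * travel_time          # distance light travels in that interval
print(metre)                     # 1 (exactly one metre)
```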
1. Material Standards
2. Wavelength Standard
The wavelength of a selected orange radiation of the Krypton-86 isotope
was measured and used as the basic unit of length.
After the 1 July 1959 deadline, agreed upon in 1958, the US and the
British yard were defined identically, at 0.9144 metres to match the
international yard. Metric equivalents in this article usually assume this
latest official definition. Before this date, the most precise measurement of
the Imperial Standard Yard was 0.914398416 metres.
There are corresponding units of area and volume, the square yard
and cubic yard respectively, and these are sometimes referred to simply as
"yards" when no ambiguity is possible. For example, an American or
Figure 10. Standard lengths on the wall of the Royal Observatory, Greenwich, London - 1 yard (3
feet), 2 feet, 1 foot, 6 inches (1/2 foot), and 3 inches. The separation of the inside faces of the
markers is exact at an ambient temperature of 60 °F (16 °C) and a rod of the correct measure,
resting on the pins, will fit snugly between them.
Figure 12. Historical International Prototype Metre bar, made of an alloy of platinum and
iridium, that was the standard from 1889 to 1960.
The metre is the length of the path travelled by light in vacuum during
a time interval of 1⁄299 792 458 of a second.
In early wavelength measurements, the spectrum under study was recorded
alongside reference lines of known wavelength, and then the unknown
wavelengths could be determined from the standard
wavelengths by using interpolation. This technique has evolved into the
modern computer-controlled photoelectric recording spectrometer. Accuracy
of many more orders of magnitude can be obtained by the use of
interferometric techniques, of which Fabry-Perot and Michelson
interferometers are two of the most common.
Source Conditions
Advantages:
(a) It is not a material standard and hence is not influenced by variations in
environmental conditions such as temperature and pressure.
(b) It need not be preserved or stored under security, and thus there is no
fear of it being destroyed.
(c) It is not subject to destruction by wear and tear.
(d) It gives a unit of length which can be reproduced consistently at all
times.
(e) The standard facility can easily be made available in all standards
laboratories and industries.
(f) It can be used for making comparative measurements of very high
accuracy.