5. The Essence and Application of Information Measurement Systems
The word “measurement” comes from the Greek word “metron,” which means
“limited proportion.” Measurement is a technique in which properties of an object
are determined by comparing them to a standard.
In 1960, the metric system was revised, simplified, and renamed the Système
International d’Unités (International System of Units) or SI system (meters,
kilograms, etc.). This system is the standard form of measurement in almost every
country around the world, except for the United States, which uses the U.S.
customary units system (inches, quarts, etc.). The SI system is, however, the
standard system used by scientists worldwide, including those in the United States.
There are several properties of matter that scientists need to measure, but the most
common properties are length and mass. Length is a measure of how long an
object is, and mass is a measure of how much matter is in an object. Mass and
length are classified as base quantities, meaning that they are independent of all
other quantities. In the SI system, each base quantity has its own base unit: the
meter for length and the kilogram for mass.
Some things scientists want to measure may be very large or very small. The SI,
or metric, system is based on the principle that all quantities of a given property
are expressed in the same base unit, scaled by powers of ten, which makes
converting between large and small numbers straightforward. To work with such
numbers, scientists use metric prefixes. Prefixes can be added to base units to
make the unit larger or
smaller. For example, all masses are measured in grams, but adding prefixes, such
as milli- or kilo-, alters the amount. Measuring a human’s mass in grams would not
make much sense because the measurement would be such a large number.
Instead, scientists use kilograms because it is easier to write and say that a human
has a mass of 90 kilograms than a mass of 90,000 grams. Likewise, one kilometer
is 1,000 meters, while one millimeter is 0.001 meters.
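
As a minimal sketch of this prefix arithmetic, the following Python fragment treats
each prefix as a power-of-ten factor applied to a base unit; the dictionary and
function names are illustrative, not part of any standard library.

    # Standard SI prefix factors; the names below are chosen only for this example.
    PREFIX_FACTORS = {
        "kilo": 1e3,    # 1 kilogram = 1,000 grams
        "": 1.0,        # the bare base unit
        "milli": 1e-3,  # 1 millimeter = 0.001 meters
    }

    def to_base_units(value, prefix):
        """Convert a prefixed value (e.g. 90 kilograms) into the base unit."""
        return value * PREFIX_FACTORS[prefix]

    def from_base_units(value, prefix):
        """Convert a base-unit value into the requested prefixed unit."""
        return value / PREFIX_FACTORS[prefix]

    print(to_base_units(90, "kilo"))      # 90 kg  -> 90000.0 g
    print(to_base_units(1, "milli"))      # 1 mm   -> 0.001 m
    print(from_base_units(1000, "kilo"))  # 1000 m -> 1.0 km
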
New scientific instruments have allowed scientists to measure even smaller and
larger amounts. Therefore, additional prefixes have been added over the years,
such as femto- (10⁻¹⁵) and exa- (10¹⁸). When scientists take measurements, they
generally have two goals—accuracy and precision. Accuracy means to get as close
as possible to the true measurement (true value) of something. Precision means to
be able to take the same measurement and get the same result repeatedly.
Unfortunately, measurement is never 100% precise or accurate, so the true value
measure of something is never exactly known. This uncertainty is a result of error.
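
The difference can be illustrated with a short Python sketch using made-up
readings: the distance between the average reading and an assumed true value
reflects accuracy, while the scatter of the readings about their own mean reflects
precision.

    import statistics

    # Illustrative data only: ten repeated readings of a part whose true length
    # is assumed to be 10.60 cm.
    true_value = 10.60
    readings = [10.52, 10.71, 10.58, 10.66, 10.49, 10.73, 10.61, 10.55, 10.68, 10.57]

    mean_reading = statistics.mean(readings)
    bias = mean_reading - true_value     # accuracy: offset of the average from the true value
    spread = statistics.stdev(readings)  # precision: scatter of the readings about their mean

    print(f"mean reading:       {mean_reading:.3f} cm")
    print(f"bias (accuracy):    {bias:+.3f} cm")
    print(f"spread (precision): {spread:.3f} cm")
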
Error is a concept that is naturally associated with measuring because measurement
is always a comparison to a standard. Manually measuring something always
involves uncertainty because it is based on judgment. If two people use a ruler to
measure how tall a plant is, it may look like 20 cm to one person and 18 cm to the
other. To increase the accuracy of a measurement, and therefore reduce error, an
object should always be measured more than once. Taking multiple measurements
and averaging them brings the result closer to the true value, because the
individual errors tend to cancel out. For example, when measuring an object, you
determine its length to be 10.50 cm; when you measure it again, you get a
measurement of 10.70 cm. If you average these measurements, you get 10.60 cm.
The length of the object is most likely closer to 10.60 cm than it is to either 10.50
cm or 10.70 cm. There are two main types of error—random error and systematic
error. Random error is not controllable; as the name suggests, it occurs by chance.
In contrast, systematic errors are controllable and have an identifiable cause. A
systematic error can result from many
things, such as instrument error, method error, or human error. Systematic errors
can usually be identified and reduced or even eliminated.
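
A small simulation (with made-up numbers) helps show why the distinction matters:
averaging many readings suppresses the random component of the error, but a
systematic offset survives the averaging and is only removed by finding and fixing
its cause.

    import random
    import statistics

    # Illustrative simulation: a plant that is truly 19.0 cm tall, measured with a
    # ruler that reads 0.5 cm too high (systematic error) plus reading-to-reading
    # scatter of about 0.3 cm (random error).
    random.seed(1)
    true_height = 19.0
    systematic_offset = 0.5
    random_scatter = 0.3

    readings = [true_height + systematic_offset + random.gauss(0, random_scatter)
                for _ in range(50)]

    average = statistics.mean(readings)
    print(f"average of 50 readings:        {average:.2f} cm")
    print(f"remaining error vs true value: {average - true_height:+.2f} cm")
    # The random scatter largely cancels out in the average, but the +0.5 cm
    # systematic offset remains until the miscalibrated ruler is corrected.
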
Why MSA
A measurement system gives you, in numerical terms, important information about
the part that you measure. How sure can you be about the data that the
measurement system delivers? Is it the real value that you obtain out of the
measurement process, or is it the measurement system error that you see?
Measurement system errors can be costly, and can affect your capability to obtain
the true value of what you measure. It is often said that you can be confident about
your reading of a parameter only to the extent that your measurement system
allows.
For example, a process may have a total tolerance of 30 microns. The
measurement system that you use to measure this process, however, may have an
inherent variation (error) of 10 microns. This means that you are left with only 20
microns as your process tolerance. The measurement system variation is eating
into your process tolerance.
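
The arithmetic of this example can be written as a short sketch; it simply follows
the subtraction used above (not a full measurement system study) and reports what
share of the tolerance the measurement system consumes.

    # Numbers taken from the example above; the simple subtraction mirrors the text.
    total_tolerance_um = 30        # total process tolerance, in microns
    measurement_variation_um = 10  # inherent variation (error) of the measurement system

    usable_tolerance_um = total_tolerance_um - measurement_variation_um
    consumed_share = measurement_variation_um / total_tolerance_um

    print(f"usable process tolerance: {usable_tolerance_um} microns")
    print(f"share of tolerance consumed by the measurement system: {consumed_share:.0%}")
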
However, on the shop floor, where these instruments are used, the measurement
process is affected by many different factors, such as the method of measurement,
the appraiser’s influence, the environment, and the method of locating the part. All
of these can introduce variation into the measured value. It is important that we
assess, measure, and document all the factors affecting the measurement process,
and try to minimize their effect.