Measurements and Instrumentation Part I

The document provides an overview of the International System of Units (SI), detailing its base, supplementary, and derived units essential for scientific measurements. It also discusses the functional elements of measuring instruments, including primary sensing, variable conversion, manipulation, transmission, and presentation elements, along with static and dynamic characteristics that define instrument performance. Key concepts such as accuracy, precision, repeatability, and calibration processes are elaborated to enhance understanding of measurement systems.


MODULE-I

INTRODUCTION
1.1 SI UNITS
An international organization called the General Conference on Weights and Measures (CGPM), of which most advanced and developing countries are members, has been entrusted with the task of prescribing definitions for the fundamental units of weights and measures, which are the very basis of science and technology today.
The International System of Units, designated by the abbreviation SI (Système International d'Unités), is now legally compulsory.
It consists of 6 base units, 2 supplementary units and 27 derived units. Principles for the use of prefixes for forming the multiples and sub-multiples of units were also laid down.
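The prefix rules mentioned above can be sketched in code. This is a minimal illustration assuming a small subset of the standard SI prefixes (the selection and the function name are mine, not the document's):

```python
# Illustrative subset of SI prefixes for forming multiples and sub-multiples.
SI_PREFIXES = {
    "giga": 1e9, "mega": 1e6, "kilo": 1e3,
    "milli": 1e-3, "micro": 1e-6, "nano": 1e-9,
}

def convert(value, from_prefix, to_prefix):
    """Convert a value between two prefixed forms of the same unit."""
    return value * SI_PREFIXES[from_prefix] / SI_PREFIXES[to_prefix]

print(convert(2.5, "kilo", "milli"))   # 2.5 km expressed in mm
```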

Table 1.1 Base units

S.No.  Quantity                       Unit        Symbol
1.     Length                         metre       m
2.     Mass                           kilogramme  kg
3.     Time                           second      s
4.     Intensity of electric current  ampere      A
5.     Thermodynamic temperature      kelvin      K
6.     Luminous intensity             candela     cd

Table 1.2 Supplementary units

S.No.  Quantity     Unit       Symbol
1.     Plane angle  radian     rad
2.     Solid angle  steradian  sr

Table 1.3 Derived units

S.No.  Quantity                              Unit                            Symbol
1.     Area                                  square metre                    m²
2.     Volume                                cubic metre                     m³
3.     Frequency                             hertz                           Hz
4.     Density                               kilogramme per cubic metre      kg/m³
5.     Velocity                              metre per second                m/s
6.     Angular velocity                      radian per second               rad/s
7.     Acceleration                          metre per second squared        m/s²
8.     Angular acceleration                  radian per second squared       rad/s²
9.     Force                                 newton                          N
10.    Pressure                              newton per square metre         N/m²
11.    Dynamic viscosity                     newton second per square metre  N·s/m²
12.    Kinematic viscosity                   square metre per second         m²/s
13.    Work, energy, quantity of heat        joule                           J
14.    Power                                 watt                            W
15.    Quantity of electricity               coulomb                         C
16.    Potential, potential difference, emf  volt                            V
17.    Electric field strength               volt per metre                  V/m
18.    Resistance                            ohm                             Ω
19.    Capacitance                           farad                           F
20.    Magnetic flux                         weber                           Wb
21.    Inductance                            henry                           H
22.    Magnetic flux density                 tesla                           T (Wb/m²)
23.    Magnetic field strength               ampere per metre                A/m
24.    Magnetomotive force                   ampere                          A
25.    Luminous flux                         lumen                           lm
26.    Luminance                             candela per square metre        cd/m²
27.    Illuminance                           lux                             lx

1.2 FUNCTIONAL ELEMENTS OF AN INSTRUMENT


Primary Sensing Element:
It receives energy from the measured medium and produces an output that depends on the measured quantity (the "measurand"). An instrument always extracts some energy from the measured medium. The output of the primary sensing element is some physical variable, such as a voltage or a displacement.
Variable Conversion Element:
For an instrument to perform its desired function, it may be necessary to convert the physical variable into another, more suitable variable while preserving the information content of the original signal. The element that performs this function is called the variable conversion element. Some instruments may not require a conversion element.

Fig. 1.1 Block Diagram of Basic Functional Elements of an Instrument

Variable Manipulation Element:

By manipulation we mean specifically a change in numerical value according to some definite rule, with preservation of the physical nature of the variable. An element that performs such a function is called a variable manipulation element.
Data Transmission Element:
When the functional elements of an instrument are physically separated, it becomes necessary to transmit the data from one element to another. An element performing this function is called a data transmission element. Examples:
 a shaft-and-bearing assembly;
 a telemetry system for transmitting signals from a satellite to ground equipment.
Data Presentation Element:
The information about the measured quantity must be communicated to a human being for monitoring, control or analysis purposes; hence the measured quantity must be put into a form recognizable by one of the human senses.

Fig. 1.2 Block Diagram of Functional Elements of an Instrument

An element that performs this "translation" function is called the data presentation element.
- Indication: a pointer moving over a scale.
- Recording: a pen moving over a chart.
- Data storage/playback: a magnetic tape recorder/reproducer.
1. Indicating function – instruments and systems use different kinds of methods for supplying information concerning the measured quantity. Most often this information is obtained as the deflection of a pointer of a measuring instrument; e.g., the deflection of the pointer of a speedometer indicates the speed of the automobile at that moment.
2. Recording function – the instrument makes a written record of the value of the quantity under measurement against time or some other variable; e.g., a potentiometric type of recorder used for monitoring temperature records the instantaneous values of temperature on a strip chart.
3. Controlling function – this is one of the most important functions, especially in the field of industrial process control. In this case the information is used by the instrument or the system to control the original measured quantity.
1.3 STATIC AND DYNAMIC CHARACTERISTICS
1.3.1 Introduction:
In this part we study in considerable detail the performance of measuring instruments and systems with regard to how well they measure the desired input and how thoroughly they reject spurious inputs.
Performance characteristics are divided into:
1. static characteristics
2. dynamic characteristics

Some applications involve the measurement of quantities that are constant or vary only quite slowly. Under these conditions it is possible to define a set of performance criteria that give a meaningful description of the quality of measurement without concern for dynamic descriptions. These criteria are called static characteristics.
Other measurement problems involve rapidly varying quantities. The dynamic relations between the instrument input and output can be examined by the use of differential equations. Performance criteria based on these dynamic relations constitute the dynamic characteristics. Static characteristics also influence the quality of measurement under dynamic conditions.
Static characteristics involve nonlinear or statistical effects; dynamic characteristics are modelled by linear differential equations, neglecting effects such as dry friction and backlash.
The two sets of characteristics are studied separately, and the overall performance is judged by the superimposition of both.
1.3.2 Static Characteristics:
Static calibration:
All the static performance characteristics are obtained by one form of a process called static calibration. Static calibration refers to a situation in which all inputs (desired, interfering, modifying) except one are kept at constant values. The one input under study is then varied over some range of constant values, which causes the output to vary over some range of constant values. The input–output relations developed in this way comprise a static calibration valid under the stated constant conditions of all the other inputs.
This procedure can be repeated for each input and some suitable form of superposition of
these individual effects can describe the overall instrument static behavior.
For example, suppose a pressure measurement has a basic error of 0.1%, and a 100 °C change in temperature (an interfering input) alters the pressure reading by 2% of that error:
Total error = 0.1 + (2 × 0.1)/100 = 0.102%
The additional 0.002% due to the interfering input is negligible.
In performing a calibration the following steps are necessary.
1) Examine the construction of the instrument, and identify and list all the possible inputs.
2) Decide which of the inputs will be significant in the application for which the instrument
is to be calibrated.
3) Procure apparatus that will allow you to vary all significant inputs over the ranges
considered necessary.
4) By holding some inputs constant, varying others, and recording the outputs, develop the desired static input–output relations.
The process of static calibration enables the performance of the measuring system to be defined. The following terms specify the static characteristics of measuring devices.
Accuracy:
It is defined as the nearness of the indicated value to the true value of the quantity being
measured.
True Value:
It may be defined as the average of an infinite number of measured values, when the average deviation due to the various contributing factors tends to zero. Alternatively, the true value refers to the value that would be obtained if the quantity under consideration were measured by an "exemplar method", that is, a method agreed upon by experts. The measured value invariably differs from the true value because of the effect of disturbing inputs, such as temperature and humidity, and the performance characteristics of the measuring system itself. Because of these disturbances the calibration characteristic is likely to shift. When the accuracy of an indicating instrument is specified, it indicates the maximum likely departure of the indicated value from the true value. For example, an accuracy of ±1% means the maximum departure from the true value is 1% of the full-scale indication, and it may occur at any part of the scale.
Static error is the algebraic difference between the indicated value and the true value of the quantity presented to the input.
Uncertainty:
The range of variation of the indicated value from the true value.
Precision:
It is a term which describes an instrument's degree of freedom from random errors. If an input quantity of the same value is applied repeatedly to the instrument under exactly the same environmental conditions, the indicated value is expected to be the same.
Note : High precision does not imply anything about accuracy and high precision instrument may
have low accuracy.
Repeatability:
It reflects the closeness of agreement of a group of output signals obtained by the same
observer for the same input quantity using the same methods and apparatus under the same
operating environment but over a short time span.
Reproducibility:
It is the closeness with which the same value of the input quantity is measured at different
times and under different condition of usage of the instrument and by different instruments. The
degree of repeatability or reproducibility in measurements from an instrument is an alternative way
of expressing its precision.
Stability:
The ability of a measuring system to maintain its standard of performance over a prolonged period of time.
Zero Stability:
The ability of an instrument to restore to zero reading after the input quantity has been
brought to zero.
Absolute static error = measured value − true value   (1.1)
By itself the absolute error does not indicate the accuracy of measurement: an error of ±2 A is negligible when measuring a current of the order of 1000 A, but must be regarded as significant when measuring a current of 10 A.
Relative static error = absolute error / true value   (1.2)
From (1.1) and (1.2),
True value = measured value / (1 + relative static error)
Static correction = true value − measured value = −(static error)
Example: a thermometer reads 95.45 °C and the static correction given is −0.08 °C. Determine the true value of the temperature.
True value = measured value + correction
           = 95.45 °C + (−0.08 °C)
           = 95.37 °C
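The static error and correction relations above can be checked numerically; this sketch uses the thermometer figures from the worked example (function names are illustrative):

```python
def static_error(measured, true):
    # Absolute static error (Eq. 1.1): measured value minus true value.
    return measured - true

def true_value(measured, relative_error):
    # True value recovered from the measured value and relative static error.
    return measured / (1 + relative_error)

# Thermometer example: reading 95.45 deg C, static correction -0.08 deg C.
measured = 95.45
correction = -0.08
true = measured + correction          # 95.37 deg C
print(true, static_error(measured, true))
```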
The scale span of an instrument is defined as the difference between the largest and the smallest reading of the instrument.
Example: a thermometer is calibrated between 200 °C and 500 °C.
Range = 200 °C to 500 °C
Span = 500 − 200 = 300 °C
Example: a thermometer is calibrated from 150 °C to 200 °C and its accuracy is ±0.25% of the instrument span. Find the maximum static error.
Span = 200 − 150 = 50 °C
Static error = ±(0.25 × span)/100 = ±(0.25 × 50)/100 = ±0.125 °C
Noise is an unwanted signal superimposed upon the signal of interest, thereby causing a deviation of the output from its expected value. The ratio of the desired signal to the unwanted noise is called the signal-to-noise ratio:
S/N = signal power / noise power = (signal voltage of interest)² / (unwanted noise voltage)²
1) Generated noise – produced within the amplifier's own resistances and capacitances.
2) Conducted noise – entering from the power supply to the amplifier.
3) Radiated noise – due to environmental disturbances.
Johnson noise is the thermally generated noise in a resistor.
Fig. 1.3 Zero Stability

Noise factor F = (S/N at the input) / (S/N at the output)
Noise figure = 10 log10 F = 10 log10 [(S/N at input) / (S/N at output)]
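The signal-to-noise and noise-figure relations can be sketched as follows (the voltage values are illustrative, not from the text):

```python
import math

def snr(signal_v, noise_v):
    # S/N as a power ratio: (signal voltage)^2 / (noise voltage)^2.
    return (signal_v / noise_v) ** 2

def noise_figure_db(snr_in, snr_out):
    # Noise factor F = (S/N at input) / (S/N at output); figure = 10 log10 F.
    return 10 * math.log10(snr_in / snr_out)

snr_in = snr(1.0, 0.01)     # signal 1 V, input noise 10 mV
snr_out = snr(1.0, 0.02)    # output noise doubled by the amplifier
print(noise_figure_db(snr_in, snr_out))   # ~6.02 dB degradation
```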
Static Sensitivity:
The static sensitivity of an instrument is the ratio of the magnitude of the output signal to the magnitude of the input signal. The inverse of the static sensitivity is called the deflection factor or inverse sensitivity.

Sensitivity = qo / qi
Deflection factor = qi / qo

The sensitivity of an instrument should be high, and therefore the instrument should not have a range greatly exceeding the value to be measured.
Example: A mercury thermometer has a capillary tube of 0.25 mm diameter. If the bulb and tube are made of a zero-expansion material, what volume must the bulb have if a sensitivity of 2.5 mm/°C is desired? Take the coefficient of volumetric expansion of mercury as αv = 0.181 × 10⁻³ per °C.
Let Lc = equivalent length of the mercury column (mm), Ac = cross-sectional area of the capillary = πd²/4 (mm²), and Δθ = temperature change (°C).
Sensitivity = ΔLc / Δθ = 2.5 mm/°C
Since Ac·ΔLc = αv·(Ac·Lc)·Δθ, the sensitivity equals αv·Lc, so
Lc = 2.5 / (0.181 × 10⁻³) ≈ 13 800 mm = 13.8 m
Hence the volume of mercury before heating is
V = Ac·Lc = (π/4) × (0.25)² × 13 800 ≈ 677.4 mm³
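The arithmetic of the thermometer sensitivity example can be verified with a short script (variable names are mine):

```python
import math

d = 0.25            # capillary diameter, mm
S = 2.5             # desired sensitivity, mm per deg C
alpha_v = 0.181e-3  # volumetric expansion coefficient of mercury, per deg C

A_c = math.pi * d**2 / 4   # capillary cross-section, mm^2
L_c = S / alpha_v          # equivalent mercury column length, mm
V = A_c * L_c              # required mercury volume, mm^3

print(L_c / 1000, V)       # roughly 13.8 m and 678 mm^3
```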

Fig. 1.4 Static Sensitivity


Linearity:
Linearity is the proportionality between the input quantity and the output signal. If the sensitivity is constant for all values from zero to the full-scale value of the measuring system, then the calibration characteristic is linear: a straight line passing through zero.

Fig. 1.5 Linearity


This straight line is drawn from the given calibration data by the method of least squares.
Non-linearity (%) = (maximum deviation of the output from the idealized straight line × 100) / (actual reading or full-scale reading)
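The non-linearity figure defined above can be computed from calibration data. This sketch fits the least-squares straight line and takes the maximum deviation; the data points are made up for illustration:

```python
# Least-squares straight line through calibration data, then percent
# non-linearity referred to the full-scale reading.
xs = [0, 1, 2, 3, 4, 5]              # input values (illustrative)
ys = [0.0, 1.1, 1.9, 3.2, 3.9, 5.0]  # output readings (illustrative)

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

max_dev = max(abs(y - (slope * x + intercept)) for x, y in zip(xs, ys))
full_scale = max(ys)
nonlinearity_pct = max_dev * 100 / full_scale
print(round(nonlinearity_pct, 2))
```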
Hysteresis:
Hysteresis is a phenomenon in which the output differs depending on whether the input is being loaded or unloaded; in electrical and mechanical systems it appears as a non-coincidence of the loading and unloading curves.
Reason: the energy put into the stressed parts during loading cannot be fully recovered during unloading.
Threshold
If the input of the instrument is increased very gradually from zero, there will be some minimum value below which no output change can be detected; this minimum value defines the threshold. The first detectable output change is one that is noticeable and measurable. This phenomenon is due to input hysteresis: for example, there will be no noticeable movement of a driven gear until the driving gear moves through a distance x equal to the backlash between the gears.
Bias
Bias describes the constant error which exists over the full range of measurement of an instrument.
Sensitivity to disturbance
Environmental changes affect instruments in two main ways, known as zero drift and sensitivity drift.
Zero drift
Zero drift describes the effect whereby the zero reading of an instrument is modified by a change in ambient conditions (a bias).
Sensitivity drift (scale factor drift)
Sensitivity drift defines the amount by which an instrument's sensitivity varies as ambient conditions change.
Dead Zone or Dead Space
Dead space is defined as the range of input values over which there is no change in the output value.
Example: the dead zone in a thermometer is 0.125% of span and the range is 400 °C to 1000 °C. What temperature change might occur before it is detected?
Span = 1000 − 400 = 600 °C
Dead zone = (0.125/100) × 600 = 0.75 °C


Resolution or Discrimination
If the input is slowly increased from some arbitrary (non-zero) value, it will again be found that the output does not change at all until a certain input increment is exceeded. This increment is called the resolution or discrimination of the instrument. Resolution defines the smallest measurable input change, while threshold defines the smallest measurable input.
Example: a moving-coil meter has a uniform scale with 100 divisions, a full-scale deflection of 200 V, and 1/10 of a scale division can be estimated with a fair degree of certainty. Determine the resolution of the instrument in volts.
1 scale division = 200/100 = 2 V
Resolution = (1/10) × scale division = 2/10 = 0.2 V
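The resolution arithmetic above can be expressed directly (the function name is illustrative):

```python
def resolution(full_scale, divisions, fraction_readable):
    # Smallest detectable change: the readable fraction of one scale division.
    return fraction_readable * full_scale / divisions

print(resolution(200, 100, 1 / 10))   # 0.2 V for the moving-coil meter example
```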

Input and Output Impedance
Loading effects are due to the impedances of the various elements connected in a system, hence it is desirable at this stage to analyze their effects.
Input impedance
Consider a voltage source with an input device connected across it. The magnitude of the impedance of the element connected across the signal source is called the input impedance:
Zi = ei / ii
The power extracted by the input device from the source is
P = ei·ii = ei² / Zi
From the above two equations, a low-input-impedance device connected across a voltage signal source draws more current and drains more power from the source than a high-input-impedance device; hence a device measuring a voltage signal should have a high input impedance.
When the signal is in the form of a current, series-connected input devices are used, and it is helpful to use the concept of input admittance. The magnitude of the input admittance is
Yi = ii / ei, so that Zi = ei / ii = 1 / Yi
The power extracted from the source is
P = ii·ei = ii² / Yi
In the case of series elements, the power drawn from the current signal source is small when the input admittance of the device is high, that is, when its input impedance is low.
An ammeter therefore has a low input impedance.
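The loading argument can be illustrated numerically: a voltmeter with input impedance Zi connected to a source with internal resistance Rs reads only the divided-down voltage. The numbers here are illustrative:

```python
def indicated_voltage(e_source, r_source, z_in):
    # Voltage-divider loading: the meter sees e * Zi / (Rs + Zi).
    return e_source * z_in / (r_source + z_in)

e, rs = 10.0, 1_000.0   # 10 V source with 1 kohm internal resistance
for z_in in (1e3, 1e5, 1e7):
    v = indicated_voltage(e, rs, z_in)
    print(z_in, v)      # higher Zi -> reading closer to the true 10 V
```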
Output impedance of a device
The output impedance of a device is defined as the equivalent impedance seen by the load; the word "equivalent" implies a Thevenin equivalent circuit.
Let eo = voltage appearing across the output terminals on open circuit (the Thevenin voltage) and eL = voltage appearing across the output terminals when the load is connected. Then
eo − eL = Zo·IL, so Zo = (eo − eL) / IL
The power lost in the voltage source is
P = (eo − eL)·IL = IL²·Zo
For voltage sources, the lower the output impedance, the smaller the voltage drop and the power consumption. Ideally there should be no loading effect at all, which requires that the output impedance Zo of the voltage source be zero.
Output admittance
When dealing with a current source it is preferable to use the output admittance. The output admittance of the current source is
Yo = (Io − IL) / eL
where Io is the current delivered by the constant-current source. The power lost in the current source is
P = (Io − IL)·eL = eL²·Yo = eL² / Zo
From this equation, the power drained from the source is small if its output admittance is small, that is, when its output impedance is large. Hence a current source should ideally have infinite output impedance.
The output impedance of a voltage source appears in series; the output impedance of a current source appears in parallel.
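The defining relation Zo = (eo − eL)/IL can be used to estimate a source's output impedance from an open-circuit and a loaded reading (the numbers are illustrative):

```python
def output_impedance(e_open, e_loaded, i_load):
    # Thevenin output impedance from open-circuit and loaded measurements.
    return (e_open - e_loaded) / i_load

def power_lost(e_open, e_loaded, i_load):
    # Power dissipated inside the source: (eo - eL) * IL.
    return (e_open - e_loaded) * i_load

eo, eL, iL = 12.0, 11.4, 0.5   # volts, volts, amperes (illustrative)
print(output_impedance(eo, eL, iL), power_lost(eo, eL, iL))
```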
Instrument
A device or mechanism used to determine the present value of the quantity under measurement.
1. Measurement
The process of determining the amount, degree, or capacity by comparison (direct or indirect) with the accepted standards of the system of units being used.
2. Accuracy
The degree of exactness (closeness) of a measurement compared to the expected (desired) value.
3. Resolution
The smallest change in a measured variable to which an instrument will respond.
4. Precision
A measure of the consistency or repeatability of measurements, i.e. successive readings do not differ. (Precision is the consistency of the instrument output for a given value of input.)
5. Expected value
The design value, i.e. the most probable value that calculations indicate one should expect to measure.
6. Error
The deviation of the measured value from the true (expected) value.
7. Sensitivity
The ratio of the change in output (response) of the instrument to a change in the input or measured variable.

1.4 DYNAMIC CHARACTERISTICS


Static characteristics are determined when the output remains constant. Dynamic characteristics of a measuring system relate to its performance when the measurand is a function of time. The dynamic response of a measuring system, when subjected to dynamic inputs that are functions of time, depends very much on its own parameters.
The performance of measuring systems is analyzed using the following common input signals:
Step – constant amplitude, applied as a step change: f(t) = A.
Ramp – amplitude changing linearly with time: f(t) = kt for t > 0.
Sinusoidal – amplitude varying sinusoidally with time: f(t) = Xm sin(2πft) for t > 0.
1. Measuring devices are normally sluggish in their behavior: once the input signal is applied, the output signal takes some time to reach a value which bears a definite and fixed relation to the input signal.
2. The response consists of two components:
(a) Transient response
(b) Steady-state response
3. When the transient component reaches a negligible value, the measuring system is said to have reached steady state.
4. The behavior of the measuring system under dynamic conditions can be mathematically modeled, and the solution can be obtained for the required specific inputs.
5. The mathematical models for measuring systems are invariably linear differential equations with constant coefficients.
6. Most measuring systems are of first or second order. Hence the dynamic characteristics are specified in such a way as to distinguish one measuring system from another.
Dynamic characteristics are classified as:
(a) Dynamic error
(b) Fidelity
(c) Bandwidth
(d) Speed of response
1.4.1 Dynamic Error
1. It is defined as the algebraic difference between the indicated/recorded value of a measurand and its true value at any instant, when the measurand is a function of time.
2. The dynamic error is also a function of time when both the measurand and the output signal of the instrument are functions of time.
3. The error is zero only for a zero-order system.
4. For a measuring system of higher order the error has two components:
(a) Transient error
(b) Steady-state error
5. In the steady state, the static sensitivity of the system remains constant for a specified range of values of the measurand.
6. The dynamic characteristic of the transducer system contributes only a transient error.
7. The ratio of the peak amplitude of the output signal to that of the input signal is known as the dynamic sensitivity; it also depends on the frequency of the input.
8. Apart from amplitude, a steady-state error also exists in phase, namely the phase difference between the output and input signals. It too is frequency dependent; the variation of phase difference with frequency is known as the phase–frequency characteristic.
Fidelity
Fidelity of a measuring system refers to its ability to follow, instant by instant, the variation of the measurand with time. True or excellent fidelity would imply that the waveforms of the output and input signals coincide at all instants under steady-state conditions; then there is neither amplitude error nor phase error. Only a zero-order system possesses excellent fidelity.
Bandwidth:
The bandwidth of a system is the range of frequencies over which its dynamic sensitivity is satisfactory. For measuring systems the dynamic sensitivity is required to be within 2% of the static sensitivity. For filters and amplifiers, the bandwidth specification extends to the frequency at which the dynamic sensitivity is 70.7% of that at zero or mid frequency.
Speed of Response:
It refers to the ability of the system to respond to a sudden change in the amplitude of the input signal. It is usually specified as the time taken by the system to come close to steady-state conditions for a step input.
Time Constant:
It is associated with the behavior of a first-order system and is defined as the time taken by the system to reach 63.2% of its final output amplitude.
The time delay in the occurrence of the output signal in response to the input signal is called the measurement lag.
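The 63.2% figure follows from the first-order step response qo(t) = A(1 − e^(−t/τ)). A minimal sketch, with illustrative values of A and τ:

```python
import math

def first_order_step(A, tau, t):
    # Step response of a first-order system with time constant tau.
    return A * (1 - math.exp(-t / tau))

A, tau = 1.0, 2.0
print(first_order_step(A, tau, tau))      # 0.632... of the final value at t = tau
print(first_order_step(A, tau, 5 * tau))  # ~0.993, close to steady state
```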
Settling Time
It is the time taken by the system to come within a close range of its steady-state value. Relating to measurements, it is the time taken by the pointer to reach a value within 1% of the final indication.
Dynamic Range:
It is the range of signals to which the measuring system responds faithfully under dynamic conditions; equivalently, it is the ratio of the amplitude of the largest signal to that of the smallest signal which the system can handle satisfactorily.

Dynamic Response of Different Input


i. Step response
A step input represents the application of a sudden change. It may represent the sudden application of a force to a mechanical system, such as the sudden rotation of a shaft, or the sudden closure of a switch in an electric circuit. The unit step function u(t) is defined as
r(t) = 1, t > 0
r(t) = 0, t < 0
A unit step input is designated by u(t); therefore a unit step input is written as r(t) = u(t).

Fig. 1.6 Unit Step Input and Step Input


The Laplace transform of a unit step input is
R(s) = ∫0^∞ 1·e^(−st) dt = 1/s

A step input of magnitude A changes from 0 to A in zero time. The input is r(t) = A, t ≥ 0, i.e. r(t) = A·u(t).
The Laplace transform of a step input of magnitude A is
R(s) = ∫0^∞ A·e^(−st) dt = A/s

ii. Ramp input
It represents an input signal which changes at a constant rate with respect to time, like a constant velocity, and is therefore called a velocity input.
A unit ramp signal starts at a value of zero and increases with a constant slope of unity with respect to time. It can be expressed as
r(t) = t, t > 0
r(t) = 0, t < 0
i.e. r(t) = t·u(t)

Fig. 1.7 Unit Ramp Input and Ramp Input


The Laplace transform of the unit ramp function is
R(s) = ∫0^∞ t·e^(−st) dt = 1/s²

A ramp input of magnitude A is r(t) = At, t ≥ 0, i.e. r(t) = At·u(t); r(t) = 0, t < 0.
The Laplace transform of this function is
R(s) = ∫0^∞ At·e^(−st) dt = A/s²

iii. Parabolic input
It represents an input signal which is proportional to the square of time and therefore represents constant acceleration; hence this signal is called an acceleration input. The unit parabolic input is
r(t) = t², t > 0, i.e. r(t) = t²·u(t)
r(t) = 0, t < 0

Fig. 1.8 Unit Parabolic Input and Parabolic Input


The Laplace transform of the unit parabolic function is
R(s) = ∫0^∞ t²·e^(−st) dt = 2/s³

A parabolic input of magnitude A is r(t) = At², t ≥ 0, i.e. r(t) = At²·u(t); r(t) = 0, t < 0.
The Laplace transform of this function is
R(s) = ∫0^∞ At²·e^(−st) dt = 2A/s³
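The three Laplace transforms above can be checked symbolically with SymPy (assuming SymPy is available; the symbol declarations are illustrative):

```python
import sympy as sp

t, s, A = sp.symbols("t s A", positive=True)

# Step, ramp, and parabolic inputs of magnitude A.
for f in (A, A * t, A * t**2):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, "->", F)
# Expected: A/s, A/s**2, 2*A/s**3
```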

1.4.2 ANALYSIS OF DYNAMIC CHARACTERISTICS USING DIFFERENTIAL EQUATIONS
Based on the behavior of the instrument, certain assumptions are made and a simpler mathematical model is developed. The physical laws governing the behavior of each element are used to obtain the mathematical model of the physical system. The development of such models usually results in standard differential equations with constant coefficients, and hence these equations are considered linear models. The composite physical system is then characterized as linear time-invariant and is represented by means of a relationship between input and output variables.

Transfer functions are used to estimate the transient and steady-state response of the system when it is subjected to input signals of various descriptions. The responses of instrumentation systems to step and sinusoidal input functions are most often analyzed. Modeling of simple physical systems yields ordinary linear differential equations with constant coefficients, generally represented as

a_n (d^n qo/dt^n) + … + a_1 (dqo/dt) + a_0 qo = b_m (d^m qi/dt^m) + … + b_1 (dqi/dt) + b_0 qi

where
qi(t) – input variable,
qo(t) – output variable,
the a's and b's – constant representations of the physical parameters,
n – order of the system.
When the relationship between qo(t) and qi(t) is transformed into the complex frequency domain, it is represented by the transfer function G(s):

G(s) = qo(s) / qi(s) = (b_m s^m + … + b_1 s + b_0) / (a_n s^n + … + a_1 s + a_0)

The complete solution of such an equation consists of the transient response and the steady-state response.
The dynamic characteristics of an instrument are (i) speed of response, (ii) fidelity, (iii) dynamic error, and (iv) measuring lag.
1. Speed of Response
It is defined as the rapidity with which a measurement system responds to changes in
measured quantity. It is one of the dynamic characteristics of a measurement system.
2. Fidelity
It is defined as the degree to which a measurement system indicates changes in the measured
quantity without any dynamic error.
3. Dynamic Error
It is the difference between the true value of the quantity changing with time and the value indicated by the measurement system, assuming zero static error. It is also called measurement error and is one of the dynamic characteristics.
4. Measuring Lag
It is the retardation or delay in the response of a measurement system to changes in the measured quantity. It is of two types:
Retardation type: the response begins immediately after a change in the measured quantity has occurred.
Time delay type: the response of the measurement system begins after a dead time following the application of the input.
1.5 ERRORS IN MEASUREMENT
 The deviation of the measured value from the true value is called an error.
 The true value of the quantity to be measured may be defined as the average of an infinite number of measured values, when the average deviation due to the various contributing factors tends to zero (the expected value).
 The degree to which a measurement nears the expected value is expressed in terms of the error of measurement.
 Error may be expressed either as an absolute error or as a percentage error.

1.5.1 TYPES OF ERROR

Systematic Errors
Systematic error is caused by factors that systematically affect measurement of the variable
across the sample. Constant errors are those which affect all of a series of measurements by the same
amount.
These systematic errors are usually quite easy to avoid or compensate for, but only by a
conscious effort in the conduct of the observation, usually by proper zeroing and calibration of the
measuring instrument.
1. Instrumental Errors
2. Environmental Errors
3. Observational Errors
1. Instrumental Errors: These errors arise due to three reasons –
due to inherent shortcomings in the instrument,
due to misuse of the instrument,
due to loading effects of the instrument.
2. Environmental Error: These errors are due to conditions external to the measuring device.
These may be effects of temperature, pressure, humidity, dust or of external electrostatic or
magnetic field.
3. Observational Error: The error on account of parallax is the observational error.
Random (Residual) Errors
These errors are due to a multitude of small factors which change or fluctuate from one measurement to another. The happenings or disturbances about which we are unaware are lumped together and called "random" or "residual"; hence the errors they cause are called random or residual errors. Each measurement is also influenced by a number of minor events, such as building vibrations, electrical fluctuations, motions of the air, and friction in any moving parts of the instrument. These tiny influences constitute a kind of "noise" that also has a random character.
Comparison between Random error and Systematic Error
S.No.  RANDOM ERRORS                                  SYSTEMATIC ERRORS
1.     Non-specific causes                            Definite causes
2.     Poor precision                                 Poor accuracy
3.     Not reproducible                               Reproducible
4.     Measurements are evenly distributed            All measurements are biased in
       about the true value                           one direction

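The distinction in the table can be illustrated numerically: random error scatters readings evenly about the true value, while systematic error (here a hypothetical constant zero offset) shifts every reading in one direction. A minimal Python sketch, with all values assumed purely for illustration:

```python
import random

true_value = 100.0  # hypothetical true value of the measured quantity
random.seed(1)      # fixed seed so the sketch is repeatable

# Random error only: readings scatter evenly about the true value.
random_readings = [true_value + random.gauss(0, 0.5) for _ in range(1000)]

# Systematic error: an assumed constant zero error of +2.0 biases every reading.
systematic_readings = [r + 2.0 for r in random_readings]

mean_random = sum(random_readings) / len(random_readings)
mean_systematic = sum(systematic_readings) / len(systematic_readings)

print(mean_random)      # averages out close to the true value
print(mean_systematic)  # remains biased by the constant offset
```

Averaging many readings suppresses the random component but leaves the systematic bias untouched, which is why calibration, not repetition, is the remedy for systematic error.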
Limiting Error
 Choice of the instrument for a particular application depends on the accuracy desired.
 Manufacturer has to specify the deviations from the nominal value of the particular quantity.
 The limits of the deviations from the specified value are defined as limiting/guarantee errors.
Absolute error, ε0 = Am − At = δA, where Am is the measured value and At is the true value.
Actual value of the quantity, Aa = As ± δA, where As is the specified (nominal) value.
Relative Error
 Ratio of the error to the specified value of the quantity.
 Relative error = (absolute error / specified value) × 100
 Relative error, εr = (δA/As) × 100 %
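As a worked example of these definitions, consider a hypothetical resistor specified as 100 Ω with a guaranteed (limiting) error of ±1 Ω; the numbers are assumed purely for illustration:

```python
specified_value = 100.0  # As, specified (nominal) value - assumed
limiting_error = 1.0     # delta_A, limiting/guarantee error - assumed

# Actual value lies within As +/- delta_A
lower = specified_value - limiting_error
upper = specified_value + limiting_error

# Relative error = (absolute error / specified value) x 100
relative_error_pct = (limiting_error / specified_value) * 100

print(lower, upper)        # 99.0 101.0
print(relative_error_pct)  # 1.0 (percent)
```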
1.5.2 SOURCES OF ERRORS
 Insufficient knowledge of process parameters and design conditions.
 Poor design.
 Change in process parameters, irregularities, upsets, etc.
 Poor maintenance of instruments.
 Errors caused by person operating the instrument or equipment.
 Certain design limits
1.5.3 PRECAUTIONS TO AVOID ERRORS
1. Gross errors:
 Great care should be taken in reading and recording data.
 Two, three, or even more readings should be taken, and these readings should be taken by
different observers.
2. Instrumental errors:
 Selecting a suitable instrument for the particular measurement applications.
 Applying correction factors after determining the amount of instrumental error.
 Calibrating the instrument against a standard
3. Environmental errors:
 Keep the conditions constant

 Using instruments immune to environmental factors
 Applying computed corrections
4. Observational errors:
 Using modern equipment with digital displays to eliminate parallax
1.5.4 STATISTICAL EVALUATION OF RANDOM ERRORS
1. Statistical Averages
2. Dispersion from Mean
3. Average Deviation
4. Standard Deviation
5. Variance
6. Gaussian Curve
7. Precision Index
8. Probable Error (P.E.)
9. Average deviation for normal curve
10. Standard deviation for normal curve
11. Probable error of a finite number of readings
 Random errors scatter around a central value.
 They follow the laws of probability.
 Hence they can be handled by statistical treatment of data.
 Multi-sample data: repeated measurements taken at different test conditions, using different
instruments and different ways of measurement, and by multiple observers; usually presented in a
histogram.
 Single-sample data: repeated measurements taken under identical conditions.
 Arithmetic mean-computed as the sum of all the numbers in the series divided by the count
of all numbers in the series.
 Dispersion is the variability or spread in a variable or a probability distribution; it denotes the
extent to which the values are dispersed about the central value.
 Range: calculated by subtracting the smallest observation (sample minimum) from the
greatest (sample maximum), it provides an indication of statistical dispersion: Range = xmax − xmin.
 Deviation: the difference between the value of an observation and the mean of the population,
di = xi − X̄, where X̄ is the arithmetic mean. The sum of all the deviations is

d = (x1 − X̄) + (x2 − X̄) + (x3 − X̄) + ... + (xn − X̄)
  = (x1 + x2 + x3 + ... + xn) − nX̄ = 0
 The absolute deviation is the absolute difference between each value of the statistical
variable and the arithmetic mean. The average deviation is the arithmetic mean of the
absolute deviations:

D̄ = (|d1| + |d2| + ... + |dn|) / n = Σ|d| / n
 Standard deviation, which is based on the square of the deviations, is a measure of how
spread out the numbers are.

For n > 20:  σ = √[(d1² + d2² + ... + dn²)/n] = √(Σd²/n)

For n < 20:  s = √[(d1² + d2² + ... + dn²)/(n − 1)] = √(Σd²/(n − 1))
 Variance: the average of the squared deviations from the mean, i.e. the square of the
standard deviation.

For n > 20:  V = σ² = Σd²/n

For n < 20:  V = s² = Σd²/(n − 1)
 Normal or Gaussian curve of errors:
 The law of probability states that the normal occurrence of deviations from the average value
of an infinite number of measurements or observations can be expressed as

y = (h/√π) · exp(−h²x²)

where
x = magnitude of the deviation from the mean,
h = a constant called the precision index,
y = number of readings at any deviation x.

 Precision index: when x = 0, y = h/√π; the larger the value of h, the sharper and narrower
the curve, i.e. the more precise the data.
 Probable error: r = 0.6745σ = 0.6745 √(Σd²/n)

 Standard deviation of the mean: σm = σ/√n

 Standard deviation of the standard deviation: σσ = σm/√2 = σ/√(2n)

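A quick numerical check of these relations, using an assumed standard deviation and sample size:

```python
import math

sigma = 0.2  # standard deviation of a single reading - assumed
n = 16       # number of readings - assumed

probable_error = 0.6745 * sigma            # r = 0.6745 * sigma
sigma_mean = sigma / math.sqrt(n)          # standard deviation of the mean
sigma_sigma = sigma / math.sqrt(2 * n)     # standard deviation of the standard deviation

print(round(probable_error, 4))  # 0.1349
print(round(sigma_mean, 4))      # 0.05
print(round(sigma_sigma, 4))     # 0.0354
```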
1.6 CALIBRATION
 Calibration is defined as the comparison of an instrument with a primary or secondary
standard, or with an instrument of known accuracy.
 Determines the accuracy of the measurement data
 Provides traceability to the measurement
 Comparison under specific conditions with a higher standard, which is traceable to a national
or international standard, or an acceptable alternative.
 Calibration of all instruments is important since it affords the opportunity to check the
instruments against a known standard and subsequently to determine their errors and accuracy.
 The calibration procedure involves a comparison of the particular instrument with either
1. a primary standard, or
2. a secondary standard with a higher accuracy than the instrument to be calibrated, or
an instrument of known accuracy.
1. Calibration process
The purpose of calibration is to ensure that the measuring accuracy is known over the whole
measurement range under specified environmental conditions for calibration.

Instrument calibration has to be repeated at prescribed intervals because the characteristics of
any instrument change over a period of time. Factors deciding the frequency of calibration:
• usage rate
• conditions of use
• skill level of personnel
• degree of accuracy expected
• costs of calibration
Maintaining proper records is an important part of fulfilling the calibration function; such
records provide feedback that shows whether the calibration frequency has been chosen correctly.
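The record-keeping side of a calibration run can be sketched as a simple comparison: each instrument reading is compared against the standard at several points across the range, and a correction is recorded. All names and values here are hypothetical:

```python
standard_values  = [0.0, 25.0, 50.0, 75.0, 100.0]  # readings of the reference standard
instrument_reads = [0.4, 25.3, 50.5, 75.2, 100.6]  # device under calibration (assumed)

# Correction to add to each instrument reading to recover the standard value
corrections = [s - i for s, i in zip(standard_values, instrument_reads)]
worst_error = max(abs(c) for c in corrections)

for s, i, c in zip(standard_values, instrument_reads, corrections):
    print(f"standard={s:6.1f}  reading={i:6.1f}  correction={c:+.2f}")

print("worst-case error:", round(worst_error, 2))
```

A record like this, kept over successive calibrations, provides exactly the feedback needed to judge whether the calibration interval has been chosen correctly.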
1.7 STANDARDS
A standard is a physical representation of a unit of measurement. The term "standard" is
applied to a piece of equipment having a known measure of a physical quantity.
Types of Standards
1. International Standards (defined based on international agreement )
2. Primary Standards (maintained by national standards laboratories)
3. Secondary Standards ( used by industrial measurement laboratories)
4. Working Standards ( used in general laboratory)
1.7.1 INTERNATIONAL STANDARDS
 One recognized by international agreement as the basis for fixing the values of all other
standards of the given quantity.
 Defined by international agreements.
 Maintained at the International Bureau of Weights and Measures in France.
 Used for comparison with primary standards.
 Checked & evaluated regularly against absolute measurements.
1.7.2 PRIMARY STANDARD
 One which establishes the value of all other standards of a given quantity within a particular
country.
 To verify and calibrate the secondary standard
 Maintained at national institutions in various countries around the world.
NBS (National Bureau of Standards) - Washington, USA
NPL (National Physical Laboratory) - Great Britain

1.7.3 SECONDARY STANDARD
 One whose value has been established by comparison with a primary standard.
 Basic reference standard used in industrial measurement laboratories to calibrate high-
accuracy equipment and components.
 To verify the accuracy of working standards.
 Periodically checked at the institutions that maintain primary standards. (National Standard
Laboratories)
1.7.4 WORKING STANDARD
Standards used day to day to verify measuring instruments in places such as factories and
shops; examples are the standard resistors, standard capacitors and standard inductors found in
an electronics laboratory.
TWO MARKS
1. What is meant by measurement?
Measurement is an act or the result of comparison between a quantity and a predefined
standard.
2. Mention the basic requirements of measurement.
· The standard used for comparison purposes must be accurately defined and commonly accepted.
· The apparatus used and the method adopted must be provable.
3. What are the two methods of measurement?
· Direct method
· Indirect method
4. Explain the function of measurement system.
The measurement system consists of a transducing element which converts the quantity to be
measured into an analogous form. The analogous signal is then processed by some intermediate
means and fed to the end device, which presents the result of the measurement.
5. Define Instrument.
Instrument is defined as a device for determining the value or magnitude of a quantity or
variable.
6. List the types of instruments.
The three types of instruments are:
· Mechanical instruments
· Electrical instruments
· Electronic instruments
7. Classify instruments based on their functions.
· Indicating instruments
· Integrating instruments
· Recording instruments

8. Give the applications of measurement systems.
The instruments and measurement systems are used for:
· Monitoring of processes and operations.
· Control of processes and operations.
· Experimental engineering analysis.
9. Why is calibration of an instrument important?
Calibration of all instruments is important since it affords the opportunity to check the
instrument against a known standard and subsequently to determine its errors and accuracy.
10. Explain the calibration procedure.
The calibration procedure involves a comparison of the particular instrument with either:
· a primary standard,
· a secondary standard with a higher accuracy than the instrument to be calibrated, or
· an instrument of known accuracy.
11. Define Calibration.
It is the process of comparing an instrument with a standard in order to determine and correct
its accuracy.

REFERENCES
1. Sawhney A.K., "A Course in Electrical and Electronic Measurements and Instrumentation",
Dhanpat Rai & Sons, New Delhi, 2001.
2. H.S. Kalsi, "Electronic Instrumentation", Tata McGraw Hill Publishing Co. Ltd, New Delhi,
2011.
