
AE-606 FLIGHT INSTRUMENTATION NOTES

Prepared by: RK Satpathy

Basic Definitions:

Accuracy: depends on calibration; it is the degree of closeness with which the reading
approaches the true value of the quantity to be measured. The accuracy of an
instrument is a measure of how close the output reading of the instrument is to the
correct value. In practice, it is more usual to quote the inaccuracy figure rather
than the accuracy figure for an instrument. Inaccuracy is the extent to which a
reading might be wrong, and is often quoted as a percentage of the full-scale (f.s.)
reading of an instrument. The term measurement uncertainty is frequently used in
place of inaccuracy.

Precision: Precision is a term that describes an instrument’s degree of freedom


from random errors. If a large number of readings are taken of the same quantity
by a high precision instrument, then the spread of readings will be very small.
Precision is often, though incorrectly, confused with accuracy. High precision does
not imply anything about measurement accuracy. A high precision instrument may
have a low accuracy. Low accuracy measurements from a high precision instrument
are normally caused by a bias in the measurements, which is removable by
recalibration.
In contrast, precession (not to be confused with precision) is a change in the
orientation of the rotational axis of a rotating body such as a gyroscope; a gyroscope
drifts (precesses) even when only a constant friction torque Tf is applied.

Calibration allows decomposition of the total error of a measurement process into


two parts, the bias and the imprecision. That is, if we get a reading of 4.32 kPa and the
true value is given as 4.78 ± 0.58 kPa, the bias would be −0.46 kPa and the imprecision
± 0.58 kPa.
Once the instrument is calibrated, the bias can be removed, and the only remaining
error is due to imprecision. The bias is also called the systematic error (since it is
the same for each reading and thus can be removed by calibration). The error due
to imprecision is called the random error. It is, in general, different for every
reading and we can only put bounds on it, but cannot remove it. Thus calibration
is the process of removing bias and defining imprecision numerically.
In practice, most calibration errors are some combination of zero, span, linearity,
and hysteresis problems. An important point to remember is that with rare
exceptions, zero errors always accompany other types of errors. In other words, it
is extremely rare to find an instrument with a span, linearity, or hysteresis error
that does not also exhibit a zero error. For this reason, technicians often perform a
single-point calibration test of an instrument as a qualitative indication of its
calibration health. If the instrument performs within specification at that one point,
its calibration over the entire range is probably good. Conversely, if the instrument
fails to meet specification at that one point, it definitely needs to be recalibrated.
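
As a minimal illustrative sketch (not part of the original notes), the bias/imprecision decomposition can be computed from repeated readings taken against a known standard; the standard value echoes the 4.78 kPa example above, and the repeated readings are hypothetical.

# Sketch: decompose total measurement error into bias (systematic) and imprecision (random).
import statistics

true_value = 4.78                              # kPa, value of the calibration standard
readings = [4.30, 4.35, 4.28, 4.33, 4.34]      # kPa, hypothetical repeated readings

mean_reading = statistics.mean(readings)
bias = mean_reading - true_value               # systematic error, removable by calibration
imprecision = 2 * statistics.stdev(readings)   # random-error bound, here taken as +/- 2 sigma

print(f"bias = {bias:+.2f} kPa, imprecision = +/-{imprecision:.2f} kPa")
# After calibration the bias is subtracted from every reading; only the imprecision remains.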

Resolution: If the input is increased slowly from some arbitrary (non-zero) input
value, the output does not change at all until a certain input increment is
exceeded. This increment is called the resolution; it is defined as the input
increment that gives some small but definite numerical change in the output. Thus
resolution defines the smallest measurable input change while threshold defines
the smallest measurable input.

Threshold: If the instrument input is increased very gradually from zero there will
be some minimum value below which no output change can be detected. This
minimum value defines the threshold of the instrument. For example, in a digital
voltmeter whose least significant digit represents 1 mV, the threshold is also
0.001 V.

Both threshold and resolution may be given either in absolute terms or as a


percentage of full-scale reading. An instrument with large hysteresis does not
necessarily have poor resolution. Internal friction (a property of the spring
material) in a spring can give a large hysteresis, but even small changes in input
(force) cause corresponding changes in deflection, giving high resolution.
Hysteresis: The terms "dead space," "dead band," and "dead zone" are sometimes
used interchangeably with the term "hysteresis." However, they may be defined as
the total range of input values possible for a given output and thus may be
numerically twice the hysteresis.
When an instrument is loaded up to full scale and then unloaded, the output follows
different curves for increasing and decreasing input; this non-coincidence between
the loading and unloading curves is known as hysteresis. The non-coincidence is due to the
internal friction or hysteretic damping of the stressed parts (mainly the spring). That
is, not all the energy put into the stressed parts upon loading is recoverable upon
unloading, because of the second law of thermodynamics, which rules out perfectly
reversible processes in the real world. Two quantities are defined, maximum input
hysteresis and maximum output hysteresis; these are normally expressed as a
percentage of the full-scale input or output reading, respectively.
Hysteresis is most commonly found in instruments that contain springs, such as
the passive pressure gauge and the Prony brake (used for measuring torque). It is
also evident when friction forces in a system have different magnitudes depending
on the direction of movement, such as in the pendulum-scale mass-measuring
device.
Devices like the mechanical flyball (a device for measuring rotational velocity)
suffer hysteresis from both of the above sources because they have friction in
moving parts and also contain a spring. Hysteresis can also occur in instruments
that contain electrical windings formed round an iron core, due to magnetic
hysteresis in the iron. This occurs in devices like the variable-inductance
displacement transducer, the LVDT, and the rotary variable differential transformer (RVDT).
Dead space is defined as the range of different input values over which there is no
change in output value. Any instrument that exhibits hysteresis also displays dead
space, as marked on Figure below. Some instruments that do not suffer from any
significant hysteresis can still exhibit a dead space in their output characteristics,
however.
Backlash in gears is a typical cause of dead space. Backlash is commonly
experienced in gear sets used to convert between translational and rotational
motion (which is a common technique used to measure translational velocity).
A hysteresis calibration error occurs when the instrument responds differently to
an increasing input compared to a decreasing input. The only way to detect this
type of error is to do an up-down calibration test, checking for instrument response
at the same calibration points going down as going up.
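
A small sketch of how such an up-down test might be reduced to a hysteresis figure; the calibration points and readings below are hypothetical, and the result is expressed as a percentage of full-scale output as described above.

# Sketch: quantify hysteresis from an up-down calibration test (hypothetical data).
inputs   = [0, 25, 50, 75, 100]             # calibration points, % of full-scale input
up_out   = [0.0, 24.3, 49.1, 74.0, 99.8]    # output readings with increasing input
down_out = [0.4, 25.6, 50.7, 75.3, 99.8]    # output readings with decreasing input
full_scale = 100.0

max_hysteresis = max(abs(d - u) for u, d in zip(up_out, down_out))
print(f"maximum output hysteresis = {100 * max_hysteresis / full_scale:.2f} % of full scale")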
Hysteresis errors are almost always caused by mechanical friction on some moving
element (and/or a loose coupling between mechanical elements) such as bourdon
tubes, bellows, diaphragms, pivots, levers, or gear sets. Friction always acts in a
direction opposite to that of relative motion, which is why the output of an
instrument with hysteresis problems always lags behind the changing
input, causing the instrument to register falsely low on a rising stimulus and falsely
high on a falling stimulus. Flexible metal strips called flexures – which are designed
to serve as frictionless pivot points in mechanical instruments – may also cause
hysteresis errors if cracked or bent. Thus, hysteresis errors cannot be remedied by
simply making calibration adjustments to the instrument –
one must usually replace defective components or correct coupling problems
within the instrument mechanism.
Instrument Characteristics with Hysteresis

See the diagram in Fig. 3.24 (page 87) for a detailed explanation.


Active & Passive Transducer:

What is a transducer ?
In much technical literature, the term "transducer" is restricted to devices involving
energy conversion; but, conforming to the dictionary definition of the term, we do
not make this restriction. A transducer is a device that converts variations in a physical
quantity, such as pressure or brightness, into an electrical signal, or vice versa.

Sensors convert physical variables to signal variables. Sensors are often


transducers in that they are devices that convert input energy of one form into
output energy of another form. Sensors can be categorized into two broad classes
depending on how they interact with the environment being measured. Active sensors add
energy to the measurement environment as part of the measurement process.
Example:- Radar or sonar system, where the distance to some object is
measured by actively sending out a radio (radar) or acoustic (sonar) wave to bounce
off of some object and measure its range from the sensor.

Passive sensors do not add energy as part of the measurement process


but may remove energy in their operation. Example: - Thermocouple, which
converts a physical temperature into a voltage signal.

Instruments are divided into active or passive ones according to whether the
instrument output is entirely produced by the quantity being measured or whether
the quantity being measured simply modulates the magnitude of some external
power source. An example of an active instrument is a float-type fuel tank level
indicator. Here, the change in fuel level moves a potentiometer arm, and the
output signal consists of a proportion of the external voltage source applied
across the two ends of the potentiometer. The energy in the output signal comes
from the external power source: the primary transducer float system is merely
modulating the value of the voltage from this external power source.
In active instruments, the external power source is usually in electrical form, but in
some cases, it can be other forms of energy such as a pneumatic or hydraulic one.
A component whose output energy is supplied entirely or almost entirely by its
input signal is commonly called a passive transducer. The output and input signals
may involve energy of the same form (say, both mechanical), or there may be an
energy conversion from one form to another (say, mechanical to electrical). An
active transducer, however, has an auxiliary source of power which supplies a
major part of the output power while the input signal supplies only an insignificant
portion; an amplifier is an example, and a microswitch is another active transducer.
Again, there may or may not be a conversion of energy from one form to another.
One very important difference between active and passive instruments is the
level of measurement resolution that can be obtained. In a passive instrument such as
the pointer-type pressure gauge, it is possible to increase measurement resolution by
making the pointer longer, so that the pointer tip moves through a longer arc, but the
scope for such improvement is clearly restricted. In an active instrument, however, adjustment of the magnitude of the
external energy input allows much greater control over measurement resolution.
In terms of cost, passive instruments are normally of simpler construction
than active ones and are therefore cheaper to manufacture.
Active sensors generate an electric current in response to an external stimulus
which serves as the output signal without the need of an additional energy source.
Such examples are a photodiode, a piezoelectric sensor & thermocouple.
Passive sensors require an external power source to operate, which is called an
excitation signal. The signal is modulated by the sensor to produce an output signal.
For example, a thermistor does not generate any electrical signal, but by passing
an electric current through it, its resistance can be measured by detecting
variations in the current or voltage across the thermistor.

Passive Pressure Gauge

Null & Deflection Type of Instrument

The pressure gauge with a pointer is an example of a deflection type of instrument,
where the value of the quantity being measured is displayed in terms of the amount
of movement of a pointer. A deadweight pressure gauge is a null-type instrument:
weights are added until the pressure force is balanced, and the pressure measurement
is made in terms of the value of the weights needed to reach this null position.
The accuracy of the deflection-type gauge depends on the linearity and calibration of
the spring, whilst for the deadweight gauge it relies on the calibration of the weights.
Because calibration of weights is much easier than careful choice and calibration of a
linear-characteristic spring, the second type of instrument will normally be the more
accurate. Null-type instruments are, in general, more accurate than deflection types.
The static characteristics generally show up as nonlinear or statistical effects in the
otherwise linear differential equations giving the dynamic characteristics.
DYNAMIC CHARACTERISTICS

Zero order Instrument

The defining equation of a zero-order instrument is

a0·q0 = b0·qi

where q0 is the output quantity and qi is the input quantity; a0 and b0 are system physical
parameters assumed to be constant.
Any instrument or system that closely obeys this equation over its intended
range of operating conditions is defined to be a zero-order instrument.
Only the ratio of the two constants a0 and b0 matters, and so we define the static sensitivity
(or steady-state "gain") as

K = b0/a0, so that q0 = K·qi


● No matter how qi varies with time, the instrument output (reading) follows
it perfectly with no distortion or time lag of any sort. Thus, the zero-order
instrument represents ideal or perfect dynamic performance and is thus a
standard against which less perfect instruments may be compared. All
instruments behave as zero order instruments when they give a static output
in response to a static input. Example: Resistance wire strain gage.

A practical example of a zero-order instrument is the displacement-measuring
potentiometer.
The reasons why a potentiometer is normally called a zero-order instrument are as
follows:

1. The parasitic inductance and capacitance can be made very small by design.
2. The speeds ("frequencies") of motion to be measured are not high enough to
make the inductive or capacitive effects noticeable.
For zero-order instruments, the response is instantaneous and so no dynamic
characteristics exist. The only parameter to be determined is the static sensitivity
K, which is found by static calibration. For first-order and second-order
instruments, the static sensitivity K is also found by static calibration.

First Order Instrument


The order of an instrument is the order of the highest derivative in its defining
differential equation. Any instrument that follows the equation

a1·(dq0/dt) + a0·q0 = b0·qi,   i.e.   (τD + 1)·q0 = K·qi

is, by definition, a first-order instrument.

Time constant τ = a1/a0; static sensitivity K = b0/a0.


The liquid-in-glass thermometer (consider the thermometer of Fig. 3.37) is a
first-order instrument. The displacement of the liquid column is

x0 = (Kex·Vb/Ac)·Ttf

where x0 = displacement from the reference mark, Kex = differential
expansion coefficient of the thermometer fluid and bulb glass, Vb = volume of the bulb,
Ttf = temperature of the fluid in the bulb, and Ac = cross-sectional area of the capillary tube.
To get a differential equation relating input and output, consider conservation of
energy over an infinitesimal time dt for the thermometer bulb
(heat in − heat out = energy stored):

U·Ab·(Ti − Ttf)·dt − 0 = ρ·C·Vb·dTtf

where Ti = temperature of the fluid surrounding the bulb (the measured input),
U = overall heat-transfer coefficient across the bulb wall, Ab = heat-transfer area of the
bulb wall, and ρ, C = density and specific heat of the thermometer fluid.
Substituting Ttf = (Ac/(Kex·Vb))·x0 from the displacement equation and rearranging gives

(ρ·C·Vb/(U·Ab))·(dx0/dt) + x0 = (Kex·Vb/Ac)·Ti

Comparing this with the general form of the first-order instrument equation gives

K = (Kex·Vb)/Ac,   τ = (ρ·C·Vb)/(U·Ab)

So τ is reduced by reducing ρ·C·Vb and by increasing U·Ab. Decreasing Vb, however,
also reduces K.
Thus increased speed of response is traded off for lower sensitivity. This trade-off
is not unusual and will be observed in many other instruments.
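
A numerical sketch of this trade-off, using assumed (illustrative) bulb properties rather than values from the notes:

# Sketch with assumed property values for a liquid-in-glass thermometer.
K_ex = 2e-4      # 1/degC, differential expansion coefficient (assumed)
V_b  = 2e-7      # m^3, bulb volume (assumed)
A_c  = 1e-8      # m^2, capillary cross-sectional area (assumed)
rho  = 13500.0   # kg/m^3, thermometer-fluid density (assumed, mercury-like)
C    = 140.0     # J/(kg*degC), specific heat (assumed)
U    = 300.0     # W/(m^2*degC), overall heat-transfer coefficient (assumed)
A_b  = 2e-4      # m^2, bulb heat-transfer area (assumed)

K   = K_ex * V_b / A_c            # static sensitivity, m of column per degC
tau = rho * C * V_b / (U * A_b)   # time constant, s
print(f"K = {K*1000:.1f} mm/degC, tau = {tau:.1f} s")
# Halving V_b halves tau (faster response) but also halves K (lower sensitivity).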
A cup anemometer for measuring wind speed is also a first order instrument.
The time constant depends on the anemometer's moment of inertia (an analogy with
the angle-of-attack measurement sensor).

The operational transfer function of any first-order instrument is

(q0/qi)(D) = K/(τ·D + 1)

This is a particular case of the general operational transfer function. In the analysis,
design, and application of measurement systems, the concept of the operational transfer
function is very useful. The operational transfer function relates output q0 to input
qi. One of the several useful features of transfer functions is their utility for graphic
symbolic depiction of system dynamic characteristics by means of block diagrams.
The transfer function is also helpful in determining the overall characteristics of a
system made up of components whose individual transfer functions are known.

Second Order Instrument:

The defining equation of a second-order instrument is

a2·(d²q0/dt²) + a1·(dq0/dt) + a0·q0 = b0·qi

usually rewritten in terms of the undamped natural frequency ωn = √(a0/a2), the
damping ratio ζ = a1/(2·√(a0·a2)), and the static sensitivity K = b0/a0. A second-order
equation could have more terms on the right-hand side, but in common engineering usage
the above equation is generally accepted as defining a second-order instrument. A good
example of a second-order instrument is the force-measuring spring scale of Fig. 3.50.
We assume the applied force fi has frequency components only well below the natural
frequency of the spring itself.
Instruments that exhibit a spring–mass type of behavior are second order.
Examples are galvanometers, accelerometers, diaphragm-type pressure
transducers, and U-tube manometers.
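
As an illustrative sketch only (parameter values assumed, not taken from the notes), the step response of the second-order form given above can be computed numerically:

# Sketch: step response of a second-order instrument,
#   (1/wn^2)*d2q0/dt2 + (2*zeta/wn)*dq0/dt + q0 = K*qi
# integrated with a simple Euler scheme; parameter values are assumed.
wn, zeta, K = 20.0, 0.6, 1.0     # natural frequency (rad/s), damping ratio, static sensitivity
qi = 1.0                         # unit step input
dt, t_end = 1e-4, 1.0
q0, v, t = 0.0, 0.0, 0.0         # output, output rate, time
settled_at = None
while t < t_end:
    acc = wn**2 * (K*qi - q0) - 2.0*zeta*wn*v   # d2q0/dt2 from the governing equation
    v += acc*dt
    q0 += v*dt
    t += dt
    if abs(q0 - K*qi) <= 0.05*K*qi:
        if settled_at is None:
            settled_at = t
    else:
        settled_at = None        # left the 5 % band again (overshoot); keep looking
print(f"5 % settling time ~ {settled_at:.3f} s for zeta = {zeta}, wn = {wn} rad/s")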
For a first-order instrument there is only one parameter pertinent to dynamic response,
the time constant τ, and this may be found by a variety of methods. The time constant τ
always has the dimensions of time, while the static sensitivity K has the dimensions of
output divided by input. For an instrument of any order, K is always defined as b0/a0 and
always has the same physical meaning, that is, the amount of output per unit input when
the input is static (constant). We see that accurate dynamic measurement requires
a small time constant.
τ = a1/a0 = time constant,   K = b0/a0 = static sensitivity

Figures 3.20 a & b (below): definition of sensitivity, which is the slope of the
calibration curve.

(Importance of the time constant, for example: a low-pass filter time constant τ is
chosen so as to attenuate the random atmospheric noise.)
One common method of determining the time constant is to apply a step
input and measure τ as the time taken to reach 63.2 percent of the final
value. This method is influenced by inaccuracies in determining the
t = 0 point and also gives no check as to whether the instrument is really first-
order.
The dynamic characteristic of potentiometers (if we consider displacement as input
and voltage as output) is essentially that of a zero-order instrument since the
impedance of the winding is almost purely resistive at the motion frequencies for
which the device is usable.
The idea of a pure resistance is a mathematical model, not a real system; thus the
potentiometer will have some (however small) inductance and capacitance. If xi is
varied relatively slowly, these parasitic inductance and capacitance effects will not
be apparent. However, for sufficiently fast variation of xi , these effects are no
longer negligible and cause dynamic errors between xi and e0.

Dynamic Error: It is the difference between the true value of the quantity changing
with time & the value indicated by the measurement system if no static error is
assumed. It is also called measurement error.

Speed of response: depends only on the value of τ (the time constant a1/a0) and is
faster if τ is smaller. Thus in first-order instruments we strive to minimize τ for
faithful dynamic measurements. A dynamic characteristic useful in characterizing the
speed of response of any instrument is the settling time, defined below; a small
settling time indicates fast response, and its numerical value depends on the
percentage tolerance band used.
Sometimes increased speed of response is traded off for lower sensitivity. This
trade-off is not unusual and will be observed in many instruments.
Settling Time:
Settling-time definition:
A dynamic characteristic useful in characterizing the speed of response of any
instrument is the settling time. This is the time (after application of a step input)
for the instrument to reach and stay within a stated plus-and-minus tolerance band
around its final value. A small settling time thus indicates fast response. It is obvious
that the numerical value of a settling time depends on the percentage tolerance
band used; you must always state this. Thus you speak of, say, a 5 percent settling
time. For a first-order instrument a 5 percent settling time is equal to three time
constants (see Fig. 3.40 below). Other percentages may be and are used when
appropriate.
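
A quick check of the three-time-constant rule for a first-order instrument:

# For a first-order step response, the normalized error is exp(-t/tau).
# It drops to 5 % when exp(-t/tau) = 0.05, i.e. t = tau*ln(20) ~ 3*tau.
import math
print(math.log(20))    # ~ 3.00  -> the 5 % settling time is about three time constants
print(math.exp(-3))    # ~ 0.0498 -> error remaining at t = 3*tau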
The settling time may actually be a better indication of response speed than the time constant alone.
Settling Time Definition Diagram

(Be clear about the distinction between response time, settling time, and time constant.)

Errors: Taylor’s Series:


First write down the equation that calculates the specification value. Then take
the partial derivative of that equation with respect to each variable. Square each
result and multiply by the knowledge uncertainty (i.e., the variance in the data) for the
measurement of that variable. Then take the square root of the sum. For example,
assume that a requirement R is a function of variables (a, b, c), i.e., R = f(a, b, c). The
uncertainty of the knowledge of the requirement R is given by

σR = sqrt[ (∂R/∂a)²·σa² + (∂R/∂b)²·σb² + (∂R/∂c)²·σc² ]
If the defining equation is a linear sum, then the result is a simple root mean square
of the individual standard deviations. But, if the equation is not linear, then there
will be cross terms and scaling factors.
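
A sketch of this root-sum-square procedure using numerical partial derivatives; the defining equation f and all numbers below are hypothetical, for illustration only.

# Sketch: root-sum-square uncertainty propagation for R = f(a, b, c).
import math

def f(a, b, c):
    return a * b / c          # hypothetical (nonlinear) defining equation

vals   = {"a": 10.0, "b": 2.0, "c": 4.0}     # nominal measured values (assumed)
sigmas = {"a": 0.1,  "b": 0.05, "c": 0.08}   # standard deviations (use reproducibility)

def partial(var, h=1e-6):
    # central-difference estimate of dR/d(var) at the nominal point
    hi = dict(vals); lo = dict(vals)
    hi[var] += h; lo[var] -= h
    return (f(**hi) - f(**lo)) / (2 * h)

var_R = sum((partial(v) * sigmas[v])**2 for v in vals)
print(f"R = {f(**vals):.3f} +/- {math.sqrt(var_R):.3f}")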
When building an error budget use the standard deviation of measurement
reproducibility not of repeatability. Repeatability will give an ‘optimistic’ result.
Reproducibility gives a realistic result. Repeatability is the ability to get the same
answer twice if nothing in the test setup is changed. Reproducibility is the ability to
obtain the same answer between two completely independent measurements. If
one is measuring the reproducibility of the ability to align a part in a test setup,
then to obtain two independent measurements one must physically remove the
part from the test setup and reinstall it between measurements. If one is measuring
the reproducibility of atmospheric turbulence, then all that is required is to make
sure that sufficient time has passed since the last measurement to ensure that the two
measurements are not correlated.
STATIC CHARACTERISTICS AND STATIC CALIBRATION:

In static calibration, all inputs (desired, interfering, modifying) except one are kept at some
constant value. Then the one input under study is varied over some range of
constant values, which causes the output(s) to vary over some range of constant
values. The input-output relations developed in this way comprise a static
calibration valid under the stated constant conditions of all the other inputs. This
procedure may be repeated, by varying in turn each input considered to be of
interest and thus developing a family of static input-output relations. The
statement "all other inputs are held constant" refers to an ideal situation which can
be only approached, but never reached, in practice. It is impossible to calibrate an
instrument to an accuracy greater than that of the standard with which it is
compared. A rule often followed is that the calibration system (the standard and
any auxiliary apparatus used with it) should have a total uncertainty about four times
smaller than that of the unit under test.
When planning a specific experimental project, we need to decide how accurate
our measurements need to be, and then arrange to calibrate each instrument
against a standard that is about four times more accurate. For 1 percent accuracy
in a pressure gage, we need to calibrate it against a standard accurate to about 0.25
percent or better.

Static & Dynamic Characteristics (May further add)


Static: Measurands are constant or vary quite slowly. Dynamic: Measurands are rapidly varying quantities.
Static: A meaningful description of the quality of measurement can be given without becoming concerned with dynamic descriptions. Dynamic: The dynamic relations between the instrument input and output must be examined, generally by the use of differential equations.
Static: Static characteristics also influence the quality of measurement under dynamic conditions; they generally show up as nonlinear or statistical effects in the otherwise linear differential equations giving the dynamic characteristics. Dynamic: The differential equations of dynamic performance generally neglect the effects of dry friction, backlash, hysteresis, statistical scatter, etc., even though these effects influence dynamic behavior; these phenomena are more conveniently studied as static characteristics.

Transfer Function
The transfer function shows the functional relationship between physical input
signal and electrical output signal. Usually, this relationship is represented as
a graph showing the relationship between the input and output signal, and the
details of this relationship may constitute a complete description of the sensor
characteristics.


The general operational transfer function is

(q0/qi)(D) = (bm·D^m + ... + b1·D + b0) / (an·D^n + ... + a1·D + a0)


One of the several useful features of transfer functions is their utility for graphic
symbolic depiction of system dynamic characteristics by means of block diagrams.
Furthermore, the transfer function is helpful in determining the overall
characteristics of a system made up of components whose individual transfer
functions are known. In writing transfer functions, we always write (q0 /qi)(D), not
just q0/qi to emphasize that the transfer function is a general relation between q0
and qi and very definitely not the instantaneous ratio of the time-varying quantities
q0 and qi .

When the output of the preceding device becomes the input of the following one,
the overall transfer function is simply the product of the individual ones.

An alternative name for the sinusoidal transfer function is the system frequency
response.
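
A small sketch of the product rule for cascaded elements, evaluated as a sinusoidal (frequency-response) transfer function; the two first-order stages and their parameters are assumed for illustration.

# Sketch: overall transfer function of two cascaded elements = product of the individual ones.
# Each stage is modelled as first order, K/(tau*j*w + 1), evaluated at one test frequency.
import cmath, math

def first_order(K, tau, w):
    return K / (tau * 1j * w + 1)

w  = 10.0                                 # rad/s, test frequency (assumed)
G1 = first_order(K=2.0, tau=0.05, w=w)    # illustrative stage 1
G2 = first_order(K=0.5, tau=0.20, w=w)    # illustrative stage 2
G  = G1 * G2                              # overall frequency response at this frequency

print(f"|G| = {abs(G):.3f}, phase = {math.degrees(cmath.phase(G)):.1f} deg")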

Analog & Digital Mode of Operation:


It is possible further to classify how the basic functions may be performed by
turning attention to the analog or digital nature of the signals that represent the
information. For analog signals, the precise value of the quantity (voltage, rotation
angle, etc.) carrying the information is significant. However, digital signals are
basically of a binary (on/off) nature, and variations in numerical value are
associated with changes in the logical state ("true/false") of some combination of
"switches." In a typical digital electronic system, any voltage in the range of + 2 to
+5 V produces the on state, while signals of 0 to +0.8 V correspond to off. Thus
whether the voltage is 3 or 4 V is of no consequence. The same result is produced,
and so the system is quite tolerant of spurious "noise" voltages which might
contaminate the information signal. In a digitally represented value of, say, 5 .763,
the least significant digit (3) is carried by on/off signals of the same (large) size as
for the most significant digit (5). Thus in an all-digital device such as a digital
computer, there is no limit to the number of digits which can be accurately carried;
we use whatever can be justified by the particular application. When combined
analog/digital systems are used (often the case in measurement systems), the
digital portions need not limit system accuracy. These limitations generally are
associated with the analog portions and/or the analog/digital conversion devices.
The majority of primary sensing elements are of the analog type. The only digital
device illustrated in this text up to this point is the revolution counter shown.

Digital revolution counter


This is clearly a digital device since it is impossible for this instrument to indicate,
say, 0.79; it measures only in steps of 1 . The importance of digital instruments is
increasing, perhaps mainly because of the widespread use of digital computers in
both data-reduction and automatic control systems. Since the digital computer
works only with digital signals, any information supplied to it must be in digital
form. The computer's output is also in digital form. Thus any communication with
the computer at either the input or the output end must be in terms of digital
signals.
Since most measurement and control apparatus is of an analog nature, it is
necessary to have both analog-to-digital converters (at the input to the computer)
and digital-to-analog converters (at the output of the computer). These devices
(which are discussed in greater detail in a later chapter) serve as "translators" that
enable the computer to communicate with the outside world, which is largely of
an analog nature.

An analogue instrument gives an output that varies continuously as the quantity


being measured changes. The output can have an infinite number of values within
the range that the instrument is designed to measure. The deflection-type of
Passive pressure gauge is a good example of an analogue instrument. As the
input value changes, the pointer moves with a smooth continuous motion. A
digital instrument has an output that varies in discrete steps and so can only
have a finite number of values.
A revolution counter can only count whole revolutions and cannot
discriminate any motion that is less than a full revolution. The distinction between
analogue and digital instruments has become particularly important with the rapid
growth in the application of microcomputers to automatic control systems. Any
digital computer system, of which the microcomputer is but one example, performs
its computations in digital form. An instrument whose output is in
digital form is therefore particularly advantageous in such applications, as it can be
interfaced directly to the control computer. Analogue instruments must be
interfaced to the microcomputer by an analogue-to-digital (A/D) converter, which
converts the analogue output signal from the instrument into an equivalent digital
quantity that can be read into the computer. This conversion has several
disadvantages.
Firstly, the A/D converter adds a significant cost to the system. Secondly, a finite time is
involved in the process of converting an analogue signal to a digital quantity, and
this time can be critical in the control of fast processes where the accuracy of control
depends on the speed of the controlling computer.
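
A sketch (not tied to any particular converter) of the finite resolution introduced by an ideal N-bit A/D converter:

# Sketch: quantization of an analogue signal by an ideal N-bit A/D converter.
full_scale = 5.0                       # volts (assumed 0..5 V input range)
n_bits = 12
step = full_scale / 2**n_bits          # smallest distinguishable input change (~1.22 mV)

def adc(v):
    code = int(v / step)               # truncate to the step below
    return min(code, 2**n_bits - 1)    # clamp at full scale

print(f"resolution = {step*1000:.3f} mV")
print(f"3.000 V -> code {adc(3.000)}, 3.001 V -> code {adc(3.001)}")   # one-count difference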
Resistance Strain gage:
Strain gauges are used either to obtain information from which stresses (internal forces) in bodies can be
calculated or to act as indicating elements on devices for measuring such quantities as force, pressure, and
acceleration.
The resistance strain gauge is a valuable tool in the field of experimental stress analysis. The
electrical resistance of a copper or iron wire changes when the wire is either stretched or compressed.

The gauge shown in the figure consists of a length of very fine wire
looped into a grid pattern and cemented between two sheets of
very thin paper. It is firmly glued (bonded) to the surface on which the
strain is to be measured and is energized by an electric current. When the
part is deformed, the gauge follows any stretching or contracting of the
surface, and its resistance changes accordingly. This resistance change is
amplified and converted into strain, after proper calibration.

Resistance gauges can be classified as transducers, i.e., devices for converting a mechanical displacement
into an electrical signal.
The underlying material property is called piezoresistance, which indicates a dependence of
resistivity ρ on the mechanical strain. For a wire of length L, cross-sectional area A, and
resistivity ρ, R = ρ·L/A. Differentiating, we get

dR/R = dρ/ρ + dL/L − dA/A = (1 + 2ν)·(dL/L) + dρ/ρ

where ν is Poisson's ratio of the wire material.

Rg = resistance of the gage when not strained; ΔRg = change in resistance under strain

GF = gage factor = (ΔRg/Rg)/(ΔL/L) = (ΔRg/Rg)/ε

Resistance change ∝ strain, i.e., the fractional change in resistance is directly
proportional to strain.
Application of a load causes a strain and hence a ΔRg, which unbalances the bridge, causing
an output voltage e0 that is proportional to ε.
Thus if the gage factor is known, measurement of dR/R allows measurement of
the strain dL/L = ε. This is the principle of the resistance strain gage. The term
(dρ/ρ)/(dL/L) can also be expressed as π1·E, where π1 = longitudinal
piezoresistance coefficient, a material property which can be positive or negative, and
E = elastic modulus.
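
A numerical sketch of this principle; the gage resistance, excitation, and resistance change are assumed values, and the quarter-bridge output formula e0 ≈ (Vex/4)·GF·ε is the usual small-unbalance approximation rather than anything stated in the notes.

# Sketch: strain from a measured fractional resistance change, plus the common
# quarter-bridge small-unbalance approximation for the bridge output voltage.
GF = 2.0            # gage factor (typical for metallic foil gages)
Rg = 120.0          # ohm, unstrained gage resistance (assumed)
dR = 0.024          # ohm, measured resistance change under load (assumed)

strain = (dR / Rg) / GF
print(f"strain = {strain:.6f} = {strain*1e6:.0f} microstrain")

V_ex = 5.0                         # volts, bridge excitation (assumed)
e0 = V_ex / 4 * GF * strain        # expected output for one active gage (approximation)
print(f"bridge output ~ {e0*1000:.3f} mV")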
Temperature is an important interfering input for strain gages since resistance
changes with both strain and temperature. Since strain-induced resistance
changes are quite small, the temperature effect can assume major proportions.
Fig. Temperature compensation.
A "dummy" gage (identical to the active gage) is cemented to a piece of the same
material as is the active gage and placed so as to assume the same temperature.
The dummy and active gages are placed in adjacent legs of a Wheatstone bridge;
thus resistance changes due to the temperature coefficient of resistance and
differential thermal expansion will have no effect on the bridge output voltage,
whereas resistance changes due to an applied load will unbalance the bridge in
the usual way.
Strain sensitive pattern

The voltage output from metallic strain-gage circuits is quite small (a few
microvolts to a few millivolts), and so amplification is generally needed.
Amplification is of no use when the signal amplitude is less than the noise, since the
signal and noise are both amplified; filtering out the noise component is then essential.
Wheatstone bridges offer an attractive alternative for measuring small resistance
changes accurately.
FORCE MEASUREMENT

The three main methods of force measurement are the mass balance, in which
the Force is balanced against a known mass, the force balance, in which the
balancing force is via a spring or magnet-coil arrangement, and the deflection
type, in which the deflection of an elastic element is measured. Mass and force
balance systems are often in the form of a beam balance, the deflection being
detected by a displacement transducer.

Proving rings (a form of elastic load cell) are used to calibrate force-measuring machines,
and proving rings are themselves calibrated against dead weights.
The deflection (elastic strain) of the proving ring is comparatively large and is measured by an LVDT.
Proving ring: both tensile and compressive forces can be measured.
LVDT (Linear Variable Differential Transformer): a displacement transducer.
Excitation: sinusoidal voltage of 3 to 15 V rms at a frequency of 60 Hz to 20 kHz. The
amplitude of e0 is a nearly linear function of core position for a considerable range either
side of the null.
- Since the coupling variation due to core motion is a continuous phenomenon, the
resolution of LVDTs is essentially infinitesimal. Amplification of the output voltage allows
detection of motions down to a few microns.
- Sensitivity depends on the excitation frequency (higher frequency gives more sensitivity) and
on the stroke (smaller strokes usually give higher sensitivity). The transformer principle of the
LVDT requires AC excitation, but DCDTs are available in which an internal oscillator produces
the AC excitation from a DC supply.
LVDTs are limited to motion frequencies of about 2 kHz.
The LVDT is an inductive transducer converting linear motion into an electrical signal. It has a
single primary winding and two secondary windings of the same number of turns connected in
series opposition to give a single output voltage. The amount of voltage change in either
secondary winding is proportional to the movement of the core. The output voltage is a linear
function of core displacement within a limited range. Wide range (1.25 mm to 250 mm), with a
resolution of about 3 microns.
Advantages: frictionless measurement, electrical isolation, immunity from external effects,
rugged construction, low hysteresis, low power consumption.
Disadvantages: can be sensitive to stray magnetic fields if not shielded; can be affected by
vibration.
Dynamic response is limited mechanically by the mass of the core and electrically by the
frequency of the applied voltage. The frequency of the carrier needs to be at least 10 times the
highest signal frequency component to be measured.

Bonded Strain gage-For force measurement

Used directly, the bonded strain gage is useful for measuring only very small
displacements (strains). However, larger displacements may be measured by
bonding the gage to a flexible element, such as a thin cantilever beam, and applying
the unknown displacement to the end of the beam, as in Fig. 4.14a. For such an
application, the gage factor need not be accurately known since the overall system
can be calibrated by applying known displacements to the end of the beam and
measuring the resulting bridge output voltage. The dynamic response of bonded
strain gages as a resistance variation to strain variation of the underlying surface is
very good.

The parallelogram flexure above is extremely rigid (insensitive) to all applied forces and
moments except in the direction shown by the arrow. A displacement transducer arranged
to measure motion in the sensitive direction (arrow) will therefore measure only
that component of an applied vector force which lies along the sensitive
axis. Such elements not only work as force-to-deflection transducers, but also perform
the function of resolving vector forces or moments into rectangular
components. The development of multi-component force pickups of small size
and high natural frequency, and of various types of flexures for isolating and
measuring different force components, characterizes the design of these
devices.

For short beams, the theory behind Eq. (3.46), which is based on pure bending, is
inaccurate since shearing effects then become important.

PRESSURE:
Low Pressure Measurement
The basic standards for pressures ranging from medium vacuum (about 10^-1
mmHg) up to several hundred thousand pounds per square inch are in the form of precision
mercury columns (manometers) and deadweight piston gages. For pressures in
the range 10^-1 to 10^-3 mmHg, the McLeod vacuum gage is the standard. For
pressures below 10^-3 mmHg, a pressure-dividing technique allows flow through a
succession of accurate orifices to relate the low downstream pressure to a higher
upstream pressure (which is accurately measured with a McLeod gage).
This technique can be further improved by substituting a Schulz hot-cathode
or radioactive ionization vacuum gage for the McLeod gage. Each of these must
be calibrated against a McLeod gage at one point (about 9 × 10^-2 mmHg), but their
known linearity is then used to extend their accurate range to much lower pressures.
1 mbar = 0.75006 torr = 0.10000 kPa = 0.014504 psi
For the higher pressure ranges of vacuum, gages can still use the familiar concept
of force per unit area as their operating principle; these are called absolute gages,
and their readings do not depend on the gas being measured. For lower pressures,
other principles must be used, and all these are sensitive to the specific gas, that
is, two different gases at the same pressure will give different readings, so readings
must be corrected for each gas.

Pressure usually can be easily transduced to force by allowing it to act on a


known area; thus the basic methods of measuring force and pressure are essentially the
same, except for the high-vacuum region. Most pressure measurement is based on
comparison with known deadweights(exemplified by manometers and piston
gages) acting on known areas or on the deflection of elastic elements subjected to
the unknown pressure. Dead weight gages and manometers are employed mainly
as standards for the calibration of less accurate gages or transducers.
For highly accurate results, the frictional force between the cylinder and
piston must be reduced to a minimum and/or corrected for. This is generally
accomplished by rotating either the piston or the cylinder. If there is no axial
relative motion, this rotation should reduce the axial effects of dry friction to zero.
The clearance between the piston and cylinder also raises the question of which
area is to be used in computing pressure. The effective area generally is taken as
the average of the piston and cylinder areas. Further corrections are needed for
temperature effects on areas of piston and cylinder, air and pressure-medium
buoyancy effects, local gravity conditions, and height differences between the
lower end of the piston and the reference point for the gage being calibrated. Since
the piston assembly itself has weight, conventional deadweight gages are not
capable of measuring pressures lower than the piston weight/area ratio ("tare"
pressure). This difficulty is overcome by the tilting-piston gage in which the cylinder
and piston can be tilted from vertical through an accurately measured angle, thus
giving a continuously adjustable pressure from 0 lb/in2 gage up to the tare
pressure.
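
An illustrative sketch of the deadweight-gage calculation, using assumed dimensions and weights; it applies the effective-area rule (mean of piston and cylinder areas) and the tare-pressure limit mentioned above.

# Sketch (illustrative numbers): pressure generated by a deadweight gage.
import math

d_piston   = 0.010000    # m, piston diameter (assumed)
d_cylinder = 0.010005    # m, cylinder bore (assumed)
A_eff = math.pi / 4 * (d_piston**2 + d_cylinder**2) / 2   # mean of piston and cylinder areas

m_weights = 5.0          # kg of calibrated weights (assumed)
m_piston  = 0.20         # kg, piston assembly itself (assumed)
g = 9.80665              # m/s^2 (standard gravity used here; local gravity in practice)

p    = (m_weights + m_piston) * g / A_eff
tare = m_piston * g / A_eff      # lowest gage pressure a non-tilting gage can generate
print(f"p = {p/1000:.2f} kPa, tare pressure = {tare/1000:.2f} kPa")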
Fig 6.2 – Dead weight gauge calibrator
Deadweight gages may be employed for absolute- rather than gage-pressure
measurement by placing them inside an evacuated enclosure at (ideally) 0 lb/in2
absolute pressure. Since the degree of vacuum (absolute pressure) inside the
enclosure must be known, this really requires an additional independent
measurement of absolute pressure.

Manometer: The manometer in its various forms is closely related to the piston
gage, since both are based on the comparison of the unknown pressure force with
the gravity force on a known mass. The manometer differs, however, in that it is
self-balancing, is a deflection rather than a null instrument, and has continuous
rather than stepwise output. The accuracies of dead weight gages and manometers
of similar ranges are quite comparable; however, manometers become unwieldy
at high pressures because of the long liquid columns involved.
The relation between input and output for static conditions is

p1 − p2 = ρ·g·h

where g = local gravity and ρ = mass density of the manometer fluid. If p2 is atmospheric


pressure, then h is a direct measure of p1 as a gage pressure. Note that the cross-
sectional area of the tubing (even if not uniform) has no effect. At a given location
(given value of g) the sensitivity depends on only the density of the
manometer fluid. Water and mercury are the most commonly used fluids.
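
A short sketch of the static relation p1 − p2 = ρ·g·h, showing how the fluid density sets the sensitivity:

# Sketch: static manometer relation, p1 - p2 = rho * g * h.
g = 9.80665                      # m/s^2, local gravity
h = 0.250                        # m, column height difference (assumed reading)

for fluid, rho in [("water", 998.0), ("mercury", 13546.0)]:   # kg/m^3 near room temperature
    dp = rho * g * h
    print(f"{fluid}: dp = {dp:.0f} Pa = {dp/1000:.2f} kPa")
# The same height difference represents about 13.6 times more pressure with mercury;
# sensitivity (h per unit pressure) depends only on the fluid density.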

U-tube manometer: considerable care must be exercised in
order to keep inaccuracies as small as 0.01 mmHg for the overall measurement.
Given that manometers inherently measure the pressure difference between the
two ends of the liquid column, if one end is at zero absolute pressure, then “h” is
an indication of absolute pressure. This is the principle of the barometer of Fig. 6.5.
Although it is a "single-leg" instrument, high accuracy is achieved by setting the
zero level of the well at the zero level of the scale before each reading is taken. The
pressure in the evacuated portion of the barometer is not really absolute zero, but
rather the vapor pressure of the filling fluid, mercury, at ambient temperature. This
is about 10^-4 lb/in2 absolute at 70°F and usually is negligible as a correction.
Various forms of manometer

To increase sensitivity, the manometer may be tilted with respect to gravity, thus
giving a greater motion of liquid along the tube for a given vertical-height change.
The inclined manometer (draft gage)in above Fig. exemplifies this principle.
The accurate measurement of extremely small pressure differences is made with
the micro-manometer, a variation on the inclined-manometer principle. In the Fig.
the instrument is initially adjusted so that when p1 = p2, the meniscus in
the inclined tube is located at a reference point given by a fixed hairline viewed
through a magnifier. The reading of the micrometer used to adjust well height is
now noted. Application of the unknown pressure difference causes the meniscus
to move off the hairline, but it can be restored to its initial position by raising or
lowering the well with the micrometer. The difference in initial and final
micrometer readings gives the height change h and thus the pressure. Instruments
using water as the working fluid and having a range of either 10 or 20 in of water
can be read to about 0.001 inch of water. In another instrument, in which the
inclined tube (rather than the well) is moved and which uses butyl alcohol as the
working fluid, the range is 2 in of alcohol and the readability is 0.0002 in. This
corresponds to a resolution of 6 × 10^-6 lb/in2. Manometers are utilized mainly for
static measurements.

ELASTIC TRANSDUCERS
While a wide variety of flexible metallic elements conceivably might be used for
pressure transducers, the vast majority of practical devices utilize one or another
form of Bourdon tube, diaphragm, or bellows as their sensitive element. The gross
deflection of these elements may directly actuate a pointer/scale readout through
suitable linkages or gears, or the motion may be transduced to an electrical signal
by one means or another. Strain gages bonded directly to diaphragms or to
diaphragm-actuated beams are widely used to measure local
strains that are directly related to pressure. The Bourdon tube is the basis of many
mechanical pressure gages and is also used in electrical transducers by measuring
the output displacement with potentiometers, differential transformers, etc. The
basic element in all the various forms is a tube of noncircular cross section. A
pressure difference between the inside and outside of the tube (higher pressure
inside) causes the tube to attempt to attain a circular cross section. This results in
distortions which lead to a curvilinear translation of the free end in the C type and
spiral and helical types and an angular rotation in the twisted type, which motions
are the output.
The shape and cross-section of the tubes:
If a rectangle (say 2 × 5) and a circle each have 10 units of area, the perimeter of the
rectangle is 2×2 + 2×5 = 14 units, whereas that of the circle is only 11.202 units (r = 1.7837).
Moment of inertia for a rectangular section:

I = b·h^3/12

where h is the dimension in the plane of bending, i.e., in the axis in which the bending
moment is applied.

Deflection of a cantilever beam with a single fixed support:

δ = F·L^3/(3·E·I)

Moment of inertia for a round section:

I = π·r^4/4 = π·d^4/64

where r and d are the radius and diameter respectively.
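
A small sketch (assumed dimensions and load) comparing the two section formulas and the resulting cantilever tip deflection:

# Sketch (assumed numbers): second moments of area for equal-area sections and the
# resulting cantilever tip deflection, delta = F*L^3 / (3*E*I).
import math

area = 10.0e-6                         # m^2 (the "10 units of area" example, taken as 10 mm^2)
b, h = 2.0e-3, 5.0e-3                  # 2 mm x 5 mm rectangle
I_rect = b * h**3 / 12                 # bending with h in the plane of bending
r = math.sqrt(area / math.pi)          # circle of the same area
I_circ = math.pi * r**4 / 4

F, L, E = 1.0, 0.050, 200e9            # N, m, Pa (assumed steel cantilever)
for name, I in [("rectangle", I_rect), ("circle", I_circ)]:
    delta = F * L**3 / (3 * E * I)
    print(f"{name}: I = {I:.3e} m^4, tip deflection = {delta*1e6:.2f} um")
# The deeper rectangular section is stiffer in bending than the circle of equal area.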

Fig. Elastic Pressure Transducers


Piezoelectric pressure transducers share many common characteristics with
piezoelectric accelerometers and force transducers, discussed earlier.
In comparison with other types of pressure transducers, the strain-gage type is of
higher accuracy, higher stability, and faster response. Strain-gage pressure transducers
are therefore widely used as the high-accuracy force-detection means in hydraulic
testing machines.

Low Pressure Measurements:

Wide variety of vacuum gages for measurement of low pressures: Diaphragm


Gages, McLeod Gage, Knudsen Gage, Momentum-Transfer (Viscosity) Gages,
Thermal-Conductivity Gages, Ionization Gages etc. Pressure range, sensitivity,
dynamic response and cost all vary by several orders of magnitude from one
instrument design to another.
Higher pressures, up to about 100,000 lb/in2, can be measured fairly easily with
strain-gage pressure cells or Bourdon tubes. Bourdon tubes for such high pressures
have nearly circular cross sections and thus give very little output motion per
turn. (So what is the remedy?) For low-pressure measurement, however, force per unit
area is not the operating principle. For pressures in the range 10^-1 to 10^-3 mmHg, the
McLeod vacuum gage is the standard. For pressures below 10^-3 mmHg, a pressure-dividing
technique allows flow through a succession of accurate orifices to relate the low downstream
pressure to a higher upstream pressure (which is accurately measured with a
McLeod gage). The McLeod gage is one of the most common instruments used for measuring
low vacuum pressures. The gage is simple to operate and can measure pressures
ranging from 100 to 10^-4 torr. The range of the instrument can also be extended by a
multiple-compression technique. The inaccuracy of McLeod gages is less than 1
percent. Except for vapor/condensation effects, the reading of the McLeod gage is not
influenced by the composition of the gas. Only the diaphragm and the Knudsen
gage share this desirable feature of composition insensitivity.
Because of their accuracy and independence of gas species, capacitance gages are
suitable for calibration standards. (Why?) Their transduction is based on the deflection of
a diaphragm; in the vacuum region the transduction technique is almost exclusively
that of capacitance change.

The principle of all McLeod gages is the compression of a sample of the low-pressure gas
to a pressure sufficiently high to read with a simple manometer. By
releasing the plunger as shown in fig (a), the mercury level is lowered and the gas
at unknown pressure p is allowed to fill the U-tube. The plunger is then
pushed in, the mercury level rises, sealing off a gas sample of known volume V
in the bulb and capillary tube as shown in fig (b). Further motion of the plunger, as
shown in fig (c), compresses the gas trapped in the volume V, and the motion is
continued until the mercury level in capillary B is at the zero mark. The unknown
pressure is then calculated using Boyle's law, p·V = pc·Vc, where pc and Vc are the
pressure and volume of the compressed sample read from the capillary.
Advantages: independent of the gas properties; acts as a reference for calibration;
linear operation.
Disadvantages: the gas must obey Boyle's law; any vapour formed has to be removed;
continuous operation is not possible.
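
A sketch of the Boyle's-law calculation with assumed gage dimensions; the compressed-sample relation p = a·h²/(V − a·h) follows from p·V = (p + h)·a·h when heights are read in millimetres of mercury.

# Sketch (assumed dimensions): McLeod gage pressure from Boyle's law.
# A sample at unknown pressure p fills volume V; it is compressed into a sealed
# capillary of cross-sectional area a, leaving a trapped column of length h whose
# pressure exceeds p by h mm Hg, so p*V = (p + h)*a*h  ->  p = a*h*h / (V - a*h).
V = 100e3       # mm^3, bulb + capillary volume before compression (assumed)
a = 1.0         # mm^2, capillary cross-sectional area (assumed)
h = 10.0        # mm, trapped gas-column length read on the capillary (assumed)

p = a * h * h / (V - a * h)       # unknown pressure, in mm Hg
print(f"p ~ {p:.3e} mm Hg ({p*1000:.2f} millitorr)")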
Ionisation Gage
An electron passing through a potential difference acquires a kinetic energy
proportional to the potential difference. When this energy is large enough and
the electron strikes a gas molecule, there is a definite probability that the electron
will drive an electron out of the molecule, leaving it a positively charged ion. In an
ionization gage, a stream of electrons is emitted from a cathode. Some of these
strike gas molecules and knock out secondary electrons, leaving the molecules as
positive ions. For normal operation of the gage, the secondary electrons are a
negligible part of the total electron current; thus, for all practical purposes,
electron current ie is the same whether measured at the emitting point (cathode)
or the collecting point (anode). The number of positive ions formed is directly
proportional to ie and directly proportional to the gas pressure. If ie is held fixed
(as in most gages), the rate of production of positive ions (ion current) is, for a
given gas, a direct measure of the number of gas molecules per unit volume and
thus of the pressure. The positive ions are attracted to a negatively charged
electrode, which collects them and carries the ion current. The sensitivity S is defined by

i+ = S·ie·p,   i.e.   S = i+/(ie·p)

where i+ is the ion current and p the pressure. According to our usual definition of
sensitivity as output/input, the "sensitivity" would be S·ie rather than S. But the
above definition makes "sensitivity" independent of ie and dependent only on
gage construction. This allows comparison of the "sensitivity" of different gages
without reference to the particular ie being used. A main advantage of ionization
gages in general is their linearity; that is, the sensitivity S is constant for a given
gas over a wide range of pressures. The sensitivity does vary from gas to gas; if we
take air as a reference value of 1.0, helium would be about 0.18 and water
vapor about 2.0. Ionization gages lose linearity at pressures above about 0.001
torr, so they are most suitable for lower pressures.
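
A sketch of how a reading would be reduced, assuming a gage sensitivity calibrated for air and using the relative gas sensitivities quoted above; all numerical values are illustrative.

# Sketch: ionization-gage pressure from the measured ion current, i_ion = S * i_e * p,
# with a relative-sensitivity correction for the gas being measured.
S_air = 10.0          # 1/torr, gage sensitivity calibrated for air (assumed)
i_e   = 1.0e-3        # A, regulated electron (emission) current (assumed)
i_ion = 2.0e-9        # A, measured ion-collector current (assumed)

rel = {"air": 1.0, "helium": 0.18, "water vapor": 2.0}   # relative sensitivities (from text)
for gas, k in rel.items():
    p = i_ion / (S_air * k * i_e)      # indicated pressure corrected for the gas species
    print(f"{gas}: p ~ {p:.2e} torr")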
A typical vacuum system is shown below.
Integration & Differentiation

Sometimes we want to compute integrals and/or derivatives of sensor signals or


computed results. We might, for example, integrate a mass flow meter signal
(kg/s) to compute the total mass (kg) that passed through the meter over some
timed interval, or we could differentiate a velocity signal to get acceleration.
So, in measurement systems it is necessary to obtain integrals and/or derivatives
of signals with respect to time. Depending on the physical nature of the signal,
various devices may be most appropriate. Generally accurate differentiation is
harder to accomplish than integration, since differentiation tends to accentuate
noise (which is usually high frequency) whereas integration tends to smooth
noise. The main problem is that differentiation accentuates any low-amplitude,
high-frequency noise present in the displacement signal. Thus second and higher
integrals may be found easily while derivatives present real difficulties.
Because differentiation is basically problematic due to its noise
accentuation, we should always consider alternatives. For example, if we have a
displacement signal and want also a velocity signal, consider velocity transducers
such as the moving coil pickup or the rate gyro.
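
A minimal sketch of the integration case described above, applying the trapezoidal rule to a hypothetical sampled mass-flow signal:

# Sketch: integrate a sampled mass-flow signal (kg/s) into total mass (kg)
# using the trapezoidal rule; the samples below are hypothetical.
t    = [0.0, 1.0, 2.0, 3.0, 4.0]        # s
mdot = [0.00, 0.12, 0.25, 0.24, 0.10]   # kg/s

total_mass = sum((mdot[i] + mdot[i+1]) / 2 * (t[i+1] - t[i]) for i in range(len(t) - 1))
print(f"total mass ~ {total_mass:.3f} kg")
# Differentiating the same data would amplify sample-to-sample noise, which is why
# derivatives are better obtained from a dedicated transducer where possible.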

FLOW MEASUREMENT

For point measurements, velocity probes are inserted into the flow to obtain accurate point
measurements. Such probes (pitot-static tubes, hot-wire anemometers, and

measurements. Such probes (pitot-static tubes, hot-wire anemometers, and
laser-doppler velocimeters are the most common) always involve a sensing
volume of finite size. Thus true "point" measurements are impossible.
However, sensing volumes can be made sufficiently small to provide data
of practical utility.
Hot-Wire and Hot-Film Anemometers
This sensor uses feedback principles. It is a device for measuring rapidly
varying fluid velocity. Without feedback, the hot wire used in the instrument
is accurate only for velocity fluctuations of frequency less than about 100
Hz. By redesigning the instrument to use feedback, this limit is
extended to about 30,000 Hz, making the instrument much more
useful.

Generalized Feedback control system:

Feedback control system


The majority of hot-wire and hot-film anemometers now use the feedback
(self-balancing) version of the constant temperature system. We earlier
analyzed the constant-current system mainly to define the wire time
constant, which is needed in the analysis of the feedback-type
constant-temperature system. Constant-current operation is provided as a
selectable option on some constant-temperature systems, but this mode is
rarely used.

There are two types: the constant-current type and the constant-temperature type. Both
utilize the same physical principle.
In the constant-current type, a fine resistance wire carrying a fixed current
is exposed to the flow. The wire attains an equilibrium
temperature when the i²R heat generated in it is just balanced by the
convective heat loss from its surface. The circuit is designed so that the i²R
heat is essentially constant; thus the wire temperature must adjust itself to
change the convective loss until equilibrium is reached.
Since the convection film coefficient is a function of flow velocity, the
equilibrium wire temperature is a measure of velocity.
The wire temperature can be measured in terms of its electrical resistance.
In the constant-temperature form, the current through the wire is
adjusted to keep the wire temperature (as measured by its resistance)
constant. The current required to do this then becomes a measure of flow
velocity. For equilibrium conditions we can write an energy balance for a
hot wire as I²Rw = hA(Tw − Tf), where I = wire current, Rw = wire resistance,
h = film coefficient of heat transfer, A = heat-transfer area, and Tw, Tf = wire
and fluid temperatures.

Now h is mainly a function of flow velocity for a given fluid density.
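A small numerical sketch of this equilibrium energy balance is given below; the current, wire resistance, film coefficient, area, and fluid temperature are purely illustrative assumptions, chosen only to show that a higher film coefficient (higher velocity) drives the wire temperature down.

```python
# Sketch of the hot-wire equilibrium energy balance  I^2 * Rw = h * A * (Tw - Tf),
# solved for the wire temperature Tw. All numbers below are assumed, not from the notes.

def wire_equilibrium_temp(current_A, wire_res_ohm, h_W_m2K, area_m2, fluid_temp_C):
    """Return equilibrium wire temperature (deg C) for a given film coefficient h."""
    joule_heating = current_A**2 * wire_res_ohm          # W generated in the wire
    return fluid_temp_C + joule_heating / (h_W_m2K * area_m2)

if __name__ == "__main__":
    # Higher velocity -> higher h -> wire runs cooler for the same current.
    for h in (500.0, 1000.0, 2000.0):                    # W/(m^2*K), assumed values
        tw = wire_equilibrium_temp(current_A=0.05, wire_res_ohm=10.0,
                                   h_W_m2K=h, area_m2=2e-6, fluid_temp_C=20.0)
        print(f"h = {h:6.0f} W/m^2K -> wire temperature ~ {tw:6.1f} deg C")
```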


The hot wire can be used in the low-velocity flow unlike the pitot tube.
A commercial calibrator uses a stagnation chamber with a compressed air
supply to provide flow through a carefully designed nozzle discharging to
the atmosphere. Measurement of stagnation pressure and temperature,
and nozzle pressure drop allows calculation of the velocity. The low velocity
ranges require very sensitive and accurate transducers; capacitance
diaphragm gages are often used.
On the lowest range, velocity is measured by a "low-speed transducer"
rather than using a differential pressure measurement.
The main application of hot-wire instruments is the measurement of rapidly
fluctuating velocities, such as the turbulent components superimposed on
the average velocity.
When velocities are too low to allow direct use of a pitot-static tube as
the calibration standard, a number of alternative schemes are possible. By
using a flow passage with two sections of widely differing areas connected
in series, a pitot tube may be employed in the small-area (high-velocity)
section, the hot wire in the large-area (low-velocity) section, and the known
area ratio used to infer velocities at the hot wire from those measured at
the pitot tube.

The principle is somewhat similar to the thermal conductivity gages where


the element assumes an equilibrium temperature when heat input and
losses by conduction and radiation are just balanced.
In compressible flows, the film coefficient h is really related to ρV; thus
calibrations such as that of Fig. 7.15 must be adjusted when a probe
calibrated at one density is employed at another. The sensitivity to ρV
makes the hot-wire useful as a mass-flow meter, and it is used as such in
automobile engine control systems, where an inlet air mass-flow signal
from the hot wire is used to adjust the fuel injection to maintain a desired air
to fuel ratio.

Flow visualization relies on two basic principles: the introduction of tracer
particles, or the detection of flow-related changes in fluid optical properties.

Liquids: colored dyes and gas bubbles are common tracers. A line of
hydrogen bubbles can be formed in water by applying a short electric pulse
to a straight wire immersed in the flow. Photography with steady
illumination shows the bubbles as short streaks whose length can be
measured to obtain velocity data, while stroboscopic light gives a series of
dots whose spacing gives similar information.

Gas flows: smoke, helium-filled "soap" bubbles, or gas molecules made
luminous by an ionizing electric spark have served as tracers. Optical methods
exploit the variation in refractive index of the flowing gas with density. For
compressible flows (Mach number above about 0.3), density varies with velocity
sufficiently to produce measurable effects.

For quantitative results, the more difficult interferometer approach may be
necessary. Here the light/dark patterns are formed by interference effects
resulting from phase shifts between a reference beam and the measuring
beam.
An important "visualization" technique that also produces quantitative results
is particle image velocimetry (PIV). Related point techniques are laser-Doppler
anemometry (LDA) and thermal (hot-wire/film) anemometry, both used for
measuring rapidly varying fluid velocity. Applications range from low-velocity
natural-convection flows up through supersonic flows.

Assuming steady one-dimensional flow of an incompressible, frictionless fluid,
the velocity magnitude from a pitot-static tube is V = √(2(p0 − ps)/ρ), where
p0 is the stagnation pressure, ps the static pressure, and ρ the fluid density.
GROSS VOLUME FLOW RATE

Flow rate may be expressed either as mass flow rate or as volume flow rate.


Possible sources of error in flowmeters include variations in fluid properties
(density, viscosity, and temperature), orientation of the meter, pressure
level, and particularly flow disturbances (such as elbows, tees, valves, etc.)
upstream (and to a lesser extent downstream) of the meter.

Fast-response, high-resolution flowmeters (turbine, positive-displacement,
vortex-shedding, etc.) reach steady state quickly, and the integration of the
flow rate to get total flow is accomplished accurately by accumulating the
meter pulse-rate output in a counter. (The integration gives accurate total
flow even if the flow rate is not perfectly steady.)

The calibration of flowmeters to be used with gases often can be carried


out with liquids as long as the pertinent similarity relations (Reynolds
number) are maintained and theoretical density and expansion corrections
are applied.
Constant-Area, Variable-Pressure-Drop Meters ("Obstruction" Meters)
The most widely used flow-metering principle involves placing a fixed-area
flow restriction of some type in the pipe or duct carrying the fluid. This flow
restriction causes a pressure drop which varies with the flow rate; thus
measurement of the pressure drop by means of a suitable differential-
pressure pickup allows flow-rate measurement.
The square-edge orifice is undoubtedly the most widely employed flow
metering element.

If one-dimensional flow of an incompressible, frictionless fluid without work,
heat transfer, or elevation change is assumed, the theoretical volume flow rate is
Qc = [A2/√(1 − (A2/A1)²)]·√(2(p1 − p2)/ρ), where A1 and A2 are the pipe and
restriction areas and p1 − p2 is the measured differential pressure.

There are frictional losses that affect the measured pressure drop, lead to a
permanent pressure loss, and make the effective flow cross-section differ from
the geometric restriction area. So an experimental calibration is made to
determine the actual flow rate Qa, and a coefficient of discharge Cd = Qa/Qc is
defined. The ratio between true flow rate and theoretical flow rate for any
measured amount of differential pressure is known as the discharge coefficient
of the flow-sensing element.
The discharge coefficient of a given installation varies mainly with the
Reynolds number NR at the orifice. Thus the calibration can be performed
with a single fluid, such as water, and the results used for any other fluid as
long as the Reynolds numbers are the same.
Since Q ∝ √Δp, a 10:1 change in Δp corresponds to only about a 3:1 change in
flow rate. This nonlinearity is typical of all obstruction meters (other than
the laminar-flow element). A difficulty caused by the nonlinearity occurs when
flow rate must be integrated to get total flow during a given time interval:
the square root of the Δp signal must be taken before integration.
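The sketch below illustrates both points for a square-edge orifice: the theoretical square-root relation (scaled by an assumed discharge coefficient Cd and an assumed pipe/orifice geometry) and taking the square root of each Δp sample before integrating to total volume.

```python
# Sketch: square-edge orifice metering, Q = Cd * A2 / sqrt(1 - (A2/A1)^2) * sqrt(2*dp/rho).
# Cd, the diameters, the density, and the sampled dp values are assumed for illustration.
import math

def orifice_flow(dp_pa, cd, d_pipe_m, d_orifice_m, rho_kg_m3):
    """Volume flow rate (m^3/s) from the measured differential pressure."""
    a1 = math.pi * d_pipe_m**2 / 4.0
    a2 = math.pi * d_orifice_m**2 / 4.0
    return cd * a2 / math.sqrt(1.0 - (a2 / a1)**2) * math.sqrt(2.0 * dp_pa / rho_kg_m3)

if __name__ == "__main__":
    dt = 1.0                                       # s between samples (assumed)
    dp_samples = [4000.0, 4100.0, 3900.0, 4050.0]  # Pa (assumed)
    # The square root is taken sample by sample *before* integrating to total volume.
    q = [orifice_flow(dp, cd=0.61, d_pipe_m=0.10, d_orifice_m=0.05, rho_kg_m3=1000.0)
         for dp in dp_samples]
    total_volume = sum(0.5 * (q0 + q1) * dt for q0, q1 in zip(q[:-1], q[1:]))
    print(f"Flow rates (m^3/s): {[round(v, 5) for v in q]}")
    print(f"Total volume over {dt*(len(q)-1):.0f} s: {total_volume:.4f} m^3")
```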
The permanent pressure loss is given approximately by (1 − β²)Δp, where Δp is
the differential pressure used for flow measurement and β is the ratio of
restriction diameter to pipe diameter.
The standard orifice design requires that the edge be very sharp and that
the orifice plate be sufficiently thin relative to its diameter. Wear (rounding)
of this sharp edge by long use or abrasive particles can cause significant
changes in the discharge coefficient.
The flow nozzle, venturi tube, and Dall flow tube (Fig. 7.31) all operate on
the same principle as the orifice. The Dall flow tube belongs to a class of
modified venturis of proprietary design called "low-loss" tubes, intended to
save pumping costs by having an especially low permanent pressure loss,
while retaining other good properties. Because of their less abrupt flow-area
changes, nozzles and venturis have higher discharge coefficients than
orifices, as high as 0.99 in some cases. Venturis can have very low
permanent pressure loss, so are popular for high-flow situations.

Flow nozzles are more expensive than orifices but cheaper than
venturis, and are often used for high-velocity steam flows, being more
dimensionally stable at high temperature and velocity than an orifice.
Keeping the measured differential pressure the same for each meter,
the flow nozzle must have a smaller β ratio than the orifice, and losses
increase with smaller β; thus, the nozzle will have a permanent pressure
loss about the same as the orifice. The venturi would also require a smaller
β for the given Δp, but because of its streamlined form, its losses are low
and nearly independent of β.
The simplest form of laminar-flow element is merely a length of small-
diameter (capillary) tubing. They are usually less sensitive to upstream and
downstream flow disturbances than the other devices discussed. The
laminar elements have the advantages accruing from a linear (rather than
square-root) relation between flow rate and pressure drop; these
are principally a large accurate range of as much as 100:1 (compared with
3:1 or 4:1 for square-root devices), accurate measurement of average
flow rates in pulsating flow, and ease of integrating Δp signals to compute
total flow. The laminar elements also can measure reversed flows with no
difficulty.

Constant-Pressure-Drop, Variable-Area Meters (Rotameters)

A rotameter may be thought of as an orifice of adjustable area. For a fixed
flow area, Δp varies with the square of flow rate, and so to keep Δp constant
for differing flow rates, the area must vary. The tapered tube provides this
variable area. The float position is the output of the meter and can be made
essentially linear with flow rate by making the tube area vary linearly with
the vertical distance. Rotameters thus have an accurate range of about 10:1,
considerably better than square-root-type elements.

Turbine Flow Meter

Turbine flowmeters and their associated digital counting equipment have


been found particularly suitable for secondary standards. With attention to
detail, such standards can closely approach the accuracy of the primary
methods themselves. Turbine flowmeters use a free-spinning turbine wheel
to measure fluid velocity, much like a miniature windmill installed in the flow
stream. The fundamental design goal of a turbine flowmeter is to make
the turbine element as free-spinning as possible, so no torque will be
required to sustain the turbine’s rotation. If this goal is achieved, the turbine
blades will achieve a rotating (tip) velocity directly proportional to the linear
velocity of the fluid, whether that fluid is a gas or a liquid:
Rotary speed depends on the flow rate of the fluid. By reducing bearing
friction and keeping other losses to a minimum, one can design a turbine
whose speed varies linearly with flow rate. The ideal meter result would
give a rotor speed (and thus an electrical pulse rate) exactly proportional to
the volume flow rate of the fluid. Various real-world effects (mainly
viscosity) make a real meter deviate from such perfection and require
calibration to achieve practical utility.
A magnetic proximity pickup produces voltage pulses. Bearing friction, fluid
viscosity, and inertial effects during acceleration act as retarding torques. For
one-dimensional flow, the flow velocity V and volume flow rate Q are
proportional, so we predict the rotor speed to be proportional to flow rate.
Including a temperature sensor and a microprocessor loaded with fluid-property
data allows calculation of viscosity and density from the measured temperature,
thus making a "compensated" instrument.
The density measurement also allows calculation of mass flow rate, often
preferred to volume. Measurement of rapidly changing flows possible.
Pressure drop across the meter varies with the square of flow rate. Turbine
meters behave essentially as first-order dynamic systems for small
changes about an operating point. The operating frequencies of turbine
meters are of the order of 100 to 2,000 Hz.
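A sketch of the pulse-counting readout: pulse frequency is taken as proportional to volume flow rate through a calibration constant (a "K-factor"); the K value, gate time, and pulse counts are assumed for illustration.

```python
# Sketch: turbine flowmeter readout. Pulse frequency ~ volume flow rate, so
# Q = f / K, where K (pulses per unit volume) comes from calibration.
# The K-factor and pulse counts below are assumed values.

K_PULSES_PER_LITRE = 450.0        # assumed calibration constant

def flow_rate_lpm(pulse_count, gate_time_s):
    """Instantaneous flow rate in litres/minute from pulses counted in gate_time_s."""
    freq_hz = pulse_count / gate_time_s
    return freq_hz / K_PULSES_PER_LITRE * 60.0

def total_volume_litres(total_pulse_count):
    """Accumulated pulse count gives total volume even for unsteady flow."""
    return total_pulse_count / K_PULSES_PER_LITRE

if __name__ == "__main__":
    print(f"Flow rate : {flow_rate_lpm(pulse_count=900, gate_time_s=1.0):.1f} L/min")
    print(f"Total flow: {total_volume_litres(total_pulse_count=54_000):.1f} L")
```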

For "velocity" flowmeters, such as the turbine and electromagnetic types,


the available signal is proportional to Vav ; thus the computer simply must
multiply this by the ‘ρ’ signal, and the square-root operation is unnecessary.
Positive Displacement Meters
Such meters exhibit little sensitivity to viscosity (accuracy actually improves
with increased viscosity, because of decreased leakage) and can give high
accuracy over wide flow ranges, up to 1,000:1.

An advantage shared by positive-displacement meters in general is their


insensitivity to distorted inlet/outlet flow profiles; thus flow straighteners and
/or long runs of straight pipe upstream/downstream of the meter
usually are not required. Dynamic response specifications are not usually
quoted, because the basic principle implies fast response: the output
rotation is (ideally) "kinematically coupled" to the fluid flow rate.

Metering pump :A variable-displacement positive-displacement pump, if


properly designed, can serve both to cause a flow rate and to measure it
simultaneously.
Electromagnetic Flowmeters (not for aircraft instrumentation): These apply the
principle of induction, e = Blv, to a conducting fluid, where B is the magnetic
field intensity (flux density). In this case the emf is e = B·Dp·v, where Dp is the
internal pipe diameter. Partial short-circuiting is avoided by using an insulating
pipe: the pipe must be nonmagnetic to allow the field to penetrate the fluid and
usually is nonconductive (plastic, for instance), so that it does not provide a
short-circuit path. Because of such short-circuiting effects, the voltage is
reduced from the value B·Dp·v. If the field is sufficiently long, this effect will be
slight at the center of the field length. A length of 3 diameters usually is
sufficient. The emf
generated “e” corresponds to the average velocity of any profile which is
symmetric about the pipe centreline. The magnetic field used in such a
flowmeter could be either constant or alternating, giving rise to a DC or an
AC output signal. But powerful AC field coils induce spurious AC
signals into the measurement circuit. So in an interrupted-DC meter, the DC
field is switched in square-wave fashion between the working value and
zero, at about 3 to 6 Hz. When the field is zero, any instrument output that
appears is considered to be an error. Additional advantages include power
savings of up to 75 percent and simpler wiring practices. A disadvantage
in some applications is the slower response time of about 7 s
(60-Hz systems have about 2 s).
For a 60-Hz AC system with tap water flowing at 100 gal/min in a 3-in-diameter
pipe, e is about 3 mV rms. For tap water, the fluid conductivity is
σ = 200 μS/cm. Standard magnetic flowmeters accept fluids
with conductivities as low as about 5 μS/cm, with special systems going
down to about 0.1 μS/cm.
A specialized but important application area which has received much
attention is that of blood flow in the vessels of living specimens.
Miniaturized sensors allow measurements on vessels as small as 1 mm. To
obtain dynamic flow capability, AC or switched-DC ("square-wave") systems
with 200 to 1,000 Hz frequency are used.
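A minimal sketch of using e = B·Dp·v: solve for the average velocity and convert to volume flow rate. The field strength, pipe diameter, and measured emf are assumed values.

```python
# Sketch: electromagnetic flowmeter. Measured emf e = B * Dp * v for a conducting
# fluid, so v = e / (B * Dp) and Q = v * pi * Dp^2 / 4. All numbers are assumed.
import math

def magmeter_flow(e_volts, b_tesla, pipe_dia_m):
    """Return (average velocity in m/s, volume flow rate in m^3/s)."""
    v = e_volts / (b_tesla * pipe_dia_m)
    q = v * math.pi * pipe_dia_m**2 / 4.0
    return v, q

if __name__ == "__main__":
    v, q = magmeter_flow(e_volts=3e-3, b_tesla=0.05, pipe_dia_m=0.076)
    print(f"Average velocity: {v:.2f} m/s")
    print(f"Volume flow rate: {q*1000:.2f} L/s")
```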

Drag-Force Flowmeters

For sufficiently high Reynolds number and a properly shaped body, the
drag coefficient is reasonably constant. Therefore, for a given density, Fd is
proportional to V² and thus to the square of volume flow rate. The drag
force can be measured by attaching the drag-producing body to a
cantilever beam with bonded strain gages. Liquids, gases, and steam can
be metered over wide ranges of temperature, pressure, and flow rate. If the
drag body is made symmetrical, reversing flows can be measured. Relative
to most other flowmeters, dynamic response is quite fast; natural
frequencies of 70 to 200 Hz are possible.

Ultrasonic Flowmeters

Small-magnitude pressure disturbances are propagated through a fluid at a


definite velocity (speed of sound) relative to the fluid. If the fluid also has a
velocity, the absolute velocity of pressure-disturbance propagation is the
algebraic sum of the two. Since flow rate is related to fluid velocity, this
effect is used in an "ultrasonic" flowmeter.

For a crystal to be an efficient transmitter of acoustic energy, its diameter D
must be large compared with the wavelength λ of the oscillation, that is, a
small λ/D ratio is required.
In the basic transit-time relation, the acoustic velocity c varies with
temperature, and since c appears as c², the error caused may be significant.
Hence a method which does not require knowledge of the acoustic velocity c is
desirable. The measured time interval can be doubled by taking transit times
both in the direction of the flow and counter to it.
Unlike electromagnetic meters, which read the correct average
velocity as long as the flow profile is axisymmetric, ultrasonic meters
of the type described above will give different readings for axisymmetric
profiles of different shape but identical average velocity. The reason is that
the pulse transit time is related to the integral of fluid velocity along the
path. Thus 1 cm of path near the wall is weighted equally with 1 cm near
the centerline, even though the contributions of the annular flow areas of
these two regions would not be the same.
Because of the availability of meters for both clean and dirty fluids, little or
no obstruction or pressure drop, convenient installation for clamp-on types,
and good rangeability, ultrasonic meters are finding increasing application.

Ultrasonic transit time flow meter
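The sketch below shows one common transit-time scheme in which the speed of sound cancels out: pulses are timed with and against the flow along a path of length L, and V = L(t_up − t_down)/(2·t_up·t_down). The collinear-path geometry and all numbers are assumptions for illustration (a real meter's inclined path adds a cosine factor).

```python
# Sketch: transit-time ultrasonic flowmeter with an acoustic path of length L
# aligned with the flow. t_down = L/(c+V), t_up = L/(c-V), and
# V = L*(t_up - t_down) / (2*t_up*t_down), which does not involve c.
# Path length and transit times below are assumed for illustration.

def ultrasonic_velocity(path_len_m, t_up_s, t_down_s):
    """Flow velocity from upstream/downstream pulse transit times."""
    return path_len_m * (t_up_s - t_down_s) / (2.0 * t_up_s * t_down_s)

if __name__ == "__main__":
    L = 0.20                      # m, assumed acoustic path length
    c, v_true = 1480.0, 2.0       # m/s, water sound speed and an assumed flow velocity
    t_down = L / (c + v_true)     # simulate the two measured transit times
    t_up = L / (c - v_true)
    print(f"Recovered velocity: {ultrasonic_velocity(L, t_up, t_down):.3f} m/s")
```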

Vortex-Shedding Flowmeters
When the pipe Reynolds number NR exceeds about 10,000, vortex
shedding is reliable, and the shedding frequency f is given by f = Nst·V/d,
where Nst is the Strouhal number (constant in the useful metering range),
d is the characteristic dimension of the shedding body, and V is the fluid velocity.
A wide variety of liquids, gases, and steam may be metered. Linear ranges of
15:1 are common, with 200:1 sometimes possible. If the fastest response is not
needed (at a shedding frequency of 1000 Hz, for example, many pulses arrive in
a short time), we can get higher accuracy by averaging several pulses, a
technique commonly used in all vortex meters.
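A quick sketch of recovering velocity from the shedding frequency via f = Nst·V/d; the Strouhal number, body dimension, and measured frequency are assumed values.

```python
# Sketch: vortex-shedding meter, f = Nst * V / d  =>  V = f * d / Nst.
# Strouhal number, shedding-body dimension, and measured frequency are assumed.

def vortex_velocity(freq_hz, strouhal, d_body_m):
    """Fluid velocity from the measured vortex-shedding frequency."""
    return freq_hz * d_body_m / strouhal

if __name__ == "__main__":
    V = vortex_velocity(freq_hz=250.0, strouhal=0.2, d_body_m=0.02)
    print(f"Fluid velocity ~ {V:.1f} m/s")   # 250 * 0.02 / 0.2 = 25 m/s
```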
Special Cases:
Metering of cryogenic fluids such as liquid oxygen and hydrogen (used in
rocket engines), liquefied natural gas, etc., presents problems of extreme
low temperatures and explosion hazards. Use of safer liquid nitrogen as a
surrogate fluid for calibration purposes has been established as a valid
procedure and is the basis of a calibration facility which provides a mass
flow uncertainty of about ±0.2 percent over a flow range of 20 to 210
gal/min at temperatures of 70 to 90 K.

GROSS MASS FLOW RATE

Mass flow rate is actually more significant than volume flow rate. As an
example, the range capability of an aircraft or liquid-fuel rocket is
determined by the mass of fuel, not the volume. Thus flowmeters
used in fueling such vehicles should indicate mass, not volume.
Density measurement: The buoyant force on a float is directly related to density
and may be measured in a number of ways, such as with a strain-gage beam.
A method of measuring gas density by using a small centrifugal
blower (run at constant speed) to pump continuously a sample of the flow.
The pressure drop across such a blower is proportional to density and may
be measured with a suitable differential-pressure pickup. Acoustic
impedance depends on the product of density and speed of sound.
The attenuation of radiation from a radioisotope source depends on the
density of the material through which the radiation passes. For gas flow,
indirect measurement of density by means of computation from pressure
and temperature signals is also common.

Summary of flowmeters:

Obstruction meters (flow restriction; most widely used flow-metering principle)
• Flow rate is proportional to the square root of the measured pressure drop.
• Frictional losses affect the measured pressure drop and lead to permanent pressure loss and a reduced effective cross-sectional area, so experimental calibration is needed to determine the actual flow rate Qa; the coefficient of discharge Cd = Qa/Qc is defined for this.
• Nonlinearity is typical of all obstruction meters (other than the laminar-flow element).

Rotameter
• To keep Δp constant for differing flow rates, the area must vary; hence the tapered tube.
• Float position is the meter output; it is essentially linear with flow rate by making the tube area vary linearly with the vertical distance.
• Accurate range of about 10:1, considerably better than square-root-type elements.

Pitot tube
• Cannot measure low velocities.

Turbine flowmeter (suitable for secondary standards)
• "Velocity" flowmeter; rotary speed depends on the flow rate of the fluid.
• Measurement of rapidly changing flows is possible.
• Pressure drop across the meter varies with the square of flow rate.

Drag-force flowmeters
• Drag force measured by bonded strain gages.
• Requires a sufficiently high Reynolds number.

Electromagnetic meters
• "Velocity" flowmeter for conductive fluids; e = Blv, or B·Dp·v for a pipe.
• Reads the correct average velocity of any profile symmetric about the pipe centerline.

Ultrasonic meters
• No obstruction or pressure drop, good rangeability.
• A relation without the acoustic velocity c is needed for accuracy; the pulse-repetition-frequency scheme is independent of c (V = fluid velocity).
• Unlike electromagnetic meters, which read the correct average velocity when the flow profile is axisymmetric, ultrasonic meters will give different readings for axisymmetric profiles of different shape but identical average velocity.

Vortex-shedding meters
• Reliable for pipe Reynolds numbers above about 10,000; a wide variety of liquids, gases, and steam may be metered.
• Velocity is measured from the shedding frequency (V = fluid velocity).
• Linear ranges of 15:1 are common, with 200:1 sometimes possible.

Direct mass flowmeters (angular-momentum type)
• The flow is directed through a motor-driven turbine/impeller.
• With r and ω constant (impeller driven at constant speed), the torque T is a direct and linear measure of mass flow rate G.
• When driven at constant torque, ω = T/(r²G); impeller speed ω is a measure of G at constant torque and is easier to measure, but is not linear; the pulse duration t between speed-pickup pulses is a direct and linear measure of G.

Hot-wire and hot-film anemometers (constant-current and constant-temperature types)
• The heat generated is just balanced by the convective heat loss from the wire surface; the principle is somewhat similar to thermal-conductivity gages, where heat input and losses by conduction and radiation are balanced.
• Energy balance at equilibrium: I²Rw = hA(Tw − Tf), with I, Rw = wire current and resistance, h = film coefficient of heat transfer, A = heat-transfer area, and Tw, Tf = wire and fluid temperatures.
• Since h is mainly a function of flow velocity, the equilibrium wire temperature is a measure of velocity; the hot wire can be used in low-velocity flow, unlike the pitot tube.
• Without feedback, the hot wire is accurate only for velocity fluctuations of frequency less than about 100 Hz; redesigning the instrument to use feedback extends this limit to about 30,000 Hz. The majority of hot-wire and hot-film anemometers now use the feedback (self-balancing) version of the constant-temperature system.

Direct Mass Flowmeters

Direct mass flow meters may have advantages with respect to accuracy,
simplicity, cost, weight, space, etc., in certain applications.
A principle widely employed in aircraft fuel-flow measurement depends on
the moment-of-momentum law of turbomachines.

From fluid mechanics, for one-dimensional, incompressible, lossless flow
through a turbine or an impeller wheel, the torque T exerted by an
impeller wheel on the fluid (minus sign) or on a turbine wheel by the fluid
(plus sign) is given by T = ±G(ro·Vto − ri·Vti), where G = mass flow rate,
Vti, Vto = tangential components of fluid velocity at inlet and outlet, and
ri, ro = the corresponding radii.

The flow to be measured is directed through an impeller wheel which is


motor-driven at constant speed. If the incoming flow has no rotational
component (Vti = 0), and if the axial length of the impeller is enough
to make Vto = rω, the driving torque necessary on the impeller is
T = G·r²·ω. Since r and ω are constant, the
torque T (which could be measured in several ways) is a direct and linear
measure of mass flow rate G. However, for G = 0, the torque will not be zero,
because of frictional effects; furthermore, viscosity changes also would
cause this zero-flow torque to vary.

A variation on this approach is to drive the impeller at constant torque (with


some sort of slip clutch). Then impeller speed is a measure of mass flow
rate according to ω = T/(r²G).
The speed ω is now nonlinear with G, but may be easier to measure than
torque. If a magnetic proximity pickup is used for speed measurement, the
time duration t between pulses is inversely related to ω; thus
measurements of t would be linear with G, the mass flow rate.
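A short sketch of the constant-speed mode, in which G = T/(r²ω); the radius, impeller speed, and measured torque below are assumed values.

```python
# Sketch: angular-momentum (moment-of-momentum) direct mass flowmeter.
# Constant speed:  T = G * r^2 * omega   ->  G = T / (r^2 * omega)
# Constant torque: omega = T / (r^2 * G) ->  G = T / (r^2 * omega)
# Radius, impeller speed, and measured torque below are assumed values.

def mass_flow_from_torque(torque_nm, radius_m, omega_rad_s):
    """Mass flow rate G (kg/s) for the constant-speed impeller mode."""
    return torque_nm / (radius_m**2 * omega_rad_s)

if __name__ == "__main__":
    G = mass_flow_from_torque(torque_nm=0.05, radius_m=0.03, omega_rad_s=200.0)
    print(f"Mass flow rate ~ {G:.3f} kg/s")   # 0.05 / (9e-4 * 200) ~ 0.278 kg/s
```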

Another version of the above meter embodies several advantageous
features and is the basis of successful commercial instruments. A spring
measures torque, but both ends of the spring rotate at the same speed,
and the spring deflection is inferred from the time interval between pulses
produced by two magnetic pickoffs:
1. Fuel first enters the hydraulic driver, which provides the torque to
rotate the shaft drum and impeller.
2. The fuel then passes through a stationary straightener and into the
impeller.
3.The mass of the fuel flowing through the rotating impeller causes it to
deflect proportionally against the spring.
4. Impeller deflection relative to the drum is measured by pulses
generated by magnets (attached to the drum and the impeller) rotating
past two pickoff coils.
5. The time between pulses, caused by the angular displacement of the
impeller relative to the drum, is directly proportional to the mass flow
rate of the fuel.

This scheme makes the Δt signal independent of impeller speed ω, since a
smaller ω gives a smaller torque (and spring deflection), but this is exactly
compensated by the increase in Δt resulting from ω itself being lower. This
independence of ω allows use of a drive employing a fluid turbine, thus
eliminating the constant-speed electric motor drive completely.

Temperature and Heat-Flux Measurement

Thermal Expansion Methods


Bimetallic elements-differential expansion of bonded strips of two metals.
Liquid expansion at essentially constant pressure is used in the common
liquid-in-glass thermometers.
Restrained expansion of liquids, gases, or vapors results in a pressure rise,
which is the basis of pressure thermometers.

To increase the difference in coefficients of thermal expansion (CTE), Invar is
used. Since there are no practically usable metals with negative thermal
expansion, the B (low-expansion) element is generally made of Invar, a nickel
steel with a nearly zero expansion coefficient [about 1.7 × 10⁻⁶ in/(in·°C)].
The working temperature range is about −100°F to 1000°F, with inaccuracy of
the order of 0.5 to 1 percent of scale.

THERMOELECTRIC SENSORS
Thermoelectric temperature measurement depends only on the Seebeck effect (S).
Many materials exhibit the thermoelectric effect to some degree, but only
a small number of pairs are in wide use: platinum/platinum-rhodium,
Chromel/Alumel, copper/constantan, and iron/constantan.
The magnitude of the voltage E depends on the materials and
temperatures. Location of the thermoelectric voltage in the circuit is not
at the junction of the two metals: The correct interpretation is that “The
thermoelectric emf is actually an effect distributed along the length of each
single metal wire and exists even if the wire is not connected to anything.
Its magnitude Eσ depends on a material property called the absolute
Seebeck coefficient σ and the distribution of temperature along the wire.

The inhomogeneity and hence the error can occur anywhere along the
length of the wires and thus the false focus on the junction can hide real
error source, so this is a major contribution of the "new" viewpoint.
Since the thermoelectric effect is somewhat nonlinear, the sensitivity varies
with temperature. The maximum sensitivity of any of the above pairs is
about 60 μV/°C, for copper/constantan at 350°C. Platinum/platinum-rhodium
is the least sensitive: about 6 μV/°C between 0 and 100°C.
Platinum/platinum-rhodium thermocouples are the most accurate (error
±0.25%) and are employed mainly in the range 0 to 1500°C. The main features
of this combination are its chemical inertness and stability at high
temperatures in oxidizing atmospheres.

An isothermal block is often used and can accommodate multiple


channels of thermocouples for uniform reference temperature. The
semiconductor sensor does not require a reference junction unlike the
thermocouple.
The 5 Thermocouple Laws (shown pictorially in the figures); please familiarize yourself with the 5 laws.

(a) thermal emf of a thermocouple with junctions at T1 and T2 is totally


unaffected by temperature elsewhere in the circuit
(b) If a third homogeneous metal C is inserted into either A or B (see Fig. b),
then as long as the two new thermojunctions are at like temperatures, the
net emf of the circuit is unchanged irrespective of the temperature of C
away from the junctions.
(c) If metal C is inserted between A and B at one of the junctions, the
temperature of C at any point away from the AC and BC junctions is
immaterial. As long as the junctions AC and BC are both at the temperature
T1 , the net emf is the same as if C were not there
( d) If the thermal emf of metals A and C is EAC and that of metals B and C
is ECB , then the thermal emf of metals A and B is EAC + ECB
( e) If a thermocouple produces emf E1 when its junctions are at T1 and T2
, and E2 when at T2 and T3 , then it will produce E1 + E2 when the
junctions are at T1 and T3

(Speed of response, conduction and radiation errors, and precision
of junction location are all improved by the use of smaller wire; very-fine-wire
thermocouples are therefore utilized in special applications requiring these
attributes and where lack of ruggedness is not a serious drawback.)

Several thermocouples may be connected in series or parallel to achieve


useful functions (Fig. 8.18). The series connection with all measuring
junctions at one temperature and all reference junctions at another is used
mainly as a means of increasing sensitivity. Such an arrangement is called
a thermopile and for n thermocouples gives an output n times as great as
a single couple. A typical Chromel/constantan thermopile has 25 couples
and produces about 1 mV/°F. Common potentiometers can resolve 1 μV,
thus making such an arrangement sensitive to 0.001°F.
Multiple-junction thermocouples and flow couple


The parallel combination


generates the same voltage as a single couple if all measuring and
reference junctions are at the same temperature. If the measuring junctions
are at different temperatures and the thermocouples are all the same
resistance, the voltage measured is the average of the individual voltages.
Used to measure differential temperatures in Flowing Fluids
Conductive Sensors(RTDs)
The variation of resistance R with temperature T for most metallic materials
can be represented by an equation of the form R = R0(1 + a1·T + a2·T² + ··· + an·Tⁿ),
where R0 is the resistance at temperature T = 0. The number of terms necessary
depends on the material, the accuracy required, and the temperature range to be
covered.
Platinum is linear within ±0.4 percent over the ranges −300 to −100°F and
−100 to +300°F, ±0.3 percent from 0 to 300°F, ±0.25 percent from
−300 to −200°F, ±0.2 percent from 0 to 200°F, and ±1.2 percent from 500
to 1500°F.
Using a 1-μm wire resistance element, gas temperature fluctuations of up
to 3 kHz can be measured, either for their own sake or as compensation for
hot-wire flow sensors. Bridge circuits used with resistance temperature
sensors may employ either the deflection mode of operation or the null
(manually or automatically balanced) mode. While the resistance/temperature
variation of the sensing element may be quite linear, the output voltage
signal of a bridge used in the deflection mode is not necessarily linear for
large percentage changes in resistance. Unlike the case of strain gages,
the resistance change of resistance thermometers for full-scale deflection
may be quite large.

Bulk Semiconductor Sensors (Thermistors)


Compared with conductor type sensors (which have a small positive
temperature coefficient), thermistors have a very large negative coefficient.
While some conductors (copper, platinum, tungsten) are quite linear,
thermistors are very nonlinear. Their resistance/temperature relation is
generally of the form where R= resistance at temp T, R0 =
resistance at temp T0.
The major difference between thermistors and RTDs is linearity: thermistors are highly sensitive and nonlinear,
whereas RTDs are relatively insensitive but very linear. For this reason, thermistors are typically used where high
accuracy is unimportant. Many consumer-grade devices use thermistors for temperature sensors.

Digital voltmeters/Digital thermometers


In most applications, the nonlinearity of thermocouples and RTDs is
sufficiently great that we must use calibration tables (or curve-fit formulas
derived from them) to convert voltage or resistance measurements to the
corresponding temperatures. When many measurements are to be made,
this is inconvenient, time-consuming, and prone to error. Also, with
thermocouples (TCs), the reference-junction temperature must be
accounted for.
The extreme nonlinearity limits the practical application of non-contact
pyrometry to relatively narrow ranges of target temperature wherever good
accuracy is required.

Infrared Imaging Systems

Here the temperature distribution along a selected line or over a selected


area is more useful. When a two-dimensional image of a selected area is
needed, various forms of infrared camera are available. Most of these
systems have the function of providing a television-like visual display in
which the colors represent various temperature levels of the surface of
some two-dimensional object (target) on which the infrared camera is
focused. While these colors are accurately related to infrared energy
levels emitted from the target, it should be noted that all infrared temperature
sensing requires knowledge of the emittance of the target surface to
convert the detector signal to degrees of temperature. Both staring and
scanning systems are available, but staring systems are preferred in many
cases since they do away with complex moving parts and can often gather
more of the available radiation.

Emissivity: a material's ability to emit thermal radiation; it ranges from 0.0
to 1.0. A black body is a theoretical object with emissivity 1.
Condition monitoring, thermal mapping, hot-gas leakage detection, night
vision and targeting, and NDE are some of the applications. Thermal
imagers can detect temperature differences as small as 0.01 °C.

Monochromatic-Brightness Radiation Thermometers


(Optical Pyrometers)

The disappearing-filament optical pyrometer is the most accurate of
all the radiation thermometers; it is limited to temperatures greater than
about 700°C since it requires visual brightness matching by a human
operator. A red band-pass filter restricts the matching to wavelengths
near 0.655 μm, the most sensitive region of the human eye.
Data on different types of thermocouples
VIBRATIONS: Why Study Vibration ??
Noise & Failures due to vibrations
Structural dynamics studies for natural frequencies, mode shapes, and
damping for Vehicle and machine-tool frames, bridges, and buildings……
For monitoring “Health" of machinery.
Impending failure alert….
Of interest are: displacement, velocity, and acceleration
"Seismic" sensors or pickups (based on a spring/mass system) are widely
used in all types of shock and vibration measurements.
Vibrations of shafts supported in fluid film bearings are often measured by
non-seismic, noncontact relative displacement sensors, such as the eddy-
current probes. MEMS accelerometers use the usual mass/spring
system, but in miniaturized form and produced by micromachining
processes that result in low cost.

ACCELEROMETERS
Piezoelectric accelerometers are self-generating devices characterized by
an extended region of flat frequency response, a large linear amplitude
range, and excellent durability. The resonant (natural) frequency ωn of the
sensor can be estimated by ωn = √(k/m).
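As a quick numerical illustration of ωn = √(k/m); the stiffness and seismic-mass values are assumed.

```python
# Sketch: estimate the resonant frequency of a spring/mass accelerometer,
# omega_n = sqrt(k/m). Stiffness and seismic mass are assumed values.
import math

def natural_frequency_hz(k_n_per_m, mass_kg):
    """Undamped natural frequency in Hz for a spring/mass sensing element."""
    return math.sqrt(k_n_per_m / mass_kg) / (2.0 * math.pi)

if __name__ == "__main__":
    fn = natural_frequency_hz(k_n_per_m=2.0e6, mass_kg=0.005)
    print(f"Natural frequency ~ {fn/1000:.1f} kHz")  # sqrt(4e8) = 2e4 rad/s -> ~3.2 kHz
```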
Piezoelectric accelerometers can be broken down into two main categories
that define their mode of operation.
(a) Internally amplified accelerometers or IEPE (internal electronic
piezoelectric) contain built-in microelectronic signal conditioning.
(b) Charge mode accelerometers contain only the self-generating
piezoelectric sensing element and have a high impedance charge
output signal.

Accelerometer
The most important pickup for vibration, and general-purpose absolute
motion measurement is the accelerometer. This instrument is commercially
available in a wide variety of types and ranges to meet correspondingly
diverse application requirements. The basis for this popularity :
1. Frequency response is from zero to some high limiting value. Steady
accelerations can be measured (except in piezoelectric types).
2. Displacement and velocity can be easily obtained by electrical
integration, which is much preferred to differentiation.
3. Measurement of transient (shock) motions is more readily achieved than
with displacement or velocity pickups.
4. Destructive forces in machinery, etc., often are related more closely to
acceleration than to velocity or displacement.

SEISMIC- (ABSOLUTE-)
DISPLACEMENT PICKUPS
"Seismic" sensors or pickups (based on a spring/mass system) are
widely used in all types of shock and vibration measurements.
Important examples include structural dynamics studies that yield dynamic
models (natural frequencies, mode shapes, and damping) for vehicle and
machine-tool frames, bridges, and buildings, and also for monitoring the
"health" of machinery, alerting factory personnel to incipient machine faults
such as bearing failures.
Vibration variables of interest are displacement, velocity, and acceleration
of selected points on the structure or machine. In a particular application,
one of these variables may be more significant than the others, so seismic
pickups are available for each of the three.
High-frequency vibration involves small (perhaps immeasurable) displacement and
large acceleration, while low-frequency vibration has large displacement
but very small acceleration.
The basic principle of seismic(absolute-) displacement pickups is
simply to measure (with any convenient relative-motion transducer) the
relative displacement of a mass connected by a soft spring to the vibrating
body. ωn should be much less than the lowest vibration frequency ω for
accurate displacement measurement. Since a low ωn is desired, either a
large mass or a soft spring (or both) is necessary. To keep size (and
thereby loading on the measured system) to a minimum, soft springs are
preferred to large masses.

Vibrations of shafts supported in fluid film bearings are often measured by


non-seismic, noncontact relative displacement sensors, such as the eddy-
current probes.

SEISMIC- (ABSOLUTE-) VELOCITY PICKUPS

A voltage signal from the displacement pickup could be electrically


differentiated, but this approach suffers from the general noise-
accentuating behavior of differentiation, whether done numerically or with
an analog circuit.
So 2 alternatives are: (1)A more practical approach is to build a
displacement pickup as shown in Fig. 4.77a but replace the relative
displacement sensor with a relative velocity sensor, usually some form of
moving-coil device.(2) Another viable scheme is to build an accelerometer
(see Sec. 4.8) and add an electrical integrating circuit to the amplifier that is
already a part of many piezoelectric accelerometers.
Velocity pickups are favored by some vibration analysts, particularly for
"machinery health" monitoring applications in the low-frequency ranges.
Fig.: Moving-coil velocity pickup
SEISMIC- (ABSOLUTE-) ACCELERATION PICKUPS (ACCELEROMETERS)
The most important pickup for vibration, shock, and general-purpose
absolute motion measurement is the accelerometer.

1.Frequency response is from zero to some high limiting value. Steady


accelerations can be measured (except in piezoelectric types).
2. Displacement and velocity can be easily obtained by electrical
integration(much preferred to differentiation).
3. Measurement of transient (shock) motions is more readily achieved than
with displacement or velocity pickups.
4. Destructive forces in machinery, etc., often are related more closely to
acceleration than to velocity or displacement.

Since spring deflection x0 is proportional to force, which in turn is
proportional to acceleration, x0 is a measure of the acceleration ẍi.
An accelerometer cannot distinguish between a force due to acceleration
and a force due to gravity.
The majority of accelerometers may be classified as either deflection
type or null-balance type. Those used for vibration and shock measurement
are usually the deflection type whereas those used for measurement of
gross motions of vehicles (submarines, aircraft, spacecraft, etc.) may be
either type, with the null-balance being used when extreme accuracy is
needed.
MEMS accelerometers use the usual mass/spring system, but in
miniaturized form and produced by micromachining processes that
result in low cost.

Accelerometers for Inertial Navigation

Inertial navigation is accomplished in principle by measuring the absolute


acceleration (usually in terms of three mutually perpendicular components
of the: total acceleration vector) of the vehicle and then integrating these
acceleration signals twice to obtain displacement from an initial known
starting location. Thus instantaneous position is always known without the
need for any communication with the world outside the vehicle. Because the
accelerometers are also sensitive to gravitational force, this force must be
computed and corrections applied continuously.
To keep the accelerometers' sensitive axes always oriented parallel
to their original starting positions, elaborate stable platforms using
gyroscopic references and feedback systems are necessary. But today an
alternative design concept ("strap-down system") does not require the
stable platform but rather mounts the gyros and accelerometers directly to
the vehicle frame. This approach simplifies the hardware but requires
elaborate computation to transform the body-fixed measured data to the
required space-fixed coordinate system.

All inertial navigation systems suffer from integration drift: small errors in
the measurement of acceleration and angular velocity are integrated into
progressively larger errors in velocity, which are compounded into still
greater errors in position. Since the new position is calculated from the
previous calculated position and the measured acceleration and angular
velocity, these errors accumulate roughly proportionally to the time since
the initial position was input. Even the best accelerometers, with a standard
error of 10 micro-g, would accumulate a 50-meter error within 17
minutes. Therefore, the position must be periodically corrected by input
from some other type of navigation system.
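The 50-meter figure can be checked with the rough constant-bias estimate error ≈ ½·a_bias·t² (double integration of a constant acceleration bias, ignoring gyro and gravity-model errors):

```python
# Sketch: position error growth from a constant accelerometer bias,
# error = 0.5 * a_bias * t^2 (double integration of a constant bias).
# This ignores gyro drift and gravity-model errors.

G0 = 9.81  # m/s^2

def position_error_m(bias_g, elapsed_s):
    """Position error after double-integrating a constant acceleration bias."""
    return 0.5 * (bias_g * G0) * elapsed_s**2

if __name__ == "__main__":
    err = position_error_m(bias_g=10e-6, elapsed_s=17 * 60)   # 10 micro-g, 17 minutes
    print(f"Position error after 17 min ~ {err:.0f} m")        # ~51 m
```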

SEISMIC DISPLACEMENT PICKUP FREQUENCY RESPONSE
Fig.: A typical piezoelectric frequency response.
Table: Vibration-producing methods (required).

Accelerometer Table :

IEPE piezoelectric accelerometer (internal electronic piezoelectric)
• Advantages: wide dynamic and frequency range; durable (high shock protection); less susceptible to EMI and RF interference; high-impedance circuitry sealed in the sensor; can be made small.
• Limitations: limited temperature range (max 175 °C); low-frequency response is fixed within the sensor; the built-in amplifier is exposed to the same test environment.
• Typical applications: modal analysis, NVH, engine NVH, flight testing, ground vibration testing, HALT/HASS.

Charge piezoelectric accelerometer (contains only the self-generating piezoelectric sensing element)
• Advantages: high operating temperatures up to 700 °C; wide dynamic and frequency range; charge-converter electronics is at ambient conditions, away from the test environment.
• Limitations: more care required to install and maintain; capacitive loading from long cable runs raises the noise floor; special low-noise cable required.
• Typical applications: jet engines, high-temperature steam pipes, turbomachinery, steam turbines, exhaust.

Piezoresistive accelerometer
• Advantages: DC response; small size.
• Limitations: lower shock protection; smaller dynamic range.
• Typical applications: crash testing, flight testing, shock testing.

Capacitive accelerometer
• Advantages: DC response; better resolution than the piezoresistive type.
• Limitations: limited frequency range; average resolution.
• Typical applications: ride quality, ride simulation, bridge testing, flutter, airbag sensors, alarms.

Servo accelerometer
• Advantages: high sensitivity; highest accuracy for low-level, low-frequency measurements.
• Limitations: limited frequency range; high cost; fragile, low shock protection.
• Typical applications: guidance; applications requiring little or no DC baseline drift.

GYROSCOPE
Notion of angular displacement and velocity.
Order of response, sensitivity factor, frequency and damping ratio, high
performance gyroscopes

Gyroscope:
Almost all commercial and military jet aircraft now use inertial navigation
systems based on ring-laser gyros. Optical (ring-laser) "gyros" do not use
gyroscopic principles at all. Rather,they measure phase shifts between two
laser beams directed around a loop by mirrors fastened to the object whose
rotation is to be measured. One beam travels with the rotation while the
other travels against it, causing a phase shift proportional to rotation when
the beams are recombined and compared.

The method of opposing inputs consists of intentionally introducing into the


instrument interfering and /or modifying inputs that tend to cancel the bad
effects of the unavoidable spurious inputs.
An example of the method of opposing inputs is the rate gyroscope in fig.
below. Such devices are widely used in aerospace vehicles for the
generation of stabilization signals in the control system. The action of the
device is that a vehicle rotation at angular velocity θ̇i causes a proportional
displacement θo of the gimbal relative to the case. This rotation θo is
measured by some motion pickup (not shown in the figure).
Thus a signal proportional to vehicle angular velocity is available,
and this is useful in stabilizing the vehicle. When the vehicle undergoes
rapid motion changes, however, the angle θo tends to oscillate, giving an
incorrect angular velocity signal.
To control these oscillations, the gimbal rotation θo is damped by the
shearing action of a viscous silicone fluid in a narrow damping gap.
FIG:METHOD OF OPPOSING INPUTS

GYROSCOPES

Gyroscopes are critical rotation sensing elements used in navigation


systems (inertial navigation systems (INS), attitude and heading reference
systems (AHRS) or inertial measurement units (IMUs) for manned and
unmanned aircraft, spacecraft, marine vehicles and surface vehicles.
In addition, gyroscopes are important elements in platform stabilization
systems (i.e. antenna stabilization) and where accurate pointing and/or
targeting is required.

Spinning mass mechanical gyros


Fiber-optic gyros (FOGS)
Micro-electro-mechanical (MEMS) gyros
Ring laser gyros (RLG)

Gyroscopes help navigate vehicles ranging from airplanes and ships to


drones and self-driving cars. They stabilize and orient cameras, scientific
instruments and robots. They isolate sensitive equipment from vibration
and guide drill rigs for oil and gas producers. They’re used in virtual-reality
headsets, smartphones and computer-pointing devices. They keep
satellites pointed in the right direction, enabling the voice and data
communications that connect our world and fuel the global economy. In
fact, compared to a mechanical gyro that can drift 0.1-0.01 degrees per
hour, an RLG drifts about 0.0035 degrees.

Ring Laser Gyros


Many tens of thousands of RLGs are operating in inertial navigation
systems and have established high accuracy, with better than
0.01°/hour bias uncertainty, and mean time between failures in excess
of 60,000 hours. The advantage of using an RLG is that there are no
moving parts (apart from the dither motor assembly),compared to the
conventional spinning gyroscope.
No friction, which in turn eliminates a significant source of drift. Additionally,
the entire unit is compact, lightweight and highly durable, making it suitable
for use in mobile systems such as aircraft, missiles, and satellites. Unlike a
mechanical gyroscope, the device does not resist changes to its
orientation. Contemporary applications of the Ring Laser Gyroscope (RLG)
include an embedded GPS capability to further enhance accuracy of RLG
Inertial Navigation Systems (INS)s on military aircraft, commercial airliners,
ships and spacecraft. These hybrid INS/GPS units have replaced their
mechanical counterparts in most applications. Where ultra-accuracy is
needed however, spin gyro based INSs are still in use today.

A certain rotation rate creates a small difference between the time light
takes to traverse the ring in the two directions according to the Sagnac
effect. This introduces a small separation between the frequencies of the
counter-propagating beams, a motion of the standing wave pattern within
the ring, and thus a beat pattern when those two beams are interfered
outside the ring. Therefore, the net shift of that interference pattern follows
the rotation of the unit in the plane of the ring. RLGs, while more accurate
than mechanical gyroscopes, suffer from an effect known as "lock-in" at
very slow rotation rates. When the ring laser is hardly rotating, the
frequencies of the counter-propagating laser modes become almost
identical. In this case, crosstalk between the counter-propagating beams
can allow for injection locking so that the standing wave "gets stuck" in a
preferred phase, thus locking the frequency of each beam to that of the
other, rather than responding to gradual rotation.

The input laser beam is split into two that travel the same path but in
opposite directions: one clockwise and the other counter-clockwise.
The beams are recombined & sent to the output detector. In the absence or
rotation, the path lengths will be same and the output will be the total
constructive interference of the two beams. If the apparatus rotates, there
will be a difference in the path lengths travelled by the two beams, resulting
in a net phase difference and destructive interference. The net signal will
vary in amplitude depending on the phase shift, therefore the resulting
amplitude is a measurement of the phase shift, and consequently, the
rotation rate.

The classical gyroscope always contains a spinning wheel, but the term is
today also used for devices that measure absolute angular motion using
other principles that have nothing to do with spinning wheels (ring laser,
vibrating tuning fork, fiber optic, fluidic vortex, hot-wire jet-deflection)
The simplest gyro configuration is the free gyro ( shown below) which is
used to measure the absolute angular displacement of the vehicle to which
the instrument frame is attached. A single free gyro can measure rotation
about two perpendicular axes, such as the angles θ and φ. This can be
accomplished because the axis of the spinning gyro wheel remains fixed in
space (if the gimbal bearings are frictionless) and thus provides a reference
for the relative-motion transducers. If the angles to be measured do not
exceed about 10°, the readings of the relative-displacement transducers
give directly the absolute rotations with good accuracy. For larger rotations
of both axes, however, there is an interaction effect between the two
angular motions, and the transducer readings do not accurately represent
the absolute motions of the vehicle. The free gyro is also limited to
relatively short-time applications (less than about 5 min) since gimbal-
bearing friction causes gradual drift (loss of initial reference) of the gyro
spin axis.

Free Gyro( Two axis position gyro)


A constant friction torque Tf causes a drift (precession) of angular velocity
ωd given by ωd = Tf/Hs,
where Hs is the angular momentum of the spinning wheel. It is clear
that a high angular momentum is desirable in reducing drift. A typical
application would be found in the guidance system of a short-range missile,
where the short lifetime (less than a minute) can tolerate the drift.
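A small numerical sketch of ωd = Tf/Hs, with assumed friction torque and wheel angular momentum, converted to degrees per hour:

```python
# Sketch: free-gyro drift rate omega_d = Tf / Hs. Friction torque and wheel
# angular momentum values are assumed; the result is converted to deg/hour.
import math

def drift_deg_per_hour(friction_torque_nm, angular_momentum_kg_m2_s):
    """Precession (drift) rate caused by a constant gimbal-bearing friction torque."""
    omega_d = friction_torque_nm / angular_momentum_kg_m2_s   # rad/s
    return math.degrees(omega_d) * 3600.0

if __name__ == "__main__":
    rate = drift_deg_per_hour(friction_torque_nm=1e-6, angular_momentum_kg_m2_s=0.1)
    print(f"Drift ~ {rate:.2f} deg/hour")   # a larger Hs gives a smaller drift
```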

Rather than using free gyros to measure two angles in one gyro (thus
requiring two gyros to define completely the required three axes of motion),
high performance systems utilize the so-called single-axis or constrained
gyros. Here a single gyro measures a single angle (or angular rate);
therefore three gyros are required to define the three axes. This approach
avoids the coupling or interaction problems of free gyros, and the
constrained (rate-integrating) gyros can be constructed with exceedingly
small drift. We consider here two common types of constrained gyros :
the rate gyro and the rate-integrating gyro. The rate gyro measures
absolute angular velocity and is widely used to generate stabilizing signals
in vehicle control systems.
The rate-integrating gyro measures absolute angular displacement
and thus is utilized as a fixed reference in navigation and attitude control
systems. The configuration of a rate gyro is shown in Fig. 4.92; the rate
integrating gyro is functionally identical except that it has no spring
restraint.

Fig. 4.92-Single Axis restrained gyro

The rate-integrating gyro is the basis of highly accurate inertial navigation


systems, where it is used as a reference to maintain so-called stable
platforms in a fixed attitude within a vehicle while the vehicle moves
arbitrarily. This is done by using the motion signals from the gyros to drive
servomechanisms which maintain the platform in a fixed angular
orientation. The system accelerometers are mounted on this platform, and
their double-integrated output is an accurate measure of vehicle motion
along the three orthogonal axes since the platform always moves parallel to
its initial orientation.

General analysis of gyroscopes is exceedingly complex, but useful results for
many purposes may be obtained relatively simply by considering small
angles only. This assumption is satisfied in many practical systems. Figure
4.93 shows a gyro whose gimbals (and thus the angular-momentum vector of the
spinning wheel) have been displaced through small angles θ and φ. We
apply Newton's law to this configuration.

Angular momentum is the rotational equivalent of linear momentum: the
total angular momentum of a closed system remains constant. Torque
can be defined as the rate of change of angular momentum, analogous
to force. The conservation of angular momentum helps explain many
observed phenomena, for example the precession (a change in
the orientation of the rotational axis of a rotating body) of gyroscopes.

Fig. 4.93- Gyro Analysis( below)

A high sensitivity is achieved by a large angular momentum Hs and a soft
spring Ks (spring constant), although low Ks gives a low ωn. In some
high-performance rate gyros, the mechanical spring is replaced by an
"electrical spring" arrangement similar to that used in the servo-
accelerometer of Fig. 4.85.

Instrumentation:
Electronic flight displays (EFD) have led to multiple liquid-crystal displays
replacing conventional instruments. Solid-state instruments have much lower
failure rates than conventional analog instruments. The EFD comprises the
Primary Flight Display (PFD) and the Multi-Function Display (MFD). Electronic
flight displays have replaced free-spinning gyros with solid-state laser systems
that are capable of flight at any attitude without tumbling.

The instrumentation falls into three different categories: Performance,


Control, and Navigation.

Performance instruments: Altimeter, airspeed indicator, vertical speed
indicator (VSI), heading indicator, turn-and-slip indicator, and the
slip-skid indicator.

Altitude Measurement
The measurement of aircraft altitude has for many years been
accomplished using a static pressure measurement since the altitude
above sea level is related in a known way (see a section on "standard
atmosphere" in a fluid mechanics book) to static pressure. The relation is
nonlinear but is well known. In the upcoming discussion the altitude
changes studied are small enough that we can neglect any nonlinear
effects. The static pressure, and thus the "barometric altitude," is measured
with any suitable pressure sensor, which receives an input pressure from a
length of tubing coming from the static pressure tap on the aircraft's pitot
tube. (All aircraft and missiles have a pitot tube for measuring altitude and
airspeed.) The "good" feature of this barometric altimeter is that there is no
signficant zero drift with time. The "bad" features are that the pneumatic
response is rather slow arid there is significant noise due to air turbulence.
If our flight vehicle requires a fast, noise-free altitude signal, the barometric
approach may not be adequate. An alternative scheme for measuring
altitude uses an accelerometer oriented to measure vertical acceleration;
we then integrate this signal twice to get an altitude signal. This approach
provides a fast response and is relatively noise-free. However, the slightest
bias error in the accelerometer and/or integrators will cause an ever-
increasing zero drift, which is totally unacceptable. Also, for level flight at
any altitude, if the altitude signal accumulated at the output of the second
integrator should be lost for any reason, there is no way to recover a value
for the altitude. We see that the two schemes for measuring altitude are
complementary; where one is bad the other is good. Thus, the baro-inertial
altimeter is used in many practical flight vehicles.
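A sketch of the barometric conversion from static pressure to pressure altitude, using the standard-atmosphere (ISA) troposphere relation rather than anything derived in these notes; the measured pressure is an assumed value.

```python
# Sketch: pressure altitude from measured static pressure using the ISA
# troposphere model  h = (T0/L) * (1 - (p/p0)**exponent), valid below ~11 km.
# The measured static pressure below is an assumed value.

P0 = 101_325.0     # Pa, standard sea-level pressure
T0 = 288.15        # K, standard sea-level temperature
LAPSE = 0.0065     # K/m, standard tropospheric lapse rate
EXP = 0.190263     # R*L/(g*M) for the standard atmosphere

def pressure_altitude_m(p_static_pa):
    """Pressure altitude (m) from static pressure, ISA troposphere."""
    return (T0 / LAPSE) * (1.0 - (p_static_pa / P0) ** EXP)

if __name__ == "__main__":
    p_meas = 79_500.0   # Pa, assumed static-port reading
    print(f"Pressure altitude ~ {pressure_altitude_m(p_meas):.0f} m")   # ~2000 m
```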
Altimeter: a sensitive barometer that measures the height of an aircraft above
a given pressure level. It is the only instrument that directly measures altitude
(to be checked). A rule of thumb: going from hot to cold, or from high to low,
look out below.
When the actual pressure is lower than what is set in the altimeter window,
the actual altitude of the aircraft is lower than what is indicated on the
altimeter.
Types of Altitudes

1. Indicated altitude: Uncorrected altimeter reading.
2. True altitude: Vertical distance above mean sea level (MSL).
3. Absolute altitude: Vertical distance above the ground.
4. Pressure altitude: Height above the standard datum plane (29.92 "Hg, standard temperature 15 °C); used to compute density altitude, true altitude, true airspeed (TAS), and other performance data.
5. Density altitude: Pressure altitude corrected for variations from standard temperature; directly related to the aircraft's performance, since thinner air reduces thrust (power) and lift. Density altitude is determined by first finding pressure altitude and then correcting this altitude for nonstandard temperature (ρ ∝ P at constant temperature, ρ ∝ 1/T at constant pressure); a rule-of-thumb sketch follows this list.
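
A commonly quoted rule of thumb, roughly 120 ft of density altitude per °C of deviation from ISA temperature, can be sketched as follows; treat it as an approximation for illustration, not flight-planning data.

def density_altitude_ft(pressure_alt_ft, oat_c):
    # Rule-of-thumb density altitude (illustrative only).
    isa_temp_c = 15.0 - 2.0 * (pressure_alt_ft / 1000.0)   # ISA lapse ~2 C per 1000 ft
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c)  # ~120 ft per deg C deviation

print(density_altitude_ft(5000, 30))   # hot day at 5000 ft pressure altitude -> ~8000 ft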

Altitude Measurement
Measurement of aircraft altitude uses a static (absolute) pressure measurement: the altitude above sea level is related in a known way to static pressure. The relation is nonlinear but well known, and when the altitude changes are small enough the nonlinearity can be neglected. All aircraft and missiles have a pitot tube for measuring altitude and airspeed. The "good" feature of this barometric altimeter is that there is no significant zero drift with time; the "bad" features are that the pneumatic response is rather slow and there is significant noise due to air turbulence. If the flight vehicle requires a fast, noise-free altitude signal, the barometric approach is not adequate. An alternative scheme for measuring altitude uses an accelerometer oriented to measure vertical acceleration and integrates this signal twice to get an altitude signal, but the slightest bias error in the accelerometer and/or integrators will cause an ever-increasing zero drift. The baro-inertial altimeter is therefore used in many practical flight vehicles.
The pitot-static tube is found on aircraft and missiles. The stagnation- and static-pressure readings of a tube fastened to a vehicle are used to determine the airspeed and Mach number, while the static reading alone is utilized to measure altitude. If altitude is to be measured with an error of 100 ft, the static pressure must be accurate to 0.5 percent.
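
For the airspeed part of the measurement, a minimal incompressible (low-speed) sketch based on Bernoulli's relation is shown below; the pressure values and sea-level density are assumed for illustration.

import math

def indicated_airspeed_ms(p_stagnation_pa, p_static_pa, rho=1.225):
    # Low-speed airspeed from pitot-static differential (dynamic) pressure.
    q = p_stagnation_pa - p_static_pa
    return math.sqrt(2.0 * q / rho)

print(indicated_airspeed_ms(102000.0, 101325.0))   # ~33 m/s for 675 Pa of dynamic pressure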
The static pressure is usually the more difficult to measure accurately. The difference between the true (Pstat) and measured (Pstat,m) values of static pressure may be due to the following:
• Misalignment of the tube axis and the velocity vector
• Nonzero tube diameter; a similar (and possibly more severe) effect occurs if the tube is inserted in a duct of cross-sectional area not much larger than that of the tube
• Influence of the stagnation point on the tube-support leading edge

Response of Pitot Tube:
The long, small-diameter tubing used with pitot-static tubes leads to a slow dynamic response. Some aircraft use pressure data in their control systems and thus require a fast response. A small time lag (fast response) is also needed in wind-tunnel testing. Tiny pressure transducers can have a fast response when located close to the pressure-sensing hole.
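
The slow pneumatic response can be pictured as a first-order lag; the time constant below is an assumed value (in practice it depends on tube length, diameter and transducer volume), so this is only a qualitative sketch.

import math

tau = 0.3          # s, assumed pneumatic time constant of the tubing/transducer volume
p_true = 1000.0    # Pa, step change in the sensed pressure (assumed)

for t in (0.1, 0.3, 0.9, 1.5):
    p_indicated = p_true * (1.0 - math.exp(-t / tau))   # first-order lag response
    print(f"t={t:3.1f} s  indicated={p_indicated:6.1f} Pa")   # rises slowly toward 1000 Pa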
Gyroscopic Flight Instruments

The most common instruments containing gyroscopes are the turn coordinator, the heading indicator, and the attitude indicator.

Two important design characteristics of a gyro are:
1. Great weight for its size, or high density.
2. Rotation at high speed with low-friction bearings.

The two fundamental properties of gyroscopic action are rigidity in space and precession.

Rigidity in space refers to the principle that a gyroscope remains in a fixed position in the plane in which it is spinning. An increase in wheel speed increases stability in the plane of rotation.

Precession is the tilting or turning of a gyro in response to a deflective force. The reaction to this force does not occur at the point at which it was applied; rather, it occurs at a point that is 90° later in the direction of rotation. Precession also appears as a slow drift of the spin axis when a constant friction torque Tf acts on the gimbal bearings; in general, precession is a change in the orientation of the rotational axis of a rotating body.

This principle allows the gyro to determine a rate of turn by sensing the
amount of pressure created by a change in direction. The rate at which the
gyro precesses is inversely proportional to the speed of the rotor and
proportional to the deflective force.
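
This inverse relationship follows from precession rate = applied torque / spin angular momentum; a tiny numerical sketch with assumed rotor values (not taken from the notes):

I_spin = 2.0e-4    # rotor moment of inertia about the spin axis, kg*m^2 (assumed)
w_spin = 2000.0    # rotor speed, rad/s (assumed)
torque = 0.01      # applied (deflective) torque, N*m (assumed)

H = I_spin * w_spin           # spin angular momentum, kg*m^2/s
precession_rate = torque / H  # rad/s -- a faster rotor precesses more slowly
print(precession_rate)        # 0.025 rad/s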
There is a need to turn the handlebars of a bicycle at low speeds because of the instability of the slowly turning wheels (acting as gyros) and also to increase the rate of turn; at normal speeds, leaning is sufficient.
Precession can cause a freely spinning gyro to become displaced from its
intended plane of rotation through bearing friction, etc. Certain instruments
may require corrective realignment during flight, such as the heading
indicator.
As can be seen in the figure below, when a force is applied, the resulting force takes effect 90° ahead of and in the direction of rotation.

Turn Indicators:
Aircraft use two types of turn indicators: turn-and-slip indicators and turn
coordinators.
Because of the way the gyro is mounted, the turn-and-slip indicator shows
only the rate of turn in degrees per second.
Both instruments indicate turn direction and quality (coordination), and also
serve as a backup source of bank information in the event an attitude
indicator fails.
Coordination is achieved by referring to the inclinometer, which consists of
a liquid-filled curved tube with a ball inside.

Turn-and-Slip Indicator: Fig. below


The gyro in the turn-and-slip indicator rotates in the vertical plane
corresponding to the aircraft’s longitudinal axis. A single gimbal limits the
planes in which the gyro can tilt, and a spring works to maintain a center
position. Because of precession, a yawing force causes the gyro to tilt left
or right, as viewed from the pilot seat.

Turn-and-Slip Indicator

Turn Coordinator

The gimbal in the turn coordinator is canted; therefore, its gyro can sense
both rate of roll and rate of turn. Turn coordinators are more prevalent in
training aircraft. A standard-rate turn is defined as a turn rate of 3° per
second. The turn coordinator indicates only the rate and direction of turn; it
does not display a specific angle of bank.
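
A standard-rate turn can be quantified with two well-known relations: a full 360° turn takes 120 s, and the bank angle required for a coordinated turn satisfies tan φ = V·ω/g. A short sketch (the 60 m/s true airspeed is an assumed example):

import math

def standard_rate_bank_deg(tas_ms):
    omega = math.radians(3.0)                  # standard rate: 3 deg/s in rad/s
    return math.degrees(math.atan(tas_ms * omega / 9.81))

print(360.0 / 3.0)                  # 120 s for a complete 360-degree turn
print(standard_rate_bank_deg(60))   # ~17.8 deg of bank at 60 m/s TAS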
Attitude Indicator

The attitude indicator, with its miniature aircraft and horizon bar, displays a
picture of the attitude of the aircraft. The relationship of the miniature
aircraft to the horizon bar is the same as the relationship of the real aircraft
to the actual horizon. The instrument gives an instantaneous indication of
even the smallest changes in attitude.

The gyro in the attitude indicator is mounted in a horizontal plane and depends upon rigidity in space for its operation. The gyro spins in the
horizontal plane and resists deflection of the rotational path. Since the gyro
relies on rigidity in space, the aircraft actually rotates around the spinning
gyro. The horizon bar represents the true horizon. This bar is fixed to the
gyro and remains in a horizontal plane as the aircraft is pitched or banked
about its lateral or longitudinal axis, indicating the attitude of the aircraft
relative to the true horizon.
Attitude Indicator

Heading Indicator
The heading indicator is fundamentally a mechanical instrument designed
to facilitate the use of the magnetic compass. A heading indicator,
however, is not affected by the forces that make the magnetic compass
difficult to interpret.
The operation of the heading indicator depends upon the principle of rigidity
in space. The rotor turns in a vertical plane and fixed to the rotor is a
compass card. Since the rotor remains rigid in space, the points on the
card hold the same position in space relative to the vertical plane of the
gyro. The aircraft actually rotates around the rotating gyro, not the other
way around. As the instrument case and the aircraft revolve around the
vertical axis of the gyro, the card provides clear and accurate heading
information.
Power Sources: In some aircraft, all the gyros are vacuum, pressure,
or electrically operated. In other aircraft, vacuum or pressure systems
provide the power for the heading and attitude indicators, while the
electrical system provides the power for the turn coordinator. Most
aircraft have at least two sources of power to ensure at least one source
of bank information is available if one power source fails. The vacuum or
pressure system spins the gyro by drawing a stream of air against the
rotor vanes to spin the rotor at high speed, much like the operation of a
waterwheel or turbine. The amount of vacuum or pressure required for
instrument operation varies, but is usually between 4.5 "Hg and 5.5 "Hg.

A typical Vacuum System


BASIC ELECTRONICS PORTION:

Signal conditioning: the signal must be conditioned, i.e., cleaned up, amplified, and put into a compatible format. Signal conditioning can include amplification, filtering, converting, range matching, isolation and any other processes required to make the sensor output suitable for processing after conditioning.

Perhaps the most widely used signal-analysis equipment is that which measures the frequency spectrum of a fluctuating physical quantity. The
most common applications are in the field of sound and vibration, where
the frequency spectrum of a sound pressure, stress, acceleration, etc., may
be very useful in diagnosing faults in an operating machine or system.
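
A minimal sketch of such a frequency-spectrum measurement, using an FFT on an assumed 50 Hz vibration signal buried in noise (all signal parameters are invented for illustration):

import numpy as np

fs = 1000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.randn(t.size)   # 50 Hz tone + noise

spectrum = np.abs(np.fft.rfft(x)) / t.size    # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
print(freqs[np.argmax(spectrum[1:]) + 1])     # ~50 Hz, the dominant vibration component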
Turbine Flow meter: If an analog voltage signal is desired, the pulses from
magnetic pick-up can be fed to a frequency-to-voltage converter. The
referenced instruments also use a frequency-measuring technique that
overcomes the time lags common to conventional frequency-to-voltage
converters. By measuring the time interval between passage of two blades,
the rotor speed can be updated as rapidly as every 7.5 ms.
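
A small sketch of that timing idea, treating the quoted 7.5 ms as the interval between two blade pulses on an assumed 8-blade rotor (the blade count is an assumption, not from the text):

n_blades = 8            # blades on the turbine rotor (assumed)
dt_blade = 7.5e-3       # s, measured interval between two successive blade pulses

rev_per_s = 1.0 / (n_blades * dt_blade)   # one revolution = n_blades blade intervals
print(rev_per_s * 60.0)                   # rotor speed in RPM (1000 RPM here)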

Signal to Noise Ratio: the ratio of signal power to noise power, usually expressed in decibels, SNR(dB) = 10 log10(Psignal/Pnoise); a higher ratio means the measurement is less corrupted by noise.

Multiplexing:
In certain applications, many channels of pressure data must be repetitively transduced to voltage signals. In wind-tunnel and fluid-machinery testing, hundreds of test points distributed over the surface of the model or machine may be instrumented for static or stagnation pressure sensing. When the needed data rates are sufficiently slow, a cost-effective solution time-shares a single transducer/amplifier by multiplexing the pressure lines using a pneumatic scanning device. The response time needs to be sufficiently fast to allow high scan rates, up to 10 or 20 channels per second. Unlimited numbers of channels can be accommodated by using as many Scanivalves as necessary; however, each requires its own pressure transducer.

When required data rates are very high, systems using a separate
analog transducer for each channel with electronic multiplexing into a
single analog/digital converter, have been available for some time, but at
considerable expense.

Electrical signals produced by most transducers are at a low voltage and/or power level, so it is often necessary to amplify them before they are suitable for transmission, further analog or digital processing, indication, or recording.

CCD cameras are available for extreme speeds, down to 10-ns intervals (100 million frames per second); image-conversion cameras capture information in the picosecond range. The highest-speed systems use an image splitter and an array of CCD cameras, suitably multiplexed, as in Fig. 4.55.
Sensor Fusion
GPS can be used by itself in many applications, but is also combined with
other systems to realize the best features and negate the bad features of
each using the concept of sensor fusion (complementary filtering). An
example is the combination of GPS with an inertial measurement unit using
3 accelerometers and 3 rate gyros. The inertial system has fast response,
but becomes inaccurate with time, whereas the GPS maintains long-term
accuracy but is slow. Sensor fusion techniques are also necessary in adapting MEMS instruments for critical applications. Sometimes two sensors complement each other, giving rise to the name complementary filtering.
Other names for the same concept are aiding and sensor fusion. A more
advanced version of a similar idea is called Kalman filtering.

When measuring a particular variable, a single type of sensor for that variable may not be able to meet all the required performance
specifications. If several alternative sensors for making the particular
measurement are available, we may sometimes combine several sensors
into a measurement system that utilizes the best qualities of each individual
device. That is, if one sensor type meets all but a few of the requirements,
perhaps one of the alternative types will "fill in the gap" in performance. The
second sensor makes up for the specifications that the first sensor lacks,
and vice versa. Thus, the two sensors complement each other, giving rise
to the name complementary filtering. One example comes from the
application area of aerospace vehicle systems,in particular, the sensing of
vehicle altitude above sea level. Merhav supplies the background for our
specific application and also extends the concept to more complicated
situations, including the Kalman filtering version.

The measurement of aircraft altitude has been accomplished using a static
pressure measurement since the altitude above sea level is related in a
known way to static pressure. The relation is nonlinear but is well known.
When the altitude changes studied are small enough, then we can neglect
any nonlinear effects. The static pressure, and thus the "barometric
altitude," is measured with any suitable pressure sensor, which receives an
input pressure from a length of tubing coming from the static pressure tap
on the aircraft's pitot tube. (All aircraft and
missiles have a pitot tube for measuring altitude and airspeed.) The "good"
feature of this barometric altimeter is that there is no significant zero drift
with time. The "bad" features are that the pneumatic response is rather
slow and there is significant noise due to air turbulence.

If we require a fast, noise-free altitude signal, the barometric approach may not be adequate. An alternative scheme for measuring altitude uses an accelerometer oriented to measure vertical acceleration. We then integrate
this signal twice to get an altitude signal. This approach provides a fast
response and is relatively noise-free. However, the slightest bias error in
the accelerometer and/or integrators will cause an ever-increasing zero
drift, which is totally unacceptable. Also, for level flight at any altitude, if the
altitude signal accumulated at the output of the second integrator is lost for
any reason, there is no way to recover a value for the altitude. We see that
the two schemes for measuring altitude are complementary; where one is
bad the other is good. Thus, the baro-inertial altimeter is used in many practical flight vehicles.
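
A minimal complementary-filter sketch of this baro-inertial idea is given below; the noise levels, sample time and blend time constant are assumed for illustration and are not taken from the notes. The twice-integrated accelerometer path supplies the fast content, while the slow but drift-free barometric altitude keeps the estimate anchored.

import numpy as np

dt, tau = 0.01, 2.0               # sample time and blend time constant (assumed)
alpha = tau / (tau + dt)          # weighting of the inertial (fast) path

h_est, v_est = 0.0, 0.0
for _ in range(2000):             # level flight at an assumed true altitude of 1000 m
    a_vert = np.random.randn() * 0.02            # noisy vertical accelerometer, zero-mean
    h_baro = 1000.0 + np.random.randn() * 5.0    # noisy, slow barometric altitude
    v_est += a_vert * dt                         # integrate acceleration -> velocity
    h_pred = h_est + v_est * dt                  # integrate again -> inertial prediction
    h_est = alpha * h_pred + (1 - alpha) * h_baro   # complementary blend
print(round(h_est, 1))                           # settles near 1000 m, with little baro noise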
Operational Amplifiers: The operational amplifier (op-amp) is the most widely utilized analog electronic subassembly; it is the basis of instrumentation amplifiers, filters, and myriad analog and digital data-processing equipment. Op-amps are one of the basic building blocks of analogue electronic circuits. Operational amplifiers are linear devices that have all the properties required for nearly ideal DC amplification and are therefore used extensively in signal conditioning and filtering, or to perform mathematical operations such as addition, subtraction, integration and differentiation.

An operational amplifier is an integrated circuit that can amplify weak electric signals. It has two input pins and one output pin. Its basic role is to amplify
and output the voltage difference between the two input pins. An operational
amplifier is not used alone but is designed to be connected to other circuits
to perform a great variety of operations.
By operating as a filter of input signals, the operational amplifier circuit
is able to extract the signal with the target frequency. For example, when an
operational amplifier circuit is used for voice recognition or in a voice
recorder, it can extract frequencies close to the targeted sound while shutting
out all other frequencies as noise. An operational amplifier circuit can be
tweaked to perform a broad range of functions such as arithmetical
operations or signal synthesis.

An Operational Amplifier, or op-amp for short, is fundamentally a voltage-amplifying device designed to be used with external feedback components such as resistors and capacitors between its output and input terminals. These feedback components determine the resulting function or "operation" of the amplifier, and owing to the different feedback configurations (resistive, capacitive or both), the amplifier can perform a variety of different operations, giving rise to its name of "Operational Amplifier". Feedback is a particularly valuable concept in op-amp applications.
An Operational Amplifier is a three-terminal device with two high-impedance inputs and one output. One of the inputs is called the Inverting Input, marked with a negative or "minus" sign (–). The other input is called the Non-inverting Input, marked with a positive or "plus" sign (+). The output voltage is equal to the difference between the non-inverting input and the inverting input multiplied by some extremely large value, the open-loop gain (on the order of 10^5).
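
A minimal sketch of this behaviour and of the closed-loop gains set by external feedback resistors (standard textbook formulas; the resistor values are arbitrary examples):

A_OL = 1e5                               # open-loop gain, as quoted above

def open_loop(v_plus, v_minus):
    return A_OL * (v_plus - v_minus)     # in practice the output saturates at the supply rails

def inverting_gain(Rf, Rin):
    return -Rf / Rin                     # inverting amplifier closed-loop gain

def noninverting_gain(Rf, Rg):
    return 1.0 + Rf / Rg                 # non-inverting amplifier closed-loop gain

print(inverting_gain(100e3, 10e3))       # -10
print(noninverting_gain(100e3, 10e3))    # 11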

By connecting resistors or capacitors, you can configure a circuit capable of signal amplification, filtering, or arithmetic circuit operations. Use of op-amps on their own (open-loop) as simple amplifiers is uncommon.

An electronic amplifier (an example of an active transducer) accepts a small voltage signal as input and produces an output signal that is also a voltage but is some constant times the input. Here the input controls the output, but does not actually supply the output power.

Fig – Electronic amplifier

Carrier amplifier: to measure and record easily the very small voltages coming from transducers (such as strain gages) requires a very-high-gain amplifier. Because of drift problems, a high-gain amplifier is easier to build as an ac rather than a dc unit. An ac amplifier, however, does not amplify constant or slowly varying voltages and so would appear to be unsuitable for measuring static strains. This problem is overcome by exciting the strain-gage bridge with an ac voltage (say 5 V at 3,000 Hz) rather than dc.

The output of an electronic temperature sensor, which is probably in the millivolt range, is likely too low for an analog-to-digital converter (ADC) to process directly. In this case it is necessary to bring the voltage level up by amplification to that required by the ADC. Attenuation, the opposite of amplification, is necessary when voltages to be digitized are beyond the ADC range. This form of signal conditioning decreases the input signal amplitude so that the conditioned signal is within ADC range. Attenuation is typically necessary when measuring voltages of more than 10 V.
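
A quick sketch of the gain or attenuation needed to match a sensor span to an ADC input range (the 50 mV sensor span and 10 V ADC range are assumed example numbers):

sensor_full_scale_v = 0.050      # e.g. a 50 mV full-scale sensor output (assumed)
adc_full_scale_v    = 10.0       # ADC input range (assumed)

gain = adc_full_scale_v / sensor_full_scale_v
print(gain)                      # 200x amplification needed

# Attenuation case: a 0-50 V signal into the same 10 V ADC needs a 1/5 divider.
print(10.0 / 50.0)               # attenuation factor 0.2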

Basic Filter Types


Electronic filters are important for separating signals from noise in a
measurement. Low pass – A low-pass filter uses a resistor and a capacitor
in a voltage divider configuration. In this case, the “resistance” of the
capacitor decreases at high frequency, so the output voltage decreases as
the input frequency increases. So, this circuit effectively filters out the high
frequencies and “passes” the low frequencies. High-pass – The high-pass
filter is exactly analogous to the low-pass filter, except that the roles of the
resistor and capacitor are reversed.

So a low-pass filter is a filter that passes low frequencies (i.e., frequencies around ω = 0) and attenuates or rejects higher frequencies. A high-pass filter is a filter that passes high frequencies and attenuates or rejects low ones, and a band-pass filter is a filter that passes a band of frequencies and attenuates frequencies both higher and lower than those in the band that is passed. When our desired signal is not a simple constant, but rather has its own frequency range (say f1 to f2 Hz), then a band-pass filter can be used to advantage.

Voltage Divider: if we exchange resistor R2 above for a capacitor, the voltage drop across the two components changes as the frequency changes, because the reactance of the capacitor affects its impedance while the impedance of resistor R1 does not change with frequency. This results in a frequency-dependent RC voltage-divider circuit. With this idea in mind, passive Low Pass Filters and High Pass Filters can be constructed by replacing one of the voltage-divider resistors with a suitable capacitor, as shown.

Low Pass filter

High Pass filter


As the capacitor charges or discharges, a current flows through it which is restricted by the internal impedance of the capacitor. This internal impedance is commonly known as Capacitive Reactance and is given the symbol XC, in ohms. Capacitive reactance varies with the applied frequency, unlike resistance. At zero frequency (steady-state DC) our 220 nF capacitor has infinite reactance, looking more like an "open circuit" between the plates and blocking any flow of current through it.
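
Using the 220 nF capacitor mentioned above together with an assumed 1 kΩ resistor, the reactance and the resulting RC low-pass cutoff can be sketched as follows:

import math

C = 220e-9       # capacitance from the example above, F
R = 1e3          # assumed series resistor, ohms

def Xc(f_hz):
    return 1.0 / (2.0 * math.pi * f_hz * C)   # ohms; grows toward infinity as f -> 0 (DC blocked)

f_cutoff = 1.0 / (2.0 * math.pi * R * C)      # -3 dB point of the RC low-pass
print(Xc(50.0))      # ~14.5 kOhm at 50 Hz
print(f_cutoff)      # ~723 Hz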
Sensor Electronics

Most sensors do not directly produce voltages but rather act like passive
devices, such as resistors, whose values change in response to external
stimuli. In order to produce voltages suitable for input to microprocessors
and their analog-to-digital converters, the resistor must be “biased” and the
output signal needs to be “amplified.” The reference resistor is called a load
resistor.The load resistor must be much larger than the sense resistor for
this circuit to offer good linearity. As a result, the output voltage will be
much smaller than the input voltage. Therefore, some amplification will be
needed. A Wheatstone bridge circuit is a very common improvement on the simple voltage divider. It consists simply of the same voltage divider in Figure 1.1.1 (below; Rs is the sense resistor, R1 the load resistor), combined with a second divider composed of fixed resistors only.
If the amplifier’s behavior is perfectly linear, there will be no difference
between gain calculated using differences and gain calculated using
differentials (the derivative), since the average slope of a straight line is the
same as the instantaneous slope at any point along that line.
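
A numerical sketch of the divider and bridge readout described above (supply voltage, resistor values and the balance point are all assumed for illustration):

V_in = 5.0
R_load = 10e3          # fixed "load" (reference) resistor
R_sense = 1.1e3        # sensor resistance at the current stimulus

# Simple divider: output taken across the sense resistor
v_divider = V_in * R_sense / (R_load + R_sense)

# Bridge: a second divider of fixed resistors gives a reference; the output is the
# difference, so it sits near zero at the balance point and needs amplification.
R1, R2 = 10e3, 1.0e3   # fixed arm chosen to balance at R_sense = 1.0 kOhm
v_ref = V_in * R2 / (R1 + R2)
v_bridge = v_divider - v_ref
print(v_divider, v_ref, v_bridge)   # ~0.496 V, ~0.455 V, ~0.041 V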

Capacitance sensors: Noncontact sensors and measurement devices (those that monitor a target without physical contact) have several advantages: they provide a higher dynamic response to moving targets, higher measurement resolution, and the ability to measure small, fragile parts. Noncontact sensors are also virtually free of hysteresis, the error that occurs with contacting devices at the point where the target changes direction. There is no risk of damaging a fragile part through contact with the measurement probe, and parts can be measured in highly dynamic processes and environments as they are manufactured.
Capacitive sensors are noncontact devices used for precision measurement of a conductive target's position or a nonconductive material's thickness or density. Capacitance is an indicator of the gap size, i.e., the position of the target; such sensors can also monitor coating thickness and sense paint, paper, and film thicknesses. Capacitive displacement sensors are known for nanometer resolutions, frequency responses of 20 kHz and higher, and temperature stability. Capacitive sensors are sensitive to the material in the gap between the sensor and the target, and they work well in a vacuum. Capacitive sensors measure changes in the capacitance between the sensor and the target by creating an alternating electric field between the sensor and the target and monitoring changes in that field.
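
A parallel-plate sketch of how the measured capacitance indicates the gap, C = ε0·A/d; the electrode area and example capacitance are assumed, and real probes also involve fringing fields and guard electrodes.

eps0 = 8.854e-12       # permittivity of free space, F/m
A = 1e-4               # sensing-electrode area, m^2 (1 cm^2, assumed)

def gap_from_capacitance(C_farads):
    return eps0 * A / C_farads   # parallel-plate model: d = eps0*A/C

print(gap_from_capacitance(8.854e-12))   # 1e-4 m, i.e. a 100 micrometre gap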
Sensor systems, Sensor Integration & Kalman Filter
If there is only one sensor for measuring the value of any measured parameter, the accuracy and reliability of this measurement are essentially limited. Measurement noise can be partially suppressed only along with some of the spectral components of the useful signal, and optimal one-dimensional filtration of a signal mixed with noise defines the limit of measurement accuracy. Failure of a single sensor obviously leads to the complete loss of the desired measurement. Finally, the possible range of the relevant physical values is often difficult to cover with any single sensor. Therefore, only the integration of several sensors within one system (increased redundancy) makes possible a measurement of the desired quality. This makes the case for sensor fusion or sensor integration. Such redundancy may include the following:
• Some identical sensors
• Some similar sensors with different ranges and measurement accuracies
• Some sensors working on the basis of different physical principles, but measuring the
same or functionally connected parameters.

Most modern systems are equipped with numerous sensors that provide estimation of
hidden (unknown) variables based on a series of measurements. For example, a GPS
receiver provides location and velocity estimation, where location and velocity are the
hidden variables and differential time of the satellites’ signals’ arrival are the
measurements.
One of the biggest challenges of tracking and control systems is providing accurate and
precise estimation of the hidden variables in the presence of uncertainty. In GPS
receivers, the measurement uncertainty depends on many external factors such as
thermal noise, atmospheric effects, slight changes in satellite positions, receiver clock
precision and many more.
Due to Measurement Noise and Process Noise, the estimated target position can be far
away from the real target position.
The problem of integrating two or more sensors is one of the major difficulties in
designing navigation and motion control systems for aerospace vehicles.
The Kalman Filter is one of the most important and common estimation algorithms. The
Kalman Filter produces estimates of hidden variables based on inaccurate and
uncertain measurements. Also, the Kalman Filter provides a prediction of the future
system state based on past estimations. An essential practical feature of these algorithms is that it is not necessary to remember any prior information: the future state of the measuring system is determined only from the currently obtained data and the up-to-date estimate.
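
A one-dimensional Kalman-filter sketch, estimating a constant hidden value from noisy measurements; the noise variances and the true value are assumed for illustration. Note how only the current estimate and its variance are carried forward, not the whole measurement history.

import numpy as np

true_x = 50.0               # hidden value to be estimated (assumed)
q, r = 1e-4, 4.0            # process-noise and measurement-noise variances (assumed)

x_est, p_est = 0.0, 100.0   # initial estimate and its uncertainty (variance)
for _ in range(200):
    # Predict: state assumed constant, uncertainty grows by the process noise
    x_pred, p_pred = x_est, p_est + q
    # Update with a new noisy measurement
    z = true_x + np.random.randn() * np.sqrt(r)
    k = p_pred / (p_pred + r)                 # Kalman gain
    x_est = x_pred + k * (z - x_pred)
    p_est = (1.0 - k) * p_pred
print(round(x_est, 2), round(p_est, 4))       # estimate ~50, variance much smaller than r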

SENSOR MATERIALS

Piezoelectric Sensing Materials


Two categories of piezoelectric materials are used in accelerometers: quartz and polycrystalline ceramics. Quartz is a natural crystal, while ceramics are man-made. Each material offers certain benefits, and the material choice depends on the particular performance features desired of the accelerometer. Quartz is widely known for its ability to perform accurate measurement tasks and contributes heavily in everyday applications for time and frequency measurements.
Accelerometers benefit from several unique properties of quartz. Since
quartz is naturally piezoelectric, it has no tendency to relax to an alternative state and
is considered the most stable of all piezoelectric materials. This important feature
provides quartz accelerometers with long-term stability and repeatability. Also, quartz
does not exhibit the pyroelectric effect (output due to temperature change), which
provides stability in thermally active environments. Because quartz has a low
capacitance value, the voltage sensitivity is relatively high compared to most ceramic
materials, making it ideal for use in voltage-amplified systems. Conversely, the charge
sensitivity of quartz is low, limiting its usefulness in charge-amplified systems, where
low noise is an inherent feature.
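
The capacitance argument follows from V = Q/C, i.e., voltage sensitivity = charge sensitivity / element capacitance; a tiny sketch with assumed numbers:

charge_sensitivity_pC_per_g = 2.0     # assumed charge sensitivity, pC per g of acceleration
capacitance_pF = 10.0                 # assumed low element capacitance (quartz-like)

voltage_sensitivity_mV_per_g = charge_sensitivity_pC_per_g / capacitance_pF * 1000.0
print(voltage_sensitivity_mV_per_g)   # 200 mV/g -- a small C gives a high voltage sensitivity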
