
Chapter 2

Static and Dynamic Characteristics of Instruments


Performance characteristics of a measuring instrument
The performance of an instrument system is judged by how accurately the
system measures the required input and how completely it rejects the
undesirable inputs.
The performance characteristics may be broadly divided into two groups,
namely ‘Static’ and ‘Dynamic’ characteristics.
• Static characteristics
• The performance criteria for the measurement of quantities that
remain constant, or vary only quite slowly. They include:
 Range and span
 Sensitivity
 Accuracy, error, correction
 Threshold
 Calibration
 Resolution
 Repeatability
 Drift
 Reproducibility
 Hysteresis, dead zone
 Precision
  Sensitivity
The sensitivity denotes the smallest change in the measured variable to which
the instrument responds.
It is defined as the ratio of the changes in the output of an instrument to a
change in the value of the quantity to be measured

While defining the sensitivity, we assume that the input-output characteristic
of the instrument is approximately linear in that range.
EX. if the sensitivity of a thermocouple is denoted as 100 μV/°C, it indicates the
sensitivity in the linear range of the thermocouple voltage vs. temperature
characteristics.
 Similarly, the sensitivity of a spring balance can be expressed as 25 mm/kg,
indicating that an additional load of 1 kg will cause an additional displacement
of the spring by 25 mm.
The sensitivity of an instrument may also vary with temperature or other
external factors. This is known as sensitivity drift.
Suppose the sensitivity of the spring balance mentioned above is 25 mm/kg at
20 °C and 27 mm/kg at 30 °C. Then the sensitivity drift is
(27 − 25)/(30 − 20) = 0.2 (mm/kg)/°C.

Note that: In order to avoid such sensitivity drift, sophisticated instruments are
either kept at a controlled temperature, or suitable built-in temperature
compensation schemes are provided inside the instrument.
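
As a concrete illustration, here is a minimal Python sketch of the sensitivity
and sensitivity-drift calculations, using the spring-balance numbers from the
example above (the function names are ours, for illustration only):

```python
# Illustrative sketch of the sensitivity and drift calculations above.
# The numbers are taken from the spring-balance example in the text.

def sensitivity(delta_output: float, delta_input: float) -> float:
    """Sensitivity = change in output / change in input (e.g. mm per kg)."""
    return delta_output / delta_input

def sensitivity_drift(s1: float, t1: float, s2: float, t2: float) -> float:
    """Change in sensitivity per unit change in the interfering input (here, temperature)."""
    return (s2 - s1) / (t2 - t1)

# Spring balance: deflection per 1 kg of load, at two temperatures
s_20 = sensitivity(delta_output=25.0, delta_input=1.0)   # 25.0 mm/kg at 20 °C
s_30 = sensitivity(delta_output=27.0, delta_input=1.0)   # 27.0 mm/kg at 30 °C

drift = sensitivity_drift(s_20, 20.0, s_30, 30.0)
print(f"Sensitivity drift = {drift:.1f} (mm/kg)/°C")     # 0.2 (mm/kg)/°C
```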
Range (or span)
It defines the maximum and minimum values of the inputs or the outputs for
which the instrument is recommended for use.
To define it another way, the range or span of an instrument defines the
minimum and maximum values of a quantity that the instrument is designed to
measure.
For example, for a temperature measuring instrument the input range may be
100–500 °C and the output range may be 4–20 mA.
Accuracy:
Accuracy indicates the closeness of the measured value with the actual or true
value, and
is expressed in the form of the maximum error (= measured value – true value)
as a percentage of full scale reading.
For example, if the accuracy of a temperature indicator, with a full scale range
of 0–500 °C, is specified as ±0.5%, it indicates that the measured value will
always be within ±2.5 °C of the true value, if measured through a standard
instrument during the process of calibration.
But if it indicates a reading of 250 °C, the error bound will still be ±2.5 °C,
i.e. ±1% of the reading. Thus it is always better to choose a scale of
measurement where the input is near the full-scale value.
 But the true value is always difficult to get. We use standard calibrated
instruments in the laboratory for measuring true value of the variable.
The accuracy of an instrument is quantified by the difference of its readings
and the one given by the ultimate or primary standard.
 Accuracy depends on inherent limitations of instrument and shortcomings in
measurement process.
As the degree of closeness with which the reading approaches the true value
of the quantity to be measured, accuracy can be expressed in the following ways:

Point accuracy
Accuracy as percentage of scale span
Accuracy as percentage of true value
Point accuracy: Such accuracy is specified at only one particular point of the
scale. It does not give any information about the accuracy at any other point
on the scale.
Accuracy as percentage of scale span: When an instrument has a uniform scale,
its accuracy may be expressed in terms of the scale range.
Accuracy as percentage of true value: The best way to conceive the idea of
accuracy is to specify it in terms of the true value of the quantity being
measured.
Unit of accuracy:
Percentage of true value (% of T.V.) = [(Measured value − True value) / True value] × 100
Percentage of Full Scale Deflection (% fsd) = [(Measured value − True value) / Maximum scale value] × 100
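
The two expressions can be checked with a short calculation. The sketch below
is illustrative Python, using made-up readings from a 0–500 °C indicator:

```python
# Illustrative sketch of the two accuracy expressions above,
# using made-up readings from a 0-500 °C temperature indicator.

def pct_of_true_value(measured: float, true: float) -> float:
    """Error as a percentage of the true value."""
    return (measured - true) / true * 100.0

def pct_of_full_scale(measured: float, true: float, full_scale: float) -> float:
    """Error as a percentage of full scale deflection (% fsd)."""
    return (measured - true) / full_scale * 100.0

measured, true, full_scale = 252.5, 250.0, 500.0
print(f"% of T.V. = {pct_of_true_value(measured, true):+.2f} %")              # +1.00 %
print(f"% fsd     = {pct_of_full_scale(measured, true, full_scale):+.2f} %")  # +0.50 %
```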
• Precision:
 is defined as the ability of an instrument to reproduce a certain set of readings
within a given accuracy.
It also describes an instrument’s degree of random variation in its output when
measuring a constant quantity.
It is the measure of reproducibility i.e., given a fixed value of a quantity,
precision is a measure of the degree of agreement within a group of
measurements.
A precise instrument indicates that successive readings would be very
close, or in other words, the standard deviation σe of the set of measurements
would be very small.
The precision is composed of two characteristics:
1. Conformity:
Consider a resistor having a true value of 2.385692 Ω, which is being measured
by an ohmmeter. The reader can consistently read a value of 2.4 Ω due to the
non-availability of a finer scale.
The error created due to the limitation of the scale reading is a precision error.
2. Number of significant figures:
The precision of the measurement is obtained from the number of significant
figures, in which the reading is expressed.
The significant figures convey the actual information about the magnitude and
the measurement precision of the quantity. The precision can be
mathematically expressed as:

P = 1 − |(Xn − X̄n)/X̄n|

Where P = precision
Xn = value of the nth measurement
X̄n = average value of the set of measurements
Note: Precision is often confused with accuracy. High precision does not
imply anything about measurement accuracy.
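
As a concrete illustration of the precision formula above, the following minimal
Python sketch (with made-up ohmmeter readings) computes the precision of each
reading against the set average, along with the standard deviation mentioned earlier:

```python
# Illustrative sketch of the precision formula P = 1 - |(Xn - mean)/mean|
# applied to a hypothetical set of repeated ohmmeter readings.
from statistics import mean, stdev

readings = [2.4, 2.4, 2.3, 2.4, 2.5]   # made-up repeated readings (ohms)
avg = mean(readings)

for n, x in enumerate(readings, start=1):
    p = 1.0 - abs((x - avg) / avg)
    print(f"reading {n}: {x:.1f} ohm, precision = {p:.4f}")

# A small standard deviation of the set indicates a precise instrument.
print(f"standard deviation = {stdev(readings):.4f} ohm")
```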
Accuracy vs. Precision
 Accuracy represents the degree of correctness of the measured value w.r.t.
the true value; precision represents the degree of repeatability of several
independent measurements of the desired input at the same reference
conditions.
 Accuracy of an instrument depends on systematic errors; precision of an
instrument depends on factors that cause random or accidental errors.
Reproducibility:
It is the degree of closeness with which a given value may be repeatedly
measured.
 It is specified in terms of scale readings over a given period of time.
Repeatability
Repeatability is defined as the ability of an instrument to reproduce a group of
measurements of the same measured quantity, made by the same observer,
using the same instrument, under the same conditions.
Instrument Drift
It is defined as the variation of output for a given (constant) input, caused
by a change in the sensitivity of the instrument due to interfering inputs such
as temperature changes, component instabilities, etc.
Drift is a complex phenomenon for which the observed effects are that the
sensitivity and offset values vary.
It also can alter the accuracy of the instrument differently at the various
amplitudes of the signal present.
Drift may be classified into three categories:
a) zero drift:
If the whole calibration gradually shifts due to slippage, permanent set, or
undue warming up of electronic tube circuits, zero drift sets in.
b) span drift or sensitivity drift
If there is a proportional change in the indication all along the upward scale,
the drift is called span drift or sensitivity drift.
c) Zonal drift:
If the drift occurs over only a portion of the span of an instrument, it is called
zonal drift.
[Figure: (a) zero drift; (b) sensitivity drift; (c) zero drift plus sensitivity drift]
Linearity
The ability to reproduce the input characteristics symmetrically and linearly.
This is the closeness to a straight line of the relationship between the true
process variable and the measurement, i.e. the deviation of the transducer
output curve from a specified straight line. Non-linearity may be specified as:
1. Independent of input
2. Proportional to input
3. Combined independent and proportional to input
•  Note that:- Linearity is usually reported as non-linearity, which is the maximum
of the deviation between the calibration curve and a straight line positioned so
that the maximum deviation is minimized.
Linearity is actually a measure of nonlinearity of the instrument.
When we talk about sensitivity, we assume the input/output characteristic
of the instrument to be approximately linear.
But in practice, it is normally nonlinear, as shown in the figure below.
The linearity is defined as the maximum deviation from the linear
characteristic as a percentage of the full-scale output:

Linearity (%) = (ΔOmax / Ofs) × 100

Where ΔOmax = max |actual output − straight-line output| and Ofs = full-scale output.
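
The non-linearity computation can be illustrated with a short sketch. The
following Python (with made-up calibration data, and a least-squares fitted
line as one common choice of reference straight line) reports the maximum
deviation as a percentage of full-scale output:

```python
# Illustrative sketch: non-linearity as the maximum deviation between a
# calibration curve and a fitted straight line, as a % of full-scale output.
# The calibration data below are made up for illustration.
import numpy as np

inputs  = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # e.g. % of input span
outputs = np.array([0.0, 5.2, 10.1, 14.7, 20.0])     # e.g. mA

# Least-squares straight line through the calibration points
slope, intercept = np.polyfit(inputs, outputs, deg=1)
ideal = slope * inputs + intercept

full_scale = outputs.max() - outputs.min()
nonlinearity = np.max(np.abs(outputs - ideal)) / full_scale * 100.0
print(f"non-linearity = {nonlinearity:.2f} % of full-scale output")
```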
Hysteresis
Careful observation of the output/input relationship of a block will sometimes
reveal different results as the signals vary in direction of the movement.
Mechanical systems will often show a small difference in length as the
direction of the applied force is reversed.
The same effect arises as a magnetic field is reversed in a magnetic material.
This characteristic is called hysteresis.
Hysteresis is defined as the magnitude of error caused in the output for a
given value of input, when this value is approached from opposite directions;
i.e. from ascending order & then descending order.
Causes are backlash, elastic deformations, magnetic characteristics, frictional
effects (mainly).
Hysteresis can be eliminated by taking readings in both directions and then
taking their arithmetic mean.
 
[Figure: instrument characteristic with hysteresis]
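
The averaging technique mentioned above can be illustrated with a minimal
sketch (the ascending/descending readings are made up):

```python
# Illustrative sketch: cancelling hysteresis by averaging readings taken with
# the input ascending and then descending. The readings below are made up.

ascending  = [0.00, 1.02, 2.05, 3.06, 4.04]   # output as input increases
descending = [0.08, 1.10, 2.13, 3.12, 4.04]   # output as input decreases

corrected = [(up + down) / 2 for up, down in zip(ascending, descending)]
print(corrected)   # mean of the two traverses removes the hysteresis offset
```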
Resolution:
 If the input is slowly increased from some arbitrary input value, it will again
be found that output does not change at all until a certain increment is
exceeded. This increment is called resolution.
Threshold:
If the instrument input is increased very gradually from zero there will be
some minimum value below which no output change can be detected. This
minimum value defines the threshold of the instrument.
Stability:
It is the ability of an instrument to retain its performance throughout its
specified operating life.
Reliability
Reliability is the probability that a device will adequately perform (as specified)
for a period of time under specified operating conditions.
 Some sensors are required for safety or product quality, and therefore, they
should be very reliable.
Backlash
It is defined as the maximum distance or angle through which any part of a
mechanical system may be moved in one direction without causing motion of
the next part.
Backlash can be minimized if components are made to very close tolerances.
Dynamic characteristics
 

The relationship between the system input and output when the measured
quantity (measurand) is varying rapidly. Dynamic characteristics include:
 Speed of response and measuring lag
 Dead time and dead zone
 Fidelity and dynamic error
 Frequency response
 Overshoot
Speed of response:
It is defined as the rapidity with which a measurement system responds to
changes in the measured quantity.
Measuring lag:
It is the time an instrument takes to respond when the input is changed;
this delay is called lag.
It is the retardation or delay in the response of a measurement system to
changes in the measured quantity.
The measuring lags are of two types:
1. Retardation type
2. Time delay lag
a) Retardation type:
In this case the response of the measurement system begins immediately
after the change in measured quantity has occurred.
b) Time delay lag:
In this case the response of the measurement system begins after a dead
time, after the application of the input.
 Fidelity:
 It is defined as the degree to which a measurement system indicates
changes in the measurand quantity without dynamic error.
Dynamic error:
It is the difference between the true value of the quantity changing with
time & the value indicated by the measurement system if no static error
is assumed. It is also called measurement error.
Uncertainty of Instruments
The word “uncertainty” means doubt, and
 “measurement uncertainty” means doubt about the validity of the result of
a measurement.
We refer to the uncertainty as the error in the measurement. Errors fall into
two categories:
1. Systematic Error –
errors resulting from measuring devices being out of calibration.
 Such measurements will be consistently too small or too large.
These errors can be eliminated by pre-calibrating against a known, trusted
standard.
2. Random Errors –
errors resulting in the fluctuation of measurements of the same quantity
about the average.
The measurements are equally probable of being too large or too small.
These errors generally result from the fineness of scale division of a measuring
device.
The following general rules of thumb are often used to determine the
uncertainty in a single measurement when using a scale or digital measuring
device.
1. Uncertainty in a Scale Measuring Device is equal to the smallest increment
divided by 2.
Ex. Meter Stick (scale device): the smallest increment is 1 mm, so the
uncertainty is 1 mm / 2 = 0.5 mm = 0.05 cm.
2. Uncertainty in a Digital Measuring Device is equal to the smallest increment.
Ex. Digital Balance (digital device): a display reading of 5.7513 kg has a
smallest increment of 0.0001 kg, so the uncertainty is ±0.0001 kg.
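
Both rules of thumb amount to one-line calculations; the sketch below is an
illustrative Python rendering of them:

```python
# Illustrative sketch of the two rules of thumb above.

def scale_uncertainty(smallest_increment: float) -> float:
    """Analogue scale device: uncertainty = smallest increment / 2."""
    return smallest_increment / 2.0

def digital_uncertainty(smallest_increment: float) -> float:
    """Digital device: uncertainty = smallest increment."""
    return smallest_increment

print(scale_uncertainty(0.1))       # meter stick, 0.1 cm divisions -> 0.05 cm
print(digital_uncertainty(0.0001))  # digital balance reading 5.7513 kg -> 0.0001 kg
```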

When stating a measurement, the uncertainty should be stated explicitly so that
there is no question about the uncertainty in the measurement.
However, if it is not stated explicitly, an uncertainty is still implied.
For example, if we measure a length of 5.7 cm with a meter stick, this implies
that the length can be anywhere in the range
5.65 cm ≤ L ≤ 5.75 cm.
Thus, L = 5.7 cm measured with a meter stick implies an uncertainty of 0.05 cm.
• A common rule of thumb is to take one-half the unit of the last decimal place
in a measurement to obtain the uncertainty.
In general, any measurement can be stated in the following preferred form:

X = Xbest ± δX

Where Xbest = the best estimate of the measurement, and δX = the uncertainty.
Rule For Stating Uncertainties –
Experimental uncertainties should be stated to 1 significant figure.
Ex. v = 31.25 ± 0.034953 m/s (incorrect)
v = 31.25 ± 0.03 m/s (correct)
Note: The uncertainty is just an estimate and thus it cannot be more precise
(more significant figures) than the best estimate of the measured value.
Rule For Stating Answers – The last significant figure in any answer should be
in the same place as the uncertainty.
Ex. a = 1261.29 ± 200 cm/s² (incorrect)
a = 1300 ± 200 cm/s² (correct)
Note: Since the uncertainty is stated to the hundreds place, we also state the
answer to the hundreds place. The uncertainty determines the number of
significant figures in the answer.
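
These two rules can be automated. The following is a minimal Python sketch
(the report helper is our own illustrative function) that rounds the
uncertainty to one significant figure and the value to the matching decimal
place:

```python
# Illustrative sketch of the two stating rules above: round the uncertainty
# to 1 significant figure, then round the value to the same decimal place.
import math

def report(value: float, uncertainty: float) -> str:
    # Exponent of the leading digit of the uncertainty
    exponent = math.floor(math.log10(abs(uncertainty)))
    u_rounded = round(uncertainty, -exponent)   # 1 significant figure
    v_rounded = round(value, -exponent)         # same decimal place
    return f"{v_rounded} ± {u_rounded}"

print(report(31.25, 0.034953))   # 31.25 ± 0.03
print(report(1261.29, 200.0))    # 1300.0 ± 200.0
```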
Common Instruments in the Laboratory

No  Apparatus                        Smallest Division  Uncertainty  Examples of recording
1   Ammeter (0 - 1 A)                0.02 A             0.01 A       0.20 A, 0.21 A
2   Electronic balance               0.1 g              0.1 g        121.0 g, 121.1 g
                                     0.01 g             0.01 g       121.10 g, 121.11 g
3   Half metre rule or metre rule    0.1 cm             0.1 cm       12.0 cm, 12.1 cm
4   Measuring cylinder (100 cm³)     1 cm³              0.5 cm³      18.0 cm³, 18.5 cm³
5   Micrometer                       0.01 mm            0.01 mm      2.10 mm, 2.11 mm
6   Milliammeter (0 - 100 mA)        2 mA               1 mA         20 mA, 21 mA
7   Spring balance (0 - 10 N)        0.1 N              0.05 N       3.65 N, 3.70 N
8   Stopwatch (analogue)             0.1 s              0.1 s        36.0 s, 36.1 s
9   Stopwatch (digital)              0.1 s              0.1 s        28.1 s
                                     0.01 s             0.01 s       28.11 s
10  Thermometer (–10 °C to 110 °C)   1 °C               0.5 °C       23.0 °C, 23.5 °C
11  Voltmeter (0 - 5 V)              0.1 V              0.05 V       2.50 V, 2.55 V

Measurement Errors and Analysis
Sources of Errors in Experimental Testing, General Description
(A) Instrumental Errors
Construction, Variability, temperature effects (Not known to experimentalist)
(B) Systematic Errors (Fixed)
Consistent form that result from conditions or procedures that are correctable
(zero shift, calibration factor, etc)
(C)Random Errors
These are accidental errors that occur in all measurements. They are
characterized by their inconsistent nature, and their origin cannot be
determined in the measurement process (they are treated by statistical
analysis): fluctuations, noise, etc.
(D) Elemental Error
Large numbers of errors have their source in the measurement system itself. Errors in
measurement systems are often referred to as elemental errors, including:
1. Calibration Error, which also includes known but uncorrected calibration errors
such as hysteresis and non-linearity
2. Data Acquisition Error
3. Data Reduction Error, interpolation, curve fits, differentiation of data curves,
erroneous modeling.
Definition of Error: In the following, let us first define the measurement errors.
Absolute Error
1. Error total = measured value – true value
2. Precision error = reading – average of readings
3. Bias error = average of readings – true value
Relative Error
Relative Error = (Absolute error / True value) × 100%
              = [(Measured value − True value) / True value] × 100%
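
A short worked example ties these definitions together. The Python sketch
below uses made-up repeated readings of a quantity whose true value is
assumed known:

```python
# Illustrative sketch of the error definitions above, applied to a set of
# hypothetical repeated readings of a quantity with a known true value.
from statistics import mean

true_value = 100.0
readings = [101.2, 100.8, 101.5, 100.9]   # made-up readings

avg = mean(readings)
bias_error = avg - true_value                  # systematic part
for r in readings:
    total_error = r - true_value               # measured - true
    precision_error = r - avg                  # random part
    relative_error = total_error / true_value * 100.0
    print(f"{r}: total={total_error:+.2f}, precision={precision_error:+.2f}, "
          f"relative={relative_error:+.2f} %")
print(f"bias error = {bias_error:+.2f}")
```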
• Reporting Measurements
When we report a measured value of some parameter, X, we write it as

X = Xbest ± δX

Where Xbest represents our best estimate of the measured parameter, and
δX is the uncertainty that we associate with the measurement.
Alternatively, with some algebraic manipulation, the above equation can be
written as

X = Xbest (1 ± δX/Xbest)

Where the ratio of the uncertainty, δX, to our best estimate, Xbest, is referred
to as the fractional uncertainty.
Typically, the uncertainty is small compared to the measured value, so it is
convenient to multiply the fractional uncertainty by 100 and report the percent
uncertainty.
Percent Uncertainty = Fractional Uncertainty × 100
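
For example (illustrative Python, with made-up numbers matching the
meter-stick example earlier):

```python
# Illustrative sketch of fractional and percent uncertainty for a measurement
# reported as X = X_best ± dX (the numbers are made up).

x_best = 5.70   # cm, best estimate
dx = 0.05       # cm, uncertainty

fractional = dx / x_best
percent = fractional * 100.0
print(f"X = {x_best} ± {dx} cm")
print(f"fractional uncertainty = {fractional:.4f}  ({percent:.2f} %)")
```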
• If multiple trials are performed to measure X, the best estimate is the average
value, X̄, of all the trials (the bar over the variable denotes an average value).
The average value is computed by summing up all the measured values and
then dividing by the number of trials, N.
Mathematically, we write this as

X̄ = (1/N) Σ Xi,  summed over i = 1 … N
Note that:-When experimental or time limitations only permit one trial for a
measurement, then the best estimate is the most careful measurement you can
perform.
Even the best measurements have some degree of uncertainty associated with
them.
Uncertainty in measurements arises from many sources including the limited
precision of the measurement tool, the manner in which the measurement is
performed, and the person performing the measurement.
The inclusion of an estimate of the uncertainty when reporting an experimental
measurement permits others to determine how much confidence should be placed
in the measured value.
Additionally, measurements should also be accompanied by a statement of how the
uncertainty was determined.
Without uncertainties and an explanation of how they were obtained, measured
parameters have little meaning.
A simple approach for estimating the uncertainty in a measurement is to report the
limiting precision of the measurement tool.
For example, if a balance is calibrated to report masses to 0.1 g, then the actual
mass of a sample could be up to 0.05 g greater or less than the measured mass, and
the balance would still read out the same value.
 Thus, the uncertainty associated with mass measurements using this balance
would be ± 0.05 g.
Note: This method for estimating the uncertainty of a measurement is a good choice
when only a single trial is performed.
• NOTE: A useful rule of thumb for reporting the uncertainty associated with a
measurement tool is to determine the smallest increment the device can
measure and divide that value by 2.
If, on the other hand, the best estimate of a parameter is determined by
making repeated measurements and computing the average value from the
multiple trials, the uncertainty associated with each measurement can be
determined from the standard deviation σ.
Mathematically, the standard deviation can be expressed as

σ = √[ Σ (Xi − X̄)² / (N − 1) ],  summed over i = 1 … N

Where N is the number of times the measurement is performed,
Xi corresponds to the ith measurement of the parameter, and
X̄ is the average value of X.
• The standard deviation provides an estimate of the average uncertainty
associated with any one of the N measurements that were performed.
When the uncertainties are random and multiple trials are performed to
obtain the best estimate of a parameter, the standard deviation is an
appropriate choice for describing the uncertainty in the measurement.
Thus, in this case, the measured value would be reported as

X = X̄ ± σ
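
Putting the pieces together, a multi-trial measurement can be reported as
mean ± standard deviation. The sketch below uses Python's statistics module,
whose stdev uses the N − 1 denominator as above (the trial values are made up):

```python
# Illustrative sketch: reporting a multi-trial measurement as mean ± standard
# deviation, using the (N - 1) form of the standard deviation given above.
from statistics import mean, stdev   # stdev uses the N - 1 denominator

trials = [5.68, 5.72, 5.70, 5.74, 5.69]   # made-up repeated measurements (cm)

x_bar = mean(trials)
sigma = stdev(trials)
print(f"X = {x_bar:.3f} ± {sigma:.3f} cm")
```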
Thank You