Dip Unit 6
Image Degradation/Restoration Model, Noise Models, and Restoration in the Presence of Noise in the Spatial Domain; Inverse Filtering; Wiener Filtering; Introduction to Image Reconstruction from Projections.
Image Degradation: The process by which an image is degraded is usually very complex and often unknown. To simplify the calculations, the degradation is often modeled as a linear function, commonly referred to as the point spread function (PSF). The different causes of image degradation are:
1) Atmospheric turbulence
2) Misfocus of the lens
3) Relative motion between the camera and the object, which causes motion blur. Blurring is a form of bandwidth reduction of an ideal image owing to the imperfect image formation process.
Image Restoration: Image restoration aims at reversing the degradation undergone by an image in order to recover the original image.
An image may be corrupted by degradations such as linear frequency distortion, noise, and blocking artifacts. The degradation consists of two distinct processes: the deterministic blur and
the random noise. The noise may originate in the image formation process, the transmission
process or a combination of them. Most restoration techniques model the degradation process and
attempt to apply an inverse procedure to obtain an approximation of the original image. Iterative restoration techniques restore the image by minimizing some measure of degradation, such as maximum likelihood or constrained least squares. Blind restoration techniques attempt to solve the restoration problem without knowing the
blurring function.
6.1 Image Degradation/Restoration Model: A degradation function H, together with an additive noise term, operates on an input image f(x, y) to produce a degraded
image g(x, y ). Given g(x, y), some knowledge about the degradation function H, and some
knowledge about the additive noise term η(x, y), the objective of restoration is to obtain an
estimate f̂(x, y) of the original image. We want the estimate to be as close as possible to the
original input image and, in general, the more we know about H and η, the closer f̂(x, y) will be
to f(x, y). The approach used is based on various types of image restoration filters. If H is a linear,
position-invariant process, then the degraded image is given in the spatial domain by
g(x, y) = h(x, y) * f(x, y) + η(x, y) …………………………… (6.1-1)
where h(x, y ) is the spatial representation of the degradation function and the symbol " * "
indicates convolution. We know that convolution in the spatial domain is equal to multiplication
in the frequency domain, so we may write the model in Eq. (6.1-1) in an equivalent frequency-domain representation:
G(u, v) = H(u, v) F(u, v) + N(u, v) …………………………… (6.1-2)
where the terms in capital letters are the Fourier transforms of the corresponding terms in Eq .
(6.1-1).
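The degradation model of Eqs. (6.1-1) and (6.1-2) can be simulated directly. The following is a minimal NumPy/SciPy sketch, assuming a Gaussian blur as the PSF h and zero-mean Gaussian noise as η; the kernel width and noise level are illustrative choices, not values from the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(f, blur_sigma=2.0, noise_sigma=10.0, seed=0):
    """Simulate g(x, y) = h(x, y) * f(x, y) + eta(x, y), with an assumed
    Gaussian blur standing in for the PSF h and zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(f.astype(float), sigma=blur_sigma)   # h * f
    eta = rng.normal(0.0, noise_sigma, size=f.shape)               # additive noise term
    return blurred + eta

# Example on a synthetic test image (a bright square on a dark background).
f = np.zeros((128, 128))
f[32:96, 32:96] = 255.0
g = degrade(f)
```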
6.2 Noise Models: The principal sources of noise in digital images arise during image acquisition (digitization)
and/or transmission. The performance of imaging sensors is affected by a variety of factors such
as environmental conditions during image acquisition and by the quality of the sensing elements
themselves. For instance, in acquiring images with a CCD camera, light levels and sensor
temperature are major factors affecting the amount of noise in the resulting image. Images are
corrupted during transmission principally due to interference in the channel used for transmission.
For example, an image transmitted using a wireless network might be corrupted as a result of lightning or other atmospheric disturbance.
Basically, there are three standard noise models which model the types of noise
encountered in most images. They are additive noise, multiplicative noise and impulse noise.
1) Additive noise: Let f(x, y) be the original image, f′(x, y) be the noisy digitized version, and η(x, y) be the noise function, which returns values coming from an arbitrary distribution. Then the additive noise model is
f′(x, y) = f(x, y) + η(x, y)
Additive noise is independent of the pixel values in the original image. Typically, η(x, y) is
symmetric about zero. This has the effect of not altering the average brightness of the image.
Additive noise is a good model for the thermal noise within photo-electronic sensors.
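A minimal sketch of the additive model f′(x, y) = f(x, y) + η(x, y), assuming zero-mean Gaussian η; it also checks that the average brightness is essentially unchanged, as stated above.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(50, 200, size=(256, 256))    # stand-in "original" image
eta = rng.normal(0.0, 15.0, size=f.shape)    # zero-mean Gaussian noise
f_noisy = f + eta                            # additive noise model

# Because eta is symmetric about zero, the average brightness barely changes.
print(f.mean(), f_noisy.mean())
```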
2) Multiplicative noise: Multiplicative noise, or speckle noise, is a signal-dependent form of noise whose magnitude is related to the value of the original pixel. A simple mathematical expression for it is f′(x, y) = f(x, y) · η(x, y).
3) Impulse noise: Impulse noise replaces a fraction of the pixels with extreme values; taking the extreme intensities 0 and 1 results in salt-and-pepper noise. The source of impulse noise is usually faulty sensor elements, bit errors during transmission, or faulty switching during acquisition.
Noise is characterized by the parameters that define its spatial and frequency properties, and by whether the noise is correlated with the image. Frequency properties refer to the frequency content of noise in the Fourier sense (i.e., as opposed to the electromagnetic spectrum). For example, when the Fourier spectrum of noise is constant, the noise usually is called white noise. This terminology is a carryover from the physical properties of white light, which contains nearly all frequencies in the visible spectrum in equal proportions. It is not difficult to show that the Fourier spectrum of a function containing all frequencies in equal proportions is a constant.
With the exception of spatially periodic noise, we assume that noise is independent of spatial
coordinates, and that it is uncorrelated with respect to the image itself (that is, there is no
correlation between pixel values and the values of noise components). Although these
assumptions are at least partially invalid in some applications (quantum-limited imaging, such as in X-ray and nuclear-medicine imaging), they hold in many practical cases. With reference to the spatial properties, we are concerned with the statistical behavior of the gray-level values in the noise component of the model. These may be considered random variables characterized by a probability density function (PDF). The following are among the most common PDFs found in image-processing applications:
a) Gaussian noise
Because of its mathematical tractability in both the spatial and frequency domains, Gaussian (also called normal) noise models are used frequently in practice. In fact, this tractability is so convenient that it often results in Gaussian models being used in situations in which they are at best marginally applicable. The PDF of a Gaussian random variable, z, is given by
p(z) = ( 1 / (√(2π) σ) ) e^( −(z − µ)² / (2σ²) ) …………………… (6.2-1)
where z represents gray level, µ is the mean or average value of z, and σ is its standard deviation. The standard deviation squared, σ², is called the variance of z. A plot of this function is shown in Fig. 6.2(a). When z is described by Eq. (6.2-1), approximately 70% of its values will be in the range [(µ − σ), (µ + σ)], and about 95% will be in the range [(µ − 2σ), (µ + 2σ)].
b) Rayleigh noise
The PDF of Rayleigh noise is given by
p(z) = (2 / b)(z − a) e^( −(z − a)² / b ) for z ≥ a, and p(z) = 0 for z < a ………………… (6.2-2)
The mean and variance of this density are given by µ = a + √(πb / 4) and σ² = b(4 − π) / 4.
Figure 6.2(b) shows a plot of the Rayleigh density. Note the displacement from the origin and the
fact that the basic shape of this density is skewed to the right. The Rayleigh density can be quite useful for approximating skewed histograms.
c) Erlang (Gamma) noise: The PDF of Erlang noise is given by
p(z) = ( a^b z^(b−1) / (b − 1)! ) e^(−az) for z ≥ 0, and p(z) = 0 for z < 0 ………………………… (6.2-3)
where the parameters are such that a > 0, b is a positive integer, and “ ! ” indicates factorial. The mean and variance of this density are given by µ = b / a and σ² = b / a².
Figure 6.2(c) shows a plot of this density. Although Eq. (6.2-3) often is referred to as the gamma density, strictly speaking this is correct only when the denominator is the gamma function Γ(b).
When the denominator is as shown, the density is more appropriately called the Erlang density.
d) Exponential noise:
The PDF of exponential noise is given by
p(z) = a e^(−az) for z ≥ 0, and p(z) = 0 for z < 0 ………………………….. (6.2-4)
where a > 0. The mean and variance of this density function are µ = 1 / a and σ² = 1 / a².
Note that this PDF is a special case of the Erlang PDF, with b = 1. Figure 6.2(d) shows a plot of this density.
e) Uniform noise:
The PDF of uniform noise is given by
p(z) = 1 / (b − a) if a ≤ z ≤ b, and p(z) = 0 otherwise ………………… (6.2-5)
The mean and variance of this density are given by µ = (a + b) / 2 and σ² = (b − a)² / 12.
f) Impulse (salt-and-pepper) noise: The PDF of (bipolar) impulse noise is given by
p(z) = Pa for z = a, p(z) = Pb for z = b, and p(z) = 0 otherwise ……………………………. (6.2-6)
If b > a, gray-level b will appear as a light dot in the image. Conversely, level a will appear like a
dark dot. If either Pa or Pb is zero, the impulse noise is called unipolar. If neither probability is
zero, and especially if they are approximately equal, impulse noise values will resemble salt-and-
pepper granules randomly distributed over the image. For this reason, bipolar impulse noise also
is called salt-and-pepper noise. Shot and spike noise also are terms used to refer to this type of
noise. In our discussion we will use the terms impulse or salt-and-pepper noise interchangeably.
Noise impulses can be negative or positive. Scaling usually is part of the image digitizing process.
Because impulse corruption usually is large compared with the strength of the image signal,
impulse noise generally is digitized as extreme (pure black or white) values in an image. Thus, the
assumption usually is that a and b are “saturated" values, in the sense that they are equal to the
minimum and maximum allowed values in the digitized image. As a result, negative impulses
appear as black (pepper) points in an image. For the same reason, positive impulses appear white
(salt) points in an image. For an 8-bit image this means that a = 0 (black) and b = 255 (white). Figure 6.2(f) shows a plot of this PDF.
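A minimal sketch of bipolar (salt-and-pepper) impulse corruption of an 8-bit image; the probabilities Pa and Pb used here are illustrative assumptions.

```python
import numpy as np

def add_salt_and_pepper(img, pa=0.05, pb=0.05, seed=0):
    """Bipolar impulse noise for an 8-bit image: pepper (0) with
    probability pa, salt (255) with probability pb."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    r = rng.uniform(size=img.shape)
    out[r < pa] = 0                        # negative impulses -> black (pepper)
    out[(r >= pa) & (r < pa + pb)] = 255   # positive impulses -> white (salt)
    return out
```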
g) Periodic noise:
Periodic noise in an image typically arises from electrical or electromechanical interference during image acquisition. This is the only type of spatially dependent noise considered here. Periodic noise can be reduced significantly via frequency-domain filtering.
6.3 Restoration in the Presence of Noise Only – Spatial Filtering: The restoration process means to recover the original image f(x, y) in the presence of noise by using
spatial filtering. The objective of restoration is to obtain an estimate of the original image. The general degradation model is
g(x, y) = h(x, y) * f(x, y) + η(x, y) …………………………… (6.3-1)
where h(x, y) is the spatial representation of the degradation function and the symbol " * "
indicates convolution. We know that convolution in the spatial domain is equal to multiplication
in the frequency domain, so we may write the model in Eq. (6.3-1) in an equivalent frequency-domain representation:
G(u, v) = H(u, v) F(u, v) + N(u, v) …………………………… (6.3-2)
where the terms in capital letters are the Fourier transforms of the corresponding terms in Eq .
(6.3-1). When the only degradation present in an image is noise, equations (6.3-1) and (6.3-2) become
g(x, y) = f(x, y) + η(x, y) ………………………………….. (6.3-3)
and
G(u, v) = F(u, v) + N(u, v) ………………………………….. (6.3-4)
The noise terms are unknown, so subtracting them from g(x, y) or G (u, v) is not a realistic option.
In the case of periodic noise, it usually is possible to estimate N (u, v) from the spectrum of G(u,
v). In this case N (u, v) can be subtracted from G (u, v) to obtain an estimate of the original image.
In general, however, this type of knowledge is the exception, rather than the rule. Spatial filtering
is the method of choice in situations when only additive noise is present. Spatial filtering consists of the following classes of filters:
A) Mean filters
B) Order-statistics Filters
C) Adaptive Filters
A) Mean Filters: In this section we briefly discuss the noise-reduction spatial filters introduced earlier and develop several other filters whose performance is in many cases superior to those filters.
a) Arithmetic mean filter: This is the simplest of the mean filters. Let Sxy represent the set of coordinates in a rectangular
subimage window of size m x n, centered at point (x, y). The arithmetic mean filtering process
computes the average value of the corrupted image g (x, y) in the area defined by Sxy. The value
of the restored image at any point (x, y) is simply the arithmetic mean computed using the pixels in the region defined by Sxy:
f̂(x, y) = (1 / mn) Σ_(s,t)∈Sxy g(s, t) ………………………………… (6.3-5)
This operation can be implemented using a convolution mask in which all coefficients have value
1/ mn. A mean filter simply smoothes local variations in an image. Noise is reduced as a result of
blurring.
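Eq. (6.3-5) amounts to convolution with an m x n mask whose coefficients are all 1/mn. A minimal sketch using scipy.ndimage.uniform_filter, with an assumed 3 x 3 window:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Stand-in noisy image g(x, y).
g = np.random.default_rng(0).normal(128.0, 20.0, size=(256, 256))

# 3x3 arithmetic mean filter: each output pixel is the average of its
# 3x3 neighborhood, i.e. convolution with a mask of coefficients 1/9.
restored = uniform_filter(g, size=3)
```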
b) Geometric mean filter: An image restored using a geometric mean filter is given by the expression
f̂(x, y) = [ Π_(s,t)∈Sxy g(s, t) ]^(1/mn) …………………………………… (6.3-6)
Here, each restored pixel is given by the product of the pixels in the subimage window, raised to
the power 1/mn. A geometric mean filter achieves smoothing comparable to the arithmetic mean filter, but it tends to lose less image detail in the process.
c) Harmonic mean filter: The harmonic mean filtering operation is given by the expression
f̂(x, y) = mn / [ Σ_(s,t)∈Sxy 1 / g(s, t) ] …………………………………… (6.3-7)
The harmonic mean filter works well for salt noise, but fails for pepper noise. It does well also with other types of noise, such as Gaussian noise.
d) Contraharmonic mean filter: The contraharmonic mean filtering operation yields a restored image based on the expression
f̂(x, y) = [ Σ_(s,t)∈Sxy g(s, t)^(Q+1) ] / [ Σ_(s,t)∈Sxy g(s, t)^Q ] ………………………………….. (6.3-8)
where Q is called the order of the filter. This filter is well suited for reducing or virtually
eliminating the effects of salt-and-pepper noise. For positive values of Q, the filter eliminates
pepper noise. For negative values of Q, it eliminates salt noise. It cannot do both simultaneously.
Note that the contraharmonic filter reduces to the arithmetic mean filter if Q = 0, and to the harmonic mean filter if Q = −1.
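A minimal sketch of the contraharmonic mean filter of Eq. (6.3-8); the window size and the value of Q are illustrative (Q > 0 attacks pepper noise, Q < 0 attacks salt noise, Q = 0 gives the arithmetic mean).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contraharmonic_mean(g, size=3, Q=1.5, eps=1e-8):
    """Contraharmonic mean filter: the ratio of the local means of
    g^(Q+1) and g^Q, which equals the ratio of the local sums in Eq. (6.3-8)."""
    g = g.astype(float) + eps              # avoid division by zero when Q < 0
    num = uniform_filter(g ** (Q + 1), size=size)
    den = uniform_filter(g ** Q, size=size)
    return num / den
```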
B) Order-Statistics Filters: Order-statistics filters are spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter. The response of the filter at any point is determined by the ranking result.
a) Median filter: The best-known order-statistics filter is the median filter, which, as its name implies, replaces the
value of a pixel by the median of the gray levels in the neighborhood of that pixel:
f̂(x, y) = median{ g(s, t) : (s, t) ∈ Sxy } ……………………………………….. (6.3-9)
The original value of the pixel is included in the computation of the median. Median filters are
quite popular because, for certain types of random noise, they provide excellent noise- reduction
capabilities, with considerably less blurring than linear smoothing filters of similar size. Median
filters are particularly effective in the presence of both bipolar and unipolar impulse noise. In fact,
the median filter yields excellent results for images corrupted by this type of noise.
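A minimal median-filtering sketch based on Eq. (6.3-9), using scipy.ndimage.median_filter with an assumed 3 x 3 window:

```python
import numpy as np
from scipy.ndimage import median_filter

# Stand-in noisy 8-bit image.
g = np.random.default_rng(0).integers(0, 256, size=(256, 256)).astype(np.uint8)

# Replace each pixel by the median of its 3x3 neighborhood (Eq. 6.3-9).
restored = median_filter(g, size=3)
```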
b) Max and min filters: Although the median filter is by far the order-statistics filter most used in image processing, it is
by no means the only one. The median represents the 50th percentile of a ranked set of numbers,
but the reader will recall from basic statistics that ranking lends itself to many other possibilities.
For example, using the 100th percentile results in the so-called max filter, given by
f̂(x, y) = max{ g(s, t) : (s, t) ∈ Sxy } …………………………………………. (6.3-10)
This filter is useful for finding the brightest points in an image. Also, because pepper noise has
very low values, it is reduced by this filter as a result of the max selection process in the subimage
area Sxy.
Similarly, the 0th percentile results in the so-called min filter:
f̂(x, y) = min{ g(s, t) : (s, t) ∈ Sxy } ………………………………………. (6.3-11)
This filter is useful for finding the darkest points in an image. Also, it reduces salt noise as a result of the min operation.
c) Midpoint filter
The midpoint filter simply computes the midpoint between the maximum and minimum values in the area encompassed by the filter:
f̂(x, y) = (1/2) [ max{ g(s, t) : (s, t) ∈ Sxy } + min{ g(s, t) : (s, t) ∈ Sxy } ] ………………………. (6.3-12)
Note that this filter combines order statistics and averaging. This filter works best for randomly distributed noise, like Gaussian or uniform noise.
d) Alpha-trimmed mean filter: Suppose that we delete the d/2 lowest and the d/2 highest gray-level values of g(s, t) in the neighborhood Sxy. Let gr(s, t) represent the remaining mn − d pixels. A filter formed by averaging these remaining pixels is called an alpha-trimmed mean filter:
f̂(x, y) = ( 1 / (mn − d) ) Σ_(s,t)∈Sxy gr(s, t) …………………………………………. (6.3-13)
where the value of d can range from 0 to mn − 1. When d = 0, the alpha-trimmed filter reduces
to the arithmetic mean filter. If we choose d = (mn - 1)/2, the filter becomes a median filter. For
other values of d, the alpha-trimmed filter is useful in situations involving multiple types of noise, such as a combination of salt-and-pepper and Gaussian noise.
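A minimal sketch of the alpha-trimmed mean filter of Eq. (6.3-13): each window is sorted, the d/2 lowest and d/2 highest values are dropped, and the rest are averaged; the window size and d used here are assumptions.

```python
import numpy as np
from scipy.ndimage import generic_filter

def alpha_trimmed_mean(g, size=3, d=2):
    """Alpha-trimmed mean: drop the d/2 lowest and d/2 highest values in
    each size x size window and average the remaining mn - d pixels."""
    def trimmed(window):
        w = np.sort(window)
        return w[d // 2 : len(w) - d // 2].mean()
    return generic_filter(g.astype(float), trimmed, size=size)
```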
C) Adaptive Filters: The filters discussed so far are applied to an image without regard for how image characteristics vary from one point to another. Here we consider two simple adaptive filters whose behavior changes based on the statistical characteristics of the image inside the filter region defined by the m x n rectangular window Sxy. Adaptive filters are capable of performance superior to that of the filters discussed thus far. The price paid for improved filtering power is an increase in filter complexity. Keep in
mind that we still are dealing with the case in which the degraded image is equal to the original
image plus noise. No other types of degradations are being considered yet.
Adaptive, local noise reduction filter: The simplest statistical measures of a random variable are its mean and variance. These are reasonable parameters on which to base an adaptive filter because they are quantities closely
related to the appearance of an image. The mean gives a measure of average gray level in the
region over which the mean is computed and the variance gives a measure of average contrast in
that region.
Our filter is to operate on a local region Sxy. The response of the filter at any point (x, y) on which the region is centered is to be based on four quantities: (a) g(x, y), the value of the noisy image at (x, y); (b) σ_η², the variance of the noise corrupting f(x, y); (c) m_L, the local mean of the pixels in Sxy; and (d) σ_L², the local variance of the pixels in Sxy. We want the behavior of the filter to be as follows:
1. If σ_η² is zero, the filter should return simply the value of g(x, y). This is the trivial, zero-noise case in which g(x, y) is equal to f(x, y).
2. If the local variance σ_L² is high relative to σ_η², the filter should return a value close to g(x, y). A high local variance typically is associated with edges, and these should be preserved.
3. If the two variances are equal, we want the filter to return the arithmetic mean value of the pixels in Sxy. This condition occurs when the local area has the same properties as the overall image, and local noise is to be reduced simply by averaging. An adaptive expression for obtaining the estimate based on these assumptions may be written as
f̂(x, y) = g(x, y) − (σ_η² / σ_L²) [ g(x, y) − m_L ] …………………………………… (6.3-14)
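A minimal sketch of Eq. (6.3-14). The local mean m_L and local variance σ_L² are estimated over Sxy, while the noise variance σ_η² is assumed to be known and is simply passed in as a parameter; the ratio is clipped at 1 so the filter never over-corrects.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_local_noise_filter(g, noise_var, size=7):
    """f_hat = g - (sigma_eta^2 / sigma_L^2) * (g - m_L)   (Eq. 6.3-14)."""
    g = g.astype(float)
    m_L = uniform_filter(g, size=size)                    # local mean
    var_L = uniform_filter(g ** 2, size=size) - m_L ** 2  # local variance
    ratio = np.where(var_L > 0, noise_var / var_L, 1.0)
    ratio = np.minimum(ratio, 1.0)    # never subtract more than (g - m_L)
    return g - ratio * (g - m_L)
```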
Adaptive median filter: Adaptive median filtering can handle impulse noise with probabilities even larger than the (standard) median filter can. An additional benefit of the adaptive median filter is that it seeks
to preserve detail while smoothing nonimpulse noise, something that the "traditional" median
filter does not do. The adaptive median filter also works in a rectangular window area Sxy . Unlike
those filters, however, the adaptive median filter changes (increases) the size of Sxy during filter
operation. Keep in mind that the output of the filter is a single value used to replace the value of
the pixel at (x, y), the particular point on which the window Sxy is centered at a given time. Let z_min, z_max, and z_med denote the minimum, maximum, and median gray level in Sxy, let z_xy denote the gray level at (x, y), and let S_max denote the maximum allowed size of Sxy. The adaptive median filtering algorithm works in two stages, denoted stage A and stage B, as sketched below.
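A minimal sketch of the two-stage (stage A / stage B) adaptive median procedure outlined above; the maximum window size S_max, and the behaviour when the window limit is reached (output z_med here; some formulations output z_xy instead), should be treated as assumptions.

```python
import numpy as np

def adaptive_median_filter(g, s_max=7):
    """Adaptive median filter: grow the window at each pixel until the
    median is not an impulse (stage A), then decide whether the centre
    pixel itself is an impulse (stage B)."""
    g = np.asarray(g, dtype=float)
    pad = s_max // 2
    padded = np.pad(g, pad, mode="reflect")
    out = np.empty_like(g)
    rows, cols = g.shape
    for i in range(rows):
        for j in range(cols):
            s = 3
            while True:
                h = s // 2
                win = padded[i + pad - h : i + pad + h + 1,
                             j + pad - h : j + pad + h + 1]
                z_min, z_max, z_med = win.min(), win.max(), np.median(win)
                z_xy = g[i, j]
                if z_min < z_med < z_max:      # stage A: median is not an impulse
                    # stage B: keep the centre pixel unless it is an impulse
                    out[i, j] = z_xy if z_min < z_xy < z_max else z_med
                    break
                s += 2                          # grow the window and retry
                if s > s_max:
                    out[i, j] = z_med           # window limit reached
                    break
    return out
```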
The key to understanding the mechanics of this algorithm is to keep in mind that it has three main
purposes: to remove salt-and-pepper (impulse) noise, to provide smoothing of other noise that
may not be impulsive, and to reduce distortion such as excessive thinning or thickening of object
boundaries. The values z_min and z_max are considered statistically by the algorithm to be "impulse-like" noise components, even if these are not the lowest and highest possible pixel values
in the image.
6.4 Inverse Filtering:
The inverse filter is a straightforward image restoration method. If we know the exact point
spread function model in the image degradation system and ignore the noise effect, the degraded
image can be restored using the inverse filter approach. In practice, the PSF model of the blurred
image is usually unknown and the degraded process is also affected by the noise, so the
restoration result with inverse filter is not usually perfect. But the major advantage of inverse
filter-based image restoration is that it is simple. The simplest approach to restoration is direct inverse filtering, where we compute an estimate F̂(u, v) of the transform of the original image simply by dividing the transform of the degraded image, G(u, v), by the degradation function H(u, v):
F̂(u, v) = G(u, v) / H(u, v) ………………………………………. (6.4-1)
Substituting the right side of equation G (u, v) = H (u, v) F (u, v) + N (u, v) into equation (6.4-1),
then
F̂(u, v) = F(u, v) + N(u, v) / H(u, v) …………………………. (6.4-2)
This is an interesting expression. It tells us that even if we know the degradation function we
cannot recover the undegraded image [the inverse Fourier transform of F(u, v)] exactly because N(u, v) is a random function whose Fourier transform is not known.
There is more bad news. If the degradation function has zero or very small values, then the ratio N(u, v) / H(u, v) could easily dominate the estimate F̂(u, v). This, in fact, is frequently the case.
One approach to get around the zero or small-value problem is to limit the filter frequencies to
values near the origin. We know that H(0, 0) is equal to the average value of h(x , y) and that this
is usually the highest value of H(u, v) in the frequency domain. Thus, by limiting the analysis to
frequencies near the origin, we reduce the probability of encountering zero values. To obtain the restored image in the spatial domain we take the inverse Fourier transform:
f̂(x, y) = ℱ⁻¹[ F̂(u, v) ] = ℱ⁻¹[ G(u, v) / H(u, v) ]
The inverse filter produces perfect reconstruction of the original image in the absence of noise.
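A minimal FFT-based inverse-filtering sketch. To avoid dividing by near-zero values of H(u, v), frequencies where |H| is below a small threshold are left untouched (a pseudo-inverse); the threshold value is an assumption.

```python
import numpy as np

def inverse_filter(g, H, eps=1e-3):
    """Direct inverse filtering: F_hat(u, v) = G(u, v) / H(u, v).

    g   : degraded image (spatial domain)
    H   : degradation function H(u, v), same shape as g, unshifted layout
    eps : frequencies with |H| < eps are left untouched (pseudo-inverse)
          to avoid the noise-amplification problem discussed above.
    """
    G = np.fft.fft2(g)
    mask = np.abs(H) > eps
    safe_H = np.where(mask, H, 1.0)
    F_hat = np.where(mask, G / safe_H, G)
    return np.real(np.fft.ifft2(F_hat))
```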
6.5 Wiener Filtering: The Wiener filter tries to build an optimal estimate of the original image by
enforcing a minimum mean square error constraint between estimate and original image. The
Wiener filter is an optimum filter. The objective of the Wiener filter is to minimize the mean square error; therefore it is also called minimum mean square error filtering. A Wiener filter incorporates both the degradation function and the statistical characteristics of noise into the restoration process.
The method is founded on considering images and noise as random processes, and the objective is to find an estimate f̂ of the uncorrupted image f such that the mean square error between them is minimized. This error measure is given by
e² = E{ (f − f̂)² } …………………………………… (6.5-1)
Where, E { . } is the expected value of the argument. It is assumed that the noise and the image
are uncorrelated; that one or the other has zero mean; and that the gray levels in the estimate are a
linear function of the levels in the degraded image. Based on these conditions, the minimum of
the error function in Eq. (6.5-1) is given in the frequency domain by the expression
F̂(u, v) = [ (1 / H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + Sη(u, v) / Sf(u, v) ) ] G(u, v) ……………. (6.5-2)
where Sη(u, v) = |N(u, v)|² is the power spectrum of the noise and Sf(u, v) = |F(u, v)|² is the power spectrum of the undegraded image.
Here we used the fact that the product of a complex quantity with its conjugate is equal to the magnitude of the complex quantity squared. This result is known as the Wiener filter, after N. Wiener, who first proposed the concept in 1942.
The filter, which consists of the terms inside the brackets, also is commonly referred to as the
minimum mean square error filter or the least square error filter. Note from the first line in Eq.
(6.5-2) that the Wiener filter does not have the same problem as the inverse filter with zeros in
the degradation function, unless both H(u, v) and Sη(u, v) are zero for the same value(s) of u
and v.
As before, H (u, v) is the transform of the degradation function and G (u, v) is the transform of
the degraded image. The restored image in the spatial domain is given by the inverse Fourier
transform of the frequency-domain estimate F̂(u, v). Note that if the noise is zero, then the noise
power spectrum vanishes and the Wiener filter reduces to the inverse filter.
When we are dealing with spectrally white noise, the spectrum Sη(u, v) is a constant, which
simplifies things considerably. However, the power spectrum of the undegraded image seldom is
known. An approach used frequently when these quantities are not known or cannot be estimated is to approximate Eq. (6.5-2) by the expression
F̂(u, v) = [ (1 / H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + K ) ] G(u, v)
where K is a specified constant, usually adjusted interactively for the best visual result.
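A minimal sketch of the constant-K approximation above, written in the equivalent form F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + K ) ] G(u, v); the value of K is an assumption to be tuned.

```python
import numpy as np

def wiener_filter(g, H, K=0.01):
    """Parametric Wiener filter: F_hat = conj(H) / (|H|^2 + K) * G,
    where the constant K stands in for the unknown ratio S_eta / S_f."""
    G = np.fft.fft2(g)
    F_hat = (np.conj(H) / (np.abs(H) ** 2 + K)) * G
    return np.real(np.fft.ifft2(F_hat))
```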
6.6 Estimating the Degradation Function: There are three principal ways to estimate the degradation function for use in image restoration:
(1) observation, (2) experimentation, and (3) mathematical modeling. The process of restoring an
image by using a degradation function that has been estimated in some way sometimes is called
blind deconvolution, due to the fact that the true degradation function is seldom known
completely.
a) Estimation by Image Observation
Suppose that we are given a degraded image without any knowledge about the degradation
function H. One way to estimate this function is to gather information from the image itself. For
example, if the image is blurred, we can look at a small section of the image containing simple
structures, like part of an object and the background. In order to reduce the effect of noise in our
observation, we would look for areas of strong signal content. Using sample gray levels of the
object and background, we can construct an unblurred image of the same size and characteristics
as the observed subimage. Let the observed subimage be denoted by gs(x, y), and let the constructed subimage (which in reality is our estimate of the original image in that area) be denoted by f̂s(x, y). Then, assuming that the effect of noise is negligible because of our choice of a strong-signal area, an estimate of the degradation function over that area is
Hs(u, v) = Gs(u, v) / F̂s(u, v)
From the characteristics of this function we then deduce the complete function H(u, v) by
making use of the fact that we are assuming position invariance. For example, suppose that a
radial plot of Hs(u, v) turns out to have the shape of a Butterworth lowpass filter. We can use that
information to construct a function H (u, v) on a larger scale, but having the same shape.
b) Estimation by Experimentation
If equipment similar to the equipment used to acquire the degraded image is available, it is
possible in principle to obtain an accurate estimate of the degradation. Images similar to the
degraded image can be acquired with various system settings until they are degraded as closely as
possible to the image we wish to restore. Then the idea is to obtain the impulse response of the
degradation by imaging an impulse (small dot of light) using the same system settings. A linear, position-invariant system is characterized completely by its impulse response, so the degradation function can be obtained as
H(u, v) = G(u, v) / A
where, as before, G(u, v) is the Fourier transform of the observed image and A is a constant describing the strength of the impulse.
c) Estimation by Modeling
Degradation modeling has been used for many years because of the insight it affords into the
image restoration problem. In some cases, the model can even take into account environmental
conditions that cause degradations. For example, a degradation model based on the physical characteristics of atmospheric turbulence has the form
H(u, v) = e^( −k (u² + v²)^(5/6) )
where k is a constant that depends on the nature of the turbulence. With the exception of the 5/6 power on the exponent, this equation has the same form as the Gaussian lowpass filter; in fact, the Gaussian lowpass filter is sometimes used to model mild, uniform blurring.
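A minimal sketch that builds the turbulence degradation function H(u, v) = e^(−k (u² + v²)^(5/6)) on a centred frequency grid and applies it to an image; the value of k is illustrative.

```python
import numpy as np

def turbulence_H(shape, k=0.0025):
    """Atmospheric-turbulence degradation function
    H(u, v) = exp(-k * (u^2 + v^2)^(5/6)) on a centred frequency grid."""
    M, N = shape
    U, V = np.meshgrid(np.arange(M) - M // 2,
                       np.arange(N) - N // 2, indexing="ij")
    H = np.exp(-k * (U ** 2 + V ** 2) ** (5.0 / 6.0))
    return np.fft.ifftshift(H)      # align with the unshifted FFT layout

# Apply the model: G(u, v) = H(u, v) F(u, v)   (noise-free case)
f = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
g = np.real(np.fft.ifft2(turbulence_H(f.shape) * np.fft.fft2(f)))
```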
6.7 Introduction to Image Reconstruction from Projections:
Tomography is a technique for the visualization of the internal structures of an object without the superposition of over- and underlying structures that usually plagues conventional projection images.
1) For example, in a conventional chest radiograph, the heart, lungs, and ribs are all superimposed
on the same film, whereas a computed tomography (CT) slice captures each organ in its actual
three-dimensional position.
2) Tomography has found widespread application in many scientific fields, including physics and medicine.
3) While X-ray CT may be the most familiar application of tomography, tomography can be
performed, even in medicine, using other imaging modalities, including ultrasound, magnetic resonance imaging (MRI), and nuclear medicine.
4) The measurements made in each modality are:
a) Computed tomography (CT): The number of X-ray photons transmitted through the patient along individual projection lines.
b) Nuclear medicine: The number of photons emitted from the patient along individual projection lines.
5) In each case, the measurements are related to some physical quantity in the object. The quantities that can be reconstructed are:
a) CT: The distribution of the linear attenuation coefficient in the slice being imaged.
b) Nuclear medicine: The distribution of the radiotracer concentration in the slice being imaged.
6) Remarkably, under certain conditions, the measurements made in each modality can be
converted into samples of the Radon transform of the distribution that we wish to reconstruct. For
example, in CT, dividing the measured photon counts by the incident photon counts and taking
the negative logarithm yields samples of the Radon transform of the linear attenuation map.
7) The Radon transform and its inverse provide the mathematical basis for reconstructing tomographic images from measured projection data.
We will focus on explaining the Radon transform of an image function and discussing the
inversion of the Radon transform in order to reconstruct the image. We will discuss only the 2D
Radon transform, although some of the discussion could be readily generalized to the 3D Radon
transform.
A) The Radon Transform
The Radon transform (RT) of a distribution f(x, y) is given by
g(p, Ø) = ∫∫ f(x, y) δ(x cos Ø + y sin Ø − p) dx dy ……………………… (1)
where δ is the Dirac delta function and the coordinates x, y, p, and Ø are defined in the figure below.
B) The Sinogram
The function g(p, Ø) is often referred to as a sinogram because the Radon transform of an off-center point source traces out a sinusoid in the (p, Ø) plane. The image reconstruction problem is to recover f(x, y) given knowledge of g(p, Ø). The sinogram has many nice mathematical properties, which are exploited by reconstruction algorithms.
C) Backprojection
The solution to the inverse Radon transform is based on the central slice theorem (CST), which relates the 1D Fourier transform of a projection to the 2D Fourier transform of the object.
The CST states that the value of the 2D FT of f(x, y) along a line at the inclination angle Ø is given by the 1D FT P(v, Ø) of its projection taken at the same angle.
Hence, with enough projections, P(v, Ø) can fill the vx–vy space to generate F(vx, vy), from which the image can be recovered by the inverse 2D Fourier transform:
f(x, y) = ∫∫ F(vx, vy) e^( j2π(vx x + vy y) ) dvx dvy …………………….. (2)
E) Reconstruction Approaches
There are many tomographic reconstruction techniques. Here, we consider only two of them: direct Fourier reconstruction and filtered backprojection.
Fig 6.7.4: Flow of direct Fourier reconstruction
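A minimal sketch of the projection/reconstruction round trip using scikit-image: the Radon transform produces the sinogram, and iradon performs filtered backprojection. The Shepp–Logan phantom, the number of angles, and the filter_name argument (recent scikit-image versions) are assumptions.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                        # standard test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=theta)                 # projections g(p, phi)
reconstruction = iradon(sinogram, theta=theta,       # filtered backprojection
                        filter_name="ramp")
```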
Practical concerns in tomographic imaging include reducing scanning time, reducing noise, reducing motion artifact, and reducing patient dose. Common artifacts include the following.
Streak artifacts
– occur when there is a sharp intensity change caused by, for example, bones.
Incomplete/missing data
– caused by acquisition limitations.
- metal artifacts: In CT, metal blocks the radiation, leading to missing data in its
shadow.
Motion artifact
– caused by patient motion, such as respiration and heartbeat, during data acquisition.
Noise
– lower noise level can be obtained by increasing the radiation dose, and/or the
acquisition time.
– perception of structures depends on the contrast, size, and noise level.
Digital image restoration techniques are widely used in the following fields of image processing:
1. Astronomical imaging
2. Medical imaging
3. Printing industry
4. Defense applications
1. Astronomical imaging: Astronomical images are degraded by motion blur, which is due to slow camera shutter speeds relative to rapid spacecraft motion. Because both noise and degradation must be minimized, astronomical imaging is one of the primary applications of digital image restoration.
2. Medical imaging: X-ray, mammograms and digital angiographic images are often corrupted
by Poisson noise. Additive noise is common in magnetic resonance imaging. These noises should
be removed for proper diagnostics of diseases. This can be accomplished through digital image
restoration techniques.
3. Printing industry Applications: Printing applications often require the use of restoration to
ensure that halftone reproductions of continuous-tone images are of high quality. Image restoration can also be used to improve the quality of scanned and printed documents.
4. Defense applications: The image obtained from guided missiles may be degraded due to the
effects of pressure differences around a camera mounted on the missile. Proper image restoration techniques are used to remove such degradations before the images are analyzed.
Applications of digital image processing in various fields:
1. Medical field: Digital image processing techniques like image segmentation and pattern
recognition are used in digital mammography to identify tumours. Techniques like image
registration and fusion play a crucial role in extracting information from medical images. In the
field of telemedicine, lossless compression algorithms allow the medical images to be transmitted
effectively from one place to another. Some examples of image processing in medical field are:
a) X-ray Imaging: X-rays are among the oldest sources of EM radiation used for imaging. The
best known use of X-rays is medical diagnostics, but they also are used extensively in industry and
other areas, like astronomy. X-rays for medical and industrial imaging are generated using an X-
ray tube, which is a vacuum tube with a cathode and anode. The cathode is heated, causing free
electrons to be released. These electrons flow at high speed to the positively charged anode. When
the electrons strike a nucleus, energy is released in the form of X-ray radiation. The energy
(penetrating power) of the X-rays is controlled by a voltage applied across the anode, and the
number of X-rays is controlled by a current applied to the filament in the cathode. Angiography is
another major application in an area called contrast-enhancement radiography. This procedure is used to obtain images (called angiograms) of blood vessels.
b) Gamma-Ray Imaging: Major uses of imaging based on gamma rays include nuclear medicine
and astronomical observations. In nuclear medicine, the approach is to inject a patient with a
radioactive isotope that emits gamma rays as it decays. Images are produced from the emissions collected by gamma-ray detectors.
c) Imaging in the Ultraviolet Band: Applications of ultraviolet “light” are varied. They include lithography, industrial inspection, microscopy, lasers, biological imaging, and astronomical observations. We illustrate imaging in this band with examples from microscopy and astronomy.
Ultraviolet light is used in fluorescence microscopy, one of the fastest growing areas of
microscopy.
d) Imaging in the Radio Band: The major applications of imaging in the radio band are in
medicine and astronomy. In medicine, radio waves are used in magnetic resonance imaging (MRI).
This technique places a patient in a powerful magnet and passes radio waves through his or her
body in short pulses. Each pulse causes a responding pulse of radio waves to be emitted by the
patient’s tissues. The location from which these signals originate and their strength is determined
by a computer, which produces a two-dimensional picture of a section of the patient. MRI can produce pictures in any plane.
2. Personal identification field: The biometric traits commonly used in personal identification are face, fingerprint, iris, vein pattern, etc. Preprocessing of the input data is necessary for efficient personal identification. Commonly used preprocessing steps include image enhancement, noise removal, and segmentation.
3. Remote sensing field: Remote sensing is the use of remote observation to make useful measurements of the earth's surface. Remote sensing systems record reflected or emitted electromagnetic radiation, with different wavelengths of the radiation carrying a variety of information about the earth's surface and atmosphere. Remote sensing is being used increasingly to provide data for
diverse applications like planning, hydrology, agriculture, geology and forestry. The image
processing techniques used in the field of remote sensing include image enhancement and image classification.
4. Communication field: Image and video compression allows multimedia data to be easily transmitted through the Internet. Video conferencing helps people in different locations
to interact lively. For video conferencing to be effective, the information has to be transmitted
fast. Effective image and video compression algorithms like JPEG, JPEG2000, and the H.26x standards help to transmit the data effectively for live video conferencing. Some examples of imaging in communication-related bands are:
a) Imaging in the Microwave Band:
The dominant application of imaging in the microwave band is radar. The unique feature of
imaging radar is its ability to collect data over virtually any region at any time, regardless of
weather or ambient lighting conditions. Some radar waves can penetrate clouds, and under certain
conditions can also see through vegetation, ice, and extremely dry sand. In many cases, radar is
the only way to explore inaccessible regions of the Earth’s surface. Imaging radar works like a
flash camera in that it provides its own illumination (microwave pulses) to illuminate an area on
the ground and take a snapshot image. Instead of a camera lens, radar uses an antenna and digital
computer processing to record its images. In a radar image, one can see only the microwave energy that was reflected back toward the radar antenna.
b) Imaging in the Visible and Infrared Bands: The infrared band often is used in conjunction with visual imaging, so we have grouped the visible and infrared bands together. Applications include light microscopy, astronomy, remote sensing, industry, and law enforcement.
5. Automotive field: The latest development in the automotive sector is the 'night vision system'.
Night vision system helps to identify obstacles during night time to avoid accidents. Infrared
cameras are invariably used in a night vision system. The image processing techniques commonly
used in a night vision system include image enhancement, boundary detection and object
recognition.