
Unit 6: Image restoration and reconstruction

Image degradation/restoration model, noise models, and restoration in the presence of noise in the spatial domain; inverse filtering, Wiener filtering, introduction to image reconstruction from projections; applications of image processing.

6.1 A Model of the Image Degradation / Restoration Process:

Image Degradation: The process by which an image is degraded is usually very complex and often unknown. To simplify the calculations, the degradation is often modeled as a linear function, referred to as the point spread function (PSF). The different causes of image degradation are

1) Improper opening and closing of the shutter

2) Atmospheric turbulence

3) Misfocus of lens

4) Relative motion between the camera and the object, which causes motion blur. Blurring is a form of bandwidth reduction of an ideal image owing to the imperfect image-formation process.

Image Restoration: Image restoration aims at reversing the degradation undergone by an image in order to recover the original image.

An image may be corrupted by degradations such as linear frequency distortion, noise, and blocking artifacts. The degradation consists of two distinct processes: the deterministic blur and the random noise. The noise may originate in the image-formation process, the transmission process, or a combination of them. Most restoration techniques model the degradation process and attempt to apply an inverse procedure to obtain an approximation of the original image. Iterative image restoration techniques often attempt to restore an image linearly or non-linearly by minimizing some measure of degradation, using criteria such as maximum likelihood or constrained least squares. Blind restoration techniques attempt to solve the restoration problem without knowing the blurring function.

Fig.6.1: A model of the image degradation / restoration process.

As Fig. 6.1 shows, the degradation process is modeled as a degradation function that, together with an additive noise term, operates on an input image f(x, y) to produce a degraded image g(x, y). Given g(x, y), some knowledge about the degradation function H, and some knowledge about the additive noise term η(x, y), the objective of restoration is to obtain an estimate f̂(x, y) of the original image. We want the estimate to be as close as possible to the original input image and, in general, the more we know about H and η, the closer f̂(x, y) will be to f(x, y). The approach used is based on various types of image restoration filters. If H is a linear, position-invariant process, then the degraded image is given in the spatial domain by

g(x, y) = h(x, y) * f(x, y) + η(x, y) ………………………………….. (6.1-1)

where h(x, y) is the spatial representation of the degradation function and the symbol "*" indicates convolution. We know that convolution in the spatial domain is equal to multiplication in the frequency domain, so we may write the model in Eq. (6.1-1) in an equivalent frequency-domain representation:

G(u, v) = H(u, v)F(u, v) + N(u, v) …………………………………….. (6.1-2)

where the terms in capital letters are the Fourier transforms of the corresponding terms in Eq. (6.1-1).
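As a quick check of this model, the following sketch (hypothetical 8×8 data, NumPy only) builds g(x, y) both ways — circular convolution in the spatial domain per Eq. (6.1-1), and multiplication of Fourier transforms per Eq. (6.1-2) — and confirms they agree:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))                         # hypothetical test image
h = np.zeros((8, 8)); h[:3, :3] = 1.0 / 9.0    # 3x3 box blur as the PSF h(x, y)
eta = 0.01 * rng.standard_normal((8, 8))       # additive noise term

# Spatial domain, Eq. (6.1-1): circular convolution h * f plus noise.
g_spatial = np.zeros_like(f)
for s in range(8):
    for t in range(8):
        g_spatial += h[s, t] * np.roll(np.roll(f, s, axis=0), t, axis=1)
g_spatial += eta

# Frequency domain, Eq. (6.1-2): G = H F + N, then invert.
G = np.fft.fft2(h) * np.fft.fft2(f) + np.fft.fft2(eta)
g = np.real(np.fft.ifft2(G))

assert np.allclose(g, g_spatial)   # the two representations agree
```

The FFT route is what restoration filters actually manipulate; the spatial loop is only there to verify the convolution theorem on a small example.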

6.2 Noise Models:

The principal sources of noise in digital images arise during image acquisition (digitization) and/or transmission. The performance of imaging sensors is affected by a variety of factors, such as environmental conditions during image acquisition, and by the quality of the sensing elements themselves. For instance, in acquiring images with a CCD camera, light levels and sensor temperature are major factors affecting the amount of noise in the resulting image. Images are corrupted during transmission principally due to interference in the channel used for transmission. For example, an image transmitted using a wireless network might be corrupted as a result of lightning or other atmospheric disturbance.

A) Types of noise model:

Basically, there are three standard noise models which model the types of noise encountered in most images: additive noise, multiplicative noise, and impulse noise.

1) Additive noise: Let f(x, y) be the original image, f′(x, y) be the noisy digitized version, and η(x, y) be the noise function, which returns values coming from an arbitrary distribution. Then additive noise is given by the equation

f′(x, y) = f(x, y) + η(x, y)

Additive noise is independent of the pixel values in the original image. Typically, η(x, y) is symmetric about zero. This has the effect of not altering the average brightness of the image. Additive noise is a good model for the thermal noise within photo-electronic sensors.

2) Multiplicative noise: Multiplicative noise, or speckle noise, is a signal-dependent form of noise whose magnitude is related to the value of the original pixel. The simple mathematical expression for a multiplicative noise model is

f′(x, y) = f(x, y) + η(x, y) f(x, y)

which can be written as

f′(x, y) = f(x, y) [1 + η(x, y)]


3) Impulse noise: Impulse noise has the property of either leaving a pixel unmodified with probability 1 − p or replacing it altogether with probability p. Restricting η(x, y) to produce only the extreme intensities 0 and 1 results in salt-and-pepper noise. Impulse noise is usually the result of an error in transmission or an atmospheric or man-made disturbance.
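The three models can be sketched with synthetic data (all values here are hypothetical); note how additive noise preserves the mean brightness while impulse noise replaces a fraction p of pixels with extremes:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(0.2, 0.8, size=(64, 64))   # hypothetical image, values in [0, 1]

# 1) Additive: f' = f + eta, with eta symmetric about zero.
additive = f + rng.normal(0.0, 0.05, f.shape)

# 2) Multiplicative (speckle): f' = f (1 + eta); corruption scales with pixel value.
speckle = f * (1.0 + rng.normal(0.0, 0.05, f.shape))

# 3) Impulse: each pixel is replaced with probability p, else left unmodified;
#    restricting replacements to 0 and 1 gives salt-and-pepper noise.
p = 0.1
impulse = f.copy()
mask = rng.random(f.shape) < p
impulse[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))

# Additive noise leaves the average brightness essentially unchanged.
assert abs(additive.mean() - f.mean()) < 0.01
```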

B) Spatial and Frequency Properties of Noise

The spatial properties of noise are the parameters that define its spatial characteristics and whether the noise is correlated with the image. Frequency properties refer to the frequency content of noise in the Fourier sense (i.e., as opposed to the electromagnetic spectrum). For example, when the Fourier spectrum of noise is constant, the noise usually is called white noise. This terminology is a carryover from the physical properties of white light, which contains nearly all frequencies in the visible spectrum in equal proportions. It is not difficult to show that the Fourier spectrum of a function containing all frequencies in equal proportions is a constant.

With the exception of spatially periodic noise, we assume that noise is independent of spatial coordinates and that it is uncorrelated with respect to the image itself (that is, there is no correlation between pixel values and the values of noise components). These assumptions are at least partially invalid in some applications (quantum-limited imaging, such as X-ray and nuclear-medicine imaging, is a good example).

C) Types of noise with its Noise Probability Density Functions


The spatial noise descriptor with which we shall be concerned is the statistical behavior of the gray-level values in the noise component of the model. These may be considered random variables, characterized by a probability density function (PDF). The following are among the most common PDFs found in image processing applications.

a) Gaussian noise

Because of its mathematical tractability in both the spatial and frequency domains, Gaussian (also called normal) noise models are used frequently in practice. In fact, this tractability is so convenient that it often results in Gaussian models being used in situations in which they are marginally applicable at best. The PDF of a Gaussian random variable z is given by

p(z) = (1 / (√(2π) σ)) e^(−(z − µ)² / 2σ²) …………………… (6.2.1)

where z represents gray level, µ is the mean (average) value of z, and σ is its standard deviation. The standard deviation squared, σ², is called the variance of z. A plot of this function is shown in Fig. 6.2(a). When z is described by Eq. (6.2.1), approximately 70% of its values will be in the range [(µ − σ), (µ + σ)], and about 95% will be in the range [(µ − 2σ), (µ + 2σ)].
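These coverage figures are easy to verify by sampling (µ and σ below are hypothetical gray-level values; the exact one-σ fraction is about 68.3%, which the text rounds to 70%):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 128.0, 20.0                # hypothetical mean gray level and spread
z = rng.normal(mu, sigma, size=100_000)

within1 = np.mean(np.abs(z - mu) <= sigma)       # close to 0.683
within2 = np.mean(np.abs(z - mu) <= 2 * sigma)   # close to 0.954
assert 0.66 < within1 < 0.70
assert 0.94 < within2 < 0.97
```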


Fig. 6.2(a) PDF of Gaussian noise versus z

b) Rayleigh noise

The PDF of Rayleigh noise is given by

p(z) = (2 / b)(z − a) e^(−(z − a)² / b) for z ≥ a, and p(z) = 0 for z < a ………………… (6.2.2)

The mean and variance of this density are given by

µ = a + √(πb / 4) and σ² = b(4 − π) / 4

Figure 6.2(b) shows a plot of the Rayleigh density. Note the displacement from the origin and the fact that the basic shape of this density is skewed to the right. The Rayleigh density can be quite useful for approximating skewed histograms.

Fig. 6.2(b) PDF of Rayleigh noise versus z.

c) Erlang (Gamma) noise:

The PDF of Erlang noise is given by

p(z) = (aᵇ z^(b−1) / (b − 1)!) e^(−az) for z ≥ 0, and p(z) = 0 for z < 0 ………………………… (6.2.3)

where the parameters are such that a > 0, b is a positive integer, and "!" indicates factorial. The mean and variance of this density are given by

µ = b / a and σ² = b / a²

Figure 6.2(c) shows a plot of this density. Although Eq. (6.2.3) often is referred to as the gamma density, strictly speaking this is correct only when the denominator is the gamma function Γ(b). When the denominator is as shown, the density is more appropriately called the Erlang density.

Fig. 6.2(c) PDF of Erlang (gamma) noise versus z

d) Exponential noise:

The PDF of exponential noise is given by

p(z) = a e^(−az) for z ≥ 0, and p(z) = 0 for z < 0 ………………………….. (6.2.4)

where a > 0. The mean and variance of this density function are

µ = 1 / a and σ² = 1 / a²

Note that this PDF is a special case of the Erlang PDF, with b = 1. Figure 6.2(d) shows a plot of this density function.

Fig. 6.2(d) PDF of exponential noise versus z

e) Uniform noise:

The PDF of uniform noise is given by

p(z) = 1 / (b − a) for a ≤ z ≤ b, and p(z) = 0 otherwise ………………… (6.2.5)

The mean of this density function is given by

µ = (a + b) / 2

and its variance is

σ² = (b − a)² / 12

Figure 6.2(e) shows a plot of the uniform density.

Fig. 6.2(e) PDF of uniform noise versus z

f) Impulse (salt-and-pepper) noise:

The PDF of (bipolar) impulse noise is given by

p(z) = Pa for z = a; p(z) = Pb for z = b; and p(z) = 0 otherwise ……………………………. (6.2.6)

If b > a, gray level b will appear as a light dot in the image; conversely, level a will appear as a dark dot. If either Pa or Pb is zero, the impulse noise is called unipolar. If neither probability is zero, and especially if they are approximately equal, impulse noise values will resemble salt-and-pepper granules randomly distributed over the image. For this reason, bipolar impulse noise also is called salt-and-pepper noise. Shot and spike noise also are terms used to refer to this type of noise. In our discussion we will use the terms impulse and salt-and-pepper noise interchangeably.

Noise impulses can be negative or positive. Scaling usually is part of the image digitizing process. Because impulse corruption usually is large compared with the strength of the image signal, impulse noise generally is digitized as extreme (pure black or white) values in an image. Thus, the assumption usually is that a and b are "saturated" values, in the sense that they are equal to the minimum and maximum allowed values in the digitized image. As a result, negative impulses appear as black (pepper) points in an image. For the same reason, positive impulses appear as white (salt) points. For an 8-bit image this means that a = 0 (black) and b = 255 (white). Figure 6.2(f) shows the PDF of impulse noise.

Fig. 6.2(f) PDF of impulse noise versus z

D) Periodic Noise:

Periodic noise in an image typically arises from electrical or electromechanical interference during image acquisition. This is the only type of spatially dependent noise considered here. Periodic noise can be reduced significantly via frequency-domain filtering.

6.3 Restoration in Presence of Noise in Spatial Domain (Spatial filtering)

The restoration process means recovering the original image f(x, y) in the presence of noise by using spatial filtering. The objective of restoration is to obtain an estimate f̂(x, y) of the original image. The degraded image is given in the spatial domain by

g(x, y) = h(x, y) * f(x, y) + η(x, y) ………………………………….. (6.3-1)

where h(x, y) is the spatial representation of the degradation function and the symbol "*" indicates convolution. We know that convolution in the spatial domain is equal to multiplication in the frequency domain, so we may write the model in Eq. (6.3-1) in an equivalent frequency-domain representation:

G(u, v) = H(u, v)F(u, v) + N(u, v) …………………………………….. (6.3-2)

where the terms in capital letters are the Fourier transforms of the corresponding terms in Eq. (6.3-1). When the only degradation present in an image is noise, Eqs. (6.3-1) and (6.3-2) become

g(x, y) = f(x, y) + η(x, y) ………………………………….. (6.3-3)

and

G(u, v) = F(u, v) + N(u, v) …………………………………….. (6.3-4)

The noise terms are unknown, so subtracting them from g(x, y) or G(u, v) is not a realistic option. In the case of periodic noise, it usually is possible to estimate N(u, v) from the spectrum of G(u, v). In this case N(u, v) can be subtracted from G(u, v) to obtain an estimate of the original image. In general, however, this type of knowledge is the exception rather than the rule. Spatial filtering is the method of choice in situations when only additive noise is present. Spatial filtering consists of three types of filtering:

A) Mean filters

B) Order-statistics Filters

C) Adaptive Filters

A) Mean Filters: In this section we briefly discuss noise-reduction spatial filters and develop several filters whose performance is in many cases superior to the basic smoothing filters introduced earlier:

a) Arithmetic mean filter

This is the simplest of the mean filters. Let Sxy represent the set of coordinates in a rectangular subimage window of size m × n, centered at point (x, y). The arithmetic mean filtering process computes the average value of the corrupted image g(x, y) in the area defined by Sxy. The value of the restored image f̂ at any point (x, y) is simply the arithmetic mean computed using the pixels in the region defined by Sxy. In other words,

f̂(x, y) = (1 / mn) Σ_(s,t)∈Sxy g(s, t) ………………………………… (6.3-5)

This operation can be implemented using a convolution mask in which all coefficients have value 1/mn. A mean filter simply smoothes local variations in an image. Noise is reduced as a result of blurring.

b) Geometric mean filter

An image restored using a geometric mean filter is given by the expression

f̂(x, y) = [ Π_(s,t)∈Sxy g(s, t) ]^(1/mn) …………………………………… (6.3-6)

Here, each restored pixel is given by the product of the pixels in the subimage window, raised to the power 1/mn. A geometric mean filter achieves smoothing comparable to the arithmetic mean filter, but it tends to lose less image detail in the process.

c) Harmonic mean filter

The harmonic mean filtering operation is given by the expression

f̂(x, y) = mn / Σ_(s,t)∈Sxy (1 / g(s, t)) ………………………………………. (6.3-7)

The harmonic mean filter works well for salt noise, but fails for pepper noise. It also does well with other types of noise, like Gaussian noise.

d) Contraharmonic mean filter:

The contraharmonic mean filtering operation yields a restored image based on the expression

f̂(x, y) = Σ_(s,t)∈Sxy g(s, t)^(Q+1) / Σ_(s,t)∈Sxy g(s, t)^Q ………………………………….. (6.3-8)

where Q is called the order of the filter. This filter is well suited for reducing or virtually eliminating the effects of salt-and-pepper noise. For positive values of Q, the filter eliminates pepper noise; for negative values of Q, it eliminates salt noise. It cannot do both simultaneously. Note that the contraharmonic filter reduces to the arithmetic mean filter if Q = 0, and to the harmonic mean filter if Q = −1.
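A direct (unoptimized) sketch of Eq. (6.3-8); the window size, random data, and edge padding below are arbitrary choices, and the usage verifies the Q = 0 and Q = −1 reductions numerically:

```python
import numpy as np

def contraharmonic(g, m, n, Q):
    # Contraharmonic mean filter of order Q over an m x n window, Eq. (6.3-8).
    # Edge padding at the borders; a sketch, not an optimized implementation.
    pad = np.pad(g, ((m // 2,), (n // 2,)), mode='edge')
    out = np.empty_like(g, dtype=float)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = pad[x:x + m, y:y + n].astype(float)
            out[x, y] = (w ** (Q + 1)).sum() / (w ** Q).sum()
    return out

rng = np.random.default_rng(3)
img = rng.uniform(1.0, 255.0, (16, 16))     # strictly positive pixels

# Q = 0 reduces to the arithmetic mean; Q = -1 to the harmonic mean.
pad = np.pad(img, 1, mode='edge')
arith = np.array([[pad[x:x + 3, y:y + 3].mean() for y in range(16)]
                  for x in range(16)])
harm = np.array([[9.0 / (1.0 / pad[x:x + 3, y:y + 3]).sum() for y in range(16)]
                 for x in range(16)])
assert np.allclose(contraharmonic(img, 3, 3, 0), arith)
assert np.allclose(contraharmonic(img, 3, 3, -1), harm)
```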

B) Order-Statistics Filters: Order-statistics filters are spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter. The response of the filter at any point is determined by the ranking result.


a) Median filter

The best-known order-statistics filter is the median filter, which, as its name implies, replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel:

f̂(x, y) = median_(s,t)∈Sxy {g(s, t)} ……………………………………….. (6.3-9)

The original value of the pixel is included in the computation of the median. Median filters are quite popular because, for certain types of random noise, they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size. Median filters are particularly effective in the presence of both bipolar and unipolar impulse noise. In fact, the median filter yields excellent results for images corrupted by this type of noise.

b) Max and min filters

Although the median filter is by far the order-statistics filter most used in image processing, it is by no means the only one. The median represents the 50th percentile of a ranked set of numbers, but the reader will recall from basic statistics that ranking lends itself to many other possibilities. For example, using the 100th percentile results in the so-called max filter, given by

f̂(x, y) = max_(s,t)∈Sxy {g(s, t)} …………………………………………. (6.3-10)

This filter is useful for finding the brightest points in an image. Also, because pepper noise has very low values, it is reduced by this filter as a result of the max selection process in the subimage area Sxy.

The 0th percentile filter is the min filter:

f̂(x, y) = min_(s,t)∈Sxy {g(s, t)} ………………………………………. (6.3-11)

This filter is useful for finding the darkest points in an image. Also, it reduces salt noise as a result of the min operation.

c) Midpoint filter

The midpoint filter simply computes the midpoint between the maximum and minimum values in the area encompassed by the filter:

f̂(x, y) = ½ [ max_(s,t)∈Sxy {g(s, t)} + min_(s,t)∈Sxy {g(s, t)} ] ………………………. (6.3-12)

Note that this filter combines order statistics and averaging. It works best for randomly distributed noise, like Gaussian or uniform noise.
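The order-statistics family can be prototyped in a few lines (edge padding and window size are arbitrary implementation choices); the usage below shows the median filter removing a single salt impulse:

```python
import numpy as np

def order_stat(g, m, n, stat):
    # Order-statistics filtering over an m x n window, a sketch with edge
    # padding; stat is one of 'median', 'max', 'min', 'midpoint'.
    pad = np.pad(g, ((m // 2,), (n // 2,)), mode='edge')
    out = np.empty_like(g, dtype=float)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = pad[x:x + m, y:y + n]
            if stat == 'median':
                out[x, y] = np.median(w)           # Eq. (6.3-9)
            elif stat == 'max':
                out[x, y] = w.max()                # Eq. (6.3-10)
            elif stat == 'min':
                out[x, y] = w.min()                # Eq. (6.3-11)
            else:
                out[x, y] = 0.5 * (w.max() + w.min())   # midpoint, Eq. (6.3-12)
    return out

img = np.full((9, 9), 100.0)
img[4, 4] = 255.0                     # a single salt impulse
restored = order_stat(img, 3, 3, 'median')
assert restored[4, 4] == 100.0        # impulse removed, background untouched
```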

d) Alpha-trimmed mean filter


Suppose that we delete the d/2 lowest and the d/2 highest gray-level values of g(s, t) in the neighborhood Sxy. Let g_r(s, t) represent the remaining mn − d pixels. A filter formed by averaging these remaining pixels is called an alpha-trimmed mean filter:

f̂(x, y) = (1 / (mn − d)) Σ_(s,t)∈Sxy g_r(s, t) …………………………………………. (6.3-13)

where the value of d can range from 0 to mn − 1. When d = 0, the alpha-trimmed filter reduces to the arithmetic mean filter. If we choose d = mn − 1, the filter becomes a median filter. For other values of d, the alpha-trimmed filter is useful in situations involving multiple types of noise, such as a combination of salt-and-pepper and Gaussian noise.
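A sketch of Eq. (6.3-13) (edge padding, even d assumed); the usage checks the d = mn − 1 case against a directly computed median filter:

```python
import numpy as np

def alpha_trimmed(g, m, n, d):
    # Alpha-trimmed mean filter, Eq. (6.3-13): drop the d/2 lowest and d/2
    # highest values in each m x n window, then average the mn - d that remain.
    pad = np.pad(g, ((m // 2,), (n // 2,)), mode='edge')
    out = np.empty_like(g, dtype=float)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = np.sort(pad[x:x + m, y:y + n], axis=None)
            out[x, y] = w[d // 2: w.size - d // 2].mean()
    return out

rng = np.random.default_rng(4)
img = rng.uniform(0, 255, (12, 12))

# d = mn - 1 = 8 keeps only the middle value: the filter becomes a median filter.
pad = np.pad(img, 1, mode='edge')
med = np.array([[np.median(pad[x:x + 3, y:y + 3]) for y in range(12)]
                for x in range(12)])
assert np.allclose(alpha_trimmed(img, 3, 3, 8), med)
```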

C) Adaptive Filters: The filters discussed so far are applied to an image without regard for how image characteristics vary from one point to another. In this section we examine two simple adaptive filters whose behavior changes based on the statistical characteristics of the image inside the filter region defined by the m × n rectangular window Sxy. Adaptive filters are capable of performance superior to that of the filters discussed so far; the price paid for improved filtering power is an increase in filter complexity. Keep in mind that we still are dealing with the case in which the degraded image is equal to the original image plus noise. No other types of degradations are being considered yet.

a) Adaptive, local noise reduction filter


The simplest statistical measures of a random variable are its mean and variance. These are reasonable parameters on which to base an adaptive filter because they are quantities closely related to the appearance of an image. The mean gives a measure of the average gray level in the region over which it is computed, and the variance gives a measure of the average contrast in that region.

Our filter is to operate on a local region Sxy. The response of the filter at any point (x, y) on which the region is centered is to be based on four quantities:

(i) g(x, y), the value of the noisy image at (x, y);

(ii) σ_η², the variance of the noise corrupting f(x, y) to form g(x, y);

(iii) m_L, the local mean of the pixels in Sxy; and

(iv) σ_L², the local variance of the pixels in Sxy.

We want the behavior of the filter to be as follows:

1. If σ_η² is zero, the filter should return simply the value of g(x, y). This is the trivial, zero-noise case in which g(x, y) is equal to f(x, y).

2. If the local variance is high relative to σ_η², the filter should return a value close to g(x, y). A high local variance typically is associated with edges, and these should be preserved.

3. If the two variances are equal, we want the filter to return the arithmetic mean value of the pixels in Sxy. This condition occurs when the local area has the same properties as the overall image, and local noise is to be reduced simply by averaging.

An adaptive expression for obtaining f̂(x, y) based on these assumptions may be written as

f̂(x, y) = g(x, y) − (σ_η² / σ_L²) [g(x, y) − m_L] …………………………………… (6.3-14)
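Eq. (6.3-14) translates almost line-for-line into code. One practical detail, assumed here, is to clip the ratio σ_η²/σ_L² at 1 wherever the local variance underestimates the noise variance:

```python
import numpy as np

def adaptive_local(g, noise_var, m=7, n=7):
    # Adaptive, local noise-reduction filter, Eq. (6.3-14), a minimal sketch:
    # f_hat = g - (sigma_eta^2 / sigma_L^2)(g - m_L), ratio clipped at 1.
    pad = np.pad(g, ((m // 2,), (n // 2,)), mode='edge')
    out = np.empty_like(g, dtype=float)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = pad[x:x + m, y:y + n]
            m_L, var_L = w.mean(), w.var()
            ratio = 1.0 if var_L <= noise_var else noise_var / var_L
            out[x, y] = g[x, y] - ratio * (g[x, y] - m_L)
    return out

# Behavior 1 above: zero noise variance returns g(x, y) unchanged.
rng = np.random.default_rng(5)
img = rng.uniform(0, 255, (10, 10))
assert np.allclose(adaptive_local(img, 0.0), img)
```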

b) Adaptive median filter

Q.b) Explain adaptive median filter for reducing noise.

Adaptive median filter: Adaptive median filtering can handle impulse noise with probabilities even larger than the traditional median filter can. An additional benefit of the adaptive median filter is that it seeks to preserve detail while smoothing non-impulse noise, something that the "traditional" median filter does not do. The adaptive median filter also works in a rectangular window area Sxy. Unlike those filters, however, the adaptive median filter changes (increases) the size of Sxy during filter operation. Keep in mind that the output of the filter is a single value used to replace the value of the pixel at (x, y), the particular point on which the window Sxy is centered at a given time.

Consider the following notation:

z_min = minimum gray level in Sxy
z_max = maximum gray level in Sxy
z_med = median of gray levels in Sxy
z_xy = gray level at coordinates (x, y)
S_max = maximum allowed size of Sxy

The adaptive median filtering algorithm works in two levels, denoted level A and level B, as follows:

Level A: If z_min < z_med < z_max, go to level B; else increase the window size. If the window size does not exceed S_max, repeat level A; else output z_med.

Level B: If z_min < z_xy < z_max, output z_xy; else output z_med.

The key to understanding the mechanics of this algorithm is to keep in mind that it has three main purposes: to remove salt-and-pepper (impulse) noise, to provide smoothing of other noise that may not be impulsive, and to reduce distortion such as excessive thinning or thickening of object boundaries. The values z_min and z_max are considered statistically by the algorithm to be "impulse-like" noise components, even if these are not the lowest and highest possible pixel values in the image.
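The two-level algorithm can be sketched as follows (S_max, edge padding, and outputting z_med when the window limit is reached are implementation choices, not part of the definition):

```python
import numpy as np

def adaptive_median(g, s_max=7):
    # Adaptive median filter sketch: level A grows the window until the median
    # is not an impulse (or S_max is reached); level B keeps z_xy unless it is
    # itself an impulse, in which case it outputs z_med.
    out = g.astype(float).copy()
    pad = np.pad(g, s_max // 2, mode='edge')
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            s = 3
            while True:
                r = s // 2
                cx, cy = x + s_max // 2, y + s_max // 2
                w = pad[cx - r:cx + r + 1, cy - r:cy + r + 1]
                z_min, z_max, z_med = w.min(), w.max(), np.median(w)
                if z_min < z_med < z_max:                       # level A passed
                    z_xy = g[x, y]
                    out[x, y] = z_xy if z_min < z_xy < z_max else z_med  # level B
                    break
                s += 2                                          # grow the window
                if s > s_max:
                    out[x, y] = z_med                           # window limit hit
                    break
    return out

img = np.full((9, 9), 100.0)
img[2, 2], img[6, 6] = 255.0, 0.0     # one salt and one pepper impulse
restored = adaptive_median(img)
assert restored[2, 2] == 100.0 and restored[6, 6] == 100.0
```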
6.4 Inverse Filtering:

The inverse filter is a straightforward image restoration method. If we know the exact point spread function (PSF) model in the image degradation system and ignore the noise effect, the degraded image can be restored using the inverse filter approach. In practice, the PSF model of the blurred image is usually unknown and the degradation process is also affected by noise, so the restoration result with an inverse filter is usually not perfect; but the major advantage of inverse-filter-based image restoration is its simplicity. The first step is to study restoration of images degraded by a degradation function H. The simplest approach to restoration is direct inverse filtering, where we compute an estimate F̂(u, v) of the transform of the original image simply by dividing the transform of the degraded image, G(u, v), by the degradation function H(u, v):

F̂(u, v) = G(u, v) / H(u, v) ………………………………………. (6.4-1)

Substituting the right side of G(u, v) = H(u, v)F(u, v) + N(u, v) into Eq. (6.4-1) gives

F̂(u, v) = F(u, v) + N(u, v) / H(u, v) …………………………. (6.4-2)
This is an interesting expression. It tells us that even if we know the degradation function we cannot recover the undegraded image [the inverse Fourier transform of F(u, v)] exactly, because N(u, v) is a random function whose Fourier transform is not known.

There is more bad news. If the degradation function has zero or very small values, then the ratio N(u, v)/H(u, v) could easily dominate the estimate F̂(u, v). This, in fact, is frequently the case.

One approach to get around the zero or small-value problem is to limit the filter frequencies to values near the origin. We know that H(0, 0) is equal to the average value of h(x, y) and that this is usually the highest value of H(u, v) in the frequency domain. Thus, by limiting the analysis to frequencies near the origin, we reduce the probability of encountering zero values. Applying the inverse Fourier transform, the restored image in the spatial domain is obtained as

f̂(x, y) = ℱ⁻¹[ F̂(u, v) ] = ℱ⁻¹[ G(u, v) / H(u, v) ]

The inverse filter produces a perfect reconstruction of the original image in the absence of noise.
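A numerical sketch (hypothetical box-blur PSF, NumPy only): with no noise the inverse filter reconstructs f exactly, while with noise the ratio N/H blows up, and restricting the inverse to frequencies near the origin (a crude radial cutoff standing in for a smoother low-frequency limit) helps:

```python
import numpy as np

rng = np.random.default_rng(6)
f = rng.random((32, 32))                       # hypothetical test image
h = np.zeros((32, 32)); h[:5, :5] = 1.0 / 25   # 5x5 box-blur PSF (H has no exact zeros here)
H = np.fft.fft2(h)

# Noise-free case, Eq. (6.4-1): dividing by H recovers f exactly.
G = H * np.fft.fft2(f)
f_rec = np.real(np.fft.ifft2(G / H))
assert np.allclose(f_rec, f)

# With noise, N/H dominates wherever |H| is small (Eq. 6.4-2) ...
eta = 0.01 * rng.standard_normal((32, 32))
G_noisy = G + np.fft.fft2(eta)
naive = np.real(np.fft.ifft2(G_noisy / H))

# ... so limit the inverse to frequencies near the origin, where |H| is largest.
u = np.fft.fftfreq(32)[:, None]
v = np.fft.fftfreq(32)[None, :]
near_origin = (u ** 2 + v ** 2) <= 0.15 ** 2   # cutoff radius chosen by hand
limited = np.real(np.fft.ifft2(np.where(near_origin, G_noisy / H, 0.0)))

assert np.abs(limited - f).mean() < np.abs(naive - f).mean()
```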

6.5 Wiener filtering: [OR Minimum Mean Square Error filtering]

Q.6.5) Write short note on Wiener filtering.

Wiener filtering: The Wiener filter tries to build an optimal estimate of the original image by enforcing a minimum mean-square-error constraint between the estimate and the original image. The Wiener filter is an optimum filter: its objective is to minimize the mean square error, and it is therefore also called minimum mean square error filtering. A Wiener filter has the capability of handling both the degradation function and the noise.

The method is founded on considering images and noise as random processes, and the objective is to find an estimate f̂ of the uncorrupted image f such that the mean square error between them is minimized. This error measure is given by

e² = E{ (f − f̂)² } …………………………………… (6.5-1)

where E{·} is the expected value of the argument. It is assumed that the noise and the image are uncorrelated, that one or the other has zero mean, and that the gray levels in the estimate are a linear function of the levels in the degraded image. Based on these conditions, the minimum of the error function in Eq. (6.5-1) is given in the frequency domain by the expression

F̂(u, v) = [ H*(u, v) S_f(u, v) / ( S_f(u, v) |H(u, v)|² + S_η(u, v) ) ] G(u, v)
        = [ (1 / H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + S_η(u, v) / S_f(u, v) ) ] G(u, v) ……………. (6.5-2)

where we used the fact that the product of a complex quantity with its conjugate is equal to the magnitude of the complex quantity squared. This result is known as the Wiener filter, after N. Wiener. The filter, which consists of the terms inside the brackets, also is commonly referred to as the minimum mean square error filter or the least square error filter. Note from the first line in Eq. (6.5-2) that the Wiener filter does not have the same problem as the inverse filter with zeros in the degradation function, unless both H(u, v) and S_η(u, v) are zero for the same value(s) of u and v.

The terms in Eq. (6.5-2) are as follows: H(u, v) is the transform of the degradation function; H*(u, v) is its complex conjugate; |H(u, v)|² = H*(u, v)H(u, v); S_η(u, v) = |N(u, v)|² is the power spectrum of the noise; S_f(u, v) = |F(u, v)|² is the power spectrum of the undegraded image; and G(u, v) is the transform of the degraded image. The restored image in the spatial domain is given by the inverse Fourier transform of the frequency-domain estimate F̂(u, v). Note that if the noise is zero, then the noise power spectrum vanishes and the Wiener filter reduces to the inverse filter.

When we are dealing with spectrally white noise, the spectrum S_η(u, v) is a constant, which simplifies things considerably. However, the power spectrum of the undegraded image seldom is known. An approach used frequently when these quantities are not known or cannot be estimated is to approximate Eq. (6.5-2) by the expression

F̂(u, v) = [ (1 / H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + K ) ] G(u, v) ………………………….. (6.5-3)

where K is a specified constant, usually adjusted interactively until an acceptable visual result is obtained.
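A sketch of the parametric Wiener filter of Eq. (6.5-3) (hypothetical blur and noise; the value of K is chosen by hand), showing that K = 0 reduces to the inverse filter while a small positive K suppresses its noise amplification:

```python
import numpy as np

rng = np.random.default_rng(7)
f = rng.random((32, 32))                       # hypothetical test image
h = np.zeros((32, 32)); h[:5, :5] = 1.0 / 25   # box-blur PSF
H = np.fft.fft2(h)
eta = 0.05 * rng.standard_normal((32, 32))
G = H * np.fft.fft2(f) + np.fft.fft2(eta)      # degraded image spectrum

def wiener(G, H, K):
    # Parametric Wiener filter, Eq. (6.5-3): K stands in for S_eta / S_f.
    H2 = np.abs(H) ** 2
    return np.real(np.fft.ifft2((H2 / (H * (H2 + K))) * G))

inverse = wiener(G, H, 0.0)     # K = 0: reduces to the inverse filter
restored = wiener(G, H, 0.01)   # K tuned by hand (hypothetical value)

err = lambda a: np.abs(a - f).mean()
assert err(restored) < err(inverse)
```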

6.6 Estimating the Degradation Function ( H )

There are three principal ways to estimate the degradation function for use in image restoration: (1) observation, (2) experimentation, and (3) mathematical modeling. The process of restoring an image by using a degradation function that has been estimated in some way is sometimes called blind deconvolution, because the true degradation function is seldom known completely.

a) Estimation by Image Observation:

Suppose that we are given a degraded image without any knowledge about the degradation function H. One way to estimate this function is to gather information from the image itself. For example, if the image is blurred, we can look at a small section of the image containing simple structures, like part of an object and the background. In order to reduce the effect of noise in our observation, we would look for areas of strong signal content. Using sample gray levels of the object and background, we can construct an unblurred image of the same size and characteristics as the observed subimage. Let the observed subimage be denoted by g_s(x, y), and let the constructed subimage (which in reality is our estimate of the original image in that area) be denoted by f̂_s(x, y). Then, assuming that the effect of noise is negligible because of our choice of a strong-signal area, it follows that

H_s(u, v) = G_s(u, v) / F̂_s(u, v)

From the characteristics of this function we then deduce the complete function H(u, v) by making use of the fact that we are assuming position invariance. For example, suppose that a radial plot of H_s(u, v) turns out to have the shape of a Butterworth lowpass filter. We can use that information to construct a function H(u, v) on a larger scale, but having the same shape.

b) Estimation by Experimentation

If equipment similar to the equipment used to acquire the degraded image is available, it is possible in principle to obtain an accurate estimate of the degradation. Images similar to the degraded image can be acquired with various system settings until they are degraded as closely as possible to the image we wish to restore. Then the idea is to obtain the impulse response of the degradation by imaging an impulse (a small dot of light) using the same system settings. A linear, space-invariant system is described completely by its impulse response. An impulse is simulated by a bright dot of light, as bright as possible to reduce the effect of noise. Then, recalling that the Fourier transform of an impulse is a constant, it follows that

H(u, v) = G(u, v) / A

where, as before, G(u, v) is the Fourier transform of the observed image and A is a constant describing the strength of the impulse.

c) Estimation by Modeling

Degradation modeling has been used for many years because of the insight it affords into the image restoration problem. In some cases, the model can even take into account environmental conditions that cause degradations. For example, a degradation model based on the physical characteristics of atmospheric turbulence has the familiar form

H(u, v) = e^(−k(u² + v²)^(5/6))

where k is a constant that depends on the nature of the turbulence. With the exception of the 5/6 power on the exponent, this equation has the same form as the Gaussian lowpass filter. In fact, the Gaussian LPF is used sometimes to model mild, uniform blurring.

6.7 Introduction to Image reconstruction from projections:

A) Tomography Image reconstruction from projections:


Tomographic image reconstruction from projections is a non-invasive imaging technique allowing for the visualization of the internal structures of an object without the superposition of over- and under-lying structures that usually plagues conventional projection images.

1) For example, in a conventional chest radiograph, the heart, lungs, and ribs are all superimposed on the same film, whereas a computed tomography (CT) slice captures each organ in its actual three-dimensional position.

2) Tomography has found widespread application in many scientific fields, including physics, chemistry, astronomy, geophysics, and, of course, medicine.

3) While X-ray CT may be the most familiar application of tomography, tomography can be performed, even in medicine, using other imaging modalities, including ultrasound, magnetic resonance, nuclear-medicine, and microwave techniques.

4) Each tomographic modality measures a different physical quantity:

a) Computed tomography (CT): the number of X-ray photons transmitted through the patient along individual projection lines.

b) Nuclear medicine: the number of photons emitted from the patient along individual projection lines.

c) Ultrasound diffraction tomography: the amplitude and phase of scattered waves along a particular line connecting the source and detector.


5) The task in all cases is to estimate from these measurements the distribution of a particular

physical quantity in the object. The quantities that can be reconstructed are:

a) CT: The distribution of linear attenuation coefficient in the slice being imaged.

b) Nuclear medicine: The distribution of the radiotracer administered to the patient

in the slice being imaged.

c) Ultrasound diffraction tomography: the distribution of refractive index in the slice

being imaged.

6) Remarkably, under certain conditions, the measurements made in each modality can be

converted into samples of the Radon transform of the distribution that we wish to reconstruct. For

example, in CT, dividing the measured photon counts by the incident photon counts and taking

the negative logarithm yields samples of the Radon transform of the linear attenuation map.

7) The Radon transform and its inverse provide the mathematical basis for reconstructing

tomographic images from measured projection or scattering data.

B) The Radon Transform

We will focus on explaining the Radon transform of an image function and discussing the

inversion of the Radon transform in order to reconstruct the image. We will discuss only the 2D

Radon transform, although some of the discussion could be readily generalized to the 3D Radon

transform.
The Radon transform (RT) of a distribution f(x, y) is given by

p(s, Ø) = ∬ f(x, y) δ(x cos Ø + y sin Ø − s) dx dy

Where, δ is the Dirac delta function and the coordinates x, y, s, and Ø are defined in the figure below.

Fig 6.7.1: Coordinate systems for the Radon transform

The function p(s, Ø) is often referred to as a sinogram because the Radon transform of an off-center point source is a sinusoid. The task of tomographic reconstruction is to find f(x, y) given knowledge of p(s, Ø). The sinogram has many nice mathematical properties, an important one of which is the symmetry

p(s, Ø + π) = p(−s, Ø) ……………………… (1)
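A direct discretization of the Radon transform definition can be sketched as follows (an illustrative, not optimized, implementation): each pixel's value is accumulated into the detector bin s = x cos Ø + y sin Ø for every projection angle.

```python
import numpy as np

def radon(f, angles):
    """Discrete Radon transform of a square image f (rows index y, columns x).

    Approximates p(s, phi) by binning each pixel's value at
    s = x*cos(phi) + y*sin(phi), with coordinates centred on the image.
    """
    n = f.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    x, y = xs - c, ys - c                           # centred pixel coordinates
    sino = np.zeros((len(angles), n))
    for i, phi in enumerate(angles):
        s = x * np.cos(phi) + y * np.sin(phi) + c   # shift s back into [0, n)
        idx = np.clip(np.round(s).astype(int), 0, n - 1)
        np.add.at(sino[i], idx.ravel(), f.ravel())  # accumulate line integrals
    return sino
```

An off-centre point source traces a sinusoid across the rows of the returned array, which is why the result is called a sinogram.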
C) Backprojection

Mathematically, the backprojection operation is defined as:

b(x, y) = ∫₀^π p(x cos Ø + y sin Ø, Ø) dØ
Geometrically, the backprojection operation simply propagates the measured sinogram back into

the image space along the projection paths:

Fig 6.7.2: Geometrical interpretation of backprojection
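The operation can be sketched under the same centred-coordinate convention, with each measured profile value spread uniformly along its projection path:

```python
import numpy as np

def backproject(sino, angles):
    """Backprojection b(x, y) = integral over [0, pi) of p(x cos phi + y sin phi, phi) d phi.

    sino has one row per angle; the integral is approximated by a sum
    weighted by the angular step pi / len(angles).
    """
    n = sino.shape[1]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    x, y = xs - c, ys - c
    b = np.zeros((n, n))
    for phi, profile in zip(angles, sino):
        s = x * np.cos(phi) + y * np.sin(phi) + c
        idx = np.clip(np.round(s).astype(int), 0, n - 1)
        b += profile[idx]                  # smear the profile back along the path
    return b * np.pi / len(angles)
```

Backprojection alone yields a blurred version of f(x, y); filtered backprojection, discussed below, removes this blur.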

D) Central Slice Theorem

The solution to the inverse Radon transform is based on the central slice theorem (CST), which relates F(vx, vy), the 2D Fourier transform (FT) of f(x, y), and P(v, Ø), the 1D FT of p(s, Ø).

Mathematically, the CST is given by

P(v, Ø) = F(v cos Ø, v sin Ø)
The CST states that the value of the 2D FT of f(x, y) along a line at inclination angle Ø is given by the 1D FT of the projection profile of the sinogram acquired at angle Ø.


Fig 6.7.3: Central slice theorem

Hence, with enough projections, P(v, Ø) can fill the vx–vy plane to generate F(vx, vy).
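The theorem can be checked numerically for Ø = 0, where the projection is simply the image summed along y and its 1D FT should match the vy = 0 row of the 2D FT (a sketch using NumPy's FFT conventions):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((32, 32))           # arbitrary test image, rows index y

p0 = f.sum(axis=0)                 # projection at phi = 0: integrate over y
P = np.fft.fft(p0)                 # 1D FT of the projection profile
F = np.fft.fft2(f)                 # 2D FT of the image

# Central slice theorem: P(v, 0) equals F(vx, vy) along the line vy = 0,
# which in np.fft.fft2's layout is the first row F[0, :].
slice_ok = np.allclose(P, F[0, :])
```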

In the Fourier space, Eq. (1) becomes

P(v, Ø + π) = P(−v, Ø) …………………….. (2)

E) Reconstruction Approaches

There are many tomographic reconstruction techniques. Here, we consider only two of them.
Fig 6.7.4: Flow of direct Fourier reconstruction

Using Eq. (2), we have

f(x, y) = ∫₀^π [ ∫ |v| P(v, Ø) e^(j2πvs) dv ] dØ, with s = x cos Ø + y sin Ø

which filters each projection profile with the ramp filter |v| and then backprojects the result.

Fig 6.7.5: Flow of the FBP algorithm
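The FBP flow can be sketched compactly: ramp-filter each profile in the 1D Fourier domain, then backproject. The discretization choices here (nearest-neighbour binning, unwindowed ramp) are illustrative only.

```python
import numpy as np

def fbp(sino, angles):
    """Filtered backprojection: multiply P(v, phi) by the ramp |v|, then backproject."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                      # |v| ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    x, y = xs - c, ys - c
    img = np.zeros((n, n))
    for phi, profile in zip(angles, filtered):            # backproject filtered profiles
        s = x * np.cos(phi) + y * np.sin(phi) + c
        idx = np.clip(np.round(s).astype(int), 0, n - 1)
        img += profile[idx]
    return img * np.pi / len(angles)
```

With enough angles, the sinogram of a point source reconstructs to a sharp peak at the source location, whereas plain backprojection would leave a 1/r blur.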
F) Practical Issues and Artifacts

Aliasing - Insufficient angular sampling

– reducing the number of projections is desirable because it reduces scanning time, noise, motion artifact, and patient dose

Aliasing - Insufficient radial sampling

– occurs when there is a sharp intensity change caused by, for example, bones.

Incomplete/Missing data

– portion of data cannot be acquired due to physical or instrumental limitations.

– limited angles: Some views cannot be acquired due to physical or instrumental

limitations.

- metal artifacts: In CT, metal blocks the radiation, leading to missing data in its

shadow.

Motion artifact

– caused by patient motion, such as respiration and heart beat, during data

acquisition

Noise

– photon detection is a stochastic process

– lower noise level can be obtained by increasing the radiation dose, and/or the
acquisition time.

– perception of structures depends on the contrast, size, and noise level

6.8 Applications of Digital Image Restoration:

The digital image restoration technique is widely used in the following fields of image processing:

1. Astronomical imaging

2. Medical imaging

3. Printing industry

4. Defense applications

1. Astronomical imaging: Astronomical images are often degraded by motion blur caused by slow camera shutter speeds relative to rapid spacecraft motion. Because such noise and degradation must be minimized, astronomical imaging is one of the primary applications of digital image restoration.

2. Medical imaging: X-ray, mammograms and digital angiographic images are often corrupted

by Poisson noise. Additive noise is common in magnetic resonance imaging. This noise must be removed for proper diagnosis of diseases, which can be accomplished through digital image

restoration techniques.

3. Printing industry: Printing applications often require the use of restoration to ensure that halftone reproductions of continuous-tone images are of high quality. Image restoration can also improve the quality of continuous-tone images generated from halftone images.

4. Defense applications: The image obtained from guided missiles may be degraded due to the

effects of pressure differences around a camera mounted on the missile. Proper image restoration techniques are essential to restore such images.

6.9 Applications of Image Processing:

Digital image processing is widely used in different fields as given below:

1. Medical field: Digital image processing techniques like image segmentation and pattern

recognition are used in digital mammography to identify tumours. Techniques like image

registration and fusion play a crucial role in extracting information from medical images. In the

field of telemedicine, lossless compression algorithms allow the medical images to be transmitted

effectively from one place to another. Some examples of image processing in the medical field are:
a) X-ray Imaging: X-rays are among the oldest sources of EM radiation used for imaging. The

best known use of X-rays is medical diagnostics, but they also are used extensively in industry and

other areas, like astronomy. X-rays for medical and industrial imaging are generated using an X-

ray tube, which is a vacuum tube with a cathode and anode. The cathode is heated, causing free

electrons to be released. These electrons flow at high speed to the positively charged anode. When

the electrons strike a nucleus, energy is released in the form of X-ray radiation. The energy

(penetrating power) of the X-rays is controlled by a voltage applied across the anode, and the

number of X-rays is controlled by a current applied to the filament in the cathode. Angiography is

another major application in an area called contrast enhancement radiography. This procedure is

used to obtain images (called angiograms) of blood vessels.

b) Gamma-Ray Imaging: Major uses of imaging based on gamma rays include nuclear medicine

and astronomical observations. In nuclear medicine, the approach is to inject a patient with a

radioactive isotope that emits gamma rays as it decays. Images are produced from the emissions

collected by gamma ray detectors.

c) Imaging in the Ultraviolet Band: Applications of ultraviolet “light” are varied. They include

lithography, industrial inspection, microscopy, lasers, biological imaging, and astronomical

observations. We illustrate imaging in this band with examples from microscopy and astronomy.
Ultraviolet light is used in fluorescence microscopy, one of the fastest growing areas of

microscopy.

d) Imaging in the Radio Band: The major applications of imaging in the radio band are in

medicine and astronomy. In medicine, radio waves are used in magnetic resonance imaging (MRI).

This technique places a patient in a powerful magnet and passes radio waves through his or her

body in short pulses. Each pulse causes a responding pulse of radio waves to be emitted by the

patient’s tissues. The location from which these signals originate and their strength are determined

by a computer, which produces a two-dimensional picture of a section of the patient. MRI can

produce pictures in any plane.

2. Forensics field: Security can be enhanced by personal identification. Different biometrics

used in personal identification include the face, fingerprint, iris, vein pattern, etc. Preprocessing of the

input data is necessary for efficient personal identification. Commonly used preprocessing

techniques include edge enhancement, denoising, skeletonisation, etc. Template matching

algorithms are widely used for proper identification.

3. Remote sensing field: Remote sensing is the use of remote observation to make useful

inferences about a target. Observations usually consist of measurements of electromagnetic

radiation with different wavelengths of the radiation carrying a variety of information about the

earth’s surface and atmosphere. Remote sensing is being used increasingly to provide data for
diverse applications like planning, hydrology, agriculture, geology and forestry. The image

processing techniques used in the field of remote sensing include image enhancement, image

merging, image classification techniques, multispectral image processing, and texture

classification.

4. Communications field: With the growth of multimedia technology, information can be

easily transmitted through the Internet. Video conferencing helps people in different locations interact in real time. For video conferencing to be effective, the information has to be transmitted

fast. Efficient image and video compression algorithms like JPEG, JPEG2000, and the H.26x standards help to transmit the data effectively for live video conferencing. Some examples of communication-related imaging are:

a) Imaging in the Microwave Band:

The dominant application of imaging in the microwave band is radar. The unique feature of

imaging radar is its ability to collect data over virtually any region at any time, regardless of

weather or ambient lighting conditions. Some radar waves can penetrate clouds, and under certain

conditions can also see through vegetation, ice, and extremely dry sand. In many cases, radar is

the only way to explore inaccessible regions of the Earth’s surface. Imaging radar works like a

flash camera in that it provides its own illumination (microwave pulses) to illuminate an area on

the ground and take a snapshot image. Instead of a camera lens, radar uses an antenna and digital
computer processing to record its images. In a radar image, one can see only the microwave

energy that was reflected back toward the radar antenna.

b) Imaging in the Visible and Infrared Bands: The infrared band is often used in conjunction with visual imaging, so the visible and infrared bands are considered together; they are used in light microscopy, astronomy, remote sensing, industry, and law enforcement.

5. Automotive field: A notable recent development in the automotive sector is the ‘night vision system’, which helps to identify obstacles at night and avoid accidents. Infrared

cameras are invariably used in a night vision system. The image processing techniques commonly

used in a night vision system include image enhancement, boundary detection and object

recognition.
