
Chapter 5

Image Enhancement in
Spatial Domain
What is Image Enhancement?

• Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis.
• For example, you can remove noise, sharpen, or brighten an image, making it easier to identify key features.
Spatial Domain Methods
▪ Point Processing Transformation
▪ Neighbourhood Processing Transformation


Point Processing Transformations/ Zero
Memory Operations

▪ Image Negative
▪ Thresholding
▪ Clipping
▪ Intensity Level Slicing Without Background
▪ Intensity Level Slicing With Background
▪ Bit Plane Slicing
▪ Contrast Stretching
▪ Log Transformation
Point Processing Transformations/ Zero
Memory Operations
▪ Image Negative

▪ In general, s = (L − 1) − r, where r is the input gray level, s is the output gray level, and L is the number of gray levels.
▪ Hence, for an 8-bit image (L = 256), when r = 0, s = 255, and when r = 255, s = 0.

Example Applications
▪ Useful in display of medical images
▪ Can be used in producing negative prints of an image
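As a minimal sketch, the transformation s = (L − 1) − r can be applied per pixel, assuming a greyscale image stored as nested lists:

```python
def negative(image, L=256):
    """Image negative: map each gray level r to s = (L - 1) - r."""
    return [[(L - 1) - r for r in row] for row in image]

img = [[0, 64, 128],
       [192, 255, 10]]
print(negative(img))  # [[255, 191, 127], [63, 0, 245]]
```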
Point Processing Transformations/ Zero
Memory Operations
▪ Image Negative
Point Processing Transformations/ Zero
Memory Operations
▪ Thresholding
▪ Thresholding transformations are particularly useful for
segmentation in which we want to isolate an object of
interest from a background
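A sketch of the thresholding transformation, assuming the same nested-list image representation (pixels at or above the threshold T become white, the rest black):

```python
def threshold(image, T, L=256):
    """Binary thresholding: pixels >= T become white (L-1), others black (0)."""
    return [[(L - 1) if r >= T else 0 for r in row] for row in image]

img = [[12, 200],
       [90, 140]]
print(threshold(img, 128))  # [[0, 255], [0, 255]]
```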
Point Processing Transformations/ Zero
Memory Operations
▪ Thresholding
Point Processing Transformations/ Zero
Memory Operations
▪ Thresholding
Point Processing Transformations/ Zero
Memory Operations
▪ Clipping ( Grey Level Slicing Without Background)
▪ Useful in highlighting features in an image
Point Processing Transformations/ Zero
Memory Operations
▪ Clipping ( Grey Level Slicing Without Background)
Point Processing Transformations/ Zero
Memory Operations
▪ Grey Level Slicing With Background
▪ Useful in highlighting features in an image while retaining
the background
Point Processing Transformations/ Zero
Memory Operations
▪ Grey Level Slicing
▪ Example
Point Processing Transformations/ Zero
Memory Operations
▪ Bit Plane Slicing
▪ Often by isolating particular bits of the pixel values in an
image we can highlight interesting aspects of that image
▪ Higher-order bits usually contain most of the significant
visual information
▪ Lower-order bits contain subtle details
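Bit plane slicing can be sketched with bitwise shifts, assuming 8-bit pixels (plane 7 is the most significant bit, plane 0 the least):

```python
def bit_plane(image, plane):
    """Extract bit `plane` (0 = least significant, 7 = most) of each 8-bit pixel."""
    return [[(r >> plane) & 1 for r in row] for row in image]

img = [[200, 15],
       [128, 1]]
print(bit_plane(img, 7))  # [[1, 0], [1, 0]]  (MSB plane: coarse structure)
print(bit_plane(img, 0))  # [[0, 1], [0, 1]]  (LSB plane: subtle detail)
```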
Point Processing Transformations/ Zero
Memory Operations
▪ Bit Plane Slicing
▪ Example
Point Processing Transformations/ Zero
Memory Operations
▪ Bit Plane Slicing
▪ Example
Point Processing Transformations/ Zero
Memory Operations
▪ Contrast Stretching ( Piecewise Linear
Transformation)
▪ Can be used to add contrast to a poor quality image
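A sketch of the piecewise linear mapping, assuming two illustrative control points (r1, s1) and (r2, s2) and integer arithmetic (the control-point values below are examples, not from the slides):

```python
def stretch(r, r1, s1, r2, s2, L=256):
    """Piecewise linear contrast stretch through (r1, s1) and (r2, s2)."""
    if r < r1:                                       # first segment: (0,0) to (r1,s1)
        return s1 * r // r1
    if r <= r2:                                      # middle segment: steeper slope
        return s1 + (s2 - s1) * (r - r1) // (r2 - r1)
    return s2 + (L - 1 - s2) * (r - r2) // (L - 1 - r2)  # last segment to (L-1, L-1)

# Stretch the mid-range levels 100..150 over the wider output range 50..200
print([stretch(r, 100, 50, 150, 200) for r in (50, 100, 125, 150, 255)])
# [25, 50, 125, 200, 255]
```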
Point Processing Transformations/ Zero
Memory Operations
▪ Contrast Stretching ( Piecewise Linear Transformation)
Point Processing Transformations/ Zero
Memory Operations
▪ Contrast Stretching ( Piecewise Linear Transformation)
Point Processing Transformations/ Zero
Memory Operations
▪ Contrast Stretching ( Piecewise Linear Transformation)
Point Processing Transformations/ Zero
Memory Operations
▪ Contrast Stretching ( Piecewise Linear Transformation)
Point Processing Transformations/ Zero
Memory Operations
▪ Log Transformation (Dynamic Range Compression)
▪ Log functions are particularly useful when the input grey level values may have an extremely large range of values.

▪ s = c log(r + 1)

▪ During log transformation, the range of dark pixel values is expanded while the range of higher pixel values is compressed. This results in the following image enhancement.
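A sketch of s = c log(r + 1), with c chosen (a common convention, assumed here) so that the maximum gray level maps back to L − 1:

```python
import math

def log_transform(image, L=256):
    """s = c * log(1 + r), with c scaled so r = L-1 maps to s = L-1."""
    c = (L - 1) / math.log(L)
    return [[round(c * math.log(1 + r)) for r in row] for row in image]

print(log_transform([[0, 255]]))  # [[0, 255]]
# A dark pixel such as r = 10 maps to roughly 110: dark values are expanded,
# bright values are compressed toward the top of the range.
```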
Point Processing Transformations/ Zero
Memory Operations
▪ Log Transformation( Dynamic Range Compression)
•In the following example the Fourier transform of an image
is put through a log transform to reveal more detail
Histogram Processing
▪ Histogram
▪ A graph showing the number of pixels in an image at each
different intensity value found in that image.
Histogram Equalization
▪ Histogram equalization is a technique for adjusting image
intensities to enhance contrast.
▪ It tries to produce an output image that has a uniform
histogram.
▪ Let f be a given image having pixel intensities ranging from
0 to L − 1
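The equalization mapping s_k = round((L − 1) · CDF(r_k)) can be sketched as follows, assuming the nested-list image representation:

```python
def equalize(image, L=256):
    """Histogram equalization: map each level r to round((L-1) * CDF(r))."""
    flat = [p for row in image for p in row]
    hist = [0] * L
    for p in flat:
        hist[p] += 1
    lut, cum = [0] * L, 0
    for r in range(L):                      # build lookup table from the CDF
        cum += hist[r]
        lut[r] = round((L - 1) * cum / len(flat))
    return [[lut[p] for p in row] for row in image]

img = [[52, 55],
       [61, 59]]
print(equalize(img))  # [[64, 128], [255, 191]] -> levels spread over full range
```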
Histogram Equalization

▪ Disadvantage of histogram equalization
▪ This approach works well in general, but in some cases it does not. One such case is a skewed image histogram, i.e. a large concentration of pixels at either end of the greyscale.
Histogram Specification ( Histogram
Matching)
▪ Histogram matching or histogram specification is the
transformation of an image so that its histogram matches a
specified histogram.

▪ First, equalize both the original and the specified histogram using the histogram equalization method.
▪ Then map the equalized histogram to the specified histogram.
Histogram Specification ( Histogram
Matching)
▪ For Example
▪ Suppose the pixel value 10 in the original image gets mapped to 20 in the equalized image. Then we find which value in the specified image gets mapped to 20 in its equalized version; say that value is 28. So 10 in the original image gets mapped to 28 in the specified image.
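The mapping step above can be mechanized by composing the two equalization lookup tables. A sketch, assuming both LUTs (source and reference) have already been computed by histogram equalization:

```python
def match_lut(src_eq, ref_eq):
    """Histogram matching: for each source level, pick the reference level
    whose equalized value is closest to the source's equalized value."""
    return [min(range(len(ref_eq)), key=lambda g: abs(ref_eq[g] - s))
            for s in src_eq]

# Toy LUTs: source level 1 equalizes to 5; reference level 2 also equalizes
# to 5, so source level 1 maps to reference level 2.
print(match_lut([0, 5], [0, 2, 5]))  # [0, 2]
```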
Neighbourhood Processing
▪ Spatial Filtering
▪ Smoothing Filters
▪ Sharpening Filters
▪ Median Filters
Image Filtering in Spatial Domain
Image Filtering in Spatial Domain
Image and Filter Mask Convolution
Computing the Filtered Image
Boundary Effects
Boundary Effects- 3X3 Mask
Neighbourhood Processing
▪ Smoothing Filters
▪ Low Pass Filters ( Averaging Filters)
▪ Used to remove high spatial frequency noise from a digital image.
▪ Low-pass filters usually employ a moving window operator which affects one pixel of the image at a time, changing its value by some function of a local region (window) of pixels.
▪ The idea is simply to replace each pixel value in an image with the mean (average) value of its neighbours, including itself.
▪ 3x3 LPF Mask
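A sketch of the 3×3 averaging filter described above (borders are left unfiltered here for simplicity):

```python
def box_filter(image):
    """3x3 averaging (low-pass) filter; border pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            total = sum(image[i + di][j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = total // 9          # mean of the 3x3 neighbourhood
    return out

noisy = [[0, 0, 0],
         [0, 90, 0],
         [0, 0, 0]]
print(box_filter(noisy)[1][1])  # 10  (spike smeared into the neighbourhood mean)
```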
Neighbourhood Processing
▪ Low Pass Filters ( Averaging Filters)
Neighbourhood Processing
▪ Applying Low Pass Filters ( Averaging Filters)
Neighbourhood Processing
▪ Low Pass Filters ( Averaging Filters)

▪ Example
Neighbourhood Processing
▪ Smoothing Filters
▪ Median Filters
▪ Instead of simply replacing the pixel value with the mean of neighbouring
pixel values, it replaces it with the median of those values.
▪ The median is calculated by first sorting all the pixel values from the
surrounding neighbourhood into numerical order and then replacing the
pixel being considered with the middle pixel value.
▪ Advantages of median filter over averaging filter
▪ The median is a more robust average than the mean and so a single very
unrepresentative pixel in a neighbourhood will not affect the median value
significantly.
▪ Since the median value must actually be the value of one of the pixels in
the neighbourhood, the median filter does not create new unrealistic pixel
values when the filter straddles an edge. For this reason the median filter is
much better at preserving sharp edges than the mean filter.
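A sketch of the 3×3 median filter; compared with the averaging filter above, the salt-noise outlier disappears completely instead of being smeared:

```python
def median_filter(image):
    """3x3 median filter; border pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(image[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]           # middle of the 9 sorted values
    return out

salt = [[0, 0, 0],
        [0, 255, 0],
        [0, 0, 0]]
print(median_filter(salt)[1][1])  # 0  (the unrepresentative pixel is removed)
```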
Neighbourhood Processing
▪ Median Filters
Neighbourhood Processing
▪ Sharpening Filters
▪ High Pass Filter
▪ A high pass filter tends to retain the high frequency information within an image while
reducing the low frequency information.
▪ The kernel of the high pass filter is designed to increase the brightness of the centre pixel
relative to neighbouring pixels.
▪ The kernel array usually contains a single positive value at its centre, which is completely
surrounded by negative values.
▪ The following array is an example of a 3 by 3 kernel for a high pass filter:
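The kernel image is not reproduced in this copy; a commonly used 3×3 high-pass kernel of the form described (positive centre, negative surround, coefficients summing to zero) is shown below as an illustrative example:

```python
# Common 3x3 high-pass kernel: centre 8, surround -1, weights sum to 0,
# so perfectly flat (low-frequency) regions produce zero response.
HPF = [[-1, -1, -1],
       [-1,  8, -1],
       [-1, -1, -1]]

def response(image, kernel, i, j):
    """Correlate a 3x3 kernel with the neighbourhood centred at (i, j)."""
    return sum(kernel[di + 1][dj + 1] * image[i + di][j + dj]
               for di in (-1, 0, 1) for dj in (-1, 0, 1))

flat = [[7, 7, 7]] * 3
print(response(flat, HPF, 1, 1))  # 0  (no high-frequency content)
```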
Neighbourhood Processing
▪ Sharpening Filters
▪ High Pass Filter
Chapter 6

Image Segmentation
The Segmentation Problem
•Segmentation attempts to partition the pixels of
an image into groups that strongly correlate with
the objects in an image
•Typically the first step in any automated
computer vision application
Images taken from Gonzalez & Woods, Digital Image Processing (2002)

Segmentation Examples
Image Segmentation
Discontinuity Based Approach
•There are three basic types of grey level
discontinuities that we tend to look for in digital
images:
– Points
– Lines
– Edges
•We typically find discontinuities using masks
and correlation
Point Detection
Point detection can be achieved simply using the mask below.
Points are detected at those pixels in the subsequent filtered image that are above a set threshold.
Point Detection (cont…)
X-ray image of a turbine blade; result of point detection; result of thresholding.
Line Detection
• The next level of complexity is to try to detect lines.
• The masks below will extract lines that are one pixel thick and running in a particular direction.
Line Detection (cont…)
Binary image of a wire bond mask; after processing with a -45° line detector; result of thresholding the filtered result.
Edge Detection
• An edge is a set of connected pixels that lie on the boundary between two regions.


Edges & Derivatives
• The 1st derivative tells us where an edge is.
• The 2nd derivative can be used to show edge direction.
• Derivative-based edge detectors are extremely sensitive to noise.

Derivatives & Noise


Types of Image Segmentation Techniques

• Edge-Based Segmentation
• Region-Based Segmentation
• Thresholding Segmentation
• Watershed Segmentation
Edge Based Segmentation
• Given a 3×3 region of an image, the following edge detection filters can be used:


Edge Detection Example
Original image; horizontal gradient component; vertical gradient component; combined edge image.


Applying Edge Detectors


Edge Detection Solved Problems
Perform Edge detection on the given input
image by applying Sobel Edge Detector.
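The input image for this solved problem is not reproduced in this copy. As a sketch of the computation, the standard Sobel masks and the gradient magnitude |Gx| + |Gy| at one interior pixel:

```python
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]

def sobel_at(image, i, j):
    """Approximate gradient magnitude |Gx| + |Gy| at interior pixel (i, j)."""
    gx = sum(SOBEL_X[di + 1][dj + 1] * image[i + di][j + dj]
             for di in (-1, 0, 1) for dj in (-1, 0, 1))
    gy = sum(SOBEL_Y[di + 1][dj + 1] * image[i + di][j + dj]
             for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return abs(gx) + abs(gy)

step = [[0, 0, 10],
        [0, 0, 10],
        [0, 0, 10]]              # vertical step edge
print(sobel_at(step, 1, 1))      # 40  (strong horizontal gradient, zero vertical)
```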
Edge Detection Solved Problems
Edge Detection Solved Problems
Edge Detection Solved Problems
Edge Detection Solved Problems
Edge Detection Solved Problems
Edge Detection Solved Problems
Perform Edge detection on the given input
image by applying Prewitt Edge Detector.
Edge Detection Solved Problems
Edge Detection Solved Problems
Edge Detection Solved Problems
Edge Detection Solved Problems
Edge Detection Problems
• A common problem in edge detection is that the result contains too much detail.
•For example, the brickwork in the previous
example
•One way to overcome this is to smooth images
prior to edge detection
Edge Detection Example With Smoothing
Original image; horizontal gradient component; vertical gradient component; combined edge image.

Laplacian Edge Detection (2nd Derivative Operator)

Laplacian Edge Detection


Laplacian Edge Detection
• Variations of Laplacians
• The Laplacian is typically not used by itself as it is too sensitive to noise.
• Usually, when used for edge detection, the Laplacian is combined with a smoothing Gaussian filter.

Laplacian of Gaussian Mask


Hough Transform
• Used for detecting simple geometric shapes like lines, circles, and
ellipses in images.
• It is especially effective in identifying distorted, incomplete, or
partially obscured shapes
• Transforms image space into parameter space, enabling the
detection of shapes through pattern identification in this
transformed space.
• Uses a voting mechanism in the parameter space to highlight
potential object candidates.
• Robust tool in various applications like medical imagery and
manufactured part analysis.
• While the classical HT focuses on standard curves and is
computationally less intensive, the generalized HT caters to more
complex shapes where simple analytic descriptions are not feasible.
Hough Transform Working
• The Hough transform in image processing works by
transforming the image space into a parameter space.
• For example, in the case of detecting lines in images, the
image space is transformed into a parameter space
consisting of two parameters: the slope and the y-
intercept of the line.
• Each pixel in the image space is then mapped to a curve in
the parameter space that represents all the possible lines
that could pass through that pixel.
• The curves in the parameter space are then analyzed to
detect the presence of lines in the image.
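The steps above can be sketched with a voting accumulator in (slope, intercept) space. This is a toy illustration: the candidate slope list and the points are assumptions, and a practical implementation would use the (ρ, θ) parameterization to handle vertical lines:

```python
from collections import Counter

def hough_lines(points, slopes):
    """Each point (x, y) votes for every line y = m*x + c passing through it;
    peaks in the accumulator correspond to lines in the image."""
    acc = Counter()
    for x, y in points:
        for m in slopes:
            acc[(m, y - m * x)] += 1     # c = y - m*x
    return acc

pts = [(1, 4), (2, 3), (3, 2)]
acc = hough_lines(pts, slopes=[-2, -1, 0, 1, 2])
print(acc.most_common(1))  # [((-1, 5), 3)]  -> all three points lie on y = -x + 5
```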
Hough Transform
Hough Transform
Hough Transform
Hough Transform Practice Problem
• Given a set of points, use the Hough transform to join these points: A(1,4), B(2,3), C(3,1), D(4,1), E(5,0).
Hough Transform
Region Based Segmentation
•Region Growing
•Region Splitting and Merging
Region Growing Approach
• Region growing breaks down images into distinct areas.
• It starts by picking individual pixels (seeds) and then
incorporates their neighbors that share similar
characteristics, like color or intensity.
• This process continues until well-defined regions emerge
• The logic behind the region growing algorithm is the principle of similarity, i.e. a region is coherent if all pixels of that region are homogeneous.
• Similarity of regions is used as the main segmentation
criterion in region growing.
Region Growing Approach
• Major steps of region growing algorithm are:
– Selection of initial seed
– Seed growing criteria
– Termination of segmentation process
• For example, from one image, select one pixel p1 with RGB value (91, 134, 234) and one pixel p2 with RGB value (231, 105, 100). Consider Euclidean distance as the similarity measure.
Region Growing Approach
• Define a homogeneity threshold (T) between pixels.
• If the distance is less than or equal to the threshold, then
pixels are homogeneous; if the distance is greater than the
threshold value, then pixels are not homogeneous.
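The homogeneity test for the two example pixels can be sketched directly (the threshold values below are illustrative, not from the slides):

```python
import math

def homogeneous(p, q, T):
    """Two RGB pixels belong to the same region if their Euclidean
    distance in colour space is at most the threshold T."""
    return math.dist(p, q) <= T

p1 = (91, 134, 234)
p2 = (231, 105, 100)
d = math.dist(p1, p2)            # sqrt(140^2 + 29^2 + 134^2), about 196
print(homogeneous(p1, p2, 100))  # False -> not homogeneous under T = 100
print(homogeneous(p1, p2, 200))  # True  -> homogeneous under a looser T = 200
```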
Region Growing Approach
Region Growing Approach
• Advantages:
- Can accurately divide regions with the same attributes we specify.
- Simple concept: we only need a few seed points to symbolize the desired
attribute, and then we can expand the region.
- We can specify the seed points and criteria we want to use.
- Can select many criteria at the same time.
• Disadvantages:
- Unless a threshold function has been applied to the image, there may be a
continuous trail of color-related dots connecting any two points in the image.
- In practice, random memory access slows down the method, so adaptation may be necessary.
• Applications:
• Object tracking and shape analysis
• Medical image analysis: In medical images, the algorithm can segment
organs, tumors, or other structures with similar intensity or texture. This
helps doctors diagnose diseases.
Region Splitting and Merging Approach
• The opposite approach to region growing is region shrinking (splitting).
• It is a top-down approach: it starts with the assumption that the entire image is homogeneous.
• If this is not true, the image is split into four sub-images.
• This splitting procedure is repeated recursively until the image is split into homogeneous regions.
Region Splitting and Merging Approach
• If the original image is a square N × N, having dimensions that are powers of 2 (N = 2^n):
• All regions produced by the splitting algorithm are squares having dimensions M × M, where M is a power of 2 as well (M = 2^m, m ≤ n).
• Since the procedure is recursive, it produces an image representation that can be described by a tree whose nodes have four children each.
• Such a tree is called a quadtree.
Region Splitting and Merging Approach
Region Splitting and Merging Approach

• If a region R is inhomogeneous (P(R) = FALSE), then it is split into four sub-regions.
• If two adjacent regions Ri, Rj are homogeneous (P(Ri ∪ Rj) = TRUE), they are merged.
• The algorithm stops when no further splitting
or merging is possible
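The recursive splitting step can be sketched as follows (merging of adjacent homogeneous leaves is omitted for brevity; the predicate below, requiring all pixels equal, is an illustrative choice of P):

```python
def split(region, P):
    """Recursively split a square region (list of rows) into quadrants
    until every leaf satisfies the homogeneity predicate P."""
    if P(region):
        return region
    n = len(region) // 2
    top, bot = region[:n], region[n:]
    return [split([row[:n] for row in top], P),   # NW quadrant
            split([row[n:] for row in top], P),   # NE quadrant
            split([row[:n] for row in bot], P),   # SW quadrant
            split([row[n:] for row in bot], P)]   # SE quadrant

uniform = lambda r: len({v for row in r for v in row}) == 1
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [9, 9, 9, 9],
       [9, 9, 9, 9]]
print(split(img, uniform)[0])  # [[0, 0], [0, 0]]  (homogeneous NW quadrant)
```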
Thresholding Segmentation
• Simplest method for segmentation
• It divides the pixels in an image by comparing each pixel's intensity with a threshold.
• It is useful when the required object has a higher intensity than the background (unnecessary parts).
• Used in medical diagnosis to look for
abnormalities, or for noise reduction
Thresholding Segmentation
• Simple Thresholding
– Replace the image’s pixels with either white or
black.
–If the intensity of a pixel at a particular position is
less than the threshold value, replace it with black.
– If it’s higher than the threshold, replace it with
white.
Thresholding Segmentation
• Otsu’s Binarization/ Otsu’s Method for Image
Thresholding
– Automatic Thresholding method
– It calculates the threshold by maximizing the between-
class variance of pixel value, which effectively separates
foreground and background regions.
– This method is particularly useful when dealing with
images that have bimodal or multimodal intensity
distributions, as it can accurately identify the threshold
that best separates different objects or regions in the
image.
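The between-class-variance search described above can be sketched over a grayscale histogram (the bimodal histogram below is an illustrative input, not the one from the solved example):

```python
def otsu_threshold(hist):
    """Return the threshold t maximizing the between-class variance
    w0 * w1 * (mu0 - mu1)^2, splitting levels < t from levels >= t."""
    total = sum(hist)
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(hist[:t])                 # background pixel count
        w1 = total - w0                    # foreground pixel count
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, len(hist))) / w1
        var = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal histogram over levels 0..5 with peaks at the two ends
print(otsu_threshold([5, 5, 0, 0, 5, 5]))  # 2
```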
Otsu’s Method Solved Example
• Apply image segmentation using Otsu’s
method on the given input image
Otsu’s Method Solved Example
•Step 1- Compute the Grayscale Histogram
Otsu’s Method Solved Example
•Step 2- Compute CDF for foreground and
background ( Considering Each gray level as mid
point separating foreground and background)
•Eg- For gray level 2
Otsu’s Method Solved Example
•Step 2- Compute CDF for foreground and
background ( Considering Each gray level as mid
point separating foreground and background)
Otsu’s Method Solved Example
•Step 3- Compute the Mean Grayscale Intensity Value for
foreground and background ( Considering Each gray level as mid
point separating foreground and background)
•Eg- For gray level 2
Otsu’s Method Solved Example
•Step 3- Compute the Mean Grayscale Intensity Value for
foreground and background ( Considering Each gray level as mid
point separating foreground and background)
Otsu’s Method Solved Example
•Step 4- Compute the Between-Class Variance for Each Possible Threshold
Value using the CDF and mean

•Eg- For gray level 2


Otsu’s Method Solved Example
•Step 4- Compute the Between-Class Variance for Each Possible Threshold Value using
the CDF and mean

•Highest Between Class- Variance is found for Gray Level 3. Therefore, Set Threshold=3
Otsu’s Method Solved Example
•Step 5- Set all pixel values < Threshold to 0 and all pixel values >= Threshold to
1.
Practice Question
Variable Thresholding Using Moving
Averages
•Thresholding based on moving averages works well when the objects are small
with respect to the image size
• Quite useful in document processing
• The scanning (moving) is typically carried out line by line in a zigzag pattern to reduce illumination bias.
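A sketch of thresholding against a trailing moving average along one scan line; the window size n and multiplier b below are illustrative parameters, not values from the slides:

```python
def moving_average_threshold(scanline, n=3, b=1.5):
    """Variable thresholding: each pixel is compared against b times the
    moving average of the last n pixels along the scan line."""
    out, window = [], []
    for p in scanline:
        window.append(p)
        if len(window) > n:
            window.pop(0)                  # keep only the last n pixels
        m = sum(window) / len(window)
        out.append(1 if p > b * m else 0)  # local, not global, threshold
    return out

print(moving_average_threshold([10, 10, 10, 100, 10]))  # [0, 0, 0, 1, 0]
```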
Variable Thresholding Using Moving
Averages
Variable Thresholding Using Moving
Averages