
Image Segmentation

Edge & Line Detection


Segmentation: Principal approaches
 Segmentation algorithms are generally based on one of two basic properties of intensity values:

 Discontinuity (abrupt changes)
to partition an image based on abrupt changes in intensity (such as edges)

 Similarity (homogeneity)
to partition an image into regions that are similar according to a set of predefined criteria.
Segmentation based on Detection of Discontinuities
Detection of Discontinuities
 There are three basic types of grey-level discontinuities that we tend to look for in digital images:
1. Points
2. Lines
3. Edges

 We typically find discontinuities using masks and correlation.
Discrete form of derivative
Along a row, using the neighbours f(x-1, y), f(x, y), f(x+1, y):

∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2 f(x, y)

Along a column, using the neighbours f(x, y-1), f(x, y), f(x, y+1):

∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2 f(x, y)
2-Dimensional Laplacian
The digital implementation of the 2-dimensional Laplacian is obtained by summing the two components:

∇²f = ∂²f/∂x² + ∂²f/∂y²

∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4 f(x, y)

This is implemented with the mask:

0 1 0

1 -4 1

0 1 0
An extension of the previous Laplacian mask includes the diagonal neighbours. The extended mask yields isotropic results for rotations in increments of 45°.

0  1  0          1  1  1
1 -4  1          1 -8  1
0  1  0          1  1  1

A mask that weights only the diagonal neighbours is:

1  0  1
0 -4  0
1  0  1
What will be the effect of applying the Laplacian mask to the image below? (A computation sketch follows the array.)
9 9 9 9 9 9 9 2 2
9 8 9 9 9 9 2 2 2
9 9 9 9 9 9 3 2 2
9 9 9 9 9 2 2 2 2
7 9 9 9 9 2 2 2 2
9 9 9 9 2 2 2 2 2
9 9 9 9 2 2 2 4 2
9 9 9 2 2 2 2 2 2
9 9 2 2 2 2 1 2 2
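
To check the effect numerically, here is a minimal sketch, assuming NumPy and SciPy are available, that correlates the 4-neighbour Laplacian mask with the array above:

```python
import numpy as np
from scipy.ndimage import correlate

# Example image from the slide above
img = np.array([
    [9, 9, 9, 9, 9, 9, 9, 2, 2],
    [9, 8, 9, 9, 9, 9, 2, 2, 2],
    [9, 9, 9, 9, 9, 9, 3, 2, 2],
    [9, 9, 9, 9, 9, 2, 2, 2, 2],
    [7, 9, 9, 9, 9, 2, 2, 2, 2],
    [9, 9, 9, 9, 2, 2, 2, 2, 2],
    [9, 9, 9, 9, 2, 2, 2, 4, 2],
    [9, 9, 9, 2, 2, 2, 2, 2, 2],
    [9, 9, 2, 2, 2, 2, 1, 2, 2],
], dtype=float)

# 4-neighbour Laplacian mask
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

# Correlate mask and image (border pixels are replicated here)
response = correlate(img, laplacian, mode='nearest')
print(response)
# The response is 0 inside the flat 9- and 2-regions; it is large in magnitude
# along the 9/2 boundary and at the pixels (8, 3, 7, 4, 1) that differ from
# their neighbours, which is exactly what the Laplacian highlights.
```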
First and Second Order Derivatives
First and Second Order Derivatives …

 Analysis
1) The 1st-order derivative is nonzero along the entire ramp, while the 2nd-order derivative is nonzero only at the onset and end of the ramp.
2) Edges in an image represent this (ramp) type of transition. Therefore, the 1st derivative produces thick edges and the 2nd produces thin, much finer edges.
3) The response at and around an isolated point is much stronger for the 2nd-order than for the 1st-order derivative.
1st and 2nd Derivative Comparison

 First-order derivatives generally produce thicker edges in an image.
 Second-order derivatives have a stronger response to fine detail (e.g. thin lines or isolated points).
 First-order derivatives generally have a stronger response to a gray-level step.
 Second-order derivatives produce a double response at step changes in gray level.
Point Detection
 A point has been detected at the location on which the mask is centered if
|R| ≥ T
 where
 T is a nonnegative threshold
 R is the sum of products of the coefficients with the gray levels contained in the region encompassed by the mask.
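
As a sketch of this rule, assuming the usual 8-neighbour point-detection mask (8 at the centre, -1 elsewhere), since the slide's mask figure is not reproduced here:

```python
import numpy as np
from scipy.ndimage import correlate

def detect_points(img, T):
    """Return a boolean map that is True wherever |R| >= T.

    R is the mask response at each pixel; the mask assumed here (8 at the
    centre, -1 elsewhere) responds strongly to isolated points and weakly
    to constant or slowly varying regions.
    """
    mask = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)
    R = correlate(img.astype(float), mask, mode='nearest')
    return np.abs(R) >= T
```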
Point Detection (cont…)
[Figure: X-ray image of a turbine blade, the result of point detection, and the result of thresholding]
Line Detection
 The next level of complexity is to try to detect lines.
 The masks for this purpose (shown in the sketch a little further below) extract lines that are one pixel thick and running in a particular direction.
Line Detection (cont…)
 Apply every mask to the image.
 Let R1, R2, R3, R4 denote the responses of the horizontal, +45 degree, vertical and -45 degree masks, respectively.
 If, at a certain point in the image,
|Ri| > |Rj| for all j ≠ i,
that point is said to be more likely associated with a line in the direction of mask i.
Line Detection (cont…)
 Alternatively, if we are interested in detecting
all lines in an image in the direction defined by
a given mask, we simply run the mask through
the image and threshold the absolute value of
the result.
 The points that are left are the strongest
responses, which, for lines one pixel thick,
correspond closest to the direction defined by
the mask.
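
A minimal sketch of both procedures, assuming the commonly used 3x3 line masks (which diagonal mask corresponds to +45° and which to -45° depends on the axis convention):

```python
import numpy as np
from scipy.ndimage import correlate

# Commonly used 3x3 line-detection masks (the slide's own figure is not
# reproduced here, so these standard choices are assumed).
line_masks = {
    'horizontal': np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float),
    '+45':        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float),
    'vertical':   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], float),
    '-45':        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], float),
}

def line_direction_map(img):
    """Apply every mask, then pick the direction with the largest |R| per pixel."""
    R = np.stack([np.abs(correlate(img.astype(float), m, mode='nearest'))
                  for m in line_masks.values()])
    return np.argmax(R, axis=0)      # index into line_masks of the likeliest direction

def lines_in_direction(img, name, T):
    """Alternative use: keep only strong responses of a single mask (|R| >= T)."""
    R = correlate(img.astype(float), line_masks[name], mode='nearest')
    return np.abs(R) >= T
```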
Line Detection (cont…)
[Figure: binary image of a wire-bond mask, the result of processing with the -45° line detector, and the result of thresholding the filtered image]
Edge
 Edge detection requires the ability to measure gray-level transitions in a meaningful way.
 Edges characterize boundaries and are therefore a problem of fundamental importance in image processing.
 Edges in images are areas with strong intensity contrasts: a jump in intensity from one pixel to the next.
 Edge detection in an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of the image.
Edge: Example
Edges are caused by a variety of factors

surface normal discontinuity

depth discontinuity

surface color discontinuity

illumination discontinuity
Edge detection
 Convert a 2D image into a set of curves
 Extracts salient features of the scene
 More compact than pixels

Problem …
How can you tell that a pixel is on an edge?
Edge detection
 Edges can be modeled according to their intensity profiles:
 Step edge:
 Image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side.
 Ramp edge:
 A step edge where the intensity change is not instantaneous but occurs over a finite distance.
 Roof edge:
 A ridge edge where the intensity change is not instantaneous but occurs over a finite distance.
Profiles of Image Intensity Edges
Difference between an edge & a line?
 A typical edge might, for instance, be the border between a block of blue color and a block of yellow.
 In contrast, a line can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there will therefore usually be one edge on each side of the line.
Edge detection is a non-trivial task
 Consider the problem of detecting edges in the following one-dimensional image. Here, we may intuitively say that there should be an edge between the 4th and 5th pixels.

5 7 6 4 152 148 149

 'Significant' intensity variations (differentials) represent an edge.
 Stating a firm threshold on how large the intensity change between two neighbouring pixels must be before we say that an edge lies between them is not always a simple problem.
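
For this one-dimensional example the decision is easy; a small sketch (the threshold of 50 is purely illustrative):

```python
import numpy as np

row = np.array([5, 7, 6, 4, 152, 148, 149], dtype=float)

diff = np.diff(row)           # first differences between neighbouring pixels
T = 50                        # illustrative threshold; any value between 5 and 148 gives the same answer here
edges = np.abs(diff) >= T

print(diff)    # [  2.  -1.  -2. 148.  -4.   1.]
print(edges)   # [False False False  True False False] -> edge between 4th and 5th pixels
```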
Edge Detection
 Edges are the most common approach for detecting
meaningful discontinuities in gray level.

 The process of edge detection can be broadly


classified into two categories:
 Derivative approach
 Pattern fitting approach
Derivative Approach
 Edge pixels (or edgels) are detected by taking a derivative followed by thresholding
 e.g. the Roberts operator & the 4-neighbour operator
 These operators occasionally incorporate a noise-cleaning scheme
 e.g. the Prewitt operator & the Sobel operator
Pattern Fitting Approach
 A series of edge-approximating functions in the form of edge templates over a small neighbourhood are analysed.
 The parameters, along with their properties, corresponding to the best-fitting function are determined.
 Based on this information, it is decided whether an edge is present or not. These are edge filters.
The complete process of edge map generation
involves some or all of the following steps

Noise smoothing
 Uncontrolled illumination in & around the scene means a digital image could have an intensity differential at almost every point.
 Some of these insignificant intensity differentials need to be smoothed out; this is noise smoothing.
Edge Localization
 Noise smoothing has, to some extent, a blurring effect on the intensity map.
 Step changes in intensity corresponding to edges are averaged out & the peaks of the differentials become flattened.
 The edge localization process marks the edge pixels by placing them as faithfully as possible.
Edge Enhancement
 The edge value corresponding to edge strength is determined, along with the orientation of the edge pixel.
 Weak edges correspond to weak changes in intensity values.
 Strong edges correspond to a significant intensity gradient.
 The edge enhancement process filters out edgels due to noise pixels by looking around each particular edgel for orientation & continuity.
Edge Linking & Edge Following

Edge Linking
The edge linking process takes an unordered set of edge pixels produced by an edge detector as input and forms an ordered list of edgels.

Edge Following
The edge following process takes the entire edge strength or gradient image as input & produces geometric primitives such as lines or curves.
Edge Detection (Derivative Operators)

 First Order Derivative / Gradient Methods


 Roberts Operator
 Sobel Operator
 Prewitt Operator
 Second Order Derivative
 Laplacian
 Laplacian of Gaussian
 Difference of Gaussian
 Optimal Edge Detection
 Canny Edge Detection
Gray-Level Transition
Edges & Derivatives
 How are derivatives used to find discontinuities?
 The 1st derivative tells us where an edge is.
 The 2nd derivative can be used to show edge direction.
Derivatives & Noise
 Derivative-based edge detectors are extremely sensitive to noise.
 We need to keep this in mind.

[Figure: intensity profile along a scanline of a step edge and of a ramp edge, each corrupted with noise]

 An ideal edge has a step-like cross section; a more realistic edge has the shape of a ramp function corrupted with noise.
Gradient-Based Methods
• Motivation
– Detect sudden changes in image intensity
– Gradient: sensitive to intensity changes

• Pipeline: the image f(x,y) is passed through a gradient operator to give g(x,y), which is thresholded to give the edge map e(x,y).

• Gradient: ∇f = [∂f/∂x, ∂f/∂y]

• Edge decision, for a threshold T:
e(x, y) = 1 (edge pixel) if |g(x, y)| ≥ T
e(x, y) = 0 (non-edge pixel) otherwise
Gradient Operators
 The gradient of the image I(x,y) at location (x,y) is the vector:

∇I = [Gx, Gy] = [∂I(x,y)/∂x, ∂I(x,y)/∂y]

 The magnitude of the gradient: |∇I| = √(Gx² + Gy²)

 The direction of the gradient vector: α(x, y) = tan⁻¹(Gy / Gx)
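
A small sketch of the magnitude and direction formulas (Gx and Gy are assumed to have been computed already, e.g. with one of the operators discussed below):

```python
import numpy as np

def gradient_magnitude_direction(Gx, Gy):
    """|grad I| = sqrt(Gx^2 + Gy^2) and alpha = arctan(Gy / Gx), in degrees."""
    magnitude = np.hypot(Gx, Gy)
    direction = np.degrees(np.arctan2(Gy, Gx))   # arctan2 keeps the correct quadrant
    return magnitude, direction
```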
The Meaning of the Gradient
 The direction of the edge at location (x,y) is perpendicular to the gradient vector at that point.
The discrete gradient Operators
 For a digital image, the first derivative is obtained by first differences.

The discrete gradient Operators …
 The magnitude of the gradient f′(x,y) is then given by √(Gx² + Gy²).
 The direction of greatest steepness is tan⁻¹(Gy / Gx).
Edge Detection Example
[Figure: original image, horizontal gradient component, vertical gradient component, and the combined edge image]
Gradient Operators ...
 The gradient image f′(x,y) can be obtained from the input image f(x,y) by expressing it as a transformation.
 An edge element is deemed present at (x,y) if f′(x,y) exceeds a predefined threshold.
Gradient Operators ...
 Let f0 represent f(x,y), i.e. the gray level at pixel (x,y), & let f1 to f8 represent the gray levels of its neighbours, arranged as:

Row 1:  f2  f1  f8
Row 2:  f3  f0  f7
Row 3:  f4  f5  f6

where f1 = f(x-1, y), f2 = f(x-1, y-1), & so on.
Gradient Operators ...
 Ordinary Operator
Here,
d1 = f0 - f1  &  d2 = f0 - f3
This can be implemented with the following masks:

 0 -1        0  0
 0  1       -1  1
Gradient Operators ...
 Roberts Operator
Here,
d1 = f0 - f2  &  d2 = f1 - f3
This can be implemented with the following masks:

-1  0        0  1
 0  1       -1  0
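
A minimal sketch of the Roberts operator exactly as defined above (border handling and the threshold are illustrative choices):

```python
import numpy as np
from scipy.ndimage import correlate

# Roberts cross masks: d1 = f0 - f2 and d2 = f1 - f3
roberts_1 = np.array([[-1, 0],
                      [ 0, 1]], dtype=float)
roberts_2 = np.array([[ 0, 1],
                      [-1, 0]], dtype=float)

def roberts_edges(img, T):
    """Cross differences, gradient magnitude, then a threshold T."""
    f = img.astype(float)
    d1 = correlate(f, roberts_1, mode='nearest')
    d2 = correlate(f, roberts_2, mode='nearest')
    return np.hypot(d1, d2) >= T
```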
Common Edge Detectors
 Given a 3×3 region of an image, the following edge detection filters can be used.
 Example: find the strength & the direction of the edge at the highlighted pixel.

[Figure: pixels in gray have value 0 & pixels in white have value 1]

 The derivative is computed using a 3x3 neighbourhood: subtract the pixels in the top row from the pixels in the bottom row to get the derivative in the x direction. Similarly obtain the derivative in the y direction.
 Note that -45° is the same as 135° measured in the positive direction with respect to the x axis.

Edge direction
 The direction of an edge at any arbitrary point (x, y) is orthogonal to the direction θ(x, y) of the gradient vector at that point.
The sum of coefficients of each of these masks is zero
 Why do you think this is so?

 Edges are abrupt discontinuities in gray levels, i.e. high-frequency regions.
 Because the sum of the coefficients of each of these masks is zero, the masks eliminate all the low-frequency components of the image.
 Hence these masks give edges without any low-frequency regions in the final output image.
Common Edge Detectors
 The Prewitt operator can detect vertical & horizontal edges better than the Sobel operator.
 The Sobel operator detects diagonal edges better than the Prewitt operator.
 The Roberts & 4-neighbour operators are sensitive to noise.
Diagonal edges with Prewitt and Sobel masks
 Sobel masks have slightly superior noise-suppression characteristics, which is an important issue when dealing with derivatives.
Simple Edge Detection Using Gradients
 A simple edge detector using gradient magnitude:
 Compute the gradient vector at each pixel by convolving the image with horizontal and vertical derivative filters.
 Compute the gradient magnitude at each pixel.
 If the magnitude at a pixel exceeds a threshold, report a possible edge point.
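
A minimal sketch of these three steps; Sobel masks are used here as the horizontal and vertical derivative filters, and the threshold T is left to the caller:

```python
import numpy as np
from scipy.ndimage import correlate

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # horizontal derivative filter
sobel_y = sobel_x.T                             # vertical derivative filter

def simple_edge_detector(img, T):
    """Gradient vector -> gradient magnitude -> threshold."""
    f = img.astype(float)
    Gx = correlate(f, sobel_x, mode='nearest')
    Gy = correlate(f, sobel_y, mode='nearest')
    magnitude = np.hypot(Gx, Gy)
    return magnitude >= T        # True at possible edge points
```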
Simple Edge Detection Using Gradients …
Issues to Address
Advanced Techniques for Edge Detection
 Edge detection methods discussed so far are based on using small operators.
Practical Issues for Edge Detection
 Canny has good answers to all!
Optimal Edge Detection: Canny
 Developed by John F. Canny (JFC), Professor at UC Berkeley, in 1986.
 This is probably the most widely used edge detector in computer vision.
 Canny's Edge Detector is optimal for a certain class of edges (known as step edges).
 Canny's approach is based on 3 basic objectives:
 Detection (low error rate)
 Localization
 Number of responses
Optimal Edge Detection: Canny …
 Detection (Low error rate)
 All edges should be found
 There should be no spurious responses
 Edge points should be well localized
 The edges detected should be as close as possible to the true
edges.
 The distance between a point marked as an edge by the detector
& the centre of the true edge should be minimum.
 Single Edge point response
 The detector should return only one point for each true edge
point.
 The number of local maxima around the true edge point should
be minimum.
 The detector should not identify multiple edge pixels where only
a single edge point exists.
Optimal Edge Detection: Canny …

Canny Edge Detector pipeline:
image → smoothing with a Gaussian → differentiating along the x and y axes → finding peaks in the gradient image → thresholding → thinning (locating edge strings) → edge map
Algorithm - Canny Edge Detector
Step 1
Convolve the image f(x, y) with a Gaussian function to get a smooth image f′(x, y):
f′(x, y) = f(x, y) * G(x, y; σ)

Step 2
Apply a first-difference gradient operator to compute the edge strength. Any of the filter masks (Roberts, Prewitt, etc.) can be used to calculate d1 & d2.
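
A sketch of Steps 1-2 (SciPy assumed; Sobel masks are used for the gradient here, though Roberts or Prewitt would do equally well, and σ = 1.4 is only an illustrative choice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def canny_steps_1_2(img, sigma=1.4):
    """Smooth with a Gaussian of scale sigma, then compute gradient
    magnitude and direction for the later steps."""
    smoothed = gaussian_filter(img.astype(float), sigma)   # Step 1: f' = f * G
    Gx = sobel(smoothed, axis=1)       # Step 2: derivative along x (columns)
    Gy = sobel(smoothed, axis=0)       #         derivative along y (rows)
    magnitude = np.hypot(Gx, Gy)
    direction = np.arctan2(Gy, Gx)     # radians; used by non-maxima suppression
    return magnitude, direction
```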
Algorithm - Canny Edge Detector …
Step 3
Apply non-maximal suppression to the gradient magnitude.
 The purpose of this step is to convert the "blurred" edges in the image of gradient magnitudes to "sharp" edges.
 This is achieved by suppressing the edge magnitudes that are not local maxima along the direction of the gradient. In Canny's approach, the edge direction is reduced to one of four directions.
 To perform this task for a given point, its gradient magnitude is compared with those of the points in its 3x3 neighbourhood that lie along the gradient direction.
 If the candidate magnitude is greater than those of its neighbours, the edge strength is maintained; otherwise it is discarded.
Algorithm - Canny Edge Detector …
Non-Maxima Suppression

Non-maximum suppression:
Select the single maximum point across the width of
an edge.
Algorithm - Canny Edge Detector …
Non-Maxima Suppression

 Thin edges by keeping large values of Gradient


 not always at the location of an edge
 there are many thick edges
0 0 0 0 1 1 1 3
3 0 0 1 2 1 3 1
0 0 2 1 2 1 1 0
0 1 3 2 1 1 0 0
0 3 2 1 0 0 1 0
2 3 2 0 0 1 0 1
2 3 2 0 1 0 2 1
Algorithm - Canny Edge Detector …
Non-Maxima Suppression …
 Thin the broad ridges in M[i,j] into ridges that are only one pixel wide
 Find local maxima in M[i,j] by suppressing all values along the line of the Gradient that are not peak values of the ridge

0 0 0 0 1 1 1 3
3 0 0 1 2 1 3 1
0 0 2 1 2 1 1 0
0 1 3 2 1 1 0 0   (false edges)
0 3 2 1 0 0 1 3
2 3 2 0 0 1 0 1   (gaps)
2 3 2 0 1 0 2 1
Algorithm - Canny Edge Detector …
Non-Maxima Suppression …

 Reduce angle of Gradient θ[i,j] to one of the 4 sectors


 Check the 3x3 region of each M[i,j]
 If the value at the center is not greater than the 2
values along the gradient, then M[i,j] is set to 0
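
A plain-Python sketch of this rule (the sector-to-neighbour pairing assumes row/column image coordinates; keeping ties is a common convention):

```python
import numpy as np

def non_maxima_suppression(M, direction):
    """Zero out values of M that are not local maxima along the gradient,
    after reducing the gradient angle to one of four sectors."""
    out = np.zeros_like(M)
    angle = np.rad2deg(direction) % 180          # fold the angle into [0, 180)
    rows, cols = M.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:           # sector ~0 deg: left/right neighbours
                n1, n2 = M[i, j - 1], M[i, j + 1]
            elif a < 67.5:                       # sector ~45 deg
                n1, n2 = M[i - 1, j + 1], M[i + 1, j - 1]
            elif a < 112.5:                      # sector ~90 deg: up/down neighbours
                n1, n2 = M[i - 1, j], M[i + 1, j]
            else:                                # sector ~135 deg
                n1, n2 = M[i - 1, j - 1], M[i + 1, j + 1]
            if M[i, j] >= n1 and M[i, j] >= n2:  # keep only the ridge peak
                out[i, j] = M[i, j]
    return out
```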
Algorithm - Canny Edge Detector …
Non-Maxima Suppression …

0 0 0 0 1 1 1 3
0 0 0 1 2 1 3 1
0 0 2 1 2 1 1 0
0 1 3 2 1 1 0 0
0 3 2 1 0 0 1 0
2 3 2 0 0 1 0 1
2 3 2 0 1 0 2 1

(local maxima removed; the result depends on the suppression condition)
Algorithm - Canny Edge Detector …
Non-Maxima Suppression …

0 0 0 0 0 0 0 3
0 0 0 0 2 1 3 0
0 0 2 1 2 0 0 0
0 0 3 0 0 0 0 0   (false edges)
0 3 2 0 0 0 0 0
0 3 0 0 0 1 0 1
0 3 0 0 1 0 2 0

 The suppressed magnitude image will contain many false edges caused by noise or fine texture.
Algorithm - Canny Edge Detector …

Step 4
 Hysteresis thresholding/Edge Linking
 False edges can be reduced by applying a single threshold T:
 all values below T are changed to 0
 However, selecting a good value for T is difficult:
 some false edges will remain if T is too low
 some edges will disappear if T is too high
Algorithm - Canny Edge Detector …

Step 5
 Hysteresis thresholding/Edge Linking
 Canny’s approach employs double thresholding, known as
hysteresis.
 In this process, two thresholds, upper & lower
thresholds are set by the user.
 For a given edgel chain, if the magnitude of any
one edgel of the chain is greater than the upper
threshold, all edgels above the lower threshold are
selected as edge points.
Algorithm - Canny Edge Detector …
Hysteresis thresholding/Edge Linking

1. Produce two thresholded images I1(i, j) and I2(i, j).


(note: since I2(i, j) was formed with a high threshold, it will
contain fewer false edges but there might be gaps in the
contours)
2. Link the edges in I2(i, j) into contours
2.1 Look in I1(i, j) when a gap is found.
2.2 By examining the 8 neighbors in I1(i, j), gather edge
points from I1(i, j) until the gap has been bridged to an
edge in I2(i, j).
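
A minimal sketch of this linking step, written as a breadth-first search that starts from the strong (above upper threshold) pixels and gathers 8-connected weak (above lower threshold) pixels, which is an equivalent way of bridging the gaps:

```python
import numpy as np

def hysteresis_threshold(magnitude, low, high):
    """Keep pixels above `high`, plus pixels above `low` that are 8-connected,
    possibly through other such pixels, to one above `high`."""
    strong = magnitude >= high
    weak = magnitude >= low
    keep = strong.copy()
    stack = list(zip(*np.nonzero(strong)))       # start from the strong edgels
    rows, cols = magnitude.shape
    while stack:
        i, j = stack.pop()
        for di in (-1, 0, 1):                    # examine the 8 neighbours
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and weak[ni, nj] and not keep[ni, nj]:
                    keep[ni, nj] = True          # weak edgel linked to a strong one
                    stack.append((ni, nj))
    return keep
```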
Algorithm - Canny Edge Detector …
Hysteresis thresholding/Edge Linking

T2 = 2 (the gaps in this image are filled from T1):
0 0 0 0 0 0 0 3
0 0 0 0 2 0 3 0
0 0 2 0 2 0 0 0
0 0 3 0 0 0 0 0
0 3 2 0 0 0 0 0
0 3 0 0 0 0 0 0
0 3 0 0 0 0 2 0

T1 = 1:
0 0 0 0 0 0 0 3
0 0 0 0 2 1 3 0
0 0 2 1 2 0 0 0
0 0 3 0 0 0 0 0
0 3 2 0 0 0 0 0
0 3 0 0 0 1 0 1
0 3 0 0 1 0 2 0

• Linking: search the 3x3 neighbourhood of each pixel and connect the pixel at the centre with the one having the greater value
• Search in the direction of the edge (direction of the Gradient)
• Mark as valid edge pixels all the weak pixels in T1 that are 8-connected to an edge pixel in T2
Canny Edge Detector: Example
Edge linking
 Edge detection should yield sets of pixels lying only on edges.
 But in practice, these pixels seldom characterize edges completely:
 Breaks in edges due to non-uniform illumination
 Other effects that introduce spurious discontinuities in intensity values
 Therefore, edge detection is usually followed by linking algorithms designed to assemble edge pixels into meaningful edges.
Hough Transform
 Performed after Edge Detection
 It is a technique to isolate the curves of a given
shape / shapes in a given image
 Classical Hough Transform can locate regular
curves like straight lines, circles, parabolas,
ellipses, etc.
 Requires that the curve be specified in some
parametric form
Hough Transform …
• Given marked edge pixels, find examples of
specific shapes
– Line segments
– Circles
– Generalized shapes
• Basic idea - Patented 1962
– Every edge pixel is a point that votes for all shapes
that pass through it.
– Votes are collected in “parameter space” - look for
peaks
– “Parameter space” is a k-dimensional histogram!
Advantages of Hough Transform

 The Hough Transform is tolerant of gaps


in the edges

 It is relatively unaffected by noise

 It is also unaffected by occlusion in the


image
The basic idea
 Each straight line in this image can be described by an equation.
 Each white point, if considered in isolation, could lie on an infinite number of straight lines.
The basic idea

In the Hough transform


each point votes for
every line it could be on

The lines with the most votes


win
Image and Parameter Space
Image and Parameter Space

y = mx + c
where m is the slope & c the y-intercept.
The equation can be rewritten in c–m space as
c = -xm + y
Suppose we have several edge points (x1, y1), ..., (xn, yn) in x–y space that we want to fit with a line.
Each point in x–y space maps to a line in c–m space.
Finding lines in an image

[Figure: a line in image space (x, y) and the corresponding point (m0, b0) in Hough space (m, b)]

 Connection between image (x,y) and Hough (m,b) spaces:
 A line in the image corresponds to a point in Hough space
 To go from image space to Hough space:
 given a set of points (x,y), find all (m,b) such that y = mx + b
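
A sketch of the voting step in the (m, c) parameterization described above; the bin counts and parameter ranges are purely illustrative, and practical implementations usually switch to the (ρ, θ) form to cope with near-vertical lines:

```python
import numpy as np

def hough_lines_mc(edge_points, m_range=(-5.0, 5.0), c_range=(-200.0, 200.0),
                   n_m=200, n_c=200):
    """Each edge point (x, y) votes for every line c = -x*m + y it could lie on;
    peaks in the accumulator correspond to lines supported by many points."""
    m_values = np.linspace(m_range[0], m_range[1], n_m)
    c_min, c_max = c_range
    accumulator = np.zeros((n_m, n_c), dtype=int)
    for x, y in edge_points:
        c = -x * m_values + y                                    # the point's line in c-m space
        c_bins = np.round((c - c_min) / (c_max - c_min) * (n_c - 1)).astype(int)
        valid = (c_bins >= 0) & (c_bins < n_c)
        accumulator[np.nonzero(valid)[0], c_bins[valid]] += 1    # cast the votes
    m_idx, c_idx = np.unravel_index(np.argmax(accumulator), accumulator.shape)
    best_m = m_values[m_idx]
    best_c = c_min + c_idx * (c_max - c_min) / (n_c - 1)
    return accumulator, (best_m, best_c)
```

Missing edge pixels simply cast fewer votes for their line, which is why the transform tolerates gaps, noise and occlusion as noted earlier.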
Any Questions ?
