Morphological Image Processing
Preview
The word morphology commonly denotes a branch of biology that deals with
the form and structure of animals and plants. We use the same word here in
the context of mathematical morphology as a tool for extracting image components that are useful in the representation and description of region shape,
such as boundaries, skeletons, and the convex hull. We are interested also in
morphological techniques for pre- or postprocessing, such as morphological
filtering, thinning, and pruning.
In Section 10.1 we define several set theoretic operations and discuss binary
sets and logical operators. In Section 10.2 we define two fundamental morphological operations, dilation and erosion, in terms of the union (or intersection) of an image with a translated shape called a structuring element. Section 10.3 deals with combining erosion and dilation to obtain more complex morphological operations. Section 10.4 introduces techniques for labeling connected
components in an image. This is a fundamental step in extracting objects from
an image for subsequent analysis.
Section 10.5 deals with morphological reconstruction, a morphological transformation involving two images, rather than a single image and a structuring element, as is the case in Sections 10.1 through 10.4. Section 10.6
extends morphological concepts to gray-scale images by replacing set union
and intersection with maxima and minima. Many binary morphological operations have natural extensions to gray-scale processing. Some, like morphological reconstruction, have applications that are unique to gray-scale images, such as peak filtering.
The material in this chapter begins a transition from image-processing
methods whose inputs and outputs are images, to image analysis methods,
whose outputs attempt to describe the contents of the image. Morphology is
10.1 Preliminaries
In this section we introduce some basic concepts from set theory and discuss the application of MATLAB's logical operators to binary images.
Let A be a set of pixel coordinates in Z².† If w = (x, y) is an element of A, we write
w ∈ A
Similarly, if w is not an element of A, we write
w ∉ A
A set B of pixel coordinates that satisfy a particular condition is written as
B = {w | condition}
For example, the set of all pixel coordinates that do not belong to set A, denoted Aᶜ, is given by
Aᶜ = {w | w ∉ A}
† The Cartesian product of a set of integers, Z, is the set of all ordered pairs of elements (zᵢ, zⱼ), with zᵢ and zⱼ being integers from Z. It is customary to denote the Cartesian product by Z².
FIGURE 10.1 (a) Two sets A and B. (b) The union of A and B, A ∪ B. (c) The intersection of A and B, A ∩ B. (d) The complement of A, Aᶜ. (e) The difference between A and B, A − B.
The difference of sets A and B, denoted A - B, is the set of all elements that
belong to A but not to B:
A − B = {w | w ∈ A, w ∉ B}
Figure 10.1 illustrates the set operations defined thus far. The result of each
operation is shown in gray.
In addition to the preceding basic operations, morphological operations often require two operators that are specific to sets whose elements are pixel coordinates. The reflection of a set B, denoted B̂, is defined as
B̂ = {w | w = −b, for b ∈ B}
The translation of set A by point z = (z₁, z₂), denoted (A)_z, is defined as
(A)_z = {c | c = a + z, for a ∈ A}
Figure 10.2 illustrates these two definitions using the sets from Fig. 10.1. The
black dot denotes the origin of the sets (the origin is a user-defined reference
point).
FIGURE 10.2 (a) Reflection of B, denoted B̂. (b) Translation of A by z, denoted (A)_z. The sets A and B are from Fig. 10.1, and the black dot denotes their origin (the origin is a user-defined reference point).
where, as mentioned previously regarding the set point of view, the elements of A and B are 1-valued. Thus, we see that the function point of view deals with both foreground (1) and background (0) pixels simultaneously. The set point of view deals only with foreground pixels, and it is understood that all pixels that are not foreground pixels constitute the background. Of course, results using either point of view are the same. The set operations defined in Fig. 10.1 can be performed on binary images using MATLAB's logical operators OR (|), AND (&), and NOT (~), as Table 10.1 shows.
As an illustration, Fig. 10.3 shows the results of applying several logical operators to two binary images containing text. (We follow the Image Processing Toolbox convention that foreground (1-valued) pixels are displayed as white.) The image in Fig. 10.3(d) is the union of the "UTK" and "GT" images; it contains all the foreground pixels from both. In contrast, the intersection of the two images [Fig. 10.3(e)] shows the pixels where the letters in "UTK" and "GT" overlap. Finally, the set difference image [Fig. 10.3(f)] shows the letters in "UTK" with the pixels of "GT" removed.
FIGURE 10.3 (a) Binary image A ("UTK"). (b) Binary image B ("GT"). (c) Complement ~A. (d) Union A | B. (e) Intersection A & B. (f) Set difference A & ~B.
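As a minimal sketch of how these set operations map to MATLAB's logical operators (the file names here are hypothetical; any two binary images of equal size will do):
» A = logical(imread('utk.tif'));   % hypothetical file containing the "UTK" image
» B = logical(imread('gt.tif'));    % hypothetical file containing the "GT" image
» C_union = A | B;                  % union, as in Fig. 10.3(d)
» C_inter = A & B;                  % intersection, as in Fig. 10.3(e)
» C_comp  = ~A;                     % complement, as in Fig. 10.3(c)
» C_diff  = A & ~B;                 % set difference, as in Fig. 10.3(f)
» imshow(C_diff)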
10.2.1 Dilation
Dilation is an operation that "grows" or "thickens" objects in an image. The specific manner and extent of this thickening is controlled by a shape referred to as a structuring element. Figure 10.4 illustrates how dilation works. Figure 10.4(a) shows a binary image containing a rectangular object. Figure 10.4(b) is a structuring element, a five-pixel-long diagonal line in this case. Graphically, structuring elements can be represented either by a matrix of 0s and 1s or as a set of foreground (1-valued) pixels, as in Fig. 10.4(b). We use both representations interchangeably in this chapter. Regardless of the representation, the origin of the structuring element must be clearly identified.
FIGURE 10.4 Illustration of dilation. (a) Original image with rectangular object. (b) Structuring element with five pixels arranged in a diagonal line; the origin, or center, of the structuring element is shown with a dark border. (c) Structuring element translated to several locations in the image. When the origin is translated to the shaded locations, the structuring element overlaps at least one 1-valued pixel in the original image; at the other locations it overlaps none. (d) Output image. The shaded region shows the location of 1s in the original image.
Figure 10.4(b) indicates the origin of the structuring element by a black box. Figure 10.4(c) depicts dilation as a process that translates the origin of the structuring element throughout the domain of the image and checks to see where the element overlaps 1-valued pixels. The output image [Fig. 10.4(d)] is 1 at each location of the origin of the structuring element such that the structuring element overlaps at least one 1-valued pixel in the input image. (Here you can see an example of the importance of the origin of a structuring element: changing the location of the defined origin generally changes the result of a morphological operation.)
The dilation of A by B, denoted A ⊕ B, is defined as the set operation
A ⊕ B = {z | (B̂)_z ∩ A ≠ ∅}
where ∅ is the empty set and B is the structuring element. In words, the dilation of A by B is the set consisting of all the structuring element origin locations where the reflected and translated B overlaps at least one element of A.
It is a convention in image processing to let the first operand of A ⊕ B be the
image and the second operand be the structuring element, which usually is
much smaller than the image. We follow this convention from this point on.
The translation of the structuring element in dilation is similar to the mechanics of spatial convolution discussed in Chapter 3. Figure 10.4 does not show the structuring element's reflection explicitly because the structuring element is symmetrical with respect to its origin in this case. Figure 10.5 shows a nonsymmetric structuring element and its reflection. Toolbox function reflect can be used to compute the reflection of a structuring element.
Dilation is associative,
A ⊕ (B ⊕ C) = (A ⊕ B) ⊕ C
and commutative:
A ⊕ B = B ⊕ A
Toolbox function imdilate performs dilation. Its basic calling syntax is
D = imdilate(A, B)
For the moment, the inputs and output are assumed to be binary, but the same syntax can deal with gray-scale functions, as discussed in Section 10.6. Assuming binary quantities for now, B is a structuring element array of 0s and 1s whose origin is computed automatically by the toolbox as
floor((size(B) + 1)/2)
This operation yields a 2-D vector containing the coordinates of the center of the structuring element. If you need to work with a structuring element in which the origin is not in the center, the approach is to pad B with zeros so that the original center is shifted to the desired location.
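A small sketch of the idea (the specific structuring element is made up for illustration):
» B = [1 1 1 1 1];             % 1-by-5 line; floor((size(B) + 1)/2) = [1 3], so
                               % the default origin is the third element
» B2 = [0 0 0 0 1 1 1 1 1];    % padded with zeros on the left so that the
                               % computed center (element 5 of 9) falls on the
                               % leftmost 1 of the original line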
EXAMPLE 10.1: An application of dilation.
• Figure 10.6(a) shows a binary image containing text with numerous broken characters. We want to use imdilate to dilate the image with the following structuring element:
FIGURE 10.5 (a) Nonsymmetric structuring element. (b) Structuring element reflected about its origin.
FIGURE 10.6 An example of dilation. (a) Input image containing broken text. (b) Dilated image.
0 1 0
1 [1] 1
0 1 0
The following commands read the image from a file, form the structuring ele-
ment matrix, perform the dilation, and display the result.
» A = imread('broken_text.tif');
» B = [0 1 0; 1 1 1; 0 1 0];
» D = imdilate(A, B);
» imshow(D)
Then, because dilation is associative, A ⊕ B = A ⊕ (B1 ⊕ B2) = (A ⊕ B1) ⊕ B2. In other words, dilating A with B is the same as first dilating A with B1 and then dilating the result with B2. We say that B can be decomposed into the structuring elements B1 and B2.
The associative property is important because the time required to compute
dilation is proportional to the number of nonzero pixels in the structuring element. Consider, for example, dilation with a 5 × 5 array of 1s:
1 1 1 1 1
1 1 1 1 1
1 1 [1] 1 1
1 1 1 1 1
1 1 1 1 1
The number of elements in the original structuring element is 25, but the total
number of elements in the row-column decomposition is only 10. This means
that dilation with the row structuring element first, followed by dilation with
the column element, can be performed 2.5 times faster than dilation with the
5 × 5 array of 1s. In practice, the speed-up will be somewhat less because usually
there is some overhead associated with each dilation operation. However, the
gain in speed with the decomposed implementation is still significant.
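The following sketch illustrates the decomposition speed-up idea (A is assumed to be any binary image, such as the one used in Example 10.1):
» B    = ones(5);                           % 5-by-5 structuring element (25 ones)
» Brow = ones(1, 5);                        % row component
» Bcol = ones(5, 1);                        % column component
» D1 = imdilate(A, B);                      % direct dilation
» D2 = imdilate(imdilate(A, Brow), Bcol);   % row dilation followed by column dilation
» isequal(D1, D2)                           % returns true: the two results are identical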
The toolbox function strel constructs structuring elements with a variety of shapes and sizes. Its basic syntax is
se = strel(shape, parameters)
where shape is a string specifying the desired shape, and parameters is a list of parameters that specify information about the shape, such as its size. For example, strel('diamond', 5) returns a diamond-shaped structuring element that extends ±5 pixels along the horizontal and vertical axes. Table 10.2 summarizes the various shapes that strel can create.
In addition to simplifying the generation of common structuring element shapes, function strel also has the important property of producing structuring elements in decomposed form. Function imdilate automatically uses the decomposition information to speed up the dilation process. The following example illustrates how strel returns information related to the decomposition of a structuring element.
EXAMPLE 10.2: Structuring element decomposition using function strel.
• Consider the creation of a diamond-shaped structuring element using function strel:
» se = strel('diamond', 5)
se =
Flat STREL object containing 61 neighbors.
Decomposition: 4 STREL objects containing a total of 17 neighbors
Neighborhood:
0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 1 1 1 0 0 0 0
0 0 0 1 1 1 1 1 0 0 0
0 0 1 1 1 1 1 1 1 0 0
0 1 1 1 1 1 1 1 1 1 0
1 1 1 1 1 1 1 1 1 1 1
0 1 1 1 1 1 1 1 1 1 0
0 0 1 1 1 1 1 1 1 0 0
0 0 0 1 1 1 1 1 0 0 0
0 0 0 0 1 1 1 0 0 0 0
0 0 0 0 0 1 0 0 0 0 0
» decomp = getsequence(se);
» whos
  Name        Size       Bytes    Class    Attributes
  decomp      4x1        1716     strel
  se          1x1        3309     strel
The output of whos shows that se and decomp are both strel objects and,
further, that decomp is a four-element vector of strel objects. The four structur-
ing elements in the decomposition can be examined individually by indexing
into decomp:
» decomp(1)
ans =
Flat STREL object containing 5 neighbors.
Neighborhood:
0 1 0
1 1 1
0 1 0
» decomp(2)
ans =
Flat STREL object containing 4 neighbors.
Neighborhood:
0 1 0
1 0 1
0 1 0
» decomp(3)
ans =
Flat STREL object containing 4 neighbors.
Neighborhood:
0 0 1 0 0
0 0 0 0 0
1 0 0 0 1
0 0 0 0 0
0 0 1 0 0
» decomp(4)
ans =
Flat STREL object containing 4 neighbors.
Neighborhood:
0 1 0
1 0 1
0 1 0
Function imdilate uses the decomposed form of a structuring element automatically, performing dilation approximately three times faster (≈ 61/17) in this case than with the non-decomposed form. •
10.2.4 Erosion
Erosion "shrinks" or "thins" objects in a binary image. As in dilation, the manner and extent of shrinking is controlled by a structuring element. Figure 10.7 illustrates the erosion process. Figure 10.7(a) is the same as Fig. 10.4(a).
FIGURE 10.7 Illustration of erosion. (a) Original image with rectangular object. (b) Structuring element with three pixels arranged in a vertical line; the origin of the structuring element is shown with a dark border. (c) Structuring element translated to several locations in the image. The result is 0 at locations where all or part of the structuring element overlaps the background, and 1 where the structuring element fits entirely within the foreground. (d) Output image. The shaded region shows the location of 1s in the original image.
Figure 10.7(b) is the structuring element, a short vertical line. Figure 10.7(c) depicts erosion graphically as a process of translating the structuring element throughout the domain of the image and checking to see where it fits entirely within the foreground of the image. The output image in Fig. 10.7(d) has a value of 1 at each location of the origin of the structuring element such that the element overlaps only 1-valued pixels of the input image (i.e., it does not overlap any of the image background).
The erosion of A by B, denoted A ⊖ B, is defined as
A ⊖ B = {z | (B)_z ∩ Aᶜ = ∅}
Here, erosion of A by B is the set of all structuring element origin locations
where no part of B overlaps the background of A .
EXAMPLE 10.3: An illustration of erosion.
• Erosion is performed by toolbox function imerode, whose syntax is the same as the syntax of imdilate discussed in Section 10.2.1. Suppose that we want to remove the thin wires in the binary image in Fig. 10.8(a) while preserving the other structures. One approach is to erode the image with a disk structuring element that is wider than the wires but smaller than the components we want to keep:
FIGURE 10.8 An illustration of erosion. (a) Original image of size 486 × 486 pixels. (b) Erosion with a disk of radius 10. (c) Erosion with a disk of radius 5. (d) Erosion with a disk of radius 20.
» A = imread('wirebond_mask.tif');
» se = strel( 'disk', 10);
» E10 = imerode(A, se);
» imshow(E10)
As Fig. 10.8(b) shows, these commands successfully removed the thin wires in the mask. Figure 10.8(c) shows what happens if we choose a structuring element that is too small:
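The commands, presumably analogous to the preceding ones but with a radius of 5, would be along these lines:
» se = strel('disk', 5);
» E5 = imerode(A, se);
» imshow(E5)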
Some of the wire leads were not removed in this case. Figure 10.8(d) shows
what happens if we choose a structuring element that is too large:
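Again, the commands are presumably the same except for the disk radius:
» se = strel('disk', 20);
» E20 = imerode(A, se);
» imshow(E20)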
The wire leads were removed, but so were the border leads.
•
10.3 Combining Dilation and Erosion
In image-processing applications, dilation and erosion are used most often in various combinations. An image will undergo a series of dilations and/or erosions using the same, or sometimes different, structuring elements. In this section we consider three of the most common combinations of dilation and erosion: opening, closing, and the hit-or-miss transformation. We also introduce lookup table operations and discuss bwmorph, a toolbox function that can perform a variety of morphological tasks.
The opening of A by B can be interpreted geometrically as the union of all translations of B that fit entirely within A. Figure 10.9 illustrates this interpretation: Fig. 10.9(a) shows a set A and a structuring element B, and Fig. 10.9(b) shows the translations of B that fit entirely within A.
FIGURE 10.9 Opening and closing as unions of translated structuring elements. (a) Set A and structuring element B. (b) Translations of B that fit entirely within set A. (c) The complete opening, A ∘ B (shaded). (d) Translations of B outside the border of A. (e) The complete closing (shaded).
Opening and closing are implemented by toolbox functions imopen and imclose. Their syntaxes are
C = imopen(A, B)
and
C = imclose(A, B)
FIGURE 10.10 Illustration of opening and closing. (a) Original image. (b) Opening. (c) Closing. (d) Closing of (b).
where, for now, A is a binary image and B is a matrix of 0s and 1s that specifies the structuring element. A strel object from Table 10.2 can be used instead of B.
EXAMPLE 10.4: Working with functions imopen and imclose.
• This example illustrates the use of functions imopen and imclose. The image shapes.tif shown in Fig. 10.10(a) has several features designed to illustrate the characteristic effects of opening and closing, such as thin protrusions, a thin bridge, several gulfs, an isolated hole, a small isolated object, and a jagged boundary. The following commands open the image with a 20 × 20 square structuring element:
» f = imread('shapes.tif');
» se = strel('square', 20);
» fo = imopen(f, se);
» imshow(fo)
Figure 10.10(b) shows the result. Note that the thin protrusions and outward-
pointing boundary irregularities were removed. The thin bridge and the small
isolated object were removed also. The commands
» fc = imclose(f, se);
» imshow(fc)
produced the result in Fig. 10.10(c). Here, the thin gulf, the inward-pointing
boundary irregularities, and the small hole were removed. Closing the result of
the earlier opening has a smoothing effect:
» foc = imclose(fo, se);
» imshow(foc)
Figure 10.10(d) shows the result.
FIGURE 10.11 (a) Noisy fingerprint image. (b) Opening of the image. (c) Opening followed by closing. (Original image courtesy of the U.S. National Institute of Standards and Technology.)
As another illustration, consider the noisy fingerprint image in Fig. 10.11(a). The commands
» f = imread('Fig1011(a).tif');
» se = strel('square', 3);
» fo = imopen(f, se);
» imshow(fo)
produced the image in Fig. 10.11(b). The noisy spots were removed by opening the image, but this process introduced numerous gaps in the ridges of the fingerprint. Many of the gaps can be bridged by following the opening with a closing:
» foc = imclose(fo, se);
» imshow(foc)
Figure 10.11(c) shows the final result, in which most of the noise was removed (at the expense of introducing some gaps in the fingerprint ridges). •
FIGURE 10.12 (a) Original image A. (b) Structuring element B1. (c) Erosion of A by B1. (d) Complement of the original image, Aᶜ. (e) Structuring element B2. (f) Erosion of Aᶜ by B2. (g) Output image.
As an illustration of the hit-or-miss transformation, suppose we want to locate all pixels whose 3 × 3 neighborhoods match the following configuration, where 1 denotes a required foreground pixel and 0 a required background pixel:
0 1 0
1 1 1
0 1 0
Figure 10.12(a) contains this configuration of pixels in two different locations. Erosion with structuring element B1 determines the locations of foreground pixels that have north, east, south, and west foreground neighbors [Fig. 10.12(c)]. Erosion of the complement of Fig. 10.12(a) with structuring element B2 determines the locations of all the pixels whose northeast, southeast, southwest, and northwest neighbors belong to the background [Fig. 10.12(f)]. Figure 10.12(g) shows the intersection (logical AND) of these two operations. Each foreground pixel of Fig. 10.12(g) is the location of the center of a set of pixels having the desired configuration.
The name "hit-or-miss transformation" is based on how the result is affected by the two erosions. For example, the output image in Fig. 10.12 consists of all locations that match the pixels in B1 (a "hit") and that have none of the pixels in B2 (a "miss"). Strictly speaking, the term hit-and-miss transformation is more accurate, but hit-or-miss transformation is used more frequently.
The hit-or-miss transformation is implemented by toolbox function bwhitmiss, which has the syntax
C = bwhitmiss(A, B1, B2)
where C is the result, A is the input image, and B1 and B2 are the structuring elements just discussed.
EXAMPLE 10.5: Using function bwhitmiss.
• Consider the task of locating upper-left-corner pixels of square objects in an image using the hit-or-miss transformation. Figure 10.13(a) shows an image containing squares of various sizes. We want to locate foreground pixels that
FIGURE 10.13 (a) Original image. (b) Result of applying the hit-or-miss transformation (the dots shown were enlarged to facilitate viewing).
have east and south neighbors (these are "hits") and that have no northeast, north, northwest, west, or southwest neighbors (these are "misses"). These requirements lead to the two structuring elements:
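The display of the two structuring elements is not reproduced in this excerpt; a sketch consistent with the description (rows run north to south, columns west to east) would be:
» B1 = strel([0 0 0; 0 1 1; 0 1 0]);   % "hits": center, east, and south neighbors
» B2 = strel([1 1 1; 1 0 0; 1 0 0]);   % "misses": NE, N, NW, W, and SW neighbors
                                       % (the SE position is 0 in both, i.e., don't care)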
Note that neither structuring element contains the southeast neighbor, which is called a don't care pixel. We use function bwhitmiss to compute the transformation, where A is the input image shown in Fig. 10.13(a):
» C = bwhitmiss(A, B1, B2);
» imshow(C)
To generate the lookup-table index for a 3 × 3 neighborhood, each pixel in the neighborhood is multiplied by a power of two (1, 2, 4, ..., 256, assigned columnwise) and then all the products are summed. This procedure assigns a unique value in the range [0, 511] to each different 3 × 3 neighborhood configuration. For example, the value assigned to the neighborhood
1 1 0
1 0 1
1 0 1
is 1(1) + 2(1) + 4(1) + 8(1) + 16(0) + 32(0) + 64(0) + 128(1) + 256(1) = 399, where the first number in each product is the coefficient (a power of two) and the numbers in parentheses are the pixel values, taken columnwise.
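A short sketch of this index computation in MATLAB (the variable names are illustrative only):
» weights = reshape(2.^(0:8), 3, 3);    % [1 8 64; 2 16 128; 4 32 256], powers of two columnwise
» nhood = [1 1 0; 1 0 1; 1 0 1];
» index = sum(weights(:) .* nhood(:))   % 1 + 2 + 4 + 8 + 128 + 256
index =
   399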
The Image Processing Toolbox provides two functions, makelut and
applylut (illustrated later in this section), that can be used to implement this
technique. Function makelut constructs a lookup table based on a user-supplied function, and applylut processes binary images using this lookup table. Continuing with the 3 × 3 case, using makelut requires writing a function that accepts a 3 × 3 binary matrix and returns a single value, typically either a 0 or 1. Function makelut calls the user-supplied function 512 times, passing it each possible 3 × 3 neighborhood configuration, and returns all the results in the
form of a 512-element vector.
As an illustration, we write a function, endpoints.m, that uses makelut and applylut to detect end points in a binary image. We define an end point as a foreground pixel whose neighbor configuration matches the hit-or-miss interval matrix [0 1 0; -1 1 -1; -1 -1 -1] or any of its 90-degree rotations, or a foreground pixel whose neighbor configuration matches the hit-or-miss interval matrix [1 -1 -1; -1 1 -1; -1 -1 -1] or any of its 90-degree rotations (Gonzalez and Woods [2008]). Function endpoints computes and then applies a lookup table for detecting end points in an input image. The line of code
persistent lut
declares lut as a persistent variable: its value is retained in memory between calls to the function, so the lookup table is computed only once.
function g = endpoints(f)
%ENDPOINTS Computes end points of a binary image.
%   G = ENDPOINTS(F) computes the end points of the binary image F
%   and returns them in the binary image G.

persistent lut
if isempty(lut)
    lut = makelut(@endpoint_fcn, 3);
end
g = applylut(f, lut);

%-----------------------------------------------------------------%
function is_end_point = endpoint_fcn(nhood)
%   Determines if a pixel is an end point.
%   IS_END_POINT = ENDPOINT_FCN(NHOOD) accepts a 3-by-3 binary
%   neighborhood, NHOOD, and returns a 1 if the center element is an
%   end point; otherwise it returns a 0.
%   The body below is a minimal sketch (the original body is not shown
%   in this excerpt): the center pixel is taken to be an end point if
%   it is foreground and has exactly one foreground neighbor.
is_end_point = nhood(2, 2) && (sum(nhood(:)) == 2);
EXAMPLE 10.6: Playing Conway's Game of Life using binary images and lookup-table-based computations.
• An interesting and instructive application of lookup tables is the implementation of Conway's "Game of Life," which involves "organisms" arranged on a rectangular grid (see Gardner [1970, 1971]). We include it here as another illustration of the power and simplicity of lookup tables. There are simple rules for how the organisms in Conway's game are born, survive, and die from one "generation" to the next. A binary image is a convenient representation for the game, where each foreground pixel represents a living organism in that location.
Conway's genetic rules describe how to compute the next generation (next binary image) from the current one; they can be encoded in a lookup table built with makelut, as sketched below.
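The rule function itself does not appear in this excerpt. A minimal sketch based on the standard Game of Life rules (the function name conway_rules is hypothetical) might look like this, with the lookup table then built by lut = makelut(@conway_rules, 3):
function out = conway_rules(nhood)
%CONWAY_RULES Next state of the center cell of a 3-by-3 neighborhood.
%   A live cell survives with 2 or 3 live neighbors; a dead cell becomes
%   alive with exactly 3 live neighbors; all other cells are dead.
num_neighbors = sum(nhood(:)) - nhood(2, 2);   % live neighbors, excluding the center
if nhood(2, 2) == 1
    out = (num_neighbors == 2) || (num_neighbors == 3);   % survival
else
    out = (num_neighbors == 3);                           % birth
end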
FIGURE 10.14 (a) Image of a morphological skeleton. (b) Output of function endpoints. The pixels in (b) were enlarged for clarity.
» bw1 = [0 0 0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 1 1 1 1 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 1 1 0 0 0
         0 0 1 0 0 0 0 0 0
         0 0 0 1 1 1 1 0 0 0
         0 0 0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0 0 0];
The following commands carry out the computation and display up to the third generation. (The parameters 'InitialMagnification', 'fit' force the image being displayed to fit in the available display area.)
» imshow(bw1, 'InitialMagnification', 'fit'), title('Generation 1')
» bw2 = applylut(bw1, lut);
» figure, imshow(bw2, 'InitialMagnification', 'fit'), title('Generation 2')
» bw3 = applylut(bw2, lut);
» figure, imshow(bw3, 'InitialMagnification', 'fit'), title('Generation 3')
We leave it as an exercise to show that after a few generations the cat fades to a "grin" before finally leaving a "paw print." •
Toolbox function bwmorph implements a variety of morphological operations based on combinations of dilations, erosions, and lookup table operations. Its calling syntax is
g = bwmorph(f, operation, n)
where f is an input binary image, operation is a string specifying the desired operation, and n is a positive integer specifying the number of times the operation is to be repeated.
» f = imread('fingerprint_cleaned.tif');
»g1 = bwmorph(f, 'thin', 1);
» g2 = bwmorph(f, 'thin', 2);
» imshow(g1); figure, imshow(g2)
FIGURE 10.15 (a) Fingerprint image from Fig. 10.11(c) thinned once. (b) Image thinned twice. (c) Image thinned until stability.
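The commands that produce the skeleton fs used below are not included in this excerpt; based on Fig. 10.16(b), they presumably look something like this (the file name is hypothetical):
» f = imread('bone.tif');
» fs = bwmorph(f, 'skel', Inf);
» imshow(fs)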
» for k = 1:5
      fs = fs & ~endpoints(fs);
  end
Figure 10.16(c) shows the result. We would obtain similar results using the 'spur' option from Table 10.3.
FIGURE 10.16 (a) Bone image. (b) Skeleton obtained using function bwmorph. (c) Resulting skeleton after pruning with function endpoints.
The results would not be exactly the same because of differences in algorithm implementation. Using Inf instead of 5 in bwmorph would reduce the image to a single point.
FIGURE 10.17 (a) Image containing ten objects. (b) A subset of pixels from the image.
FIGURE 10.18 (a) Pixel p and its 4-neighbors. (b) Pixel p and its diagonal neighbors. (c) Pixel p and its 8-neighbors. (d) Pixels p and q are 4-adjacent and 8-adjacent. (e) Pixels p and q are 8-adjacent but not 4-adjacent. (f) The shaded pixels are both 4-connected and 8-connected. (g) The shaded pixels are 8-connected but not 4-connected.
Two pixels p and q are said to be 4-adjacent if q ∈ N₄(p). Similarly, p and q are said to be 8-adjacent if q ∈ N₈(p). Figures 10.18(d) and (e) illustrate these concepts. A path between pixels p₁ and pₙ is a sequence of pixels p₁, p₂, ..., pₙ₋₁, pₙ such that pₖ is adjacent to pₖ₊₁, for 1 ≤ k < n. A path can be 4-connected or 8-connected, depending on the type of adjacency used.
Two foreground pixels p and q are said to be 4-connected if there exists a 4-connected path between them, consisting entirely of foreground pixels [Fig. 10.18(f)]. They are 8-connected if there exists an 8-connected path between them [Fig. 10.18(g)]. For any foreground pixel p, the set of all foreground pixels connected to it is called the connected component containing p. (See Section 12.1 for further discussion of connected components.) A connected component was just defined in terms of a path, and the definition of a path in turn depends on the type of adjacency used. This implies that the nature of a connected component depends on which form of adjacency we choose, with 4- and 8-adjacency being the most common. Figure 10.19 illustrates the effect that adjacency can have on determining the number of connected components in an image. Figure 10.19(a) shows a small binary image with four 4-connected components. Figure 10.19(b) shows that choosing 8-adjacency reduces the number of connected components to two.
Toolbox function bwlabel computes all the connected components in a binary image. The calling syntax is
[L, num] = bwlabel(f, conn)
where f is an input binary image and conn specifies the desired connectivity (either 4 or 8). Output L is called a label matrix, and num (optional) gives the total number of connected components found.
FIGURE 10.19 Connected components. (a) Four 4-connected components. (b) Two 8-connected components. (c) Label matrix obtained using 4-connectivity. (d) Label matrix obtained using 8-connectivity.
EXAMPLE 10.7: Computing and displaying the center of mass of connected components.
• This example shows how to compute and display the center of mass of each connected component in Fig. 10.17(a). First, we use bwlabel to compute the 8-connected components:
» f = imread('objects.tif');
» [L, n] = bwlabel(f);
Function find (Section 5.2.2) is useful when working with label matrices. For example, find(L == 2) returns the row and column indices of all the pixels belonging to the second object.
A loop can be used to compute and display the centers of mass of all the objects in the image. To make the centers of mass visible when superimposed on the image, we display them using a white "*" marker on top of a black-filled circle marker, as follows. (See Section 12.4.1 for a discussion of function regionprops, which provides a faster and more convenient way to compute object centroids.)
» imshow(f)
» hold on  % So later plotting commands plot on top of the image.
» for k = 1:n
      [r, c] = find(L == k);
      rbar = mean(r);
      cbar = mean(c);
      plot(cbar, rbar, 'Marker', 'o', 'MarkerEdgeColor', 'k', ...
           'MarkerFaceColor', 'k', 'MarkerSize', 10);
      plot(cbar, rbar, 'Marker', '*', 'MarkerEdgeColor', 'w');
  end
10.5 Morphological Reconstruction
Reconstruction is a morphological transformation involving two images and a structuring element (instead of a single image and structuring element). One image, the marker, is the starting point for the transformation. The other image, the mask, constrains the transformation. The structuring element used defines connectivity. (See Sections 11.4.2 and 11.4.3 for additional applications of morphological reconstruction.) In this section we use 8-connectivity (the default), which implies that B in the following discussion is a 3 × 3 matrix of 1s, with the center defined at coordinates (2, 2). In this section we deal with binary images; gray-scale reconstruction is discussed in Section 10.6.3.
If G is the mask and F is the marker, the reconstruction of G from F, denoted R_G(F), is defined by the following iterative procedure:
1. Initialize h₁ to be the marker image, F. The marker must be contained in the mask: F ⊆ G.
2. Create the structuring element: B = ones(3).
3. Repeat
   hₖ₊₁ = (hₖ ⊕ B) ∩ G
   until hₖ₊₁ = hₖ. The reconstruction R_G(F) is the final image, hₖ₊₁.
(This definition of reconstruction is based on dilation. It is possible to define a similar operation using erosion. The results are duals of each other with respect to set complementation. These concepts are developed in detail in Gonzalez and Woods [2008].)
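A conceptual MATLAB sketch of this iterative procedure for binary images follows (the function name is hypothetical; in practice, toolbox function imreconstruct should be used instead):
function out = reconstruct_by_dilation(marker, mask)
%RECONSTRUCT_BY_DILATION Iterative binary morphological reconstruction.
%   A direct implementation of the procedure above, not the fast hybrid
%   algorithm used by imreconstruct.
h = marker;                         % step 1: initialize with the marker, F
B = ones(3);                        % step 2: 3-by-3 structuring element
while true
    hnext = imdilate(h, B) & mask;  % step 3: dilate, then constrain by the mask, G
    if isequal(hnext, h)
        break                       % stability reached
    end
    h = hnext;
end
out = h;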
Figure 10.21 illustrates the preceding iterative procedure. Although this iterative formulation is useful conceptually, much faster computational algorithms
exist. Toolbox function imreconstruct uses the "fast hybrid reconstruction"
algorithm described in Vincent [1993]. The calling syntax for imreconstruct
is
out = imreconstruct(marker, mask)
where marker and mask are as defined at the beginning of this section.
EXAMPLE 10.8: Opening by reconstruction.
• A comparison between opening and opening by reconstruction for an image containing text is shown in Fig. 10.22. In this example, we are interested in extracting from Fig. 10.22(a) the characters that contain long vertical strokes.
FIGURE 10.21 Morphological reconstruction. (a) Original image (the mask). (b) Marker image. (c)-(e) Intermediate result after 100, 200, and 300 iterations, respectively. (f) Final result. (The outlines of the objects in the mask image are superimposed on (b)-(e) as visual references.)
» f = imread('book_text_bw.tif');
» fe = imerode(f, ones(51, 1));
Figure 10.22(b) shows the result. The opening, shown in Fig. 10.22(c), is computed using imopen:
» fo = imopen(f, ones(51, 1));
Note that the vertical strokes were restored, but not the rest of the characters containing the strokes. Finally, we obtain the reconstruction:
» fobr = imreconstruct(fe, f);
FIGURE 10.22 Morphological reconstruction. (a) Original image. (b) Image eroded with a vertical line. (c) Opened with a vertical line. (d) Opened by reconstruction with a vertical line. (e) Holes filled. (f) Characters touching the border (see right border). (g) Border characters removed.
The result in Fig. 10.22(d) shows that characters containing long vertical strokes were restored exactly; all other characters were removed. The remaining parts of Fig. 10.22 are explained in the following two sections. •
Let I denote a binary image and let F be a marker image that is 0 everywhere except on the image border, where it equals 1 − I. Then
H = [R_{Iᶜ}(F)]ᶜ
is a binary image equal to I with all holes filled, as illustrated in Fig. 10.22(e). Toolbox function imfill performs this computation when called with the 'holes' option, g = imfill(f, 'holes').
Suppose instead that we form a marker image F that is equal to the original image I on its border and 0 elsewhere. Then, using I as the mask image, the reconstruction
H = R_I(F)
yields an image, H, that contains only the objects touching the border, as Fig. 10.22(f) shows. The difference, I − H, shown in Fig. 10.22(g), contains only the objects from the original image that do not touch the border. Toolbox function imclearborder performs this entire procedure automatically. Its syntax is
g = imclearborder(f, conn)
where conn is the desired connectivity (4 or 8).
In this case, the max operation is specified completely by the pattern of 0s and 1s in the binary matrix D_b that defines the domain of the structuring element, and the gray-scale dilation equation simplifies to
(f ⊕ b)(x, y) = max { f(x − x′, y − y′) | (x′, y′) ∈ D_b }
FIGURE 10.23 Dilation and erosion. (a) Original image. (b) Dilated image. (c) Eroded image. (d) Morphological gradient. (Original image courtesy of NASA.)
» se = strel('square', 3);
» gd = imdilate(f, se);
Figure 10.23(b) shows the result. As expected, the image is slightly blurred. The
rest of this figure is explained in the following discussion.
The gray-scale erosion of gray-scale image f by structuring element b, denoted f ⊖ b, is defined analogously, with the minimum replacing the maximum. The commands
» ge = imerode(f, se);
» morph_grad = gd - ge;
produced the image in Fig. 10.23(d), which is the morphological gradient of the image in Fig. 10.23(a). This image has edge-enhancement characteristics similar to those that would be obtained using the gradient operations discussed in Section 7.6.1 and later in Section 11.1.3.
The opening of image f by structuring element b, denoted f ∘ b, is defined as erosion followed by dilation:
f ∘ b = (f ⊖ b) ⊕ b
where it is understood that erosion and dilation are the gray-scale operations defined in Section 10.6.1. Similarly, the closing of f by b, denoted f • b, is defined as dilation followed by erosion:
f • b = (f ⊕ b) ⊖ b
FIGURE 10.24 Opening and closing in one dimension. (a) Original 1-D signal. (b) Flat structuring element pushed up underneath the signal. (c) Opening. (d) Flat structuring element pushed down along the top of the signal. (e) Closing.
The opening is the locus of the highest points reached by any part of the structuring element as it is pushed up against the underside of the signal, as shown in Fig. 10.24(c). Because the structuring element is too large to fit inside the upward peak in the middle of the curve, that peak is removed by the opening. In general, openings are used to remove small bright details while leaving the overall gray levels and larger bright features relatively undisturbed.
Figure 10.24(d) is a graphical illustration of closing. The structuring element is pushed down on top of the curve while being translated to all locations. The closing, shown in Fig. 10.24(e), is constructed by finding the lowest points reached by any part of the structuring element as it slides against the upper side of the curve. You can see that closing suppresses dark details smaller than the structuring element.
EXAMPLE 10.9: Morphological smoothing using openings and closings.
• Because opening suppresses bright details smaller than the structuring element, and closing suppresses dark details smaller than the structuring element, they are often used in combination for image smoothing and noise removal. In this example we use imopen and imclose to smooth the image of wood dowel plugs shown in Fig. 10.25(a). The key feature of these dowels is their wood grain (appearing as dark streaks) superimposed on a reasonably uniform, light background. When interpreting the results that follow, it helps to keep in mind the analogies of opening and closing illustrated in Fig. 10.24.
Consider the following sequence of steps:
» f = imread('plugs.jpg');
» se = strel('disk', 5);
» fo = imopen(f, se);
» foc = imclose(fo, se);
Figure 10.25(b) shows the opened image, fo. Here, we see that the light areas have been toned down (smoothed) and the dark streaks in the dowels have not been nearly as affected. Figure 10.25(c) shows the closing of the opening, foc. Now we notice that the dark areas have been smoothed as well, resulting in an overall smoothing of the entire image. This procedure is often called open-close filtering.
A similar procedure, called close-open filtering, reverses the order of the operations. Figure 10.25(d) shows the result of closing the original image. The dark streaks in the dowels have been smoothed out, leaving mostly light detail (for example, note the light streaks in the background). The opening of Fig. 10.25(d) [Fig. 10.25(e)] shows a smoothing of these streaks and further smoothing of the dowel surfaces. The net result is overall smoothing of the original image.
Another way to use openings and closings in combination is in alternating sequential filtering. One form of alternating sequential filtering is to perform open-close filtering with a series of structuring elements of increasing size. The following commands illustrate this process, which begins with a small structuring element and increases its size until it is the same as the structuring element used to obtain Figs. 10.25(b) and (c):
» fasf = f;
» for k = 2:5
se = strel('disk', k);
fasf = imclose(imopen(fasf, se), se);
end
The result, shown in Fig. 10.25(f), yielded a slightly smoother image than using a single open-close filter, at the expense of additional processing. When comparing the three approaches in this particular case, close-open filtering yielded the smoothest result. •
FIGURE 10.25 Smoothing using openings and closings. (a) Original image of wood dowel plugs. (b) Image opened using a disk of radius 5. (c) Closing of the opening. (d) Closing of the original image. (e) Opening of the closing. (f) Result of alternating sequential filter.
EXAMPLE 10.10: Compensating for a nonuniform background.
• Openings can be used to compensate for nonuniform background illumination. Figure 10.26(a) shows an image, f, of rice grains in which the background is darker towards the bottom than in the upper portion of the image. The uneven illumination makes image thresholding (Section 11.3) difficult. Figure 10.26(b), for example, is a thresholded version in which grains at the top of the image are well separated from the background, but grains at the bottom are extracted improperly from the background. Opening the image can produce a reasonable estimate of the background across the image, as long as the structuring element is large enough so that it does not fit entirely within the rice grains. For example, the commands
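The commands are not reproduced in this excerpt; a sketch (the disk radius is an assumption and only needs to exceed the grain size) would be:
» se = strel('disk', 10);
» fo = imopen(f, se);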
resulted in the opened image in Fig. 10.26(c). By subtracting this image from the original, we can generate an image of the grains with a reasonably uniform background:
» f2 = f - fo;
Figure 10.26(d) shows the result, and Fig. 10.26(e) shows the new thresholded image. Note the improvement over Fig. 10.26(b). •
FIGURE 10.26 Compensating for nonuniform illumination. (a) Original image. (b) Thresholded image. (c) Opened image showing an estimate of the background. (d) Result of subtracting the estimated background from the original image. (e) Result of thresholding the image in (d). (Original image courtesy of The MathWorks, Inc.)
Subtracting the opening from the original image, as done above, is called the top-hat transformation. It is implemented directly by toolbox function imtophat:
» f2 = imtophat(f, se);
Function imtophat can also be called as
g = imtophat(f, NHOOD)
where NHOOD is an array of 0s and 1s that specifies the size and shape of the structuring element. This syntax is the same as using the call
imtophat(f, strel(NHOOD))
» f = imread('plugs.jpg');
» sumpixels = zeros(1, 36);
» for k = 0:35
      se = strel('disk', k);
      fo = imopen(f, se);
      sumpixels(k + 1) = sum(fo(:));
  end
» plot(0:35, sumpixels), xlabel('k'), ylabel('Surface area')
(If v is a vector, then diff(v) returns a vector, one element shorter than v, of differences between adjacent elements. If X is a matrix, then diff(X) returns the matrix of row differences X(2:end, :) - X(1:end-1, :).)
Figure 10.27(a) shows the resulting plot of sumpixels versus k. More interesting is the reduction in surface area between successive openings:
» plot(-diff(sumpixels))
» xlabel('k')
» ylabel('Surface area reduction')
Peaks in the plot in Fig. 10.27(b) indicate the presence of a large number of objects having that radius. Because the plot is quite noisy, we repeat this procedure with the smoothed version of the plugs image in Fig. 10.25(d). The result, shown in Fig. 10.27(c), indicates more clearly the two different sizes of objects in the original image. •
10.6.3 Reconstruction
Gray-scale morphological reconstruction is defined by the same iterative procedure given in Section 10.5. Figure 10.28 shows how gray-scale reconstruction works in one dimension. The top curve of Fig. 10.28(a) is the mask, while the bottom, gray curve is the marker. In this case the marker is formed by subtracting a constant from the mask, but in general any signal can be used for the marker as long as none of its values exceed the corresponding value in the mask. Each iteration of the reconstruction procedure spreads the peaks in the marker curve until they are forced downward by the mask curve [Fig. 10.28(b)].
The final reconstruction is the black curve in Fig. 10.28(c). Notice that the
two smaller peaks were eliminated in the reconstruction, but the two taller
peaks, although they are now shorter, remain. When a marker image is formed
by subtracting a constant h from the mask image, the reconstruction is called the h-maxima transform, which is implemented by toolbox function imhmax.
FIGURE 10.27 Granulometry. (a) Surface area versus structuring element radius. (b) Reduction in surface area versus radius. (c) Reduction in surface area versus radius for a smoothed image.
FIGURE 10.28 Gray-scale morphological reconstruction in one dimension. (a) Mask (top) and marker curves. (b) Iterative computation of the reconstruction. (c) Reconstruction result (black curve).
The following commands compute the opening-by-reconstruction of the plugs image (erosion followed by reconstruction), and then a closing-by-reconstruction, obtained by complementing the result, applying opening-by-reconstruction, and complementing again:
» f = imread('plugs.jpg');
» se = strel('disk', 5);
» fe = imerode(f, se);
» fobr = imreconstruct(fe, f);
» fobrc = imcomplement(fobr);
» fobrce = imerode(fobrc, se);
» fobrcbr = imcomplement(imreconstruct(fobrce, fobrc));
FIGURE 10.29 (a) Opening-by-reconstruction. (b) Opening-by-reconstruction followed by closing-by-reconstruction.
EXAMPLE 10.12: Using gray-scale reconstruction to remove a complex background.
• Our concluding example uses gray-scale reconstruction in several steps. The objective is to isolate the text out of the image of calculator keys shown in Fig. 10.30(a). The first step is to suppress the horizontal reflections on the top of each key. To accomplish this, we use the fact that these reflections are wider than any single text character in the image. We perform opening-by-reconstruction using a structuring element that is a long horizontal line:
» f = imread('calculator.jpg');
» f_obr = imreconstruct(imerode(f, ones(1, 71)), f);
» f_o = imopen(f, ones(1, 71));   % For comparison.
» f_thr = f - f_obr;
» f_th = f - f_o;                 % Or imtophat(f, ones(1, 71))
In the result [Fig. 10.30(f)], the vertical reflections are gone, but so are thin, vertical-stroke characters, such as the slash on the percent symbol and the "I" in ASIN. We make use of the fact that the characters that have been suppressed in error are very close spatially to other characters still present, by first performing a dilation [Fig. 10.30(g)], followed by a final reconstruction.
FIGURE 10.30 An application of gray-scale reconstruction. (a) Original image. (b) Opening-by-reconstruction. (c) Opening. (d) Tophat-by-reconstruction. (e) Tophat. (f) Opening-by-reconstruction of (d) using a horizontal line. (g) Dilation of (f) using a horizontal line. (h) Final reconstruction result.
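The commands for these last steps are not included in this excerpt. A sketch consistent with the figure captions (the variable names and structuring-element lengths are assumptions) might be:
» g_obr  = imreconstruct(imerode(f_thr, ones(1, 11)), f_thr);   % Fig. 10.30(f)
» g_obrd = imdilate(g_obr, ones(1, 21));                        % Fig. 10.30(g)
» f2 = imreconstruct(min(g_obrd, f_thr), f_thr);                % Fig. 10.30(h)
» imshow(f2)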
Figure 10.30(h) shows the final result. Note that the shading and reflections on the background and keys were removed successfully. •
Summary
The morphological concepts and techniques introduced in this chapter constitute a powerful set of tools for extracting features from an image. The basic operators of erosion, dilation, and reconstruction, defined for both binary and gray-scale image processing, can be used in combination to perform a wide variety of tasks. As shown in the following chapter, morphological techniques can be used for image segmentation. Moreover, they play an important role in algorithms for image description, as discussed in Chapter 12.