Lecture 4 ETI 2507

Chapter 3 discusses neighbourhood processing in image modification, where a function is applied to a neighbourhood of each pixel using a mask or filter. It details the steps involved in spatial filtering and convolution, emphasizing the importance of handling image edges and providing examples of linear filters, particularly averaging filters. Additionally, it explains how to implement these processes in MATLAB using the filter2 function and the fspecial function for creating filters.


Chapter 3

Neighbourhood Processing

3.1 Introduction

We have seen in chapter 2 that an image can be modified by applying a particular function to each
pixel value. Neighbourhood processing may be considered as an extension of this, where a function
is applied to a neighbourhood of each pixel.
The idea is to move a “mask”: a rectangle (usually with sides of odd length) or other shape over
the given image. As we do this, we create a new image whose pixels have grey values calculated
from the grey values under the mask, as shown in figure 3.1. The combination of mask and function

Figure 3.1: Using a spatial mask on an image

is called a filter. If the function by which the new grey value is calculated is a linear function of all
the grey values in the mask, then the filter is called a linear filter.

A linear filter can be implemented by multiplying all elements in the mask by the corresponding elements in the neighbourhood, and adding up all these products. Suppose we have a 3×5 mask as illustrated in figure 3.1, and suppose that the mask values are given by:

    m(-1,-2)  m(-1,-1)  m(-1,0)  m(-1,1)  m(-1,2)
    m(0,-2)   m(0,-1)   m(0,0)   m(0,1)   m(0,2)
    m(1,-2)   m(1,-1)   m(1,0)   m(1,1)   m(1,2)

and that the corresponding pixel values are

    p(i-1,j-2)  p(i-1,j-1)  p(i-1,j)  p(i-1,j+1)  p(i-1,j+2)
    p(i,j-2)    p(i,j-1)    p(i,j)    p(i,j+1)    p(i,j+2)
    p(i+1,j-2)  p(i+1,j-1)  p(i+1,j)  p(i+1,j+1)  p(i+1,j+2)

We now multiply and add:

    sum_{s=-1..1} sum_{t=-2..2} m(s,t) p(i+s, j+t)

A diagram illustrating the process for performing spatial filtering is given in figure 3.2.
Spatial filtering thus requires three steps:
1. position the mask over the current pixel,
2. form all products of filter elements with the corresponding elements of the neighbourhood,
3. add up all the products.
This must be repeated for every pixel in the image.
Allied to spatial filtering is spatial convolution. The method for performing a convolution is the same as that for filtering, except that the filter must be rotated by 180° before multiplying and adding. Using the same notation as before, the output of a convolution with a 3×5 mask for a single pixel is

    sum_{s=-1..1} sum_{t=-2..2} m(-s,-t) p(i+s, j+t)

Figure 3.2: Performing spatial filtering

Note the negative signs on the indices of m. The same result can be achieved with

    sum_{s=-1..1} sum_{t=-2..2} m(s,t) p(i-s, j-t)

Here we have rotated the image pixels by 180°; this does not of course affect the result. The
importance of convolution will become apparent when we investigate the Fourier transform, and
the convolution theorem. Note also that in practice, most filter masks are rotationally symmetric,
so that spatial filtering and spatial convolution will produce the same output.
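The relationship between filtering and convolution can be checked numerically. The sketch below (in NumPy, used as an assumed stand-in for the MATLAB code elsewhere in this chapter) implements both operations directly from the sums above; the test mask and image are invented for the illustration:

```python
import numpy as np

def spatial_filter(image, mask):
    """Spatial filtering: at each pixel, sum the products of the mask
    with the neighbourhood under it.  Edges are ignored, so the
    output is smaller than the input."""
    mr, mc = mask.shape
    out = np.zeros((image.shape[0] - mr + 1, image.shape[1] - mc + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(mask * image[i:i+mr, j:j+mc])
    return out

def spatial_convolve(image, mask):
    """Convolution: filter with the mask rotated by 180 degrees."""
    return spatial_filter(image, np.rot90(mask, 2))

x = np.arange(25, dtype=float).reshape(5, 5)
m = np.array([[1., 2., 0.], [0., 1., 0.], [0., 0., 3.]])

# For a rotationally symmetric mask the two operations agree;
# for this asymmetric mask they differ.
f = spatial_filter(x, m)
c = spatial_convolve(x, m)
print(f[0, 0], c[0, 0])  # 44.0 40.0
```

With a symmetric mask such as the averaging filter, the two outputs coincide, as the text notes.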

An example: One important linear filter is to use a 3×3 mask and take the average of all nine values within the mask. This value becomes the grey value of the corresponding pixel in the new image. This operation may be described as follows:

    a  b  c
    d  e  f        average = (a + b + c + d + e + f + g + h + i)/9
    g  h  i

where e is the grey value of the current pixel in the original image, and the average is the grey value of the corresponding pixel in the new image. To apply this to an image, consider the 5×5 “image” obtained by:

>> x=uint8(10*magic(5))

x =

170 240 10 80 150


230 50 70 140 160
40 60 130 200 220
100 120 190 210 30
110 180 250 20 90

We may regard this array as being made of nine overlapping 3×3 neighbourhoods. The output of our working will thus consist only of nine values. We shall see later how to obtain 25 values in the output.
Consider the top left 3×3 neighbourhood of our image x:

170 240 10 80 150


230 50 70 140 160
40 60 130 200 220
100 120 190 210 30
110 180 250 20 90

Now we take the average of all these values:



>> mean2(x(1:3,1:3))

ans =

111.1111

which can be rounded to 111. Now we can move to the second neighbourhood:

170 240 10 80 150


230 50 70 140 160
40 60 130 200 220
100 120 190 210 30
110 180 250 20 90

and take its average:

>> mean2(x(1:3,2:4))

ans =

108.8889

and this can be rounded either down to 108, or to the nearest integer 109. If we continue in this
manner, the following output is obtained:

111.1111 108.8889 128.8889


110.0000 130.0000 150.0000
131.1111 151.1111 148.8889

This array is the result of filtering x with the 3×3 averaging filter.

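The nine values above can be reproduced outside MATLAB as a check; here is a NumPy sketch of the same neighbourhood averaging (NumPy is an assumed stand-in for the mean2 computations):

```python
import numpy as np

# The "image" x = uint8(10*magic(5)) from the text.
x = np.array([[170, 240,  10,  80, 150],
              [230,  50,  70, 140, 160],
              [ 40,  60, 130, 200, 220],
              [100, 120, 190, 210,  30],
              [110, 180, 250,  20,  90]], dtype=float)

# Average each of the nine overlapping 3x3 neighbourhoods.
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = x[i:i+3, j:j+3].mean()

print(np.round(out, 4))
```

The printed array matches the table in the text: 111.1111 in the top left corner, 130 in the centre, and so on.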
3.2 Notation
It is convenient to describe a linear filter simply in terms of the coefficients of all the grey values of
pixels within the mask. This can be written as a matrix.
The averaging filter above, for example, could have its output written as

      (1/9)e(-1,-1) + (1/9)e(-1,0) + (1/9)e(-1,1)
    + (1/9)e(0,-1)  + (1/9)e(0,0)  + (1/9)e(0,1)
    + (1/9)e(1,-1)  + (1/9)e(1,0)  + (1/9)e(1,1)

where e(r,c) denotes the grey value at displacement (r,c) from the current pixel, and so this filter can be described by the matrix

    1/9  1/9  1/9            1  1  1
    1/9  1/9  1/9  =  (1/9)  1  1  1
    1/9  1/9  1/9            1  1  1

An example: The filter

     1  -2   1
    -2   4  -2
     1  -2   1

would operate on grey values as

    e(-1,-1) - 2e(-1,0) + e(-1,1) - 2e(0,-1) + 4e(0,0) - 2e(0,1) + e(1,-1) - 2e(1,0) + e(1,1)

Edges of the image


There is an obvious problem in applying a filter: what happens at the edge of the image, where the mask partly falls outside the image? In such a case, as illustrated in figure 3.3, there will be a lack of grey values to use in the filter function.

Figure 3.3: A mask at the edge of an image

There are a number of different approaches to dealing with this problem:


Ignore the edges. That is, the mask is only applied to those pixels in the image for which the mask lies fully within the image. This means all pixels except those at the edges, and results in an output image which is smaller than the original. If the mask is very large, a significant amount of information may be lost by this method. We applied this method in our example above.

“Pad” with zeros. We assume that all necessary values outside the image are zero. This gives us
all values to work with, and will return an output image of the same size as the original, but
may have the effect of introducing unwanted artifacts (for example, edges) around the image.

3.3 Filtering in Matlab


The filter2 function does the job of linear filtering for us; its use is

filter2(filter,image,shape)
and the result is a matrix of data type double. The parameter shape is optional; it describes the method for dealing with the edges:

filter2(filter,image,’same’) is the default; it produces a matrix of equal size to the


original image matrix. It uses zero padding:

>> a=ones(3,3)/9

a =

0.1111 0.1111 0.1111


0.1111 0.1111 0.1111
0.1111 0.1111 0.1111

>> filter2(a,x,’same’)

ans =

76.6667 85.5556 65.5556 67.7778 58.8889


87.7778 111.1111 108.8889 128.8889 105.5556
66.6667 110.0000 130.0000 150.0000 106.6667
67.7778 131.1111 151.1111 148.8889 85.5556
56.6667 105.5556 107.7778 87.7778 38.8889

filter2(filter,image,’valid’) applies the mask only to “inside” pixels. The result will
always be smaller than the original:

>> filter2(a,x,’valid’)

ans =

111.1111 108.8889 128.8889


110.0000 130.0000 150.0000
131.1111 151.1111 148.8889

The result of ’same’ above may also be obtained by padding with zeros and using ’valid’:

>> x2=zeros(7,7);
>> x2(2:6,2:6)=x

x2 =

0 0 0 0 0 0 0
0 170 240 10 80 150 0
0 230 50 70 140 160 0
0 40 60 130 200 220 0
0 100 120 190 210 30 0
0 110 180 250 20 90 0
0 0 0 0 0 0 0

>> filter2(a,x2,’valid’)

filter2(filter,image,’full’) returns a result larger than the original; it does this by


padding with zero, and applying the filter at all places on and around the image where the
mask intersects the image matrix.

>> filter2(a,x,’full’)

ans =

18.8889 45.5556 46.6667 36.6667 26.6667 25.5556 16.6667


44.4444 76.6667 85.5556 65.5556 67.7778 58.8889 34.4444
48.8889 87.7778 111.1111 108.8889 128.8889 105.5556 58.8889
41.1111 66.6667 110.0000 130.0000 150.0000 106.6667 45.5556
27.7778 67.7778 131.1111 151.1111 148.8889 85.5556 37.7778
23.3333 56.6667 105.5556 107.7778 87.7778 38.8889 13.3333
12.2222 32.2222 60.0000 50.0000 40.0000 12.2222 10.0000
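The claim that 'same' equals zero padding followed by 'valid' can be checked by hand; here is a NumPy sketch (an illustration of the idea, not the filter2 source):

```python
import numpy as np

x = np.array([[170, 240,  10,  80, 150],
              [230,  50,  70, 140, 160],
              [ 40,  60, 130, 200, 220],
              [100, 120, 190, 210,  30],
              [110, 180, 250,  20,  90]], dtype=float)
a = np.ones((3, 3)) / 9  # the averaging filter

def filter_valid(image, mask):
    """Apply the mask only where it lies fully inside the image."""
    mr, mc = mask.shape
    out = np.zeros((image.shape[0] - mr + 1, image.shape[1] - mc + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(mask * image[i:i+mr, j:j+mc])
    return out

# 'same' = zero pad by the mask half-width, then take 'valid'.
x2 = np.zeros((7, 7))
x2[1:6, 1:6] = x
same = filter_valid(x2, a)

print(same.shape)            # (5, 5)
print(round(same[0, 0], 4))  # 76.6667
```

The corner value 76.6667 agrees with the filter2(a,x,'same') output above: only four image pixels fall under the mask there, and their sum divided by 9 is (170+240+230+50)/9.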

The shape parameter, being optional, can be omitted; in which case the default value is ’same’.
There is no single “best” approach; the method must be dictated by the problem at hand, by the filter being used, and by the result required.
We can create our filters by hand, or by using the fspecial function; this has many options which make for easy creation of many different filters. We shall use the average option, which produces averaging filters of given size; thus

>> fspecial(’average’,[5,7])

will return an averaging filter of size 5×7; more simply

>> fspecial(’average’,11)

will return an averaging filter of size 11×11. If we leave out the final number or vector, the 3×3 averaging filter is returned.
For example, suppose we apply the 3×3 averaging filter to an image as follows:
>> c=imread(’cameraman.tif’);
>> f1=fspecial(’average’);
>> cf1=filter2(f1,c);

We now have a matrix of data type double. To display this, we can do any of the following:

transform it to a matrix of type uint8, for use with imshow,

divide its values by 255 to obtain a matrix with values in the 0.0–1.0 range, for use with imshow,

use mat2gray to scale the result for display. We shall discuss the use of this function later.

Using the second method:



>> figure,imshow(c),figure,imshow(cf1/255)

will produce the images shown in figures 3.4(a) and 3.4(b).


The averaging filter blurs the image; the edges in particular are less distinct than in the original.
The image can be further blurred by using an averaging filter of larger size. This is shown in figure 3.4(c), where a 9×9 averaging filter has been used, and in figure 3.4(d), where a 25×25 averaging filter has been used.

(a) Original image (b) Average filtering

(c) Using a 9×9 filter (d) Using a 25×25 filter

Figure 3.4: Average filtering

Notice how the zero padding used at the edges has resulted in a dark border appearing around the image. This is especially noticeable when a large filter is being used. If this is an unwanted artifact of the filtering (if, for example, it changes the average brightness of the image), then it may be more appropriate to use the ’valid’ shape option.


The resulting image after these filters may appear to be much “worse” than the original. However, applying a blurring filter to reduce detail in an image may be the perfect operation for autonomous machine recognition, or if we are only concentrating on the “gross” aspects of the image: the number of objects, or the amount of dark and light areas. In such cases, too much detail may obscure the outcome.

Separable filters
Some filters can be implemented by the successive application of two simpler filters. For example, since

    1/9  1/9  1/9       1/3
    1/9  1/9  1/9   =   1/3   ×   ( 1/3  1/3  1/3 )
    1/9  1/9  1/9       1/3

the 3×3 averaging filter can be implemented by first applying a 3×1 averaging filter, and then applying a 1×3 averaging filter to the result. The 3×3 averaging filter is thus separable into two smaller filters. Separability can result in great time savings. Suppose an n×n filter is separable into two filters of size n×1 and 1×n. The application of an n×n filter requires n² multiplications and n² − 1 additions for each pixel in the image. But the application of an n×1 filter only requires n multiplications and n − 1 additions. Since this must be done twice, the total number of multiplications and additions are 2n and 2n − 2 respectively. If n is large the savings in efficiency can be dramatic.

All averaging filters are separable; another separable filter is the laplacian

     1  -2   1        1
    -2   4  -2   =   -2   ×   ( 1  -2  1 )
     1  -2   1        1
Other examples will be considered below.
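Separability is easy to verify numerically: filtering with the column mask and then the row mask gives the same result as the full 2D mask. A sketch under the 'valid' edge convention (NumPy used here as an assumed stand-in for the MATLAB environment of this chapter):

```python
import numpy as np

def filter_valid(image, mask):
    """Apply the mask only where it lies fully inside the image."""
    mr, mc = mask.shape
    out = np.zeros((image.shape[0] - mr + 1, image.shape[1] - mc + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(mask * image[i:i+mr, j:j+mc])
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))

avg2d = np.ones((3, 3)) / 9   # full 3x3 averaging filter
col = np.ones((3, 1)) / 3     # 3x1 averaging filter
row = np.ones((1, 3)) / 3     # 1x3 averaging filter

direct = filter_valid(img, avg2d)
separated = filter_valid(filter_valid(img, col), row)

print(np.allclose(direct, separated))  # True
```

The agreement holds because the outer product of the two 1D averaging masks is exactly the 2D averaging mask.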

3.4 Frequencies; low and high pass filters


It will be convenient to have some standard terminology by which we can discuss the effects a
filter will have on an image, and to be able to choose the most appropriate filter for a given image
processing task. One important aspect of an image which enables us to do this is the notion of
frequencies. Roughly speaking, the frequencies of an image are a measure of the amount by which
grey values change with distance. This concept will be given a more formal setting in chapter 4.
High frequency components are characterized by large changes in grey values over small distances; examples of high frequency components are edges and noise. Low frequency components, on the other hand, are parts of the image characterized by little change in the grey values. These may include backgrounds and skin textures. We then say that a filter is a

high pass filter if it “passes over” the high frequency components, and reduces or eliminates low frequency components,

low pass filter if it “passes over” the low frequency components, and reduces or eliminates high frequency components.

For example, the 3×3 averaging filter is a low pass filter, as it tends to blur edges. The filter

     1  -2   1
    -2   4  -2
     1  -2   1

is a high pass filter.
We note that the sum of the coefficients (that is, the sum of all elements in the matrix) in the high pass filter is zero. This means that in a low frequency part of an image, where the grey values are similar, the result of using this filter is that the corresponding grey values in the new image will be close to zero. To see this, consider a 4×4 block of similar pixel values, and apply the above high pass filter to the central four: since the coefficients sum to zero, each output is a weighted sum of nearly equal values, and so is close to zero. This is the expected result of applying a high pass filter to a low frequency component. We shall see how to deal with negative values below.
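This near-zero behaviour can be confirmed directly: applying the zero-sum high pass filter to a block of nearly constant values gives a small response, while an isolated bright pixel gives a large one. The pixel values below are made up for the illustration (NumPy again standing in for MATLAB):

```python
import numpy as np

hp = np.array([[ 1, -2,  1],
               [-2,  4, -2],
               [ 1, -2,  1]], dtype=float)  # coefficients sum to zero

def respond(block):
    """High pass filter response at the centre of a 3x3 block."""
    return np.sum(hp * block)

# Low frequency: nearly constant grey values -> response close to zero.
flat = np.array([[150, 152, 148],
                 [147, 152, 151],
                 [152, 148, 149]], dtype=float)

# High frequency: an isolated bright pixel -> large response.
spot = np.array([[10,  10, 10],
                 [10, 200, 10],
                 [10,  10, 10]], dtype=float)

print(respond(flat))  # 11.0 (small compared with grey values around 150)
print(respond(spot))  # 760.0
```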
High pass filters are of particular value in edge detection and edge enhancement (of which we
shall see more in chapter 8). But we can provide a sneak preview, using the cameraman image.

>> f=fspecial(’laplacian’)

f =

0.1667 0.6667 0.1667


0.6667 -3.3333 0.6667
0.1667 0.6667 0.1667

>> cf=filter2(f,c);
>> imshow(cf/100)
>> f1=fspecial(’log’)

f1 =

0.0448 0.0468 0.0564 0.0468 0.0448


0.0468 0.3167 0.7146 0.3167 0.0468
0.0564 0.7146 -4.9048 0.7146 0.0564
0.0468 0.3167 0.7146 0.3167 0.0468
0.0448 0.0468 0.0564 0.0468 0.0448

>> cf1=filter2(f1,c);
>> figure,imshow(cf1/100)

The images are shown in figure 3.5. Image (a) is the result of the Laplacian filter; image (b) shows
the result of the Laplacian of Gaussian (“log”) filter.
In each case, the sum of all the filter elements is zero.

(a) Laplacian filter (b) Laplacian of Gaussian (“log”) filtering

Figure 3.5: High pass filtering

Values outside the range 0–255

We have seen that for image display, we would like the grey values of the pixels to lie between 0
and 255. However, the result of applying a linear filter may be values which lie outside this range.
We may consider ways of dealing with values outside of this “displayable” range.

Make negative values positive. This will certainly deal with negative values, but not with val-
ues greater than 255. Hence, this can only be used in specific circumstances; for example,
when there are only a few negative values, and when these values are themselves close to zero.

Clip values. We apply the following thresholding type operation to the grey values x produced by the filter to obtain a displayable value y:

    y = 0     if x < 0
    y = x     if 0 ≤ x ≤ 255
    y = 255   if x > 255

This will produce an image with all pixel values in the required range, but is not suitable if there are many grey values outside the 0–255 range; in particular, if the grey values are equally spread over a larger range. In such a case this operation will tend to destroy the results of the filter.


Scaling transformation. Suppose the lowest grey value produced by the filter is gL and the highest value is gH. We can transform all values in the range gL–gH to the range 0–255 by the linear transformation illustrated below:

Since the gradient of the line is 255/(gH − gL), we can write the equation of the line as

    y = 255 × (x − gL)/(gH − gL)

and applying this transformation to all grey levels x produced by the filter will result (after any necessary rounding) in an image which can be displayed.
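The scaling transformation can be written out directly; the following sketch mimics what mat2gray does, scaled to 0–255 (a hand-rolled illustration, not the MATLAB source; the sample filter outputs are invented):

```python
import numpy as np

def scale_to_display(values):
    """Linearly map values from [gL, gH] onto [0, 255]."""
    gL = values.min()
    gH = values.max()
    return 255.0 * (values - gL) / (gH - gL)

filtered = np.array([-541.0, 0.0, 26.0, 593.0])  # made-up filter outputs
display = scale_to_display(filtered)
print(display.min(), display.max())  # 0.0 255.0
```

The lowest value always maps to 0 and the highest to 255, whatever range the filter produced.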

As an example, let’s apply the high pass filter given in section 3.4 to the cameraman image:

>> f2=[1 -2 1;-2 4 -2;1 -2 1];


>> cf2=filter2(f2,c);
   


The maximum and minimum values of the matrix cf2 fall well outside the 0–255 range. The mat2gray function automatically scales the matrix elements to displayable values; for any matrix M, it applies a linear transformation to its elements, with the lowest value mapping to 0.0 and the highest value mapping to 1.0. This means the output of mat2gray is always of type double. The function also requires that the input type is double.

>> figure,imshow(mat2gray(cf2));

To do this by hand, so to speak, applying the linear transformation above, we can use:

>> maxcf2=max(cf2(:));
>> mincf2=min(cf2(:));
>> cf2g=(cf2-mincf2)/(maxcf2-mincf2);

The result will be a matrix of type double, with entries in the range 0.0–1.0. This can be viewed with imshow. We can make it a uint8 image by multiplying by 255 first. The result can be seen in figure 3.6.
We can generally obtain a better result by dividing the result of the filtering by a constant before
displaying it:

>> figure,imshow(cf2/60)

and this is also shown in figure 3.6.


High pass filters are often used for edge detection. These can be seen quite clearly in the right
hand image of figure 3.6.

Using mat2gray Dividing by a constant

Figure 3.6: Using a high pass filter and displaying the result

3.5 Edge sharpening


Spatial filtering can be used to make edges in an image slightly sharper and crisper, which gener-
ally results in an image more pleasing to the human eye. The operation is variously called “edge
enhancement”, “edge crispening”, or “unsharp masking”. This last term comes from the printing
industry.

Unsharp masking
The idea of unsharp masking is to subtract a scaled “unsharp” version of the image from the original. In practice, we can achieve this effect by subtracting a scaled blurred image from the original. The schema for unsharp masking is shown in figure 3.7.

Figure 3.7: Schema for unsharp masking

Suppose an image x is of type uint8. The unsharp masking can be applied by the following
sequence of commands:
>> f=fspecial(’average’);
>> xf=filter2(f,x);
>> xu=double(x)-xf/1.5
>> imshow(xu/70)

The last command scales the result so that imshow displays an appropriate image; the value may
need to be adjusted according to the input image. Suppose that x is the image shown in figure 3.8(a),
then the result of unsharp masking is given in figure 3.8(b). The result appears to be a better image
than the original; the edges are crisper and more clearly defined.

(a) Original image (b) The image after unsharp masking

Figure 3.8: An example of unsharp masking

To see why this works, we may consider the function of grey values as we travel across an edge,
as shown in figure 3.9.
As a scaled blur is subtracted from the original, the result is that the edge is enhanced, as shown
in graph (c) of figure 3.9.
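The subtract-a-scaled-blur recipe can be expressed in a few lines; below is a NumPy sketch of the same sequence of commands (a 3×3 average blur with zero padding, then subtraction of the scaled blur, as in xu=double(x)-xf/1.5):

```python
import numpy as np

def average_blur_same(image):
    """3x3 averaging filter with zero padding ('same' output size)."""
    padded = np.zeros((image.shape[0] + 2, image.shape[1] + 2))
    padded[1:-1, 1:-1] = image
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i+3, j:j+3].mean()
    return out

def unsharp(image, k=1.5):
    """Subtract a scaled blur from the original, as in the text."""
    return image - average_blur_same(image) / k

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(6, 6)).astype(float)
sharp = unsharp(img)
print(sharp.shape)  # (6, 6)
```

On a perfectly flat region the interior output is simply the constant reduced by the factor 1/k, which is why a rescaling (such as dividing by 70 before imshow) is needed for display.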

We can in fact perform the filtering and subtracting operation in one command, using the linearity of the filter, and the fact that the 3×3 filter

    0  0  0
    0  1  0
    0  0  0

is the “identity filter”. Hence unsharp masking can be implemented by a filter of the form

    0  0  0                 1  1  1
    0  1  0  −  (1/k)(1/9)  1  1  1
    0  0  0                 1  1  1

where k is a constant chosen to provide the best result. Alternatively, the unsharp masking filter may be defined as

       0  0  0           1  1  1
    k  0  1  0  −  (1/9) 1  1  1
       0  0  0           1  1  1

so that we are in effect subtracting a blur from a scaled version of the original; the scaling factor may also be split between the identity and blurring filters.

(a) Pixel values over an edge

(b) The edge blurred

(c) (a) − (b)

Figure 3.9: Unsharp masking



The unsharp option of fspecial produces such filters; the filter created has the form

    (1/(alpha+1)) [ -alpha    alpha-1   -alpha
                    alpha-1   alpha+5   alpha-1
                    -alpha    alpha-1   -alpha ]

where alpha is an optional parameter which defaults to 0.2. If alpha = 0.5 the filter is

    -1/3  -1/3  -1/3
    -1/3  11/3  -1/3
    -1/3  -1/3  -1/3
Figure 3.10 was created using the Matlab commands

>> p=imread(’pelicans.tif’);
>> u=fspecial(’unsharp’,0.5);
>> pu=filter2(u,p);
>> imshow(p),figure,imshow(pu/255)

Figure 3.10(b) appears much sharper and “cleaner” than the original. Notice in particular the rocks and trees in the background, and the ripples on the water.

(a) The original (b) After unsharp masking

Figure 3.10: Edge enhancement with unsharp masking

Although we have used averaging filters above, we can in fact use any low pass filter for unsharp
masking.

High boost filtering


Allied to unsharp masking filters are the high boost filters, which are obtained by

    high boost = A(original) − (low pass)

where A is an “amplification factor”. If A = 1, then the high boost filter becomes an ordinary high pass filter. If we take as the low pass filter the 3×3 averaging filter, then a high boost filter will have the form

    (1/9) [ -1  -1  -1
            -1   z  -1
            -1  -1  -1 ]

where z > 8. If we put z = 11, we obtain a filtering very similar to the unsharp filter above, except for a scaling factor. Thus the commands:
>> f=[-1 -1 -1;-1 11 -1;-1 -1 -1]/9;
>> xf=filter2(f,x);
>> imshow(xf/80)

will produce an image similar to that in figure 3.8. The value 80 was obtained by trial and error to
produce an image with similar intensity to the original.
We can also write the high boost formula above as

    high boost = A(original) − (low pass)
               = A(original) − ((original) − (high pass))
               = (A − 1)(original) + (high pass)

Best results for high boost filtering are obtained if we multiply the equation by a factor w so that the filter values sum to 1; this requires

    wA − w = 1,   or   w = 1/(A − 1)

So a general unsharp masking formula is

    (A/(A − 1))(original) − (1/(A − 1))(low pass)

Another version of this formula is

    w(original) − (w − 1)(low pass)

where for best results w is taken so that

    w = A/(A − 1)

If we take A = 3/2, the formula becomes

    3(original) − 2(low pass)

If we take A = 5 we obtain

    (5/4)(original) − (1/4)(low pass)

Using the identity and averaging filters, we can obtain high boost filters by:

>> id=[0 0 0;0 1 0;0 0 0];


>> f=fspecial(’average’);
>> hb1=3*id-2*f

hb1 =

-0.2222 -0.2222 -0.2222


-0.2222 2.7778 -0.2222
-0.2222 -0.2222 -0.2222

>> hb2=1.25*id-0.25*f

hb2 =

-0.0278 -0.0278 -0.0278


-0.0278 1.2222 -0.0278
-0.0278 -0.0278 -0.0278

If each of the filters hb1 and hb2 are applied to an image with filter2, the result will have enhanced
edges. The images in figure 3.11 show these results; figure 3.11(a) was obtained with

>> x1=filter2(hb1,x);
>> imshow(x1/255)

and figure 3.11(b) similarly.

(a) High boost filtering with hb1 (b) High boost filtering with hb2

Figure 3.11: High boost filtering

Of the two filters, hb1 appears to produce the better result; hb2 produces an image not very much crisper than the original.
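The two high boost filters can be rebuilt outside MATLAB to check the values shown above; a small NumPy verification (the identity and averaging filters are written out by hand):

```python
import numpy as np

identity = np.array([[0., 0., 0.],
                     [0., 1., 0.],
                     [0., 0., 0.]])
f = np.ones((3, 3)) / 9   # equivalent of fspecial('average')

hb1 = 3 * identity - 2 * f
hb2 = 1.25 * identity - 0.25 * f

# Both filters sum to 1, so flat regions keep their brightness.
print(round(hb1[1, 1], 4), round(hb1[0, 0], 4))  # 2.7778 -0.2222
print(round(hb2[1, 1], 4), round(hb2[0, 0], 4))  # 1.2222 -0.0278
```

The unit sum is exactly the w(original) − (w − 1)(low pass) normalisation derived above, with w = 3 and w = 5/4 respectively.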

3.6 Non-linear filters


Linear filters, as we have seen in the previous sections, are easy to describe, and can be applied very
quickly and efficiently by Matlab.
A non-linear filter is obtained by a non-linear function of the greyscale values in the mask.
Simple examples are the maximum filter, which has as its output the maximum value under the
mask, and the corresponding minimum filter, which has as its output the minimum value under the
mask.
Both the maximum and minimum filters are examples of rank-order filters. In such a filter, the elements under the mask are ordered, and a particular value returned as output. So if the values are given in increasing order, the minimum filter is a rank-order filter for which the first element is returned, and the maximum filter is a rank-order filter for which the last element is returned.
For implementing a general non-linear filter in Matlab, the function to use is nlfilter, which
applies a filter to an image according to a pre-defined function. If the function is not already defined,
we have to create an m-file which defines it.
Here are some examples; first, to implement a maximum filter over a 3×3 neighbourhood:

>> cmax=nlfilter(c,[3,3],’max(x(:))’);

The nlfilter function requires three arguments: the image matrix, the size of the filter, and the
function to be applied. The function must be a matrix function which returns a scalar value. The
result of this operation is shown in figure 3.12(a).
A corresponding implementation of the minimum filter is:

>> cmin=nlfilter(c,[3,3],’min(x(:))’);

and the result is shown in figure 3.12(b).

(a) Using a maximum filter (b) Using a minimum filter

Figure 3.12: Using non-linear filters

Note that in each case the image has lost some sharpness, and has been brightened by the
maximum filter, and darkened by the minimum filter. The nlfilter function is very slow; in general there is little call for non-linear filters except for a few which are defined by their own commands. We shall investigate these in later chapters.
Non-linear filtering using nlfilter can be very slow. A faster alternative is to use the colfilt
function, which rearranges the image into columns first. For example, to apply the maximum filter
to the cameraman image, we can use

>> cmax=colfilt(c,[3,3],’sliding’,@max);

The parameter sliding indicates that overlapping neighbourhoods are being used (which of course
is the case with filtering). This particular operation is almost instantaneous, as compared with the
use of nlfilter.
To implement the maximum and minimum filters as rank-order filters, we may use the Matlab
function ordfilt2. This requires three inputs: the image, the index value of the ordered results to
choose as output, and the definition of the mask. So to apply the maximum filter on a 3×3 mask, we use

>> cmax=ordfilt2(c,9,ones(3,3));

and the minimum filter can be applied with

>> cmin=ordfilt2(c,1,ones(3,3));

A very important rank-order filter is the median filter, which takes the central value of the ordered
list. We could apply the median filter with

>> cmed=ordfilt2(c,5,ones(3,3));

However, the median filter has its own command, medfilt2, which we discuss in more detail in
chapter 5.
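A rank-order filter is simple to state in code: sort the neighbourhood and pick the k-th smallest value. The sketch below (plain NumPy, standing in for ordfilt2; edge pixels are simply ignored) shows the maximum, minimum and median filters as ranks 9, 1 and 5 of a 3×3 neighbourhood:

```python
import numpy as np

def rank_order(image, rank, size=3):
    """Rank-order filter: pick the rank-th smallest value (1-based)
    in each size x size neighbourhood.  Edges are ignored."""
    out = np.zeros((image.shape[0] - size + 1, image.shape[1] - size + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = np.sort(image[i:i+size, j:j+size], axis=None)
            out[i, j] = window[rank - 1]
    return out

img = np.array([[1,  2,  3,  4],
                [5,  6,  7,  8],
                [9, 10, 11, 12]], dtype=float)

cmax = rank_order(img, 9)   # maximum filter
cmin = rank_order(img, 1)   # minimum filter
cmed = rank_order(img, 5)   # median filter
print(cmax[0, 0], cmin[0, 0], cmed[0, 0])  # 11.0 1.0 6.0
```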
Other non-linear filters are the geometric mean filter, which is defined as

    ( product over (i,j) in M of x(i,j) )^(1/|M|)

where M is the filter mask and |M| its size; and the alpha-trimmed mean filter, which first orders the values under the mask, trims off elements at either end of the ordered list, and takes the mean of the remainder. So, for example, if we have a 3×3 mask, and we order the elements as

    x1 ≤ x2 ≤ x3 ≤ … ≤ x9

and trim off two elements at either end, the result of the filter will be

    (x3 + x4 + x5 + x6 + x7)/5
Both of these filters have uses for image restoration; again see chapters 5 and 6.
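Both filters can be sketched in a few lines on a single neighbourhood (again NumPy, as an assumed stand-in; colfilt would be the MATLAB route, as exercise 15 below suggests):

```python
import numpy as np

def geometric_mean(window):
    """Product of the values, raised to the power 1/|M|."""
    flat = window.flatten()
    return np.prod(flat) ** (1.0 / flat.size)

def alpha_trimmed_mean(window, trim=2):
    """Order the values, trim `trim` from each end, average the rest."""
    ordered = np.sort(window, axis=None)
    return ordered[trim:-trim].mean()

w = np.array([[  2.,   4.,   8.],
              [  1.,  16.,  32.],
              [ 64., 128., 256.]])

print(geometric_mean(w))  # ~16.0: the product is 2^36, and its ninth root is 2^4
print(alpha_trimmed_mean(np.arange(1.0, 10.0).reshape(3, 3)))  # 5.0
```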

Exercises
1. The array below represents a small greyscale image. Compute the images that result when
the image is convolved with each of the masks (a) to (h) shown. At the edge of the image use
a restricted mask. (In other words, pad the image with zeroes.)

20 20 20 10 10 10 10 10 10
20 20 20 20 20 20 20 20 10
20 20 20 10 10 10 10 20 10
20 20 10 10 10 10 10 20 10
20 10 10 10 10 10 10 20 10
10 10 10 10 20 10 10 20 10
10 10 10 10 10 10 10 10 10
20 10 20 20 10 10 10 20 20
20 10 10 20 10 10 20 10 20
    (a) -1 -1  0   (b)  0 -1 -1   (c) -1 -1 -1   (d) -1  2 -1
        -1  0  1        1  0 -1       2  2  2       -1  2 -1
         0  1  1        1  1  0      -1 -1 -1       -1  2 -1

    (e) -1 -1 -1   (f)  1  1  1   (g) -1  0  1   (h)  0 -1  0
        -1  8 -1        1  1  1      -1  0  1       -1  4 -1
        -1 -1 -1        1  1  1      -1  0  1        0 -1  0
2. Check your answers to the previous question with Matlab.

3. Describe what each of the masks in the previous question might be used for. If you can’t do
this, wait until question 5 below.

4. Devise a 3×3 mask for an “identity filter” which causes no change in the image.

5. Obtain a greyscale image of a monkey (a mandrill) with the following commands:


>> load(’mandrill.mat’);
>> m=im2uint8(ind2gray(X,map));

Apply all the filters listed in question 1 to this image. Can you now see what each filter does?

6. Apply larger and larger averaging filters to this image. What is the smallest sized filter for
which the whiskers cannot be seen?

7. Read through the help page of the fspecial function, and apply some of the other filters to
the cameraman image, and to the mandrill image.

8. Apply different laplacian filters to the mandrill and cameraman images. Which produces the
best edge image?
9. Is the 3×3 median filter separable? That is, can this filter be implemented by a 3×1 filter followed by a 1×3 filter?

10. Repeat the above question for the maximum and minimum filters.

11. Apply a 3×3 averaging filter to the middle 9 values of a 5×5 matrix of your choice, and then apply another 3×3 averaging filter to the result. Using your answer, describe a 5×5 filter which has the effect of two 3×3 averaging filters. Is this filter separable?

12. Matlab also has an imfilter function, which if x is an image matrix (of any type), and f is
a filter, has the syntax

imfilter(x,f);

It differs from filter2 in the different parameters it takes (read its help file), and in that the
output is always of the same class as the original image.

(a) Use imfilter on the mandrill image with the filters listed in question 1.
(b) Apply different sized averaging filters to the mandrill image using imfilter.
(c) Apply different laplacian filters to the mandrill image using imfilter. Compare the
results with those obtained with filter2. Which do you think gives the best results?

13. Display the difference between the cmax and cmin images obtained in section 3.6. You can do
this with

>> imshow(imsubtract(cmax,cmin))

What are you seeing here? Can you account for the output of these commands?

14. Using the tic and toc timer functions, compare the use of the nlfilter and colfilt functions.

15. Use colfilt to implement the geometric mean and alpha-trimmed mean filters.


16. Can unsharp masking be used to reverse the effects of blurring? Apply an unsharp masking filter after a 3×3 averaging filter, and describe the result.
