Category | Function | Description | Example
Morphological Operations | imopen | Performs morphological opening. | opened_img = imopen(img, strel('disk', 5));
Morphological Operations | imclose | Performs morphological closing. | closed_img = imclose(img, strel('disk', 5));
Morphological Operations | imerode | Erodes an image. | eroded_img = imerode(img, strel('disk', 3));
Morphological Operations | imdilate | Dilates an image. | dilated_img = imdilate(img, strel('disk', 3));
Edge Detection | edge | Detects edges using different methods such as Sobel, Canny, etc. | edges = edge(gray_img, 'canny');
Image Segmentation | imbinarize | Converts an image to a binary image using thresholding. | binary_img = imbinarize(gray_img);
Image Segmentation | bwlabel | Labels connected components in a binary image. | [L, num] = bwlabel(binary_img);
Image Segmentation | regionprops | Measures properties of image regions. | stats = regionprops(L, 'Area', 'Centroid');
Color Image Processing | rgb2hsv | Converts an RGB image to HSV color space. | hsv_img = rgb2hsv(img);
Color Image Processing | ind2rgb | Converts an indexed image to RGB using a colormap. | rgb_img = ind2rgb(indexed_img, colormap);
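As a quick illustration of how the segmentation functions above fit together, the following minimal sketch (using the standard toolbox demo image 'coins.png'; variable names are illustrative) thresholds an image, labels its connected components, and measures their properties:
gray_img = imread('coins.png');                      % built-in grayscale demo image
binary_img = imbinarize(gray_img);                   % threshold to a binary image
binary_img = imopen(binary_img, strel('disk', 3));   % morphological opening removes speckles
[L, num] = bwlabel(binary_img);                      % label connected components
stats = regionprops(L, 'Area', 'Centroid');          % measure each labelled region
fprintf('Found %d regions\n', num);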
ASSIGNMENT 2
AIM: Write a program to read and display images using
MATLAB.
THEORY:
Image processing involves the manipulation of pixel data within an image to achieve a
desired outcome. Reading and displaying images are fundamental tasks in image processing.
MATLAB provides built-in functions like `imread` for reading images and `imshow` for
displaying them. These functions help in understanding the dimensions, data type, and basic
properties of the image.
ALGORITHM:
1. Start the program by clearing the workspace and command window.
2. Specify the filename of the image to be read.
3. Read the image into a matrix using the `imread` function.
4. Display the image using the `imshow` function within a figure window.
5. Retrieve and display the size (dimensions) and data type of the image.
CODE:
img = imread(filename); % Reading File %
MATHEMATICAL EXPRESSION:
No specific mathematical expressions are involved in this task. The code primarily handles
image data as arrays of pixel values, and MATLAB functions perform the underlying
operations.
ORIGINAL IMAGE:
OUTPUT:
CONCLUSION:
The program successfully reads and displays the specified image using MATLAB. The size
and data type of the image are also correctly determined and displayed.
TUTORIAL:
% Experiment 2 %
% Aryaman Purohit 22U02003%
clear;
clc;
filename = 'aryamanPhoto.jpg';
img = imread(filename);
figure; imshow(img);
title('Displayed Image');
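The algorithm also asks for the image size and data type to be reported; a minimal addition to this script (using only size, class, and fprintf) could be:
[rows, cols, channels] = size(img);           % image dimensions
fprintf('Image size: %d x %d x %d\n', rows, cols, channels);
fprintf('Image data type: %s\n', class(img)); % e.g. uint8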
ASSIGNMENT 3
AIM: To write a MATLAB program to convert an RGB
image to grayscale and display both the original and
grayscale images using the subplot function.
THEORY:
Grayscale conversion is a common image processing task where a color image is transformed
into a grayscale image, reducing the color channels to a single channel representing intensity.
The rgb2gray function in MATLAB performs this conversion. Displaying multiple images in
a single figure can be efficiently done using the subplot function, which allows arranging
images in a grid within the figure window.
ALGORITHM:
1. Start by clearing the workspace and command window.
5. Create a figure window and use `subplot` to display the original and grayscale images side
by side.
6. Display the image size for both the original and grayscale images in the console.
CODE:
color_img = imread(filename); % Reading File %
gray_img = rgb2gray(color_img); % Converting to Grayscale %
subplot(1, 2, 1);
imshow(color_img); % Showing Original Image %
subplot(1, 2, 2);
imshow(gray_img); % Showing Grayscale Image %
MATHEMATICAL EXPRESSION:
The grayscale image is calculated as a weighted sum of the RGB channels:
Gray = 0.2989 × R + 0.5870 × G + 0.1140 × B
where R, G, and B are the red, green, and blue channels of the original image.
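To see the formula in action, the weighted sum can also be computed directly from the colour channels; a minimal sketch equivalent to `rgb2gray` (assuming `color_img` is the RGB image read in the code above):
R = double(color_img(:, :, 1));   % red channel
G = double(color_img(:, :, 2));   % green channel
B = double(color_img(:, :, 3));   % blue channel
gray_manual = uint8(0.2989 * R + 0.5870 * G + 0.1140 * B);   % weighted sum
figure; imshow(gray_manual); title('Manual Grayscale Conversion');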
ORIGINAL IMAGE:
OUTPUT:
TUTORIAL:
% Experiment 3: RGB to Gray %
% Aryaman Purohit 22U02003%
clear;
clc;
filename = 'aryamanPhoto.jpg';
color_img = imread(filename);
gray_img = rgb2gray(color_img);
figure;
subplot(1, 2, 1);
imshow(color_img);
title('Original Color Image');
subplot(1, 2, 2);
imshow(gray_img);
title('Grayscale Image');
CONCLUSION:
The program successfully converts an RGB image to grayscale and displays both the original
and grayscale images using MATLAB's `subplot` function. The sizes of both images are
correctly reported.
ASSIGNMENT 4
AIM: Write a program for Histogram Calculation and
equalization.
THEORY:
1. Histogram Calculation: A histogram records how many pixels in an image take each
intensity value, giving a summary of the image's intensity distribution.
2. Histogram Equalization: A contrast-enhancement technique that redistributes intensity
values using the cumulative distribution function (CDF) of the histogram.
ALGO:
How to Calculate Histogram:
For Grayscale Images: Each pixel value in a grayscale image ranges from 0 (black) to
255 (white). To calculate the histogram:
o Create a vector hist of size 256 (for 8-bit images) initialized to zero.
o For each pixel intensity value, increment the corresponding index in the
histogram vector (a sketch of this counting loop is shown below).
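A minimal sketch of this counting loop (assuming a uint8 grayscale image `imageGray`, as produced in the tutorial code further below):
hist_counts = zeros(256, 1);                  % one bin per intensity level 0..255
for k = 1:numel(imageGray)
    intensity = double(imageGray(k));         % pixel value in 0..255
    hist_counts(intensity + 1) = hist_counts(intensity + 1) + 1;   % MATLAB indices start at 1
end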
2. Histogram Equalization
How it Works:
o The CDF is derived from the histogram and represents the cumulative
probability of each intensity level.
o Replace each pixel's intensity value with the corresponding normalized CDF value,
scaled back to the 0-255 range, to achieve the equalization effect.
MATLAB Functions:
imhist computes the histogram of a grayscale image, and histeq performs histogram
equalization; both are used in the tutorial code below.
MATHEMATICAL EXPRESSION:
CDF(i) = (1/N) · Σ (j = 0 to i) hist(j)
where hist(j) is the frequency of intensity level j, and N is the total number of pixels.
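A minimal sketch of equalization based on this CDF (a manual counterpart to `histeq`, reusing the `hist_counts` vector and `imageGray` from the sketch above):
N = numel(imageGray);                           % total number of pixels
cdf = cumsum(hist_counts) / N;                  % cumulative distribution, in the range 0..1
lut = uint8(round(255 * cdf));                  % map each intensity level to 0..255
imageEqualizedManual = lut(double(imageGray) + 1);   % look up the new value for every pixel
figure; imshow(imageEqualizedManual); title('Manually Equalized Image');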
ORIGINAL IMAGE:
TUTORIAL:
% EXPERIMENT 4(a) Image Histogram Calculation using MATLAB Functions
% Aryaman Purohit 22U02003%
clear;
clc;
image = imread('aryamanPhoto.jpg');
if size (image, 3) == 3
imageGray = rgb2gray(image);
else
imageGray = image;
end
[counts, binLocations] = imhist(imageGray);
figure;
bar(binLocations, counts, 'k');
xlabel ('Pixel Intensity');
ylabel ('Frequency');
title ('Histogram of Grayscale Image');
grid on;
TUTORIAL:
% EXPERIMENT 4(b) Histogram Calculation and Equalization for Image Contrast Enhancement
% Aryaman Purohit 22U02003%
clear;
clc;
image = imread('aryamanPhoto.jpg');
if size(image, 3) == 3
imageGray = rgb2gray(image);
else
imageGray = image;
end
[counts, binLocations] = imhist(imageGray);
imageEqualized = histeq(imageGray);
[countsEq, binLocationsEq] = imhist(imageEqualized);
figure;
imshow(imageGray);
title('Original Grayscale Image');
figure;
bar(binLocations, counts, 'k');
xlabel('Pixel Intensity');
ylabel('Frequency');
title('Histogram of Original Grayscale Image');
grid on;
figure;
imshow(imageEqualized);
title('Equalized Grayscale Image');
figure;
bar(binLocationsEq, countsEq, 'k');
xlabel('Pixel Intensity');
ylabel('Frequency');
title('Histogram of Equalized Grayscale Image');
grid on;
CONCLUSION:
In this experiment, we explored the process of calculating the histogram of an image using
MATLAB functions. The histogram is a crucial tool in image processing that provides insight
into the distribution of pixel intensities, which is fundamental for various image analysis
tasks. This analysis is essential for further image processing operations and for understanding
the visual characteristics of images. The skills acquired through this experiment are applicable
to a wide range of image processing applications and will aid in more advanced analysis and
manipulation of digital images.
ASSIGNMENT 5
AIM: Write a program to execute arithmetic operations
on images.
THEORY:
In image processing, images are typically represented as matrices where each element corresponds to
a pixel value. MATLAB provides powerful tools for manipulating these matrices, allowing various
operations such as addition, subtraction, and contrast adjustment. These operations are fundamental
for tasks such as image blending, enhancement, and analysis.
1. Reading Images:
o An image is read into MATLAB using the imread function, which loads the image
data into a matrix.
2. Image Addition:
o Adding two images involves adding the corresponding pixel values from each image.
This operation can be used for blending or averaging images.
o If the images are of different sizes, they need to be resized to match each other before
performing the operation.
3. Image Subtraction:
o Subtracting one image from another involves subtracting the pixel values of one
image from the corresponding pixels of the other image. This can highlight
differences between the images.
4. Adding a Constant:
o Adding a constant value to an image increases the brightness by raising the pixel
values. If a pixel value exceeds the maximum value (255), it is clipped to 255.
5. Subtracting a Constant:
o Subtracting a constant value from an image decreases the brightness by reducing the
pixel values. If the pixel value goes below the minimum value (0), it is clipped to 0.
6. Image Negation:
o The negative of an image is obtained by subtracting each pixel value from 255. This
operation inverts the brightness levels, producing a negative image.
ALGO:
1. Read and Display Images:
o Read the two images using imread and display them using imshow.
2. Perform the Arithmetic Operations:
o Use the + operator to add the pixel values of the two images.
o Add 50 to every pixel value of the first image using image + 50.
o Subtract 100 from every pixel value of the first image using image - 100.
o Obtain the negative of an image using 255 - image.
MATHEMATICAL EXPRESSION:
Image Addition: Result = I1 + I2
Image Subtraction: Result = I1 − I2
Adding a Constant: Result = I1 + C
Subtracting a Constant: Result = I1 − C
Image Negation: Result = 255 − I1
Where I1 and I2 are the original images, and C is the constant value.
ORIGINAL IMAGES:
OUTPUT:
TUTORIAL:
% EXPERIMENT 5
% Aryaman Purohit 22U02003
% 1) Read 2 images and display the images
image1 = imread('aryamanPhoto.jpg');
image2 = imread('aryamanPhoto2.jpg');
figure;
subplot(2, 3, 1);
imshow(image1);
title('Image 1');
subplot(2, 3, 2);
imshow(image2);
title('Image 2');
% Ensure the images have the same size
image2 = imresize(image2, [size(image1, 1), size(image1, 2)]);
% Ensure the images have the same number of channels (grayscale or RGB)
if size(image1, 3) ~= size(image2, 3)
if size(image1, 3) == 3
image2 = cat(3, image2, image2, image2);
else
image1 = rgb2gray(image1);
image2 = rgb2gray(image2);
end
end
% 2) Add the two images and display
added_image = image1 + image2; % Element-wise addition (uint8 values saturate at 255)
subplot(2, 3, 3);
imshow(added_image);
title('Added Image');
% 3) Subtract the two images and display
subtracted_image = image1 - image2; % Element-wise subtraction (uint8 values clip at 0)
subplot(2, 3, 4);
imshow(subtracted_image);
title('Subtracted Image');
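% 4) Add a constant value of 50 to one of the images and display
% (this step appears in the algorithm above but was missing from the original listing;
% it is included here as a sketch)
constant_addition = image1 + 50; % Adding a constant (uint8 values saturate at 255)
subplot(2, 3, 5);
imshow(constant_addition);
title('Image 1 + 50');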
% 5) Subtract a constant value of 100 from one of the images and display
constant_subtraction = image1 - 100; % Subtracting a constant (uint8 values clip at 0)
subplot(2, 3, 6);
imshow(constant_subtraction);
title('Image 1 - 100');
% 6) Obtain the negative of one of the images (subtract the image from 255) and display
negative_image = 255 - image2; % Negative of the image
figure;
imshow(negative_image);
title('Negative Image of Image 2');
CONCLUSION:
This experiment demonstrates fundamental operations in image processing using MATLAB.
By performing arithmetic operations on images, we can blend, enhance, and manipulate them
in various ways. Understanding these basic operations is crucial for more advanced image
processing tasks such as filtering, edge detection, and segmentation. The experiment also
highlights the importance of considering pixel value constraints (0 to 255) to avoid
unintended visual artifacts.
ASSIGNMENT 6
AIM:
To write and execute programs for performing the following image logical operations: AND,
OR, NAND, NOR, XOR (EXOR), XNOR (EXNOR), NOT, and the intersection of two images.
THEORY:
Logical operations on images involve pixel-wise comparisons between two images or
modifications to a single image using basic logical functions. These operations treat image
pixel values as binary numbers and apply logic gates to them.
1. AND Operation: Compares two images bit-by-bit. The result is 1 only if both bits are
1.
2. OR Operation: Compares two images bit-by-bit. The result is 1 if at least one of the
bits is 1.
3. NAND Operation: The complement of the AND operation. The result is 1 if at least one
of the bits is 0.
4. NOR Operation: The opposite of the OR operation. The result is 1 only if both bits
are 0.
5. EXOR (XOR) Operation: The result is 1 if the two bits being compared are
different.
6. EXNOR (XNOR) Operation: The result is 1 if the two bits being compared are the
same.
7. NOT Operation: Inverts the bits of the image (i.e., 1 becomes 0 and vice versa).
8. Intersection of Two Images: The intersection is computed by performing an AND
operation between the two images.
These operations are useful in tasks such as masking, feature extraction, and image
segmentation.
ALGORITHMS:
1. AND Operation:
Input two images.
Perform pixel-wise AND operation.
2. OR Operation:
Input two images.
Perform pixel-wise OR operation.
3. NAND Operation:
Input two images.
Perform pixel-wise AND operation, then invert the result.
4. NOR Operation:
Input two images.
Perform pixel-wise OR operation, then invert the result.
5. XOR Operation:
Input two images.
Perform pixel-wise XOR operation.
6. XNOR Operation:
Input two images.
Perform pixel-wise XOR operation, then invert the result.
7. NOT Operation:
Input an image.
Perform pixel-wise inversion (logical NOT).
MATHEMATICAL EXPRESSION:
1. AND Operation: Result = I1 ∧ I2
2. OR Operation: Result = ¬ not applicable; Result = I1 ∨ I2
3. NAND Operation: Result = ¬(I1 ∧ I2)
4. NOR Operation: Result = ¬(I1 ∨ I2)
5. XOR Operation: Result = I1 ⊕ I2
6. XNOR Operation: Result = ¬(I1 ⊕ I2)
7. NOT Operation: Result = ¬I
8. Intersection of two images: Result = I1 ∧ I2
ORIGINAL IMAGES:
OUTPUT:
MATLAB CODE:
% EXPERIMENT 6
% Aryaman Purohit 22U02003
% Read two binary images
img1 = imread('image1.png');
img2 = imread('image2.png');
% Convert images to grayscale if they are not already
if size(img1, 3) == 3
img1 = rgb2gray(img1);
end
if size(img2, 3) == 3
img2 = rgb2gray(img2);
end
% Convert images to binary
img1 = imbinarize(img1);
img2 = imbinarize(img2);
% 1. AND Operation
and_image = img1 & img2;
figure, imshow(and_image), title('AND Operation');
% 2. OR Operation
or_image = img1 | img2;
figure, imshow(or_image), title('OR Operation');
% 3. NAND Operation
nand_image = ~(img1 & img2);
figure, imshow(nand_image), title('NAND Operation');
% 4. NOR Operation
nor_image = ~(img1 | img2);
figure, imshow(nor_image), title('NOR Operation');
% 5. XOR Operation
xor_image = xor(img1, img2);
figure, imshow(xor_image), title('XOR Operation');
% 6. XNOR Operation
xnor_image = ~xor(img1, img2);
figure, imshow(xnor_image), title('XNOR Operation');
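% 7. NOT Operation and 8. Intersection of two images
% (described in the theory above but missing from the original listing; added as a sketch)
not_image = ~img1;
figure, imshow(not_image), title('NOT Operation');
intersection_image = img1 & img2;   % intersection is the same as the AND operation
figure, imshow(intersection_image), title('Intersection of Two Images');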
% Calculate and display mean value of img1
mean_value = mean(img1(:)); % Calculate mean of the binary image
fprintf('Mean value of image 1: %.4f\n', mean_value);
CONCLUSION:
In this lab exercise, we successfully applied various logical operations such as AND, OR,
NAND, NOR, XOR, XNOR, and NOT on images. These operations provide critical
functionality in image segmentation, masking, and feature extraction. Additionally, the
intersection of two images was computed using the AND operation. These operations enable
powerful image analysis and processing, facilitating tasks such as object detection and pattern
recognition.
TUTORIAL:
1.) Prepare any two images of size 256x256 in Paint. Save them in JPEG format with 256
gray levels. Perform logical NOR and NAND operations between the two images. Write the
program and paste your results.
if size(img2, 3) == 3
img2 = rgb2gray(img2);
end
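A minimal sketch of the full tutorial program (the image filenames are assumed; the reading and binarization steps mirror the main code above):
% NOR / NAND between two 256x256 grayscale JPEG images (filenames assumed)
img1 = imread('tutorial1.jpg');
img2 = imread('tutorial2.jpg');
if size(img1, 3) == 3
    img1 = rgb2gray(img1);
end
if size(img2, 3) == 3
    img2 = rgb2gray(img2);
end
img1 = imbinarize(img1);
img2 = imbinarize(img2);
nor_image = ~(img1 | img2);    % NOR
nand_image = ~(img1 & img2);   % NAND
figure;
subplot(1, 2, 1); imshow(nor_image); title('NOR of the two images');
subplot(1, 2, 2); imshow(nand_image); title('NAND of the two images');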
ASSIGNMENT 7
AIM:
To write and execute programs for performing the following geometric transformations
on an image:
Translation
Rotation
Scaling
Reflection
Shearing
Shrinking
Zooming
THEORY:
Geometric transformations modify the spatial arrangement of an image's pixels, allowing for
manipulation in ways that can change the perspective or orientation of the image. These
transformations are fundamental in image processing applications such as computer vision,
augmented reality, and graphics editing.
1. Translation: Moves the image in the x and y directions by specified amounts without
altering its size or orientation.
2. Rotation: Rotates the image about a point (usually its centre) by a specified angle.
3. Scaling: Changes the size of the image by a specified scale factor, enlarging or
reducing its dimensions.
4. Reflection: Flips the image across a specified axis (horizontal or vertical).
5. Shearing: Distorts (shears) the image in either the x direction or the y direction.
6. Shrinking: Reduces the size of the image using a scale factor less than 1.
7. Zooming: Enlarges the image using a scale factor greater than 1.
ALGORITHMS:
1. Translation:
Define the translation vector (tx, ty).
Create a transformation matrix for translation.
Apply the transformation to the image.
2. Rotation:
Define the angle of rotation.
Create a transformation matrix for rotation.
Apply the transformation to the image.
3. Scaling:
Define the scale factors (sx, sy).
Create a transformation matrix for scaling.
Apply the transformation to the image.
4. Reflection:
Define the axis of reflection (horizontal or vertical).
Create a transformation matrix for reflection.
Apply the transformation to the image.
5. Shrinking:
Define the scale factor less than 1.
Create a transformation matrix for shrinking.
Apply the transformation to the image.
6. Zooming:
Define the scale factor greater than 1.
Create a transformation matrix for zooming.
Apply the transformation to the image.
MATHEMATICAL EXPRESSION:
1. Translation:
X’=x+tx
Y’=y+ty
Where tx and ty are the translation distances along the x and y axes respectively
2. Rotation:
X’=x.cos(θ)- y.sin(θ)
Y’=x.sin(θ) +y.cos(θ)
3. Scaling:
X’=sx.x
Y’=sy.y
Where sx and sy are the scaling factors along the x and y axes respectively
4. Reflection:
X’ = −x
Y’ = y
(reflection about the y-axis; for reflection about the x-axis, X’ = x and Y’ = −y)
5. Shrinking:
X’ = s.x
Y’ = s.y
Where 0 < s < 1 is the shrink factor
6. Zooming:
X’ = s.x
Y’ = s.y
Where s > 1 is the zoom factor
MATLAB CODE:
% EXPERIMENT 7 - Geometric Transformations
% Aryaman Purohit 22U02003
% Read the input image
img = imread('image.png');
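% Minimal sketch (not the original listing) of the transformations named in the aim
% (translation, scaling, reflection, shrinking, zooming) using standard toolbox functions
if size(img, 3) == 3
    img = rgb2gray(img);
end
figure;
subplot(2, 3, 1); imshow(img); title('Original Image');
translated = imtranslate(img, [25, 15]);   % shift 25 pixels right and 15 pixels down
subplot(2, 3, 2); imshow(translated); title('Translated Image');
scaled = imresize(img, 1.5);               % scaling by a factor of 1.5
subplot(2, 3, 3); imshow(scaled); title('Scaled Image (1.5x)');
reflected = fliplr(img);                   % reflection about the vertical axis
subplot(2, 3, 4); imshow(reflected); title('Reflected Image');
shrunk = imresize(img, 0.5);               % shrinking (scale factor < 1)
subplot(2, 3, 5); imshow(shrunk); title('Shrunk Image (0.5x)');
zoomed = imresize(img, 2);                 % zooming (scale factor > 1)
subplot(2, 3, 6); imshow(zoomed); title('Zoomed Image (2x)');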
ORIGINAL IMAGE:
OUTPUT:
MATLAB CODE 2:
% EXPERIMENT 7
% Aryaman Purohit 22U02003
clc
x = imread('image1.png');
x = rgb2gray(x);
subplot(2,2,1); imshow(x);
title('Original Image');
y = imrotate(x, 45, 'bilinear', 'crop');
subplot(2,2,2);
imshow(y);
title('Image rotated by 45 degree');
y = imrotate(x, 90, 'bilinear', 'crop');
subplot(2,2,3);
imshow(y);
title('Image rotated by 90 degree');
y=imrotate(x,-45,'bilinear','crop');
subplot(2,2,4);
imshow(y);
title('Image rotated by -45 degree');
x = imread('cameraman.tif');
tform = maketform('affine',[1 0 0; .5 1 0; 0 0 1]);
y = imtransform(x,tform);
figure;
subplot(2,2,1);
imshow(x);
title('Original Image');
subplot(2,2,2);
imshow(y);
title('Shear in X direction');
tform = maketform('affine',[1 0.5 0; 0 1 0; 0 0 1]);
y = imtransform(x,tform);
subplot(2,2,3);
imshow(y);
title('Shear in Y direction');
tform = maketform('affine',[1 0.5 0; 0.5 1 0; 0 0 1]);
y = imtransform(x,tform);
subplot(2,2,4); imshow(y); title('Shear in X-Y direction');
OUTPUT2:
OUTPUT 3:
CONCLUSION:
In this lab exercise, we successfully applied various geometric transformations, including
translation, rotation, scaling, reflection, shrinking, and zooming on images. Each
transformation modifies the spatial arrangement of pixels, enabling the manipulation of
images for various applications in image processing and computer vision. Understanding
these transformations is crucial for tasks like image alignment, object detection, and image
enhancement.
TUTORIAL:
1.) In the above program, modify the matrix used for the geometric transformation and use
the imtransform() function with the modified matrix. Show the results and your conclusions.
x = imread('image2.png');
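% A possible modified transformation matrix (values chosen only for illustration):
% combined scaling of 1.2 and shear of 0.3 in both directions.
if size(x, 3) == 3
    x = rgb2gray(x);
end
tform = maketform('affine', [1.2 0.3 0; 0.3 1.2 0; 0 0 1]);
y = imtransform(x, tform);
figure;
subplot(1, 2, 1); imshow(x); title('Original Image');
subplot(1, 2, 2); imshow(y); title('Transformed with Modified Matrix');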
MODIFIED RESULT:
ASSIGNMENT 8
AIM:
To understand and implement frequency domain filtering techniques for images, specifically
focusing on analyzing spatial and intensity resolution.
THEORY:
Frequency Domain Filtering involves transforming an image from the spatial domain to
the frequency domain using techniques such as the Fourier Transform. In the frequency
domain, various filtering techniques can be applied to enhance or suppress certain
frequencies, which can affect the overall quality and characteristics of the image.
Spatial Resolution refers to the detail an image holds. The higher the spatial
resolution, the more detail is present in the image, and this can be affected by the
filtering process.
Intensity Resolution is related to the number of possible intensity values
(grayscale levels) that each pixel can have. Higher intensity resolution leads to
smoother gradients and better overall image quality.
ALGORITHMS:
1. Read the input image.
2. Convert the image to the frequency domain using the Fourier Transform.
3. Apply the desired frequency domain filter.
4. Convert the filtered image back to the spatial domain using the Inverse Fourier
Transform.
5. Display the original and filtered images.
MATHEMATICAL EXPRESSION:
1. Fourier Transform:
F(u, v) = Σ (x = 0 to M−1) Σ (y = 0 to N−1) f(x, y) · e^(−j2π(ux/M + vy/N))
2. Inverse Fourier Transform:
f(x, y) = (1/MN) · Σ (u = 0 to M−1) Σ (v = 0 to N−1) F(u, v) · e^(j2π(ux/M + vy/N))
3. Magnitude Spectrum:
M(u, v) = √( Re(F(u, v))² + Im(F(u, v))² )
MATLAB CODE:
% EXPERIMENT 8
% Aryaman Purohit 22U02003
myimg =imread('image1.png');
if(size(myimg,3)==3)
myimg=rgb2gray(myimg);
end
myimg = imresize(myimg,[256 256]);
myimg=double(myimg);
subplot(2,2,1);
imshow(myimg,[]),title('Original Image');
[M,N] = size(myimg); % Find size
%Preprocessing of the image
for x=1:M
for y=1:N
myimg1(x,y)=myimg(x,y)*((-1)^(x+y)); % multiply by (-1)^(x+y) to centre the spectrum
end
end
% Find FFT of the image
myfftimage = fft2(myimg1);
subplot(2,2,2);
imshow(abs(myfftimage),[]); title('FFT Image'); % display the FFT magnitude
% Define cut off frequency
low = 30;
band1 = 20;
band2 = 50;
%Define Filter Mask
mylowpassmask = ones(M,N);
mybandpassmask = ones(M,N);
% Generate values for ifilter pass mask
for u = 1:M
for v = 1:N
tmp = ((u-(M/2))^2 +(v-(N/2))^2)^0.5;
if tmp > low
mylowpassmask(u,v) = 0;
end
if tmp > band2 || tmp < band1
mybandpassmask(u,v) = 0;
end
end
end
% Apply the filter H to the FFT of the Image
resimage1 = myfftimage.*mylowpassmask;
resimage3 = myfftimage.*mybandpassmask;
% Apply the Inverse FFT to the filtered image
% Display the low pass filtered image
r1 = abs(ifft2(resimage1));
subplot(2,2,3);
imshow(r1,[]),title('Low Pass filtered image');
% Display the band pass filtered image
r3 = abs(ifft2(resimage3));
subplot(2,2,4);
imshow(r3,[]),title('Band Pass filtered image');
figure;
subplot(2,1,1);imshow(mylowpassmask);
subplot(2,1,2);imshow(mybandpassmask);
ORIGINAL IMAGE:
OUTPUT:
CONCLUSION:
The implementation of frequency domain filtering allows for effective manipulation of spatial
and intensity resolutions in images. By applying filters in the frequency domain, one can
enhance specific features of the image or suppress noise.
TUTORIAL:
1.) Instead of following pre-processing step in above program use fftshift function to shift
FFT in the center. See changes in the result and write conclusion.
%Preprocessing of the image
for x=1:M
for y=1:N
myimg1(x,y)=myimg(x,y)*((-1)^(x+y));
end
end
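A minimal sketch of the fftshift-based alternative (variable names follow the main program above; only the lines that change are shown):
myfftimage = fftshift(fft2(myimg));    % FFT with the zero-frequency component moved to the centre
subplot(2,2,2);
imshow(log(1 + abs(myfftimage)),[]); title('Centred FFT (fftshift)');
% The mask generation stays the same; ifftshift undoes the shift before the inverse FFT
r1 = abs(ifft2(ifftshift(myfftimage.*mylowpassmask)));
r3 = abs(ifft2(ifftshift(myfftimage.*mybandpassmask)));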
MODIFIED RESULTS:
ASSIGNMENT 9
AIM: Write MATLAB code to perform edge detection using different edge detection
masks.
THEORY:
Segmentation approaches are broadly of two types:
[1] Discontinuity based: Partition an image based on abrupt changes in intensity, such as
points, lines and edges.
[2] Similarity based: Group pixels which have similar characteristics by thresholding, region
growing, region splitting and merging.
First order derivative and second order derivative operators can do edge detection.
First order 3x3 line detection masks are:
MATLAB CODE:
%EXPERIMENT 9
% Aryaman Purohit 22U02003
% Program for edge detection using standard masks
A=imread('image2.png');
if(size(A,3)==3)
A=rgb2gray(A);end
imshow(A);figure;
BW = edge(A,'prewitt');
subplot(3,2,1); imshow(BW);
title('Edge detection with prewitt mask');
BW = edge(A,'canny');
subplot(3,2,2);
imshow(BW);
title('Edge detection with canny mask');
BW = edge(A,'sobel');
subplot(3,2,3);
imshow(BW);
title('Edge detection with sobel mask');
BW = edge(A,'roberts');
subplot(3,2,4); imshow(BW);
title('Edge detection with roberts mask');
BW = edge(A,'log');
subplot(3,2,5);imshow(BW);
title('Edge detection with log ');
BW = edge(A,'zerocross');
subplot(3,2,6); imshow(BW);
title('Edge detection with zerocross');
OUTPUT:
TUTORIAL:
1.) Get masks for “Prewitt”, “Canny”, and “Sobel” from the literature and write a
MATLAB/SCILAB program for edge detection using 2D convolution.
2.) Write a MATLAB code for edge detection of a grayscale image without using in-built
function of edge detection.
SOL. 1.):
Prewitt Mask:
o Horizontal: [-1 0 1; -1 0 1; -1 0 1]
o Vertical: [1 1 1; 0 0 0; -1 -1 -1]
Sobel Mask:
o Horizontal: [-1 0 1; -2 0 2; -1 0 1]
o Vertical: [1 2 1; 0 0 0; -1 -2 -1]
Canny: The Canny edge detection method involves several steps including noise
reduction, gradient calculation, non-maximum suppression, and edge tracing using
hysteresis. It does not have a simple convolution mask but uses Gaussian filtering and
complex algorithms for accurate edge detection.
img = imread('image.jpg');
img_gray = rgb2gray(img);
img_gray = double(img_gray);
prewitt_v = [1 1 1; 0 0 0; -1 -1 -1];
sobel_v = [1 2 1; 0 0 0; -1 -2 -1];
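% The fragment above stops before the convolution step; a minimal completion
% (a sketch using 2D convolution with the Prewitt and Sobel masks):
prewitt_h = prewitt_v';      % transposed mask, gradient along the other axis
sobel_h = sobel_v';
Gp = sqrt(conv2(img_gray, prewitt_h, 'same').^2 + conv2(img_gray, prewitt_v, 'same').^2);
Gs = sqrt(conv2(img_gray, sobel_h, 'same').^2 + conv2(img_gray, sobel_v, 'same').^2);
figure;
subplot(1, 2, 1); imshow(uint8(Gp / max(Gp(:)) * 255)); title('Prewitt edges (2D convolution)');
subplot(1, 2, 2); imshow(uint8(Gs / max(Gs(:)) * 255)); title('Sobel edges (2D convolution)');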
SOL. 2.):
img = imread('image.jpg');
img_gray = rgb2gray(img);
img_gray = double(img_gray);
sobel_x = [-1 0 1; -2 0 2; -1 0 1];   % horizontal gradient mask
sobel_y = [1 2 1; 0 0 0; -1 -2 -1];   % vertical gradient mask
% Compute gradients
Gx = conv2(img_gray, sobel_x, 'same');
Gy = conv2(img_gray, sobel_y, 'same');
G = sqrt(Gx.^2 + Gy.^2);
G = G / max(G(:)) * 255;
imshow(uint8(G));
ASSIGNMENT 10
AIM:
Write and execute programs to remove noise using spatial filters
THEORY:
Spatial Filtering is sometimes also known as neighborhood processing.
Neighborhood processing is an appropriate name because you define a center
point and perform an operation (or apply a filter) on only those pixels in a
predetermined neighborhood of that center point. The result of the operation is
one value, which becomes the value at the center point's location in the
modified image. Each point in the image is processed with its neighbors. The
general idea is shown below as a "sliding filter" that moves throughout the
image to calculate the value at the center location.
MATLAB CODE:
% Experiment No. 10
% Aryaman Purohit 22U02003
% Spatial filtering using standard MATLAB function
% To apply spatial filters on given image
myimage = imread('aryamanPhoto.jpg');   % input image (filename assumed, as in earlier experiments)
if size(myimage, 3) == 3
    myimage = rgb2gray(myimage);
end
figure;
subplot(3,2,1); imshow(myimage,[]); title('Original Image');
%Define spatial filter masks
L1=[1 1 1;1 1 1;1 1 1];
L2=[0 1 0;1 2 1;0 1 0];
L3=[1 2 1;2 4 2;1 2 1];
H1=[-1 -1 -1;-1 9 -1;-1 -1 -1];
H2=[0 -1 0;-1 5 -1;0 -1 0];
H3=[1 -2 1;-2 5 -2;1 -2 1];
L1 = L1/sum(L1(:));
filt_image= conv2(double(myimage),double(L1));
subplot(3,2,2); imshow(filt_image,[]);
title('Filtered image with mask L1');
L2 = L2/sum(L2(:));
filt_image= conv2(double(myimage),double(L2));
subplot(3,2,3); imshow(filt_image,[]); title('Filtered image with mask L2');
L3 = L3/sum(L3(:));
filt_image= conv2(double(myimage),double(L3));
subplot(3,2,4); imshow(filt_image,[]); title('Filtered image with mask L3');
filt_image= conv2(double(myimage),H1);
subplot(3,2,5); imshow(filt_image,[]); title('Filtered image with mask H1');
filt_image= conv2(double(myimage),H2);
subplot(3,2,6); imshow(filt_image,[]); title('Filtered image with mask H2');
% Gaussian, averaging and H3 filters are shown in a second figure
figure;
subplot(2,2,1); imshow(myimage,[]); title('Original Image');
gaussmask = fspecial('gaussian',3);
filtimg = imfilter(myimage,gaussmask);
subplot(2,2,2); imshow(filtimg,[]),title('Output of Gaussian filter 3 X 3');
avgfilt = [ 1 1 1 1 1 1 1;
1 1 1 1 1 1 1;
1 1 1 1 1 1 1;
1 1 1 1 1 1 1;
1 1 1 1 1 1 1;
1 1 1 1 1 1 1;
1 1 1 1 1 1 1];
avgfiltmask = avgfilt/sum(avgfilt(:));
convimage= conv2(double(myimage),double(avgfiltmask));
subplot(2,2,3); imshow(convimage,[]); title('Output of 7 x 7 averaging filter');
filt_image= conv2(double(myimage),H3);
subplot(2,2,4); imshow(filt_image,[]); title('Filtered image with mask H3');
OUTPUT:
CONCLUSION:
The MATLAB program for spatial filtering was executed successfully, and the effects of the low-pass and high-pass masks were observed.
Tutorial:
1.) Write mathematical expression of spatial filtering of image f(x,y)
of size M*N using mask W of size a*b.
2.) What is the need for padding? What is zero padding? Why is it required?
3.) What is the effect of increasing size of mask?
SOL1.):
The mathematical expression for spatial filtering of an image f(x,y) of size
M×N using a mask W of size a×b can be written as:
g(x, y) = Σ (i = −(a−1)/2 to (a−1)/2) Σ (j = −(b−1)/2 to (b−1)/2) W(i, j) · f(x + i, y + j)
Explanation:
f(x,y) is the original image of size M×N.
W(i,j) is the filter mask of size a×b (a and b are usually odd).
The filter mask is centered over the pixel (x,y) in the image, and the convolution is
performed by multiplying corresponding values of the image and the filter mask and
summing them.
The indices i and j represent the relative positions of the mask's elements with
respect to the center of the mask.
This formula assumes that the mask W is symmetric about its center.
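A direct loop-based sketch of this expression for a 3×3 mask (so the offsets i and j run from −1 to 1; border pixels are skipped for simplicity, and the image and mask used are only examples):
f = double(imread('cameraman.tif'));   % example grayscale image
W = ones(3, 3) / 9;                    % example 3x3 averaging mask
[M, N] = size(f);
g = zeros(M, N);
for x = 2:M-1
    for y = 2:N-1
        acc = 0;
        for i = -1:1
            for j = -1:1
                acc = acc + W(i + 2, j + 2) * f(x + i, y + j);   % W(i,j) * f(x+i, y+j)
            end
        end
        g(x, y) = acc;
    end
end
figure; imshow(uint8(g)); title('Loop-based spatial filtering');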
SOL2.):
Zero Padding:
Zero padding is the process of adding extra pixels around the edges of an image, where
the additional pixels are set to zero. This allows the filter to process the entire image,
including the edge pixels.
For example, if an image of size MxN is padded with a border of zeros, the resulting
padded image will be larger than the original. For a 3x3 filter, 1-pixel-wide padding is
commonly added to the borders to maintain the output image size.
Why is Padding Required?
1. Preserve Image Size: Padding ensures that the output image after applying a filter is
of the same size as the input image, avoiding shrinkage of the image due to
convolution.
2. Handle Edge Pixels: Without padding, edge pixels would not be processed
effectively since there are fewer neighboring pixels near the borders.
3. Avoid Information Loss: Applying filters near the image boundaries without
padding can result in missing important details in those areas.
Thus, zero padding helps in keeping the spatial resolution of the image consistent while
performing operations like convolution.
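A small sketch illustrating the point with conv2 (whose 'same' option implicitly zero-pads the borders); padarray adds the explicit zero border, and the two results agree:
img = double(imread('cameraman.tif'));        % example grayscale image
mask = ones(3, 3) / 9;                        % 3x3 averaging mask
padded = padarray(img, [1 1], 0);             % explicit zero padding, 1 pixel on every side
filtered_padded = conv2(padded, mask, 'valid');   % filter the padded image; output is M x N
filtered_same = conv2(img, mask, 'same');         % 'same' zero-pads internally; also M x N
fprintf('Maximum difference: %g\n', max(abs(filtered_padded(:) - filtered_same(:))));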
SOL3.):
Effect of Increasing the Size of the Mask in Image Processing:
When the size of the mask (or filter) increases, it affects the image processing operation
in the following ways:
1. Smoothing Effect (Blurring):
- Larger masks tend to average over a larger area of pixels, which leads to greater
smoothing or blurring of the image. This can reduce noise but also causes loss of fine
details and edges.
- For example, a 3x3 filter captures a smaller local area, while a 7x7 filter covers a
larger area, resulting in a stronger smoothing effect.
2. Computational Complexity:
- Increasing the size of the mask increases the number of operations required for
convolution. For an image of size M×N and a mask of size a×b, the number of
multiplications performed for each pixel increases with the mask size. This results in
higher computational time and resource usage.
3. Loss of Fine Detail:
- A larger mask tends to smooth out more details, making it harder to detect edges or
sharp transitions.
4. Noise Reduction:
- Larger masks are more effective at reducing noise, as they involve a broader region
of pixels for averaging or filtering. However, this noise reduction comes at the cost of
detail loss.
5. Change in Feature Detection:
- In edge detection or other feature extraction techniques, using a larger mask can
result in less sensitivity to fine edges or small features, as the mask considers a wider
region. Smaller edges or thin lines may be smoothed out or missed.
In summary, increasing the mask size enhances smoothing, noise reduction, and
filtering effects, but also increases computational cost and can result in the loss of finer
image details and sharp edges.
---------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------