
ASSIGNMENT 1

AIM: Write an introduction to the commands and functions used in Digital Image Processing.
MATLAB (Matrix Laboratory) is a powerful tool widely used for digital image processing thanks to its
robust set of built-in functions and toolboxes. The Image Processing Toolbox provides a comprehensive
set of reference-standard algorithms and functions for image analysis, visualization, and algorithm
development.

The commonly used commands are listed below by category, with a description and example code for each.

Reading and Displaying Images:
 imread: Reads an image from a file into a matrix. Example: img = imread('image.jpg');
 imshow: Displays an image in a figure window. Example: imshow(img);
 imwrite: Writes an image to a file. Example: imwrite(img, 'new_image.jpg');

Image Transformation:
 imresize: Resizes an image to a specified size. Example: resized_img = imresize(img, [256 256]);
 imrotate: Rotates an image by a specified angle. Example: rotated_img = imrotate(img, 45);
 imcrop: Crops a portion of an image. Example: cropped_img = imcrop(img, [50 50 100 100]);

Image Enhancement:
 rgb2gray: Converts an RGB image to grayscale. Example: gray_img = rgb2gray(img);
 histeq: Performs histogram equalization to enhance contrast. Example: enhanced_img = histeq(gray_img);
 imadjust: Adjusts the intensity values or colormap. Example: adjusted_img = imadjust(gray_img);

Filtering and Noise Reduction:
 imfilter: Applies a linear filter to an image. Example: filtered_img = imfilter(img, fspecial('sobel'));
 medfilt2: Applies a median filter to a 2-D image. Example: median_filtered_img = medfilt2(gray_img);

Morphological Operations:
 imopen: Performs morphological opening. Example: opened_img = imopen(img, strel('disk', 5));
 imclose: Performs morphological closing. Example: closed_img = imclose(img, strel('disk', 5));
 imerode: Erodes an image. Example: eroded_img = imerode(img, strel('disk', 3));
 imdilate: Dilates an image. Example: dilated_img = imdilate(img, strel('disk', 3));

Edge Detection:
 edge: Detects edges using different methods such as Sobel, Canny, etc. Example: edges = edge(gray_img, 'canny');

Image Segmentation:
 imbinarize: Converts an image to a binary image using thresholding. Example: binary_img = imbinarize(gray_img);
 bwlabel: Labels connected components in a binary image. Example: [L, num] = bwlabel(binary_img);
 regionprops: Measures properties of image regions. Example: stats = regionprops(L, 'Area', 'Centroid');

Color Image Processing:
 rgb2hsv: Converts an RGB image to the HSV color space. Example: hsv_img = rgb2hsv(img);
 ind2rgb: Converts an indexed image to RGB using a colormap. Example: rgb_img = ind2rgb(indexed_img, colormap);
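A short sketch chaining several of these commands into one pipeline (the file names are hypothetical):

% Read an image, convert it, enhance it, detect edges, and save the result
img      = imread('image.jpg');          % read the image from a file
gray     = rgb2gray(img);                % convert RGB to grayscale
enhanced = histeq(gray);                 % histogram equalization for contrast
edges    = edge(enhanced, 'canny');      % Canny edge detection
imwrite(enhanced, 'enhanced_image.jpg'); % write the enhanced image to a file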

ASSIGNMENT 2
AIM: Write a program to read and display images using
MATLAB.
THEORY:
Image processing involves the manipulation of pixel data within an image to achieve a
desired outcome. Reading and displaying images are fundamental tasks in image processing.
MATLAB provides built-in functions like `imread` for reading images and `imshow` for
displaying them. These functions help in understanding the dimensions, data type, and basic
properties of the image.

ALGORITHM:
1. Start the program by clearing the workspace and command window.

2. Define the filename of the image to be read.

3. Use the `imread` function to load the image into a variable.

4. Display the image using the `imshow` function within a figure window.

5. Retrieve and display the size (dimensions) and data type of the image.

CODE:
filename = 'aryamanPhoto.jpg'; % Defining File Name %

img = imread(filename); % Reading File %

imshow(img); % Showing Output %

[rows, cols, channels] = size(img); % Storing Output Data %

MATHEMATICAL EXPRESSION:
No specific mathematical expressions are involved in this task. The code primarily handles
image data as arrays of pixel values, and MATLAB functions perform the underlying
operations.

ORIGINAL IMAGE:

OUTPUT:

Image size: 272 x 337 x 3


Image data type: uint8

CONCLUSION:
The program successfully reads and displays the specified image using MATLAB. The size
and data type of the image are also correctly determined and displayed.

TUTORIAL:
% Experiment 2 %
% Aryaman Purohit 22U02003%
clear;
clc;

filename = 'aryamanPhoto.jpg';

img = imread(filename);

figure; imshow(img);
title('Displayed Image');

[rows, cols, channels] = size(img);


fprintf('Image size: %d x %d x %d\n', rows, cols, channels);
fprintf('Image data type: %s\n', class(img));

ASSIGNMENT 3
AIM: To write a MATLAB program to convert an RGB
image to grayscale and display both the original and
grayscale images using the subplot function.
THEORY:
Grayscale conversion is a common image processing task where a color image is transformed
into a grayscale image, reducing the color channels to a single channel representing intensity.
The rgb2gray function in MATLAB performs this conversion. Displaying multiple images in
a single figure can be efficiently done using the subplot function, which allows arranging
images in a grid within the figure window.

ALGORITHM:
1. Start by clearing the workspace and command window.

2. Define the filename of the image to be processed.

3. Use `imread` to load the image into a variable.

4. Convert the color image to grayscale using `rgb2gray`.

5. Create a figure window and use `subplot` to display the original and grayscale images side
by side.

6. Display the image size for both the original and grayscale images in the console.

CODE:
filename = 'aryamanPhoto.jpg'; % Defining File Name %

color_img = imread(filename); % Reading File %

gray_img = rgb2gray(color_img); % Converting RGB to Grayscale %

subplot(1, 2, 1);
imshow(color_img); % Showing Original Image %

subplot(1, 2, 2);
imshow(gray_img); % Showing Grayscale Image %

fprintf('Original Image size: %d x %d x %d\n', size(color_img, 1), ...
    size(color_img, 2), size(color_img, 3)); % Output Image Size %
fprintf('Grayscale Image size: %d x %d\n', size(gray_img, 1), ...
    size(gray_img, 2)); % Output Grayscale Size %

MATHEMATICAL EXPRESSION:
The grayscale image is calculated as a weighted sum of the RGB channels:

Gray = 0.2989 × R + 0.5870 × G + 0.1140 × B

where R, G, and B are the red, green, and blue channels of the original image.
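As a sketch, the same weighted sum can be computed manually and compared with rgb2gray (color_img is assumed to be a uint8 RGB image already in the workspace):

% Manual weighted-sum grayscale conversion using the coefficients above
R = double(color_img(:, :, 1));
G = double(color_img(:, :, 2));
B = double(color_img(:, :, 3));
manual_gray = uint8(0.2989 * R + 0.5870 * G + 0.1140 * B);
% rgb2gray uses the same weighting, so the two results should match closely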

ORIGINAL IMAGE:

OUTPUT:

TUTORIAL:
% Experiment 2: RGB to Gray %
% Aryaman Purohit 22U02003%
clear;
clc;

filename = 'aryamanPhoto.jpg';

color_img = imread(filename);

gray_img = rgb2gray(color_img);

figure;

subplot(1, 2, 1);
imshow(color_img);
title('Original Color Image');

subplot(1, 2, 2);
imshow(gray_img);
title('Grayscale Image');

fprintf('Original Image size: %d x %d x %d\n', size(color_img, 1), ...
    size(color_img, 2), size(color_img, 3));
fprintf('Grayscale Image size: %d x %d\n', size(gray_img, 1), ...
    size(gray_img, 2));

CONCLUSION:
The program successfully converts an RGB image to grayscale and displays both the original
and grayscale images using MATLAB's `subplot` function. The sizes of both images are
correctly reported.

ASSIGNMENT 4
AIM: Write a program for Histogram Calculation and
equalization.
THEORY:
1. Histogram Calculation

Histogram: In image processing, the histogram is a graphical representation of the


distribution of pixel intensity values in an image. It shows the number of pixels for each
intensity level.

2. Histogram Equalization

Purpose: Histogram equalization is a technique to improve the contrast of an image by


redistributing the intensity levels. It aims to produce an image with a more uniform
distribution of pixel intensities.

ALGORITHM:
1. How to Calculate the Histogram:

 For Grayscale Images: Each pixel value in a grayscale image ranges from 0 (black) to
255 (white). To calculate the histogram:

o Create a vector hist of size 256 (for 8-bit images) initialized to zero.

o Traverse the image pixel by pixel.

o For each pixel intensity value, increment the corresponding index in the
histogram vector.
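A minimal sketch of this counting loop, assuming the grayscale image is already in the workspace as gray_img (a hypothetical variable name):

% Manual histogram calculation for an 8-bit grayscale image
counts = zeros(256, 1);                    % one bin per intensity level 0..255
for r = 1:size(gray_img, 1)
    for c = 1:size(gray_img, 2)
        level = double(gray_img(r, c));    % pixel intensity in 0..255
        counts(level + 1) = counts(level + 1) + 1;  % MATLAB indexing starts at 1
    end
end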

Example MATLAB code:

img = imread('image.png'); % Read the image

gray_img = rgb2gray(img); % Convert to grayscale if needed

hist = imhist(gray_img); % Calculate histogram

2. Histogram Equalization

How it Works:

1. Calculate the Cumulative Distribution Function (CDF):

o The CDF is derived from the histogram and represents the cumulative
probability of each intensity level.

o For each intensity level i, the CDF is computed as:

CDF(i) = (∑_{j=0}^{i} hist(j)) / N

where hist(j) is the frequency of intensity level j, and N is the total number of pixels.

2. Normalize the CDF:

o Normalize the CDF to the range of intensity values (0 to 255):

Normalized_CDF(i) = round(CDF(i) × (L − 1))

where L is the number of intensity levels (256 for 8-bit images).

3. Map the Intensity Levels:

o Replace each pixel's intensity value with the corresponding value from the
normalized CDF to achieve the equalization effect.
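A minimal sketch of these three steps, assuming counts holds the 256-bin histogram (for example from imhist) and gray_img is the 8-bit grayscale image:

% Manual histogram equalization following the CDF steps above
N = numel(gray_img);                               % total number of pixels
cdf = cumsum(counts) / N;                          % step 1: cumulative distribution
mapping = round(cdf * 255);                        % step 2: normalize to 0..255 (L = 256)
equalized = uint8(mapping(double(gray_img) + 1));  % step 3: map each pixel through the CDF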

MATLAB Functions:

 imhist(I): Computes the histogram of image I.

 histeq(I): Performs histogram equalization on image I.

 Example MATLAB CODE:

img = imread('image.png'); % Read the image

gray_img = rgb2gray(img); % Convert to grayscale

equalized_img = histeq(gray_img); % Perform histogram equalization

subplot(1,2,1), imshow(gray_img), title('Original Image');

subplot(1,2,2), imshow(equalized_img), title('Equalized Image');

MATHEMATICAL EXPRESSION:
CDF(i) = (∑_{j=0}^{i} hist(j)) / N

where hist(j) is the frequency of intensity level j, and N is the total number of pixels.

Normalized_CDF(i) = round(CDF(i) × (L − 1))

where L is the number of intensity levels (256 for 8-bit images).

ORIGINAL IMAGE:

(a) Standard MATLAB Function


OUTPUT:

TUTORIAL:

% EXPERIMENT 4(a) Image Histogram Calculation using MATLAB Functions
% Aryaman Purohit 22U02003%
clear;
clc;

image = imread('aryamanPhoto.jpg');
if size (image, 3) == 3
imageGray = rgb2gray(image);
else
imageGray = image;
end

[counts, binLocations] = imhist(imageGray);


figure;
imshow(imageGray);
title ('Grayscale Image');

figure;
bar (binLocations, counts, 'k');
xlabel ('Pixel Intensity');
ylabel ('Frequency');
title ('Histogram of Grayscale Image');
grid on;

(b) Enhancing contrast using histogram equalization


OUTPUT:

TUTORIAL:
% EXPERIMENT 4(b) Histogram Calculation and Equalization for Image Contrast Enhancement
% Aryaman Purohit 22U02003%
clear;
clc;

image = imread('aryamanPhoto.jpg');
if size(image, 3) == 3
imageGray = rgb2gray(image);
else
imageGray = image;
end
[counts, binLocations] = imhist(imageGray);
imageEqualized = histeq(imageGray);
[countsEq, binLocationsEq] = imhist(imageEqualized);
figure;
imshow(imageGray);
title('Original Grayscale Image');
figure;
bar(binLocations, counts, 'k');
xlabel('Pixel Intensity');
ylabel('Frequency');
title('Histogram of Original Grayscale Image');
grid on;
figure;
imshow(imageEqualized);
title('Equalized Grayscale Image');
figure;
bar(binLocationsEq, countsEq, 'k');
xlabel('Pixel Intensity');
ylabel('Frequency');
title('Histogram of Equalized Grayscale Image');
grid on;

CONCLUSION: The experiment successfully demonstrated the process of image


histogram calculation using MATLAB functions. The imhist function proved to be an
effective tool for obtaining and analyzing the histogram of grayscale images, offering insights
into pixel intensity distribution. This fundamental analysis is essential for further image
processing operations and understanding the visual characteristics of images. The skills
acquired through this experiment are applicable to a wide range of image processing
applications and will aid in more advanced analysis and manipulation of digital images.

In this experiment, we explored the process of calculating the histogram of an image using
MATLAB functions. The histogram is a crucial tool in image processing that provides insight
into the distribution of pixel intensities, which can be fundamental for various image analysis
tasks.

ASSIGNMENT 5
AIM: Write a program to execute arithmetic operations
on images.
THEORY:
In image processing, images are typically represented as matrices where each element corresponds to
a pixel value. MATLAB provides powerful tools for manipulating these matrices, allowing various
operations such as addition, subtraction, and contrast adjustment. These operations are fundamental
for tasks such as image blending, enhancement, and analysis.

1. Reading and Displaying Images:

o An image is read into MATLAB using the imread function, which loads the image
data into a matrix.

o Displaying the image can be done using the imshow function.

2. Image Addition:

o Adding two images involves adding the corresponding pixel values from each image.
This operation can be used for blending or averaging images.

o If the images are of different sizes, they need to be resized to match each other before
performing the operation.

3. Image Subtraction:

o Subtracting one image from another involves subtracting the pixel values of one
image from the corresponding pixels of the other image. This can highlight
differences between the images.

4. Adding a Constant:

o Adding a constant value to an image increases the brightness of the image by
increasing the pixel values. If the pixel value exceeds the maximum value (255 for 8-bit
images), it is usually clipped to 255.

5. Subtracting a Constant:

o Subtracting a constant value from an image decreases the brightness by reducing the
pixel values. If the pixel value goes below the minimum value (0), it is clipped to 0.

6. Image Negation:

o The negative of an image is obtained by subtracting each pixel value from 255. This
operation inverts the brightness levels, producing a negative image.

ALGORITHM:
1. Read and Display Images:

o Use imread to read two images.

o Use imshow to display the images.

2. Add Two Images:

o Use the + operator to add the pixel values of the two images.

o Display the resulting image using imshow.

3. Subtract Two Images:

o Use the - operator to subtract one image from the other.

o Display the result using imshow.

4. Add a Constant to an Image:

o Add 50 to every pixel value of the first image using image + 50.

o Display the modified image.

5. Subtract a Constant from an Image:

o Subtract 100 from every pixel value of the first image using image - 100.

o Display the modified image.

6. Obtain Negative of an Image:

o Subtract each pixel value from 255 using 255 - image.

o Display the negative image.

MATHEMATICAL EXPRESSION:
 Image Addition: Result=I1+I2

 Image Subtraction: Result=I1−I2

 Adding a Constant: Result=I+C

 Subtracting a Constant: Result=I−C

 Image Negation: Result=255−I

Where I1 and I2 are the original images, and C is the constant value.
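Note that uint8 arithmetic in MATLAB saturates automatically at 0 and 255. The Image Processing Toolbox also provides imadd, imsubtract, and imcomplement, which make the clipping explicit; a brief sketch, assuming image1 is a uint8 image as in the program below:

% Saturating arithmetic on uint8 images (results clip to the range [0, 255])
brighter = imadd(image1, 50);        % equivalent to image1 + 50 for uint8 inputs
darker   = imsubtract(image1, 100);  % equivalent to image1 - 100
negative = imcomplement(image1);     % equivalent to 255 - image1 for uint8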

ORIGINAL IMAGES:

OUTPUT:

TUTORIAL:
% EXPERIMENT 5
% Aryaman Purohit 22U02003
% 1) Read 2 images and display the images
image1 = imread('aryamanPhoto.jpg');
image2 = imread('aryamanPhoto2.jpg');
figure;
subplot(2, 3, 1);
imshow(image1);
title('Image 1');
subplot(2, 3, 2);
imshow(image2);
title('Image 2');
% Ensure the images have the same size
image2 = imresize(image2, [size(image1, 1), size(image1, 2)]);
% Ensure the images have the same number of channels (grayscale or RGB)
if size(image1, 3) ~= size(image2, 3)
if size(image1, 3) == 3
image2 = cat(3, image2, image2, image2);
else
image1 = rgb2gray(image1);
image2 = rgb2gray(image2);
end
end
% 1) Add the two images and display
added_image = image1 + image2; % Element-wise addition
subplot(2, 3, 3);
imshow(added_image);
title('Added Image');
% 2) Subtract the two images and display
subtracted_image = image1 - image2; % Element-wise subtraction
subplot(2, 3, 4);
imshow(subtracted_image);
title('Subtracted Image');

% 3) Add a constant value of 50 to one of the images and display


constant_addition = image2 + 50; % Adding a constant
subplot(2, 3, 5);
imshow(constant_addition);
title('Image 2 + 50');

% 4) Subtract a constant value of 100 from one of the images and display
constant_subtraction = image1 - 100; % Subtracting a constant

subplot(2, 3, 6);
imshow(constant_subtraction);
title('Image 1 - 100');

% 5) Obtain negative of one of the images (subtract the image from 255) and display
negative_image = 255 - image2; % Negative of the image
figure;
imshow(negative_image);
title('Negative Image of Image 2');

CONCLUSION:
This experiment demonstrates fundamental operations in image processing using MATLAB.
By performing arithmetic operations on images, we can blend, enhance, and manipulate them
in various ways. Understanding these basic operations is crucial for more advanced image
processing tasks such as filtering, edge detection, and segmentation. The experiment also
highlights the importance of considering pixel value constraints (0 to 255) to avoid
unintended visual artifacts.

ASSIGNMENT 6
AIM:
To write and execute programs for performing the following image logical operations:

 AND operation between two images


 OR operation between two images
 NAND operation between two images
 NOR operation between two images
 EXOR (XOR) operation between two images
 EXNOR (XNOR) operation between two images
 NOT operation on an image
 Calculate intersection of two images

THEORY:
Logical operations on images involve pixel-wise comparisons between two images or
modifications to a single image using basic logical functions. These operations treat image
pixel values as binary numbers and apply logic gates to them.

1. AND Operation: Compares two images bit-by-bit. The result is 1 only if both bits are
1.
2. OR Operation: Compares two images bit-by-bit. The result is 1 if at least one of the
bits is 1.
3. NAND Operation: The opposite of the AND operation. The result is 1 only if at least
one of the bits is 0.
4. NOR Operation: The opposite of the OR operation. The result is 1 only if both bits
are 0.
5. EXOR (XOR) Operation: The result is 1 if the two bits being compared are
different.
6. EXNOR (XNOR) Operation: The result is 1 if the two bits being compared are the
same.
7. NOT Operation: Inverts the bits of the image (i.e., 1 becomes 0 and vice versa).

8. Intersection of Two Images: The intersection is computed by performing an AND
operation between the two images.

These operations are useful in tasks such as masking, feature extraction, and image
segmentation.
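The program below first binarizes the images and applies logical operators; for 8-bit grayscale images the same gates can also be applied bit by bit with MATLAB's bitwise functions. A brief sketch, assuming img1 and img2 are uint8 images of equal size:

% Bit-level versions of the logical operations on uint8 images
and_bits  = bitand(img1, img2);          % AND of corresponding bits
or_bits   = bitor(img1, img2);           % OR
xor_bits  = bitxor(img1, img2);          % XOR
not_bits  = bitcmp(img1);                % NOT (bit-wise complement, 255 - img1 for uint8)
nand_bits = bitcmp(bitand(img1, img2));  % NAND
nor_bits  = bitcmp(bitor(img1, img2));   % NOR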

ALGORITHMS:
1. AND Operation:
 Input two images.
 Perform pixel-wise AND operation.

2. OR Operation:
 Input two images.
 Perform pixel-wise OR operation.

3. NAND Operation:
 Input two images.
 Perform pixel-wise AND operation, then invert the result.

4. NOR Operation:
 Input two images.
 Perform pixel-wise OR operation, then invert the result.

5. XOR Operation:
 Input two images.
 Perform pixel-wise XOR operation.

6. XNOR Operation:
 Input two images.
 Perform pixel-wise XOR operation, then invert the result.

7. NOT Operation:
 Input an image.
 Perform pixel-wise inversion (logical NOT).

8. Intersection of two images:


 Input two images.
 Perform pixel-wise AND operation to compute the intersection.

MATHEMATICAL EXPRESSION:

1. AND Operation: Result = I1 ∧ I2

2. OR Operation: Result = I1 ∨ I2

3. NAND Operation: Result = ¬(I1 ∧ I2)

4. NOR Operation: Result = ¬(I1 ∨ I2)

5. XOR Operation: Result = I1 ⊕ I2

6. XNOR Operation: Result = ¬(I1 ⊕ I2)

7. NOT Operation: Result = ¬I

8. Intersection of two images: Result = I1 ∧ I2

ORIGINAL IMAGES:

OUTPUT:

MATLAB CODE:
% EXPERIMENT 6
% Aryaman Purohit 22U02003
% Read two binary images
img1 = imread('image1.png');
img2 = imread('image2.png');
% Convert images to grayscale if they are not already
if size(img1, 3) == 3
img1 = rgb2gray(img1);
end
if size(img2, 3) == 3
img2 = rgb2gray(img2);
end
% Convert images to binary
img1 = imbinarize(img1);
img2 = imbinarize(img2);

% Ensure both images are of the same size


img1 = imresize(img1, [size(img2, 1) size(img2, 2)]);

% 1. AND Operation
and_image = img1 & img2;
figure, imshow(and_image), title('AND Operation');

% 2. OR Operation
or_image = img1 | img2;
figure, imshow(or_image), title('OR Operation');

% 3. NAND Operation
nand_image = ~(img1 & img2);
figure, imshow(nand_image), title('NAND Operation');

% 4. NOR Operation
nor_image = ~(img1 | img2);
figure, imshow(nor_image), title('NOR Operation');

% 5. XOR Operation
xor_image = xor(img1, img2);
figure, imshow(xor_image), title('XOR Operation');

% 6. XNOR Operation
xnor_image = ~xor(img1, img2);
figure, imshow(xnor_image), title('XNOR Operation');

% 7. NOT Operation on Image 1


not_image = ~img1;
figure, imshow(not_image), title('NOT Operation (Image 1)');

% 8. Intersection of Two Images (AND Operation)


intersection_image = img1 & img2;
figure, imshow(intersection_image), title('Intersection of Two Images');

% Calculate and display mean value of img1
mean_value = mean(img1(:)); % Calculate mean of the binary image
fprintf('Mean value of image 1: %.4f\n', mean_value);

CONCLUSION:
In this lab exercise, we successfully applied various logical operations such as AND, OR,
NAND, NOR, XOR, XNOR, and NOT on images. These operations provide critical
functionality in image segmentation, masking, and feature extraction. Additionally, the
intersection of two images was computed using the AND operation. These operations enable
powerful image analysis and processing, facilitating tasks such as object detection and pattern
recognition.

TUTORIAL:
1.) Prepare any two images of size 256x256 in Paint and save them in JPEG format with 256 gray
levels. Perform logical NOR and NAND operations between the two images. Write the
program and paste your results.

% Read two 256x256 grayscale images


img1 = imread('image1.png');
img2 = imread('image2.png');

% Convert to grayscale if images are not already


if size(img1, 3) == 3
img1 = rgb2gray(img1);
end

if size(img2, 3) == 3
img2 = rgb2gray(img2);
end

% Ensure both images are 256x256


img1 = imresize(img1, [256 256]);
img2 = imresize(img2, [256 256]);

% Perform logical NOR operation


nor_result = ~(img1 | img2);

% Perform logical NAND operation


nand_result = ~(img1 & img2);

% Save the results


imwrite(nor_result, 'nor_result.jpeg');
imwrite(nand_result, 'nand_result.jpeg');

% Display the results


figure, imshow(nor_result), title('NOR Result');
figure, imshow(nand_result), title('NAND Result');

ASSIGNMENT 7
AIM:
To write and execute programs for performing the following geometric transformations
on an image:

 Translation
 Rotation
 Scaling
 Reflection
 Shearing
 Shrinking
 Zooming

THEORY:
Geometric transformations modify the spatial arrangement of an image's pixels, allowing for
manipulation in ways that can change the perspective or orientation of the image. These
transformations are fundamental in image processing applications such as computer vision,
augmented reality, and graphics editing.

1. Translation: Moves the image in the x and y directions by specified amounts without
altering its size or orientation.

2. Rotation: Rotates the image about a specified center by a certain angle.

3. Scaling: Changes the size of the image by a specified scale factor, enlarging or
reducing its dimensions.

4. Reflection: Flips the image across a specified axis (horizontal or vertical).
5. Shearing: The image can be distorted (sheared) in either the x direction or the y direction.

6. Shrinking: Reduces the size of the image by scaling it down.


7. Zooming: Enlarges the image by scaling it up.

ALGORITHMS:
1. Translation:
 Define the translation vector (tx, ty).
 Create a transformation matrix for translation.
 Apply the transformation to the image.

2. Rotation:
 Define the angle of rotation.
 Create a transformation matrix for rotation.
 Apply the transformation to the image.

3. Scaling:
 Define the scale factors (sx, sy).
 Create a transformation matrix for scaling.
 Apply the transformation to the image.

4. Reflection:
 Define the axis of reflection (horizontal or vertical).
 Create a transformation matrix for reflection.
 Apply the transformation to the image.

5. Shrinking:
 Define the scale factor less than 1.
 Create a transformation matrix for shrinking.
 Apply the transformation to the image.

6. Zooming:
 Define the scale factor greater than 1.
 Create a transformation matrix for zooming.
 Apply the transformation to the image.

MATHEMATICAL EXPRESSION:
1. Translation:
 X’=x+tx
 Y’=y+ty

Where tx and ty are the translation distances along the x and y axes respectively

2. Rotation:
 X’=x.cos(θ)- y.sin(θ)
 Y’=x.sin(θ) +y.cos(θ)

Where θ is the angle of rotation

3. Scaling:
 X’=sx.x
 Y’=sy.y

Where sx and sy are the scaling factors along the x and y axes respectively

4. Reflection:
 X’=-x
 Y’=y
5. Shrinking:
 X’=s.x
 Y’=s.y

Where 0<s<1 is the shrink factor

6. Zooming:
 X’=s.x
 Y’=s.y

Where s>1 is the zoom factor
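As a sketch of the matrix-based route described in the algorithm, using the affine2d and imwarp functions (img is assumed to be an already loaded image):

% Translation expressed as a 3x3 affine matrix and applied with imwarp
tx = 50; ty = 30;
T = affine2d([1 0 0; 0 1 0; tx ty 1]);   % translation sits in the last row of the matrix
translated = imwarp(img, T);

% Scaling through the same mechanism
sx = 1.5; sy = 1.5;
S = affine2d([sx 0 0; 0 sy 0; 0 0 1]);
scaled = imwarp(img, S);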

MATLAB CODE:
% EXPERIMENT 7 - Geometric Transformations
% Aryaman Purohit 22U02003
% Read the input image
img = imread('image.png');

% 1. Translation using imtranslate


tx = 50; % Translation along x-axis
ty = 30; % Translation along y-axis
translated_image = imtranslate(img, [tx ty]);
figure, imshow(translated_image), title('Translated Image');

% 2. Scaling using imresize


sx = 1.5; % Scale factor along x-axis
sy = 1.5; % Scale factor along y-axis
scaled_image = imresize(img, [size(img, 1)*sy, size(img, 2)*sx]);
figure, imshow(scaled_image), title('Scaled Image');

% 3. Reflection (across y-axis) using flip


reflected_image = flip(img, 2); % Flip horizontally
figure, imshow(reflected_image), title('Reflected Image');

% 4. Shrinking using imresize


shrink_factor = 0.5; % Shrink factor
shrunk_image = imresize(img, shrink_factor);
figure, imshow(shrunk_image), title('Shrunk Image');

% 5. Zooming using imresize


zoom_factor = 2; % Zoom factor
zoomed_image = imresize(img, zoom_factor);
figure, imshow(zoomed_image), title('Zoomed Image');

ORIGINAL IMAGE:

OUTPUT:

MATLAB CODE 2:
% EXPERIMENT 7
% Aryaman Purohit 22U02003

clc
x=imread('image1.png');x=rgb2gray(x);
subplot(2,2,1); imshow(x);
title('Original Image'); y=imrotate(x,45,'bilinear','crop');
subplot(2,2,2);
imshow(y);
title('Image rotated by 45 degree');y=imrotate(x,90,'bilinear','crop');
subplot(2,2,3);
imshow(y);
title('Image rotated by 90 degree');
y=imrotate(x,-45,'bilinear','crop');
subplot(2,2,4);
imshow(y);
title('Image rotated by -45 degree');
x = imread('cameraman.tif');
tform = maketform('affine',[1 0 0; .5 1 0; 0 0 1]);
y = imtransform(x,tform);
figure;
subplot(2,2,1);
imshow(x);
title('Original Image');
subplot(2,2,2);
imshow(y);
title('Shear in X direction');
tform = maketform('affine',[1 0.5 0; 0 1 0; 0 0 1]);
y = imtransform(x,tform);
subplot(2,2,3);
imshow(y);
title('Shear in Y direction');
tform = maketform('affine',[1 0.5 0; 0.5 1 0; 0 0 1]);
y = imtransform(x,tform);
subplot(2,2,4); imshow(y); title('Shear in X-Y direction');

OUTPUT2:

OUTPUT 3:

CONCLUSION:
In this lab exercise, we successfully applied various geometric transformations, including
translation, rotation, scaling, reflection, shrinking, and zooming on images. Each
transformation modifies the spatial arrangement of pixels, enabling the manipulation of
images for various applications in image processing and computer vision. Understanding
these transformations is crucial for tasks like image alignment, object detection, and image
enhancement.

TUTORIAL:
1.) In the above program, modify the matrix for the geometric transformation and use the imtransform()
function with the modified matrix. Show the results and your conclusions.
x = imread('image2.png');

% Shear in X direction (Modified)


tform = maketform('affine', [1 0.2 0; 0 1 0; 0 0 1]);
y = imtransform(x, tform);
figure;
subplot(2,2,1);
imshow(x);
title('Original Image');
subplot(2,2,2);
imshow(y);
title('Shear in X direction (Modified)');
% Shear in Y direction (Modified)
tform = maketform('affine', [1 0 0; 0.2 1 0; 0 0 1]);
y = imtransform(x, tform);
subplot(2,2,3);
imshow(y);
title('Shear in Y direction (Modified)');
% Shear in X-Y direction (Modified)
tform = maketform('affine', [1 0.2 0; 0.2 1 0; 0 0 1]);
y = imtransform(x, tform);
subplot(2,2,4);
imshow(y);
title('Shear in X-Y direction (Modified)');

MODIFIED RESULT:

ASSIGNMENT 8

AIM:
To understand and implement frequency domain filtering techniques for images, specifically
focusing on analyzing spatial and intensity resolution.

THEORY:
Frequency Domain Filtering involves transforming an image from the spatial domain to
the frequency domain using techniques such as the Fourier Transform. In the frequency
domain, various filtering techniques can be applied to enhance or suppress certain
frequencies, which can affect the overall quality and characteristics of the image.

 Spatial Resolution refers to the detail an image holds. The higher the spatial
resolution, the more detail is present in the image, and this can be affected by the
filtering process.
 Intensity Resolution is related to the number of possible intensity values
(grayscale levels) that each pixel can have. Higher intensity resolution leads to
smoother gradients and better overall image quality.

ALGORITHMS:
1. Read the input image.
2. Convert the image to the frequency domain using the Fourier Transform.
3. Apply the desired frequency domain filter.
4. Convert the filtered image back to the spatial domain using the Inverse Fourier
Transform.
5. Display the original and filtered images.
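A compact sketch of these five steps with an ideal low-pass mask (gray_img is a hypothetical, already loaded grayscale image; the full program below builds the masks explicitly and also applies a band-pass filter):

% Frequency-domain low-pass filtering in a few lines
F = fftshift(fft2(double(gray_img)));            % steps 1-2: FFT, shifted so DC is centred
[M, N] = size(gray_img);
[u, v] = meshgrid(1:N, 1:M);
D = sqrt((u - N/2).^2 + (v - M/2).^2);           % distance of each frequency from the centre
H = double(D <= 30);                             % step 3: ideal low-pass mask with cutoff 30
G = F .* H;                                      % apply the filter
g = abs(ifft2(ifftshift(G)));                    % step 4: back to the spatial domain
imshow(g, []); title('Low-pass filtered image'); % step 5: display the result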

MATHEMATICAL EXPRESSION:
1. Fourier Transform:

F(u, v) = ∑_{x=0}^{M−1} ∑_{y=0}^{N−1} f(x, y) · e^{−j2π(ux/M + vy/N)}

2. Inverse Fourier Transform:

f(x, y) = (1/MN) ∑_{u=0}^{M−1} ∑_{v=0}^{N−1} F(u, v) · e^{j2π(ux/M + vy/N)}

3. Magnitude Spectrum:

|F(u, v)| = √( Re(F(u, v))² + Im(F(u, v))² )

MATLAB CODE:
% EXPERIMENT 8
% Aryaman Purohit 22U02003

myimg =imread('image1.png');
if(size(myimg,3)==3)
myimg=rgb2gray(myimg);
end
myimg = imresize(myimg,[256 256]);
myimg=double(myimg);
subplot(2,2,1);
imshow(myimg,[]),title('Original Image');
[M,N] = size(myimg); % Find size
%Preprocessing of the image
for x=1:M
for y=1:N
myimg1(x,y)=myimg(x,y)*((-1)^(x+y));
end
end
% Find FFT of the image
myfftimage = fft2(myimg1);
subplot(2,2,2);
imshow(myfftimage,[]); title('FFT Image');
% Define cut off frequency
low = 30;
band1 = 20;
band2 = 50;
%Define Filter Mask
mylowpassmask = ones(M,N);
mybandpassmask = ones(M,N);
% Generate values for the filter pass masks
for u = 1:M
for v = 1:N
tmp = ((u-(M/2))^2 +(v-(N/2))^2)^0.5;
if tmp > low
mylowpassmask(u,v) = 0;
end
if tmp > band2 || tmp < band1;
mybandpassmask(u,v) = 0;
end
end
end
% Apply the filter H to the FFT of the Image
resimage1 = myfftimage.*mylowpassmask;
resimage3 = myfftimage.*mybandpassmask;
% Apply the Inverse FFT to the filtered image
% Display the low pass filtered image
r1 = abs(ifft2(resimage1));
subplot(2,2,3);
imshow(r1,[]),title('Low Pass filtered image');
% Display the band pass filtered image
r3 = abs(ifft2(resimage3));
subplot(2,2,4);
imshow(r3,[]),title('Band Pass filtered image');
figure;
subplot(2,1,1);imshow(mylowpassmask);
subplot(2,1,2);imshow(mybandpassmask);

ORIGINAL IMAGE:

OUTPUT:

CONCLUSION:

The implementation of frequency domain filtering allows for effective manipulation of spatial
and intensity resolutions in images. By applying filters in the frequency domain, one can
enhance specific features of the image or suppress noise.

TUTORIAL:
1.) Instead of following pre-processing step in above program use fftshift function to shift
FFT in the center. See changes in the result and write conclusion.

% Preprocessing of the image
for x=1:M
for y=1:N
myimg1(x,y)=myimg(x,y)*((-1)^(x+y));
end
end

Remove the above step and use the following commands instead.


myfftimage = fft2(myimg);
myfftimage=fftshift(myfftimage);

MODIFIED RESULTS:

ASSIGNMENT 9
AIM: Write MATLAB code to perform edge detection using different edge detection
mask.

THEORY: Image segmentation subdivides an image into its component regions or objects.
Segmentation should stop when the objects of interest in an application have been isolated.
The basic purpose of segmentation is to partition an image into meaningful regions for a
particular application. The segmentation is based on measurements taken from the image,
which may be grey level, colour, texture, depth, or motion.

There are basically two types of image segmentation approaches:

[1] Discontinuity based: Identification of isolated points, lines or edges

[2] Similarity based: Group pixels which has similar characteristics by thresholding, region
growing, region splitting and merging.

Edge detection is a discontinuity-based image segmentation approach. Edges play a very
important role in many image-processing applications and provide the outline of an object.
In the physical plane, edges correspond to changes in material properties, intensity
variations, and discontinuities in depth. Pixels on the edges are called edge points. Edge
detection techniques try to find grey-level transitions.

First-order and second-order derivative operators can perform edge detection.
The standard first-order 3x3 line-detection masks and the popular edge-detection masks
from the literature are summarized in the sketch below.
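The mask figures are not reproduced in this copy; for reference, the standard 3x3 line-detection masks and the popular first-derivative edge masks from the literature can be written in MATLAB as:

% Standard 3x3 line-detection masks (horizontal, +45 degrees, vertical, -45 degrees)
line_h   = [-1 -1 -1;  2  2  2; -1 -1 -1];
line_p45 = [-1 -1  2; -1  2 -1;  2 -1 -1];
line_v   = [-1  2 -1; -1  2 -1; -1  2 -1];
line_m45 = [ 2 -1 -1; -1  2 -1; -1 -1  2];

% Popular first-derivative edge-detection masks (horizontal gradient direction)
prewitt_h = [-1 0 1; -1 0 1; -1 0 1];
sobel_h   = [-1 0 1; -2 0 2; -1 0 1];
roberts_a = [1 0; 0 -1];   % Roberts cross pair
roberts_b = [0 1; -1 0];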

MATLAB CODE:
%EXPERIMENT 9

% Aryaman Purohit 22U02003
% Program for edge detection using standard masks
A=imread('image2.png');
if(size(A,3)==3)
A=rgb2gray(A);end
imshow(A);figure;
BW = edge(A,'prewitt');
subplot(3,2,1); imshow(BW);
title('Edge detection with prewitt mask');
BW = edge(A,'canny');
subplot(3,2,2);
imshow(BW);
title('Edge detection with canny mask');
BW = edge(A,'sobel');
subplot(3,2,3);
imshow(BW);
title('Edge detection with sobel mask');
BW = edge(A,'roberts');
subplot(3,2,4); imshow(BW);
title('Edge detection with roberts mask');
BW = edge(A,'log');
subplot(3,2,5);imshow(BW);
title('Edge detection with log ');
BW = edge(A,'zerocross');
subplot(3,2,6); imshow(BW);
title('Edge detection with zerocross');

OUTPUT:

CONCLUSION: The MATLAB program for edge detection was executed successfully; edge maps were obtained with the Prewitt, Canny, Sobel, Roberts, LoG, and zero-cross detectors.

TUTORIAL:

1.) Get the masks for "Prewitt", "Canny", and "Sobel" from the literature and write a
MATLAB/SCILAB program for edge detection using 2D convolution.

2.) Write a MATLAB code for edge detection of a grayscale image without using in-built
function of edge detection.

SOL. 1.):

Masks from Literature:

 Prewitt Mask:
o Horizontal: [-1 0 1; -1 0 1; -1 0 1]
o Vertical: [1 1 1; 0 0 0; -1 -1 -1]

 Sobel Mask:
o Horizontal: [-1 0 1; -2 0 2; -1 0 1]
o Vertical: [1 2 1; 0 0 0; -1 -2 -1]

Canny: The Canny edge detection method involves several steps including noise
reduction, gradient calculation, non-maximum suppression, and edge tracing using
hysteresis. It does not have a simple convolution mask but uses Gaussian filtering and
complex algorithms for accurate edge detection.

MATLAB/SCILAB Program for Edge Detection using 2D Convolution:

% Read and convert image to grayscale

img = imread('image.jpg');

img_gray = rgb2gray(img);

img_gray = double(img_gray);

% Define Prewitt and Sobel masks

prewitt_h = [-1 0 1; -1 0 1; -1 0 1];

prewitt_v = [1 1 1; 0 0 0; -1 -1 -1];

sobel_h = [-1 0 1; -2 0 2; -1 0 1];

sobel_v = [1 2 1; 0 0 0; -1 -2 -1];

% Apply convolution for Prewitt

edge_prewitt_h = conv2(img_gray, prewitt_h, 'same');

edge_prewitt_v = conv2(img_gray, prewitt_v, 'same');

edge_prewitt = sqrt(edge_prewitt_h.^2 + edge_prewitt_v.^2);

% Apply convolution for Sobel

edge_sobel_h = conv2(img_gray, sobel_h, 'same');

edge_sobel_v = conv2(img_gray, sobel_v, 'same');

edge_sobel = sqrt(edge_sobel_h.^2 + edge_sobel_v.^2);

% Display the results

subplot(2,2,1); imshow(uint8(img_gray)); title('Original Image');

subplot(2,2,2); imshow(uint8(edge_prewitt)); title('Prewitt Edge Detection');

subplot(2,2,3); imshow(uint8(edge_sobel)); title('Sobel Edge Detection');

SOL. 2.):

% MATLAB Program for Edge Detection without In-built Function

% Read and convert image to grayscale

img = imread('image.jpg');

img_gray = rgb2gray(img);

img_gray = double(img_gray);

% Define Sobel mask for x and y direction

sobel_x = [-1 0 1; -2 0 2; -1 0 1];

sobel_y = [1 2 1; 0 0 0; -1 -2 -1];

% Compute gradients

Gx = conv2(img_gray, sobel_x, 'same');

Gy = conv2(img_gray, sobel_y, 'same');

% Compute magnitude of gradient

G = sqrt(Gx.^2 + Gy.^2);

% Normalize and display result

G = G / max(G(:)) * 255;

imshow(uint8(G));

title('Edge Detection without using built-in function');

ASSIGNMENT 10
AIM:

Write and execute programs to remove noise using spatial filters

 Understand 1-D and 2-D convolution process


 Use 3x3 Mask for low pass filter and high pass filter

THEORY:
 Spatial Filtering is sometimes also known as neighborhood processing.
Neighborhood processing is an appropriate name because you define a center
point and perform an operation (or apply a filter) to only those pixels in
predetermined neighborhood of that center point. The result of the operation is
one value, which becomes the value at the center point's location in the
modified image. Each point in the image is processed with its neighbors. The
general idea is shown below as a "sliding filter" that moves throughout the
image to calculate the value at the center location.

 In spatial filtering, we perform convolution of data with filter coefficients. In


image processing, we perform convolution of 3x3 filter coefficients with 2-D
image data. In signal processing, we perform convolution of 1-D data with set
of filter coefficients.

MATLAB CODE:
Spatial filtering of an image using 3x3 low-pass and high-pass masks:
% Experiment No. 10
% Aryaman Purohit 22U02003
% Spatial filtering using standard MATLAB function
% To apply spatial filters on given image
%Define spatial filter masks
L1=[1 1 1;1 1 1;1 1 1];
L2=[0 1 0;1 2 1;0 1 0];
L3=[1 2 1;2 4 2;1 2 1];
H1=[-1 -1 -1;-1 9 -1;-1 -1 -1];
H2=[0 -1 0;-1 5 -1;0 -1 0];
H3=[1 -2 1;-2 5 -2;1 -2 1];

% Read the test image and display it


myimage = imread('image1.png');
if(size(myimage,3)==3)
myimage=rgb2gray(myimage);
end
subplot(3,2,1);
imshow(myimage); title('Original Image');

L1 = L1/sum(L1(:));
filt_image= conv2(double(myimage),double(L1)); subplot(3,2,2);
imshow(filt_image,[]);
title('Filtered image with mask L1');

L2 = L2/sum(L2(:));
filt_image= conv2(double(myimage),double(L2));
subplot(3,2,3);

imshow(filt_image,[]); title('Filtered image with mask L2');

L3 = L3/sum(L3(:));
filt_image= conv2(double(myimage),double(L3));
subplot(3,2,4); imshow(filt_image,[]); title('Filtered image with mask L3');

filt_image= conv2(double(myimage),H1);
subplot(3,2,5); imshow(filt_image,[]); title('Filtered image with mask H1');
filt_image= conv2(double(myimage),H2);
subplot(3,2,6); imshow(filt_image,[]); title('Filtered image with mask H2');

figure; subplot(2,2,1); imshow(myimage); title('Original Image');
% The command fspecial() is used to create a mask
% The command imfilter() is used to apply the Gaussian filter mask to the image
% Create a Gaussian low pass filter of size 3

gaussmask = fspecial('gaussian',3);
filtimg = imfilter(myimage,gaussmask);
subplot(2,2,2); imshow(filtimg,[]), title('Output of Gaussian filter 3 X 3');

% Generate a lowpass filter of size 7 X 7


% The command conv2 is used to apply the filter
% This is another way of using the filter

avgfilt = [ 1 1 1 1 1 1 1;
1 1 1 1 1 1 1;
1 1 1 1 1 1 1;
1 1 1 1 1 1 1;
1 1 1 1 1 1 1;
1 1 1 1 1 1 1;
1 1 1 1 1 1 1];

avgfiltmask = avgfilt/sum(avgfilt(:));
convimage= conv2(double(myimage),double(avgfiltmask));

subplot(2,2,3); imshow(convimage,[]); title('Average filter with conv2()');

filt_image= conv2(double(myimage),H3);
subplot(2,2,4); imshow(filt_image,[]); title('Filtered image with mask H3');

OUTPUT:

CONCLUSION:
MATLAB program spatial filtering is executed.

Tutorial:
1.) Write the mathematical expression for spatial filtering of an image f(x,y)
of size M*N using a mask W of size a*b.
2.) What is the need for padding? What is zero padding? Why is it required?
3.) What is the effect of increasing the size of the mask?

SOL1.):
The mathematical expression for spatial filtering of an image f(x,y) of size
M×N using a mask W of size a×b (where a = 2m+1 and b = 2n+1) can be written as:

g(x, y) = ∑_{i=−m}^{m} ∑_{j=−n}^{n} W(i, j) · f(x + i, y + j)
Explanation:
 f(x,y) is the original image of size M×N.

 W(i,j) is the filter mask (or kernel) of size a×b.

 g(x,y) is the resulting filtered image.

 The filter mask is centered over the pixel (x,y) in the image, and the convolution is
performed by multiplying corresponding values of the image and the filter mask and
summing them.

 The indices i and j represent the relative positions of the mask's elements with
respect to the center of the mask.

This formula assumes that the mask W is symmetric about its center.
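A direct nested-loop sketch of this expression for a 3x3 mask (a = b = 3), assuming gray_img is a grayscale image and W a 3x3 mask already defined in the workspace:

% Direct implementation of g(x,y) = sum over i,j of W(i,j) * f(x+i, y+j)
f = double(gray_img);
[M, N] = size(f);
fp = padarray(f, [1 1], 0);          % zero padding so border pixels can also be filtered
g = zeros(M, N);
for x = 1:M
    for y = 1:N
        region  = fp(x:x+2, y:y+2);  % 3x3 neighbourhood centred on pixel (x, y)
        g(x, y) = sum(sum(W .* region));
    end
end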

SOL 2.): Need for Padding in Image Processing


Padding is required in image processing to ensure that the output image after applying
filters or convolution operations has the same dimensions as the input image. When a
filter (or kernel) is applied to an image, especially near the edges, there are fewer
neighboring pixels to operate on. Without padding, this can cause the output image to
be smaller than the original, or cause the filter not to fully process the border areas.

Zero Padding:
Zero padding is the process of adding extra pixels around the edges of an image, where
the additional pixels are set to zero. This allows the filter to process the entire image,
including the edge pixels.
For example, if an image of size MxN is padded with a border of zeros, the resulting
padded image will be larger than the original. For a 3x3 filter, 1-pixel-wide padding is
commonly added to the borders to maintain the output image size.
Why is Padding Required?
1. Preserve Image Size: Padding ensures that the output image after applying a filter is
of the same size as the input image, avoiding shrinkage of the image due to
convolution.
2. Handle Edge Pixels: Without padding, edge pixels would not be processed
effectively since there are fewer neighboring pixels near the borders.
3. Avoid Information Loss: Applying filters near the image boundaries without
padding can result in missing important details in those areas.
Thus, zero padding helps in keeping the spatial resolution of the image consistent while
performing operations like convolution.
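A brief sketch of zero padding and its effect on the output size, using the padarray function and the shape argument of conv2 (gray_img and a 3x3 averaging mask are assumed):

% Zero padding a grayscale image by one pixel on every side
mask   = ones(3, 3) / 9;                        % 3x3 averaging mask
padded = padarray(gray_img, [1 1], 0, 'both');  % adds a one-pixel border of zeros

% conv2 handles the same issue through its shape argument
full_out = conv2(double(gray_img), mask, 'full'); % output larger than the input
same_out = conv2(double(gray_img), mask, 'same'); % same size as the input (implicit zero padding)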

SOL3.):
Effect of Increasing the Size of the Mask in Image Processing:
When the size of the mask (or filter) increases, it affects the image processing operation
in the following ways:
1. Smoothing Effect (Blurring):
- Larger masks tend to average over a larger area of pixels, which leads to greater
smoothing or blurring of the image. This can reduce noise but also causes loss of fine
details and edges.
- For example, a 3x3 filter captures a smaller local area, while a 7x7 filter covers a
larger area, resulting in a stronger smoothing effect.

2. Computational Complexity:
- Increasing the size of the mask increases the number of operations required for
convolution. For an image of size MxN and a mask of size aXb, the number of
multiplications performed for each pixel increases with the mask size. This results in
higher computational time and resource usage.

3. Edge Preservation and Detail Loss:


- A smaller mask is more sensitive to small changes and preserves edges and finer
details of the image better.

- A larger mask tends to smooth out more details, making it harder to detect edges or
sharp transitions.
4. Noise Reduction:
- Larger masks are more effective at reducing noise, as they involve a broader region
of pixels for averaging or filtering. However, this noise reduction comes at the cost of
detail loss.
5. Change in Feature Detection:
- In edge detection or other feature extraction techniques, using a larger mask can
result in less sensitivity to fine edges or small features, as the mask considers a wider
region. Smaller edges or thin lines may be smoothed out or missed.

In summary, increasing the mask size enhances smoothing, noise reduction, and
filtering effects, but also increases computational cost and can result in the loss of finer
image details and sharp edges.