Chapter - 5.1 Image Segmentation, Object (Face) Detection

The document discusses image segmentation techniques, including simple methods like thresholding, seed growing, and split and merge, as well as advanced techniques such as K-means clustering. It defines segmentation as the process of identifying meaningful regions in an image for analysis or visualization. The document also covers the K-means clustering algorithm, explaining its iterative process for grouping similar data points based on distance metrics.


Computer Vision and Image processing

Chapter 5.1 cont..

Woldia University
IOT
March 2021
Outline
 Introduction to Segmentation: Definition of Segmentation
 Simple segmentation
-Thresholding (Binarization)
-Seed Growing (Gray level images)
-Split and merge (Gray level images)
 Advanced techniques
-K-means Clustering
 Object Detection: Face Detection using Haar Cascades
Image Segmentation

 Definitions:
Image Segmentation

 Complementary approach to edge detection:


 Edge detection: try to identify boundaries of objects (i.e. locations where there is a
change in “property”)
 Image segmentation: try to identify regions occupied by the objects.
Image Segmentation

 Goal of segmentation:
 Find “meaningful” regions, not just artefacts
 Regions for describing the contents of an image…i.e. for purposes of further image
analysis, efficient image compression or just for visualization.
Image Segmentation

 Formally: a segmentation of an image I is a partition of I into regions R1 … Rn such that the regions are disjoint, their union covers the whole image, and each region satisfies a homogeneity (similarity) predicate.

Image Segmentation

 Foreground: Object of Interest


 Background: all the other pixels
Image Segmentation

 Figure Ground Segmentation:


 Separate the foreground object (figure) from the background (ground)
Image Segmentation
Simple Segmentation: 1. Thresholding
 Simple case where the image contains objects with similar gray level, with a uniform
background.
Image Segmentation
Simple Segmentation: 1. Thresholding
 Convert gray level image f(x,y) into a binary image B(x,y) by applying a global threshold
(identified by some optimization method)

B(x,y) = 1 if f(x,y) < T
B(x,y) = 0 otherwise
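A minimal sketch of this thresholding rule in NumPy (the `binarize` name and the toy array are illustrative, not from the slides):

```python
import numpy as np

def binarize(f, T):
    # Slide convention: B(x, y) = 1 if f(x, y) < T, 0 otherwise
    return (f < T).astype(np.uint8)

f = np.array([[10, 200],
              [90,  30]])
print(binarize(f, T=100))  # [[1 0]
                           #  [1 1]]
```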
Image Segmentation
Simple Segmentation: 1. Thresholding
 Selection of optimal threshold: Otsu method
 Assumption: the image I contains two classes of pixels (bi-modal histogram)
 Find the threshold u so that the “overlap” between the two classes is minimized,
i.e. their combined spread (intra-class variance), defined as:

σw²(u) = P1(u)·σ1²(u) + P2(u)·σ2²(u)

σ1²(u): variance of pixels p with I(p) ≤ u
σ2²(u): variance of pixels p with I(p) > u
Image Segmentation
Simple Segmentation: 1. Thresholding

P1(u), P2(u): probability of each class for threshold u


(number of pixels of each class divided by total number of pixels of image I)
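The Otsu search can be sketched in NumPy by scanning every threshold u and keeping the one that minimizes the intra-class variance defined above (a straightforward, unoptimized implementation; function and variable names are our own):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Return the threshold u minimizing P1(u)*var1(u) + P2(u)*var2(u)."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()          # class probabilities per gray level
    vals = np.arange(levels)
    best_u, best_w = 0, np.inf
    for u in range(1, levels - 1):
        P1, P2 = p[:u + 1].sum(), p[u + 1:].sum()
        if P1 == 0 or P2 == 0:
            continue
        m1 = (vals[:u + 1] * p[:u + 1]).sum() / P1
        m2 = (vals[u + 1:] * p[u + 1:]).sum() / P2
        v1 = (((vals[:u + 1] - m1) ** 2) * p[:u + 1]).sum() / P1
        v2 = (((vals[u + 1:] - m2) ** 2) * p[u + 1:]).sum() / P2
        w = P1 * v1 + P2 * v2      # intra-class variance
        if w < best_w:
            best_u, best_w = u, w
    return best_u

# bimodal toy image: one dark cluster, one bright cluster
img = np.array([10, 11, 12, 199, 200, 201] * 5, dtype=np.uint8)
print(otsu_threshold(img))  # a threshold separating the two clusters
```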
Image Segmentation
Simple Segmentation: 1. Thresholding

Otsu method: examples (figures, including an example of multi-thresholding)
Image Segmentation
Simple Segmentation: 2. Region Growing/Seed Growing

 Works directly with the image pixels (not histogram)


 Procedure: recursive labelling
1. Start at one seed pixel.
2. Recursively add adjacent, not-yet-labelled pixels that satisfy a similarity criterion with the pixels already contained in the so-far grown region.
3. Stop when there are no further unlabelled pixels, adjacent to the grown region, that can be merged.
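The recursive labelling above can be sketched as a breadth-first search over 4-connected neighbours; here the similarity criterion is assumed to be |I(p) − I(seed)| ≤ T, one common choice:

```python
from collections import deque

def region_grow(img, seed, T):
    """4-connected region growing from one seed pixel.
    Adds neighbours whose gray level differs from the seed's by at most T."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region = {seed}
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - seed_val) <= T):
                region.add((nr, nc))
                q.append((nr, nc))
    return region

img = [[5, 6, 2],
       [6, 7, 2],
       [1, 1, 1]]
print(sorted(region_grow(img, (1, 1), T=2)))  # the connected high-valued block
```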
Image Segmentation
Simple Segmentation: 2. Region Growing/Seed Growing

 Need to define the adjacency relation

 Another important question:


 The outcome of the segmentation must be independent of the selected seed pixels (i.e. choosing different seed pixels must NOT result in a different segmentation).
Image Segmentation
Simple Segmentation: 2. Region Growing/Seed Growing

 Example:

5 6 6 7 7 7 6 6
6 7 6 7 5 5 4 7
6 6 4 4 3 2 5 6
5 4 5 4 2 3 4 6
0 3 2 3 3 2 4 7
0 0 0 0 2 2 5 6
1 1 0 1 0 3 4 4
1 0 1 0 2 3 5 4

Suppose the threshold is T < 3, and let our seed value be 6: select a seed point whose value is near the maximum value, which is 7 here.
Image Segmentation
Simple Segmentation: 2. Region Growing/Seed Growing

 Example (result):

Suppose the threshold is T < 3 and the seed value is 6 (a seed chosen near the maximum value, 7). (Result grid, with x marking the grown region: the connected pixels satisfying the similarity criterion with the seed.)
Image Segmentation
Simple Segmentation: 3. Region Split and Merge
 Works directly with the image pixels (not histogram)
 Try to break the image into a set of disjoint, “uniform” regions.
 Procedure
 Initially consider the whole image as the area of interest.
 Split:
1. If the area of interest is found not homogeneous (according to a similarity “split
criterion”), then it is split into 4 equal quadrants.
2. Apply step 1 to each quadrant until no further splitting occurs.
Image Segmentation
Simple Segmentation: 3. Region Split and Merge
 Merge:
3. Merge adjacent regions satisfying a similarity “merge criterion”
4. Repeat step 3 until no further merges are possible.
 Options
 Optional minimum size of resulting areas
(no further splitting)
 Merge applied after each split step instead.
Image Segmentation
Simple Segmentation: 3. Region Split and Merge
 Some examples of similarity criteria
 Split a region if:
 The standard deviation of its gray levels is above a threshold.
 Merge adjacent regions A and B if:
 The standard deviation of gray levels of A ∪ B is below a threshold, or
 The difference of the mean gray levels of A and B is below a threshold.
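The split phase can be sketched as a recursive quadtree; here the homogeneity criterion is assumed to be the gray-level range (max − min) of a block, matching the worked example later in the deck (the merge phase is omitted for brevity):

```python
import numpy as np

def quad_split(img, T):
    """Recursively split img into quadrants until each block's gray-level
    range (max - min) is at most T. Returns (row, col, height, width) blocks.
    Sketch of the split phase only; merging adjacent blocks would follow."""
    blocks = []

    def split(r, c, h, w):
        block = img[r:r + h, c:c + w]
        if block.max() - block.min() <= T or h == 1 or w == 1:
            blocks.append((r, c, h, w))   # homogeneous (or cannot split)
        else:
            h2, w2 = h // 2, w // 2       # 4 equal-ish quadrants
            split(r, c, h2, w2)
            split(r, c + w2, h2, w - w2)
            split(r + h2, c, h - h2, w2)
            split(r + h2, c + w2, h - h2, w - w2)

    split(0, 0, img.shape[0], img.shape[1])
    return blocks

img = np.zeros((4, 4))
img[:2, :2] = 10                      # one non-uniform quadrant
print(len(quad_split(img, 3)))        # splits once into 4 homogeneous blocks
```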
Image Segmentation
Simple Segmentation: 3. Region Split and Merge

Original image After quad splitting After merging


Image Segmentation
Simple Segmentation: 3. Region Split and Merge
 Notes:
 Unpleasant drawback: square region shape assumption.
 A boundary of a region is not necessarily an edge (and no edge nearby)
 All the “simple segmentation” methods presented separate regions having “uniform”
gray level.
Image Segmentation
Simple Segmentation: 3. Region Split and Merge

Example:

 No seed point is needed here.
 Split the image into equal parts; let us split it into 4 parts.
Image Segmentation
Simple Segmentation: 3. Region Split and Merge

Example:

 Assume threshold <= 3
 The first step is splitting.
 Split criterion: the gray-level range of a region (max value − min value) must satisfy the threshold.
Image Segmentation
Simple Segmentation: 3. Region Split and Merge

Example:
 Assume threshold <= 3
 The second step is Merging.
 In merging, we consider adjacent regions.
 To merge 2 regions: the max value of one minus the min value of the other (and vice versa) must satisfy the threshold.
Image Segmentation
K-means clustering
 Clustering!
Clustering is grouping similar objects together.
Image Segmentation
K-means clustering

Clustering Example!
 Differentiate different species of flower.
Image Segmentation
K-means clustering

Clustering Example!
 Clustering articles, mostly done by search engines.
Image Segmentation
K-means clustering

Clustering Example!
 Malignant vs Benign?
Image Segmentation
K-means clustering
 Image segmentation is the classification of an image into different groups.
 Much research has been done in the area of image segmentation using clustering.
 There are different methods, and one of the most popular is the k-means clustering algorithm.
 K-means clustering is an unsupervised algorithm, used to segment the area of interest from the background.
Image Segmentation
K-means clustering
 Clustering can be defined as the grouping of data points based on some commonality or similarity between the points. One of the simplest methods is K-means clustering.
 In this method, the number of clusters is initialized and the center of each cluster is randomly chosen.
 The Euclidean distance between each data point and all cluster centers is computed, and each data point is assigned to the cluster with the nearest center.
 The new center of each cluster is then computed and the Euclidean distances are recalculated. This procedure iterates until convergence is reached.
Image Segmentation
K-means clustering algorithm
 Input: K, set of points X1 … Xn
 Place centroids c1 … ck at random locations
 Repeat:
 Assign each point Xi to the cluster with the nearest centroid cj
 Update each centroid cj to the mean of the points assigned to it
 Stop when none of the cluster assignments change

Cost: O(#iterations * #clusters * #instances * #dimensions)
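The steps above can be sketched in NumPy (random initialization by picking k data points; the function and variable names are our own):

```python
import numpy as np

def kmeans(X, k, seed=0, max_iter=100):
    """Basic k-means: assign each point to the nearest centroid (Euclidean),
    recompute centroids, stop when the assignments no longer change."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    labels = None
    for _ in range(max_iter):
        # distance of every point to every centroid, shape (n, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new = d.argmin(axis=1)
        if labels is not None and np.array_equal(new, labels):
            break                         # assignments unchanged -> converged
        labels = new
        for j in range(k):
            if np.any(labels == j):       # keep old centroid if cluster empty
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
labels, C = kmeans(X, 2)
print(labels)  # two well-separated blobs end up in two clusters
```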
Image Segmentation
K-means clustering Example 1

 Place 2 centroids at random locations.
 Which data point is closer to which centroid (red or yellow)?
 We use the Euclidean distance to measure the distance between each data point and the 2 centroids.
Image Segmentation
K-means clustering Example 1
Image Segmentation
K-means clustering Example 1

Iterations 1–4 (figures)

- You stop here, because the algorithm has converged!
Image Segmentation
K-means clustering Example 2

Randomly assign the centroid values.
Image Segmentation
K-means clustering Example 2
Image Segmentation
K-means clustering Example 2
 Iteration simply means repeating the same steps, with the intention of getting closer to the desired result with each repetition.
Iteration 1
Image Segmentation
K-means clustering Example 2
Iteration 1
Image Segmentation
K-means clustering Example 2
Iteration 2
Image Segmentation
K-means clustering Example 2
Iteration 2

Convergence

When should k-means stop iterating?

 When the centroids, and thus the boundaries, change by no more than a small tolerance value.
Image Segmentation
K-means clustering Example 3

Q. Suppose your dataset is {2, 3, 4, 10, 11, 12, 20, 25, 30} and k = 2; solve it using k-means clustering.
Soln.
Step 1: Take initial mean values (randomly)
Step 2: Assign each number to the cluster of the nearest mean
Step 3: Recompute the means and repeat step 2 until we get the same means (convergence)
Image Segmentation
K-means clustering Example 3
Data = {2, 3, 4, 10, 11, 12, 20, 25, 30}, k = 2
m1 = 4, m2 = 12
k1 = {2, 3, 4}, k2 = {10, 11, 12, 20, 25, 30}
Now calculate the mean of each cluster:
m1 = 3, m2 = 18
k1 = {2, 3, 4, 10}, k2 = {11, 12, 20, 25, 30}
m1 = 4.75 ≈ 5, m2 = 19.6 ≈ 20
Image Segmentation
K-means clustering Example 3
Data = {2, 3, 4, 10, 11, 12, 20, 25, 30}, k = 2
m1 = 4.75 ≈ 5, m2 = 19.6 ≈ 20
k1 = {2, 3, 4, 10, 11, 12} k2 = {20, 25, 30}
m1 = 7 m2 = 25
k1 = {2, 3, 4, 10, 11, 12} k2 = {20, 25, 30}
m1 = 7 m2 = 25 Thus, we are getting same mean and we have to stop.
So, our final cluster is: k1 = {2, 3, 4, 10, 11, 12}
k2 = {20, 25, 30}
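The iterations of Example 3 can be reproduced in a few lines of plain Python (a sketch with the given initial means 4 and 12; ties are assigned to the first cluster):

```python
def kmeans_1d(points, m1, m2):
    """Iterate assignment / mean update until the two means stop changing."""
    while True:
        k1 = [p for p in points if abs(p - m1) <= abs(p - m2)]
        k2 = [p for p in points if abs(p - m1) > abs(p - m2)]
        n1, n2 = sum(k1) / len(k1), sum(k2) / len(k2)
        if (n1, n2) == (m1, m2):          # means unchanged -> converged
            return k1, k2, m1, m2
        m1, m2 = n1, n2

k1, k2, m1, m2 = kmeans_1d([2, 3, 4, 10, 11, 12, 20, 25, 30], 4, 12)
print(k1, k2, m1, m2)  # [2, 3, 4, 10, 11, 12] [20, 25, 30] 7.0 25.0
```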
Image Segmentation
K-means clustering

 For the pixel at position (1,1) with value [24, 64, 186], the minimum distance is 292.6072, so the pixel belongs to cluster 1.
Image Segmentation
K-means clustering

Exercise. Use the k-means algorithm and Euclidean distance to cluster the
following 8 examples into 3 clusters:
A1= (2,10), A2= (2,5), A3= (8,4), A4= (5,8), A5= (7,5), A6= (6,4), A7=
(1,2), A8= (4,9).
The distance matrix based on the Euclidean distance (d) is given below:
d is the distance between a and b.
Hint: d(a, b) = sqrt((xb − xa)² + (yb − ya)²)
Image Segmentation
K-means clustering Exercise

 Suppose that the initial Centroid (centres of each cluster) are C1= A1, C2= A4
and C3= A7. Run the k-means algorithm and at the end of each iteration show:
a) The new clusters (i.e. the examples belonging to each cluster)
b) The centres of the new clusters.
c) Draw a 10 by 10 space with all the 8 points and show the clusters after the first
epoch and the new centroids.
d) How many more iterations are needed to converge? Draw the result for each
epoch.
Image Segmentation
K-means clustering Exercise

Soln:
Image Segmentation
epoch1 – start:
K-means clustering Exercise
 A1:
 d(A1, seed1) = 0, as A1 is seed1
 d(A1, seed2) = sqrt(13) > 0
 d(A1, seed3) = sqrt(65) > 0
 ⇒ A1 ∈ cluster1
 A2:
 d(A2, seed1) = sqrt(25) = 5
 d(A2, seed2) = sqrt(18) = 4.24
 d(A2, seed3) = sqrt(10) = 3.16 ← smallest
 ⇒ A2 ∈ cluster3
 A3:
 d(A3, seed1) = sqrt(72) = 8.49
 d(A3, seed2) = sqrt(25) = 5 ← smallest
 d(A3, seed3) = sqrt(53) = 7.28
 ⇒ A3 ∈ cluster2
 A4:
 d(A4, seed1) = sqrt(13)
 d(A4, seed2) = 0, as A4 is seed2
 d(A4, seed3) = sqrt(52) > 0
 ⇒ A4 ∈ cluster2
Image Segmentation
K-means clustering Exercise
 A5:
 d(A5, seed1) = sqrt(50) = 7.07
 d(A5, seed2) = sqrt(13) = 3.60 ← smallest
 d(A5, seed3) = sqrt(45) = 6.70
 ⇒ A5 ∈ cluster2
 A6:
 d(A6, seed1) = sqrt(52) = 7.21
 d(A6, seed2) = sqrt(17) = 4.12 ← smallest
 d(A6, seed3) = sqrt(29) = 5.38
 ⇒ A6 ∈ cluster2
 A7:
 d(A7, seed1) = sqrt(65) > 0
 d(A7, seed2) = sqrt(52) > 0
 d(A7, seed3) = 0, as A7 is seed3
 ⇒ A7 ∈ cluster3
 A8:
 d(A8, seed1) = sqrt(5)
 d(A8, seed2) = sqrt(2) ← smallest
 d(A8, seed3) = sqrt(58)
 ⇒ A8 ∈ cluster2
end of epoch1
Image Segmentation
K-means clustering Exercise
a) new clusters: 1: {A1}, 2: {A3, A4, A5, A6, A8}, 3: {A2, A7}
b) centers of the new clusters (used in the 2nd epoch):
C1 = (2, 10), C2 = ((8+5+7+6+4)/5, (4+8+5+4+9)/5) = (6, 6), C3 = ((2+1)/2, (5+2)/2) = (1.5, 3.5)
Image Segmentation
K-means clustering Exercise
c), d) We would need two more epochs. After the 2nd epoch the results would be:
1: {A1, A8}, 2: {A3, A4, A5, A6}, 3: {A2, A7}
with centers C1 = (3, 9.5), C2 = (6.5, 5.25) and C3 = (1.5, 3.5).
 After the 3rd epoch, the results would be: 1: {A1, A4, A8}, 2: {A3, A5, A6}, 3: {A2, A7}
with centers C1 = (3.67, 9), C2 = (7, 4.33) and C3 = (1.5, 3.5).
Object Detection
Face Detection using Haar Cascades
 Goal:
 We will see the basics of face detection using Haar Feature-based Cascade Classifiers.
 We will extend the same for eye detection etc.
Face Detection

 To achieve good performance, algorithms must minimize both the false positive and false negative rates.
 Viola–Jones, invented in 2001 by the computer scientists Paul Viola and Michael Jones, was the first practical real-time face detection algorithm.
 It was published in their 2001 paper, “Rapid Object Detection using a Boosted Cascade of Simple Features”.
 It is a machine learning based approach where a cascade function is trained from many positive and negative images, and is then used to detect objects in other images.
Face Detection

 Initially, the algorithm needs a lot of positive images (images of faces) and negative
images (images without faces) to train the classifier.
 Then we need to extract features from it.
 The Viola–Jones algorithm has four main parts to perform face detection: Haar features, the integral image, the AdaBoost classifier, and the cascade classifier.
Face Detection

 Haar features detect the presence of features in the image.
 Each feature in a sub-window is calculated by subtracting the total sum of pixels under the white rectangle from the total sum of pixels under the black rectangle.
 Suppose I and P denote an image and a pattern, respectively, of the same size N x N. The feature value of pattern P on image I is then the sum of I over the black pixels of P minus the sum of I over the white pixels of P.
Face Detection

Type 1 Type 2 Type 3 Type 4 Type 5

 In the Haar features used in Viola–Jones, the black color has a value of +1 and the white color a value of −1.
 Viola–Jones uses a 24x24 window; if we examine all the possible parameters generated from the Haar features, such as type, scale, and position, we get more than 160,000 features in this window.
Face Detection

Applying on a given image


Face Detection

 The second step is the integral image: we do not need to sum over all the pixels, but only use corner values. The integral image value at point (x, y) is the sum of the pixels above and to the left of (x, y).
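A sketch of the integral image and the four-corner rectangle sum in NumPy (function names are our own):

```python
import numpy as np

def integral_image(img):
    """ii(x, y) = sum of img over the rectangle from (0, 0) to (x, y) inclusive,
    computed as a cumulative sum down rows and then across columns."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum over rows r0..r1 and cols c0..c1 using only four corner lookups."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]   # added back: subtracted twice above
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 9 + 10 = 30
```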
Face Detection

 The third step is AdaBoost learning.

 The Haar features yield more than 160,000 feature values in a 24x24 window.
 But in fact, only a small set of these features is important for identifying whether the image is a face or not.
 Basically, AdaBoost builds a classifier that is a weighted sum of various weak classifiers.
 Finally, AdaBoost constructs a strong classifier by combining these weak classifiers.
Face Detection

 F(x) = α1·f1(x) + α2·f2(x) + … + αn·fn(x), where F(x) is the strong classifier and fi(x) are the weak classifiers. The output of a weak classifier is either 0 or 1: it has value 1 if it classifies the image as a face, and 0 otherwise.
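A toy sketch of this weighted vote (the weak classifiers and weights below are hypothetical; a common convention declares a face when the weighted sum reaches half the total weight):

```python
def strong_classify(x, weak_classifiers, alphas):
    """F(x): weighted sum of weak classifiers fi(x) in {0, 1}; declare a face
    when the weighted vote reaches half the total weight."""
    vote = sum(a * f(x) for f, a in zip(weak_classifiers, alphas))
    return 1 if vote >= 0.5 * sum(alphas) else 0

# hypothetical weak classifiers for illustration only
weaks = [lambda x: 1, lambda x: 0, lambda x: 1]
alphas = [2.0, 1.0, 2.0]
print(strong_classify(None, weaks, alphas))  # 1 (weighted vote 4.0 >= 2.5)
```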
Face Detection

 The last step of Viola Jones algorithm is Cascading.


 After AdaBoost, the cascade classifier follows; it takes all the features from AdaBoost.
 The cascade classifier divides the features into stages (each stage is a strong classifier previously obtained from AdaBoost).
 When a window is classified as a non-face in any single stage, it is immediately discarded. But if a window is classified as possibly being a face, it goes on to the next stage in the cascade.
Face Detection

Cascading workflow
Face Detection

(a) (b)

Supervised examples (24 x 24 gray scale images): (a) positive example (b) negative example.
Face Detection

 Apart from this, the Viola–Jones algorithm has some limitations to be improved.
 Data preparation and feature extraction are the major sources of limitation.
 The authors suggest that using an SVM classifier with the cascade will speed things up, both in learning time and as the amount of sample data increases.
Face Detection

 Viola and Jones created an algorithm that detects upright faces with reduced computational complexity.
 Their detection is based on a cascade classifier. In practice, the cascade detector finds only upright faces; it fails when detecting faces from different angles, such as side views, or when occluded faces are present.

This is the motivation for us to use a deep learning approach!
Haar-cascade Detection in OpenCV

 OpenCV comes with a trainer as well as a detector. If you want to train your own classifier for any object, such as cars or planes, you can use OpenCV to create one.
 OpenCV already contains many pre-trained classifiers for faces, eyes, smiles, etc. The XML files are stored in the opencv/data/haarcascades/ folder.
 First we need to load the required XML classifiers, then load our input image (or video) in grayscale mode.
Haar-cascade Detection in OpenCV

 The result looks like this (figure): here the input was just a single image.
Haar-cascade Detection in OpenCV

…Let us see it on real-time video…

 Reading material
- Paul Viola and Michael Jones, “Robust Real-time Face Detection”, Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 2001, Volume 2, p. 747.
- Binyam Tesfahun Liyew, “Applying a Deep Learning Convolutional Neural Network (CNN) Approach for Building a Face Recognition System: A Review”, Journal of Emerging Technologies and Innovative Research, Volume 4, Issue 12, December 2017, India.
