
AIS412

Lecture 3: Corner
Detection
MUSTAFA ELATTAR

*This course material is adapted from Carnegie Mellon University's Computer Vision course and Stanford University's CNN for Visual Recognition course.
Detecting corners

Overview
• Why detect corners?
• Harris corner detector.
• Multi-scale detection.
• Multi-scale blob detection.

Why detect corners?

Why detect corners?
• Image alignment (homography, fundamental matrix)
• 3D reconstruction
• Motion tracking
• Object recognition
• Indexing and database retrieval
• Robot navigation

Planar object instance recognition
[Figure: a database of planar objects; instance recognition]

3D object recognition
[Figure: a database of 3D objects; 3D object recognition]

Recognition under occlusion
Location Recognition

Robot Localization

Image matching

NASA Mars Rover images

Where are the corresponding points?


Pick a point in the image.
Find it again in the next image.

What type of feature would you select?


a corner
Harris corner detector

How do you find a corner?

How do you find a corner? [Moravec 1980]

Easily recognized by looking through a small window

Shifting the window should give large change in intensity

“flat” region: no change in all directions
“edge”: no change along the edge direction
“corner”: significant change in all directions

[Moravec 1980]
Design a program to detect corners
(hint: use image gradients)

Finding corners

1. Compute image gradients over a small region
2. Subtract the mean from each image gradient
3. Compute the covariance matrix
4. Compute eigenvectors and eigenvalues (a.k.a. PCA)
5. Use a threshold on the eigenvalues to detect corners
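The five steps above can be sketched for a single patch in numpy. This is a hedged sketch: the window is the whole patch, and the eigenvalue threshold is an illustrative choice, not a value from the slides.

```python
import numpy as np

def is_corner(patch, threshold=0.1):
    """Classify one grayscale patch via the eigenvalues of its
    gradient covariance matrix (steps 1-5 above)."""
    # 1. image gradients over the patch (row derivative, col derivative)
    Iy, Ix = np.gradient(patch.astype(float))
    ix, iy = Ix.flatten(), Iy.flatten()
    # 2. subtract the mean from each gradient array
    ix -= ix.mean()
    iy -= iy.mean()
    # 3. covariance matrix of the gradients
    M = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    # 4. eigenvalues (M is symmetric, so eigvalsh applies)
    evals = np.linalg.eigvalsh(M)
    # 5. threshold on the smallest eigenvalue
    return evals.min() > threshold

# toy patches: a corner (bright quadrant), an edge (bright half), and flat
corner = np.zeros((8, 8)); corner[4:, 4:] = 1.0
edge = np.zeros((8, 8)); edge[4:, :] = 1.0
flat = np.zeros((8, 8))
```

Only the corner patch has two large eigenvalues; the edge has one, and the flat patch has none.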
1. Compute image gradients over a small region
(not just a single pixel)

[Figure: the array of x gradients and the array of y gradients over the region]

[Figure: visualization of gradients (the image, its X derivative, and its Y derivative)]
What does the distribution tell you about the region?
The distribution reveals edge orientation and magnitude.
How do you quantify orientation and magnitude?
2. Subtract the mean from each image gradient

[Figure: intensities plotted along a line for a constant-intensity region and for an intensity gradient; after subtracting the mean, the data is centered and the ‘DC’ offset of the image gradients is removed.]
3. Compute the covariance matrix

M = sum over the window of:

    [ Ix·Ix   Ix·Iy ]
    [ Ix·Iy   Iy·Iy ]

where Ix and Iy are the arrays of x and y gradients.
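In code, the covariance matrix is just sums of products of the gradient arrays. A minimal numpy sketch; the mean-subtracted gradient values below are made up for illustration:

```python
import numpy as np

# illustrative mean-subtracted gradient arrays for one small window
Ix = np.array([[0.5, -0.5], [0.5, -0.5]])
Iy = np.array([[0.5, 0.5], [-0.5, -0.5]])

# M = sum over the window of [Ix*Ix, Ix*Iy; Ix*Iy, Iy*Iy]
M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
              [np.sum(Ix * Iy), np.sum(Iy * Iy)]])

# equivalently, stack the gradients into an n x 2 matrix G; then M = G^T G,
# which is why this step connects to PCA on the gradient data
G = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
assert np.allclose(M, G.T @ G)
```

The `G.T @ G` form makes the PCA view explicit: the eigenvectors of M are the principal directions of the gradient distribution.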
4. Compute eigenvalues and eigenvectors

M e = λ e   (eigenvector e, eigenvalue λ)

1. Compute the determinant of (M − λI)   (returns a polynomial in λ)
2. Find the roots of det(M − λI) = 0   (returns the eigenvalues)
3. For each eigenvalue λi, solve (M − λi I) ei = 0   (returns the eigenvectors)
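The three sub-steps can be checked against numpy's symmetric eigensolver; the matrix below is an arbitrary example, not from the slides:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric, like the covariance matrix

# 1-2. det(M - lambda*I) = lambda^2 - trace(M)*lambda + det(M); find its roots
coeffs = [1.0, -np.trace(M), np.linalg.det(M)]
eigvals = np.sort(np.roots(coeffs).real)

# 3. numpy's eigh does all three steps at once for symmetric matrices
vals, vecs = np.linalg.eigh(M)
assert np.allclose(np.sort(vals), eigvals)
for lam, e in zip(vals, vecs.T):
    assert np.allclose(M @ e, lam * e)   # verifies M e = lambda e
```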
Visualization as an ellipse

Since M is symmetric, we have M = R^(-1) diag(λmax, λmin) R

We can visualize M as an ellipse with axis lengths determined by the eigenvalues and orientation determined by R.

Ellipse equation: x M xᵀ = const

[Figure: ellipse with axis length (λmax)^(-1/2) along the direction of the fastest change and (λmin)^(-1/2) along the direction of the slowest change.]
Interpreting eigenvalues

What kind of image patch does each region represent?

[Figure: the (λ1, λ2) plane, with the regions λ2 >> λ1 and λ1 >> λ2 marked.]
Interpreting eigenvalues

[Figure: the (λ1, λ2) plane]
- λ1 and λ2 both small: ‘flat’ region
- λ2 >> λ1: ‘horizontal’ edge
- λ1 >> λ2: ‘vertical’ edge
- λ1 ~ λ2, both large: corner
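The regions of the figure can be turned into a small classifier. A hedged sketch: the flat threshold and the ratio standing in for ">>" are illustrative choices.

```python
def classify(l1, l2, flat_thresh=1e-3, ratio=10.0):
    """Map the eigenvalues (l1, l2) of M to 'flat', 'edge', or 'corner'."""
    big, small = max(l1, l2), min(l1, l2)
    if big < flat_thresh:
        return "flat"                 # both eigenvalues tiny
    if big / (small + 1e-12) > ratio:
        return "edge"                 # one eigenvalue dominates
    return "corner"                   # both large and comparable

assert classify(0.0, 0.0) == "flat"
assert classify(5.0, 0.01) == "edge"     # lambda1 >> lambda2
assert classify(4.0, 5.0) == "corner"    # lambda1 ~ lambda2
```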
5. Use threshold on eigenvalues to detect corners

Think of a function to score ‘cornerness’: it should be small in flat regions and large at strong corners.

One option: use the smallest eigenvalue as the response function, R = min(λ1, λ2). The eigenvalues need to be bigger than a threshold.

Can compute this more efficiently, without explicitly computing the eigenvalues: R = det(M) − k·trace(M)², which gives R > 0 at corners, R < 0 along edges, and |R| small in flat regions.
Harris & Stephens (1988): R = λ1·λ2 − k(λ1 + λ2)²

Kanade & Tomasi (1994): R = min(λ1, λ2)

Nobel (1998): R = λ1·λ2 / (λ1 + λ2 + ε)
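The three response functions can be compared directly on eigenvalue pairs. A hedged sketch: k = 0.05 and the ε value are conventional illustrative choices, not prescribed by the slides.

```python
def harris(l1, l2, k=0.05):
    """Harris & Stephens (1988): det(M) - k * trace(M)^2 in eigenvalues."""
    return l1 * l2 - k * (l1 + l2) ** 2

def kanade_tomasi(l1, l2):
    """Kanade & Tomasi (1994): the smallest eigenvalue."""
    return min(l1, l2)

def nobel(l1, l2, eps=1e-6):
    """Nobel (1998): det(M) / (trace(M) + eps)."""
    return (l1 * l2) / (l1 + l2 + eps)

# corner: both eigenvalues large -> every score is high
assert harris(10, 10) > 0
# edge: one eigenvalue dominates -> Harris goes negative, the others stay ~0
assert harris(10, 0) < 0
assert kanade_tomasi(10, 0) == 0
assert nobel(10, 0) < 1e-3
```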
Harris Detector
C. Harris and M. Stephens. “A Combined Corner and Edge Detector.” 1988.

1. Compute x and y derivatives of the image
2. Compute products of derivatives at every pixel
3. Compute the sums of the products of derivatives at each pixel
4. Define the matrix M at each pixel
5. Compute the response of the detector at each pixel
6. Threshold on the value of R; apply non-maximum suppression.
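The six steps can be sketched end to end with numpy and scipy. This is a hedged sketch: the Gaussian window, k = 0.05, the threshold ratio, and the synthetic test image are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def harris_response(img, sigma=1.0, k=0.05):
    """Steps 1-5: derivatives, products, windowed sums, M, response R."""
    img = img.astype(float)
    Ix = ndimage.sobel(img, axis=1)              # 1. x and y derivatives
    Iy = ndimage.sobel(img, axis=0)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy    # 2. products of derivatives
    # 3-4. Gaussian-windowed sums give the entries of M at each pixel
    Sxx = ndimage.gaussian_filter(Ixx, sigma)
    Syy = ndimage.gaussian_filter(Iyy, sigma)
    Sxy = ndimage.gaussian_filter(Ixy, sigma)
    # 5. R = det(M) - k * trace(M)^2, computed per pixel
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

def detect(img, thresh_ratio=0.1):
    """Step 6: threshold on R, then 3x3 non-maximum suppression."""
    R = harris_response(img)
    mask = (R == ndimage.maximum_filter(R, size=3)) & (R > thresh_ratio * R.max())
    return np.argwhere(mask)

# synthetic image: a bright square should fire near its four corners
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
pts = detect(img)
```

Running `detect` on the synthetic square returns points clustered at the square's four corners, while the edges (negative R) and flat background (R near zero) are suppressed.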
Yet another option: how do you write this equivalently using determinant and trace? Since λ1·λ2 = det(M) and λ1 + λ2 = trace(M), the Harris criterion can be written R = det(M) − k·trace(M)².

[Figures: the different criteria compared; the Harris criterion; the corner response; the thresholded corner response; non-maximal suppression.]
Harris corner response is invariant to rotation

The ellipse rotates, but its shape (eigenvalues) remains the same.

The corner response R is invariant to image rotation.
Harris corner response is partially invariant to intensity changes

• Partial invariance to affine intensity change I → a·I + b
• Only derivatives are used ⇒ invariance to the intensity shift I → I + b
• Under intensity scaling I → a·I, the response magnitude changes, so a fixed threshold can miss (or add) corners

[Figure: corner response R vs. image coordinate x, before and after intensity scaling, with a fixed threshold.]


The Harris corner detector is not invariant to scale

edge!
corner!

56
AIS412
Lecture 4: Feature
Detection
MUSTAFA ELATTAR

*This course material is adapted from Carnegie Mellon University's Computer Vision course and Stanford University's CNN for Visual Recognition course.
Feature detectors and descriptors

Overview
• Why do we need feature descriptors?
• Designing feature descriptors.
• MOPS descriptor.
• GIST descriptor.

Why do we need feature descriptors?

If we know where the good features are,
how do we match them?
Designing feature descriptors

What is the best descriptor for an image feature?
Photometric transformations

Geometric transformations

Objects will appear at different scales and under different translations and rotations.
Image patch
Just use the pixel values of the patch!

1 2 3
4 5 6   →   ( 1 2 3 4 5 6 7 8 9 )
7 8 9
(vector of intensity values)

Perfectly fine if geometry and appearance are unchanged (a.k.a. template matching)

What are the problems? How can you be less sensitive to absolute intensity values?
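Using raw pixel values as the descriptor is one line of numpy, and matching is then just a distance between vectors. A minimal sketch; the patches and the SSD score are illustrative choices:

```python
import numpy as np

patch = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
desc = patch.ravel()                 # ( 1 2 3 4 5 6 7 8 9 )

def ssd(a, b):
    """Sum of squared differences: the template-matching score."""
    return np.sum((a.astype(float) - b.astype(float)) ** 2)

same = ssd(desc, patch.ravel())            # identical patch -> 0
brighter = ssd(desc, (patch + 10).ravel()) # global intensity shift -> large
assert same == 0
assert brighter > 0   # raw-pixel descriptors are sensitive to intensity
```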
Image gradients
Use pixel differences

1 2 3
4 5 6   →   ( − + + − − + )
7 8 9
(vector of the signs of the x derivatives: a ‘binary descriptor’)

The feature is invariant to absolute intensity values.

What are the problems? How can you be less sensitive to deformations?
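A sign-of-differences descriptor can be sketched as follows; it ignores any constant added to the patch. The patch below is a made-up example, and the per-row horizontal differences are one illustrative choice of "pixel differences":

```python
import numpy as np

def binary_descriptor(patch):
    """Signs of horizontal pixel differences within each row."""
    dx = np.diff(patch.astype(float), axis=1)   # x 'derivatives'
    return np.sign(dx).ravel()

patch = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
d1 = binary_descriptor(patch)
d2 = binary_descriptor(patch + 100)   # global intensity shift
assert np.array_equal(d1, d2)         # invariant to absolute intensity
```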
Color histogram
Count the colors in the image using a histogram.

[Figure: histogram over colors]

Invariant to changes in scale and rotation.

What are the problems? How can you be more sensitive to spatial layout?
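Both the invariance and the problem are easy to demonstrate. A hedged sketch with a grayscale intensity histogram standing in for the color histogram, on a made-up toy image:

```python
import numpy as np

img = np.arange(16).reshape(4, 4)          # toy grayscale 'image'
hist, _ = np.histogram(img, bins=4, range=(0, 16))

# rotating the image does not change the histogram
rot_hist, _ = np.histogram(np.rot90(img), bins=4, range=(0, 16))
assert np.array_equal(hist, rot_hist)      # invariant to rotation
# ...but two very different layouts can share one histogram:
shuffled = img.ravel()[::-1].reshape(4, 4)
shuf_hist, _ = np.histogram(shuffled, bins=4, range=(0, 16))
assert np.array_equal(hist, shuf_hist)     # spatial layout is lost
```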
Spatial histograms

Compute histograms over spatial ‘cells’.

Retains rough spatial layout, with some invariance to deformations.

What are the problems? How can you be completely invariant to rotation?
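Concatenating per-cell histograms can be sketched as below; the 2 x 2 grid, bin count, and toy image are illustrative choices:

```python
import numpy as np

def spatial_histogram(img, cells=2, bins=4, vrange=(0, 16)):
    """Concatenate per-cell histograms over a cells x cells grid."""
    h, w = img.shape
    descriptor = []
    for i in range(cells):
        for j in range(cells):
            cell = img[i * h // cells:(i + 1) * h // cells,
                       j * w // cells:(j + 1) * w // cells]
            hist, _ = np.histogram(cell, bins=bins, range=vrange)
            descriptor.append(hist)
    return np.concatenate(descriptor)

img = np.arange(16).reshape(4, 4)
desc = spatial_histogram(img)
assert desc.shape == (2 * 2 * 4,)   # cells x cells x bins
# unlike a global histogram, this changes when the layout changes
assert not np.array_equal(desc, spatial_histogram(np.rot90(img)))
```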
Orientation normalization
Use the dominant image gradient direction to normalize the orientation of the patch; save the orientation angle along with the descriptor.
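A simplified sketch of estimating the dominant orientation: here it is just the angle of the average gradient, which stands in for the more robust orientation-histogram estimates real detectors use.

```python
import numpy as np

def dominant_orientation(patch):
    """Angle (radians) of the average image gradient of the patch."""
    gy, gx = np.gradient(patch.astype(float))
    return np.arctan2(gy.sum(), gx.sum())

# a horizontal ramp: intensity increases left to right,
# so the dominant gradient points along +x (angle ~ 0)
ramp = np.tile(np.arange(8.0), (8, 1))
angle = dominant_orientation(ramp)
assert abs(angle) < 1e-6
```

Rotating the patch by the negative of this angle would then normalize its orientation before the descriptor is computed.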
MOPS descriptor

Multi-Scale Oriented Patches (MOPS)
Multi-Image Matching using Multi-Scale Oriented Patches. M. Brown, R. Szeliski and S. Winder.
International Conference on Computer Vision and Pattern Recognition (CVPR2005). pages 510-517

Multi-Scale Oriented Patches (MOPS)

Given a feature:
- Take a 40 x 40 image patch and subsample every 5th pixel (low-frequency filtering; absorbs localization errors)
- Subtract the mean and divide by the standard deviation (removes bias and gain)
- Apply the Haar wavelet transform (low-frequency projection)
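The subsample-and-normalize steps can be sketched as below. This is a simplified sketch: the real method samples from a blurred, oriented image and follows with the Haar wavelet transform, both omitted here.

```python
import numpy as np

def mops_descriptor(patch40):
    """MOPS-style patch: subsample a 40x40 patch every 5th pixel,
    then normalize to zero mean and unit standard deviation."""
    assert patch40.shape == (40, 40)
    sub = patch40[::5, ::5].astype(float)   # 40/5 = 8 -> 8x8 descriptor
    return (sub - sub.mean()) / sub.std()

rng = np.random.default_rng(0)
patch = rng.random((40, 40))
d = mops_descriptor(patch)
assert d.shape == (8, 8)
# bias (+b) and gain (*a) are removed by the normalization
assert np.allclose(d, mops_descriptor(2.0 * patch + 3.0))
```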
Haar Wavelets
(actually, Haar-like features)
Use the responses of a bank of filters as the descriptor.
GIST descriptor

GIST Filter bank

1. Compute filter responses (filter bank of Gabor filters)
2. Divide the image patch into 4 x 4 cells
3. Compute the filter response averages for each cell
4. The size of the descriptor is 4 x 4 x N, where N is the size of the filter bank

[Figure: a 4 x 4 cell grid and the averaged filter responses]
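The cell-averaging structure can be sketched as below. A hedged simplification: simple x/y derivative filters stand in for the Gabor filter bank, so N = 2 here.

```python
import numpy as np
from scipy import ndimage

def gist_like(img, cells=4):
    """GIST-style descriptor: filter responses averaged over a
    cells x cells grid (derivative filters stand in for Gabor filters)."""
    img = img.astype(float)
    responses = [np.abs(ndimage.sobel(img, axis=0)),   # y-derivative filter
                 np.abs(ndimage.sobel(img, axis=1))]   # x-derivative filter
    h, w = img.shape
    desc = []
    for r in responses:
        for i in range(cells):
            for j in range(cells):
                cell = r[i * h // cells:(i + 1) * h // cells,
                         j * w // cells:(j + 1) * w // cells]
                desc.append(cell.mean())   # average response per cell
    return np.array(desc)

img = np.zeros((32, 32)); img[16:, :] = 1.0   # horizontal edge
d = gist_like(img)
assert d.shape == (4 * 4 * 2,)   # cells x cells x N filters
```

On the horizontal-edge image, only the y-derivative cells near the edge respond, which is exactly the rough spatial layout the descriptor is meant to capture.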
Directional edge detectors
What is the GIST descriptor encoding?
The rough spatial distribution of image gradients.
Discriminative power vs. generalization power

Raw pixels → Sampled → Locally orderless → Global histogram

[Figure: descriptors ordered from highest discriminative power (raw pixels) to highest generalization power (global histogram).]
